[MINOR] improvement(server) Add context to rpc audit log to output necessary context #2088

Open · wants to merge 2 commits into base: master

Commit 5681cf6: Revert print partitionsList as it could be many partition
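The change under test is easiest to picture as an AutoCloseable audit context: the server opens one per RPC, records the command, arguments, and outcome, and emits a single tab-separated line when it closes. The sketch below is a minimal, hypothetical rendering of that pattern; the class and method names are illustrative, not Uniffle's actual API, but the output shape mirrors the `cmd=...	statusCode=...	args{...}	return{...}	context{...}` lines visible in the test log further down, where `context{...}` is the field this PR adds.

```java
// Minimal, hypothetical sketch of an RPC audit context. Names are illustrative
// and are NOT Uniffle's real classes; only the emitted line format is taken
// from the SHUFFLE_SERVER_RPC_AUDIT_LOG entries in the test log below.
public final class RpcAuditContextSketch implements AutoCloseable {
  private final String cmd;
  private final long startNs = System.nanoTime();
  private String statusCode = "UNKNOWN";
  private String args = "";
  private String returnValue = "";
  private final StringBuilder context = new StringBuilder();

  public RpcAuditContextSketch(String cmd) {
    this.cmd = cmd;
  }

  public RpcAuditContextSketch withStatusCode(String statusCode) {
    this.statusCode = statusCode;
    return this;
  }

  public RpcAuditContextSketch withArgs(String args) {
    this.args = args;
    return this;
  }

  public RpcAuditContextSketch withReturnValue(String returnValue) {
    this.returnValue = returnValue;
    return this;
  }

  // The addition this PR is about: extra key=value pairs, rendered as
  // context{...}, so one audit line carries outcome-specific detail
  // (e.g. updatedBlockCount=10, expectedBlockCount=10).
  public RpcAuditContextSketch addContext(String key, Object value) {
    if (context.length() > 0) {
      context.append(", ");
    }
    context.append(key).append('=').append(value);
    return this;
  }

  @Override
  public void close() {
    long executionTimeUs = (System.nanoTime() - startNs) / 1000L;
    StringBuilder line = new StringBuilder()
        .append("cmd=").append(cmd)
        .append("\tstatusCode=").append(statusCode)
        .append("\texecutionTimeUs=").append(executionTimeUs)
        .append("\targs{").append(args).append('}');
    if (!returnValue.isEmpty()) {
      line.append("\treturn{").append(returnValue).append('}');
    }
    if (context.length() > 0) {
      line.append("\tcontext{").append(context).append('}');
    }
    System.out.println(line); // stand-in for the audit logger
  }
}
```

Used with try-with-resources, a handler such as reportShuffleResult would call addContext("updatedBlockCount", ...) before the block exits, which is exactly the shape of the reportShuffleResult lines in the log below.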
GitHub Actions / Test Results failed Sep 29, 2024 in 0s

5 errors, 2 skipped, 1,020 passed in 5h 32m 0s

2,840 files · 2,840 suites · 5h 32m 0s ⏱️
1,027 tests: 1,020 ✅, 2 💤, 0 ❌, 5 🔥
12,847 runs: 12,811 ✅, 30 💤, 0 ❌, 6 🔥

Results for commit 5681cf6.

Annotations

Check failure on line 0 in org.apache.uniffle.test.RepartitionWithMemoryHybridStorageRssTest


github-actions / Test Results

1 out of 10 runs with error: resultCompareTest (org.apache.uniffle.test.RepartitionWithMemoryHybridStorageRssTest)

artifacts/integration-reports-spark3.2.0/integration-test/spark-common/target/surefire-reports/TEST-org.apache.uniffle.test.RepartitionWithMemoryHybridStorageRssTest.xml [took 13s]
Raw output
org.apache.spark.SparkException: 
Job aborted due to stage failure: Task 1 in stage 1.0 failed 1 times, most recent failure: Lost task 1.0 in stage 1.0 (TID 5) (fv-az1023-204.cxk2etxb0jje1nvwrtfsy0dgjd.bx.internal.cloudapp.net executor driver): org.apache.uniffle.common.exception.RssFetchFailedException: Get shuffle result is failed for appId[local-1727609585090_1727609585064], shuffleId[0]
	at org.apache.uniffle.client.impl.ShuffleWriteClientImpl.getShuffleResultForMultiPart(ShuffleWriteClientImpl.java:889)
	at org.apache.spark.shuffle.RssShuffleManager.getShuffleResultForMultiPart(RssShuffleManager.java:1031)
	at org.apache.spark.shuffle.RssShuffleManager.getReaderImpl(RssShuffleManager.java:653)
	at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:586)
	at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:558)
	at org.apache.spark.rdd.ShuffledRDD.compute(ShuffledRDD.scala:106)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
	at org.apache.spark.rdd.CoalescedRDD.$anonfun$compute$1(CoalescedRDD.scala:99)
	at scala.collection.Iterator$$anon$11.nextCur(Iterator.scala:486)
	at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:492)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
	at org.apache.spark.shuffle.writer.RssShuffleWriter.writeImpl(RssShuffleWriter.java:316)
	at org.apache.spark.shuffle.writer.RssShuffleWriter.write(RssShuffleWriter.java:287)
	at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:52)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:506)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1462)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:509)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:750)

Driver stacktrace:
	at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2403)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2352)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2351)
	at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
	at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
	at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2351)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1109)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1109)
	at scala.Option.foreach(Option.scala:407)
	at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1109)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2591)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2533)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2522)
	at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
	at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:898)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2214)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2235)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2254)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2279)
	at org.apache.spark.rdd.RDD.$anonfun$collect$1(RDD.scala:1030)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
	at org.apache.spark.rdd.RDD.withScope(RDD.scala:414)
	at org.apache.spark.rdd.RDD.collect(RDD.scala:1029)
	at org.apache.spark.RangePartitioner$.sketch(Partitioner.scala:304)
	at org.apache.spark.RangePartitioner.<init>(Partitioner.scala:171)
	at org.apache.spark.RangePartitioner.<init>(Partitioner.scala:151)
	at org.apache.spark.rdd.OrderedRDDFunctions.$anonfun$sortByKey$1(OrderedRDDFunctions.scala:64)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
	at org.apache.spark.rdd.RDD.withScope(RDD.scala:414)
	at org.apache.spark.rdd.OrderedRDDFunctions.sortByKey(OrderedRDDFunctions.scala:63)
	at org.apache.spark.api.java.JavaPairRDD.sortByKey(JavaPairRDD.scala:927)
	at org.apache.spark.api.java.JavaPairRDD.sortByKey(JavaPairRDD.scala:897)
	at org.apache.spark.api.java.JavaPairRDD.sortByKey(JavaPairRDD.scala:887)
	at org.apache.uniffle.test.RepartitionTest.repartitionApp(RepartitionTest.java:99)
	at org.apache.uniffle.test.RepartitionTest.runTest(RepartitionTest.java:49)
	at org.apache.uniffle.test.SparkIntegrationTestBase.runSparkApp(SparkIntegrationTestBase.java:102)
	at org.apache.uniffle.test.SparkIntegrationTestBase.run(SparkIntegrationTestBase.java:68)
	at org.apache.uniffle.test.RepartitionTest.resultCompareTest(RepartitionTest.java:44)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:725)
	at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
	at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
	at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149)
	at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140)
	at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84)
	at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115)
	at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105)
	at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
	at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
	at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
	at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
	at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104)
	at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98)
	at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$7(TestMethodTestDescriptor.java:214)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:210)
	at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:135)
	at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:66)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:151)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
	at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
	at java.util.ArrayList.forEach(ArrayList.java:1259)
	at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:41)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:155)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
	at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
	at java.util.ArrayList.forEach(ArrayList.java:1259)
	at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:41)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:155)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
	at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
	at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:35)
	at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57)
	at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:54)
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:107)
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88)
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54)
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67)
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52)
	at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:114)
	at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:86)
	at org.junit.platform.launcher.core.DefaultLauncherSession$DelegatingLauncher.execute(DefaultLauncherSession.java:86)
	at org.junit.platform.launcher.core.SessionPerRequestLauncher.execute(SessionPerRequestLauncher.java:53)
	at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:150)
	at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:124)
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
	at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
Caused by: org.apache.uniffle.common.exception.RssFetchFailedException: Get shuffle result is failed for appId[local-1727609585090_1727609585064], shuffleId[0]
	at org.apache.uniffle.client.impl.ShuffleWriteClientImpl.getShuffleResultForMultiPart(ShuffleWriteClientImpl.java:889)
	at org.apache.spark.shuffle.RssShuffleManager.getShuffleResultForMultiPart(RssShuffleManager.java:1031)
	at org.apache.spark.shuffle.RssShuffleManager.getReaderImpl(RssShuffleManager.java:653)
	at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:586)
	at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:558)
	at org.apache.spark.rdd.ShuffledRDD.compute(ShuffledRDD.scala:106)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
	at org.apache.spark.rdd.CoalescedRDD.$anonfun$compute$1(CoalescedRDD.scala:99)
	at scala.collection.Iterator$$anon$11.nextCur(Iterator.scala:486)
	at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:492)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
	at org.apache.spark.shuffle.writer.RssShuffleWriter.writeImpl(RssShuffleWriter.java:316)
	at org.apache.spark.shuffle.writer.RssShuffleWriter.write(RssShuffleWriter.java:287)
	at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:52)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:506)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1462)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:509)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:750)
[2024-09-29 11:32:50.289] [main] [INFO] MiniDFSCluster.<init> - starting cluster: numNameNodes=1, numDataNodes=1
Formatting using clusterid: testClusterID
[2024-09-29 11:32:50.290] [main] [INFO] FSEditLog.newInstance - Edit logging is async:true
[2024-09-29 11:32:50.290] [main] [INFO] FSNamesystem.<init> - KeyProvider: null
[2024-09-29 11:32:50.290] [main] [INFO] FSNamesystem.<init> - fsLock is fair: true
[2024-09-29 11:32:50.290] [main] [INFO] FSNamesystem.<init> - Detailed lock hold time metrics enabled: false
[2024-09-29 11:32:50.291] [main] [INFO] DatanodeManager.<init> - dfs.block.invalidate.limit=1000
[2024-09-29 11:32:50.291] [main] [INFO] DatanodeManager.<init> - dfs.namenode.datanode.registration.ip-hostname-check=true
[2024-09-29 11:32:50.291] [main] [INFO] BlockManager.printBlockDeletionTime - dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
[2024-09-29 11:32:50.291] [main] [INFO] BlockManager.printBlockDeletionTime - The block deletion will start around 2024 Sep 29 11:32:50
[2024-09-29 11:32:50.291] [main] [INFO] GSet.computeCapacity - Computing capacity for map BlocksMap
[2024-09-29 11:32:50.291] [main] [INFO] GSet.computeCapacity - VM type       = 64-bit
[2024-09-29 11:32:50.291] [main] [INFO] GSet.computeCapacity - 2.0% max memory 4.4 GB = 91.0 MB
[2024-09-29 11:32:50.291] [main] [INFO] GSet.computeCapacity - capacity      = 2^24 = 16777216 entries
[2024-09-29 11:32:50.293] [main] [INFO] BlockManager.createBlockTokenSecretManager - dfs.block.access.token.enable=false
[2024-09-29 11:32:50.293] [main] [INFO] BlockManager.<init> - defaultReplication         = 1
[2024-09-29 11:32:50.293] [main] [INFO] BlockManager.<init> - maxReplication             = 512
[2024-09-29 11:32:50.294] [main] [INFO] BlockManager.<init> - minReplication             = 1
[2024-09-29 11:32:50.294] [main] [INFO] BlockManager.<init> - maxReplicationStreams      = 2
[2024-09-29 11:32:50.294] [main] [INFO] BlockManager.<init> - replicationRecheckInterval = 3000
[2024-09-29 11:32:50.294] [main] [INFO] BlockManager.<init> - encryptDataTransfer        = false
[2024-09-29 11:32:50.294] [main] [INFO] BlockManager.<init> - maxNumBlocksToLog          = 1000
[2024-09-29 11:32:50.294] [main] [INFO] FSNamesystem.<init> - fsOwner             = runner (auth:SIMPLE)
[2024-09-29 11:32:50.294] [main] [INFO] FSNamesystem.<init> - supergroup          = supergroup
[2024-09-29 11:32:50.294] [main] [INFO] FSNamesystem.<init> - isPermissionEnabled = true
[2024-09-29 11:32:50.294] [main] [INFO] FSNamesystem.<init> - HA Enabled: false
[2024-09-29 11:32:50.294] [main] [INFO] FSNamesystem.<init> - Append Enabled: true
[2024-09-29 11:32:50.295] [main] [INFO] GSet.computeCapacity - Computing capacity for map INodeMap
[2024-09-29 11:32:50.295] [main] [INFO] GSet.computeCapacity - VM type       = 64-bit
[2024-09-29 11:32:50.295] [main] [INFO] GSet.computeCapacity - 1.0% max memory 4.4 GB = 45.5 MB
[2024-09-29 11:32:50.295] [main] [INFO] GSet.computeCapacity - capacity      = 2^23 = 8388608 entries
[2024-09-29 11:32:50.297] [main] [INFO] FSDirectory.<init> - ACLs enabled? false
[2024-09-29 11:32:50.297] [main] [INFO] FSDirectory.<init> - XAttrs enabled? true
[2024-09-29 11:32:50.297] [main] [INFO] NameNode.<init> - Caching file names occurring more than 10 times
[2024-09-29 11:32:50.297] [main] [INFO] GSet.computeCapacity - Computing capacity for map cachedBlocks
[2024-09-29 11:32:50.297] [main] [INFO] GSet.computeCapacity - VM type       = 64-bit
[2024-09-29 11:32:50.297] [main] [INFO] GSet.computeCapacity - 0.25% max memory 4.4 GB = 11.4 MB
[2024-09-29 11:32:50.297] [main] [INFO] GSet.computeCapacity - capacity      = 2^21 = 2097152 entries
[2024-09-29 11:32:50.298] [main] [INFO] FSNamesystem.<init> - dfs.namenode.safemode.threshold-pct = 0.9990000128746033
[2024-09-29 11:32:50.298] [main] [INFO] FSNamesystem.<init> - dfs.namenode.safemode.min.datanodes = 0
[2024-09-29 11:32:50.298] [main] [INFO] FSNamesystem.<init> - dfs.namenode.safemode.extension     = 0
[2024-09-29 11:32:50.298] [main] [INFO] TopMetrics.logConf - NNTop conf: dfs.namenode.top.window.num.buckets = 10
[2024-09-29 11:32:50.298] [main] [INFO] TopMetrics.logConf - NNTop conf: dfs.namenode.top.num.users = 10
[2024-09-29 11:32:50.298] [main] [INFO] TopMetrics.logConf - NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
[2024-09-29 11:32:50.298] [main] [INFO] FSNamesystem.initRetryCache - Retry cache on namenode is enabled
[2024-09-29 11:32:50.298] [main] [INFO] FSNamesystem.initRetryCache - Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
[2024-09-29 11:32:50.299] [main] [INFO] GSet.computeCapacity - Computing capacity for map NameNodeRetryCache
[2024-09-29 11:32:50.299] [main] [INFO] GSet.computeCapacity - VM type       = 64-bit
[2024-09-29 11:32:50.299] [main] [INFO] GSet.computeCapacity - 0.029999999329447746% max memory 4.4 GB = 1.4 MB
[2024-09-29 11:32:50.299] [main] [INFO] GSet.computeCapacity - capacity      = 2^17 = 131072 entries
[2024-09-29 11:32:50.299] [main] [INFO] FSImage.format - Allocated new BlockPoolId: BP-1849253376-127.0.0.1-1727609570299
[2024-09-29 11:32:50.302] [main] [INFO] Storage.format - Storage directory /home/runner/work/incubator-uniffle/incubator-uniffle/integration-test/spark-common/target/tmp/junit4955493310008628800/name1 has been successfully formatted.
[2024-09-29 11:32:50.303] [main] [INFO] Storage.format - Storage directory /home/runner/work/incubator-uniffle/incubator-uniffle/integration-test/spark-common/target/tmp/junit4955493310008628800/name2 has been successfully formatted.
[2024-09-29 11:32:50.304] [FSImageSaver for /home/runner/work/incubator-uniffle/incubator-uniffle/integration-test/spark-common/target/tmp/junit4955493310008628800/name1 of type IMAGE_AND_EDITS] [INFO] FSImageFormatProtobuf.save - Saving image file /home/runner/work/incubator-uniffle/incubator-uniffle/integration-test/spark-common/target/tmp/junit4955493310008628800/name1/current/fsimage.ckpt_0000000000000000000 using no compression
[2024-09-29 11:32:50.304] [FSImageSaver for /home/runner/work/incubator-uniffle/incubator-uniffle/integration-test/spark-common/target/tmp/junit4955493310008628800/name2 of type IMAGE_AND_EDITS] [INFO] FSImageFormatProtobuf.save - Saving image file /home/runner/work/incubator-uniffle/incubator-uniffle/integration-test/spark-common/target/tmp/junit4955493310008628800/name2/current/fsimage.ckpt_0000000000000000000 using no compression
[2024-09-29 11:32:50.309] [FSImageSaver for /home/runner/work/incubator-uniffle/incubator-uniffle/integration-test/spark-common/target/tmp/junit4955493310008628800/name1 of type IMAGE_AND_EDITS] [INFO] FSImageFormatProtobuf.save - Image file /home/runner/work/incubator-uniffle/incubator-uniffle/integration-test/spark-common/target/tmp/junit4955493310008628800/name1/current/fsimage.ckpt_0000000000000000000 of size 323 bytes saved in 0 seconds.
[2024-09-29 11:32:50.312] [FSImageSaver for /home/runner/work/incubator-uniffle/incubator-uniffle/integration-test/spark-common/target/tmp/junit4955493310008628800/name2 of type IMAGE_AND_EDITS] [INFO] FSImageFormatProtobuf.save - Image file /home/runner/work/incubator-uniffle/incubator-uniffle/integration-test/spark-common/target/tmp/junit4955493310008628800/name2/current/fsimage.ckpt_0000000000000000000 of size 323 bytes saved in 0 seconds.
[2024-09-29 11:32:50.313] [main] [INFO] NNStorageRetentionManager.getImageTxIdToRetain - Going to retain 1 images with txid >= 0
[2024-09-29 11:32:50.314] [main] [INFO] NameNode.createNameNode - createNameNode []
[2024-09-29 11:32:50.315] [main] [WARN] MetricsConfig.loadFirst - Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
[2024-09-29 11:32:50.316] [main] [INFO] MetricsSystemImpl.startTimer - Scheduled Metric snapshot period at 10 second(s).
[2024-09-29 11:32:50.316] [main] [INFO] MetricsSystemImpl.start - NameNode metrics system started
[2024-09-29 11:32:50.321] [main] [INFO] NameNode.setClientNamenodeAddress - fs.defaultFS is hdfs://127.0.0.1:0
[2024-09-29 11:32:50.323] [org.apache.hadoop.util.JvmPauseMonitor$Monitor@5554cc4] [INFO] JvmPauseMonitor.run - Starting JVM pause monitor
[2024-09-29 11:32:50.323] [main] [INFO] DFSUtil.httpServerTemplateForNNAndJN - Starting Web-server for hdfs at: http://localhost:0
[2024-09-29 11:32:50.324] [main] [INFO] AuthenticationFilter.constructSecretProvider - Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
[2024-09-29 11:32:50.325] [main] [WARN] HttpRequestLog.getRequestLog - Jetty request log can only be enabled using Log4j
[2024-09-29 11:32:50.325] [main] [INFO] HttpServer2.addGlobalFilter - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
[2024-09-29 11:32:50.325] [main] [INFO] HttpServer2.addFilter - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
[2024-09-29 11:32:50.325] [main] [INFO] HttpServer2.addFilter - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
[2024-09-29 11:32:50.326] [main] [INFO] HttpServer2.initWebHdfs - Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
[2024-09-29 11:32:50.326] [main] [INFO] HttpServer2.addJerseyResourcePackage - addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
[2024-09-29 11:32:50.326] [main] [INFO] HttpServer2.openListeners - Jetty bound to port 32825
[2024-09-29 11:32:50.326] [main] [INFO] log.info - jetty-6.1.26
[2024-09-29 11:32:50.330] [main] [INFO] log.info - Extract jar:file:/home/runner/.m2/repository/org/apache/hadoop/hadoop-hdfs/2.8.5/hadoop-hdfs-2.8.5-tests.jar!/webapps/hdfs to /home/runner/work/incubator-uniffle/incubator-uniffle/integration-test/spark-common/target/tmp/Jetty_localhost_32825_hdfs____f4hgti/webapp
[2024-09-29 11:32:50.393] [main] [INFO] log.info - Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:32825
[2024-09-29 11:32:50.394] [main] [INFO] FSEditLog.newInstance - Edit logging is async:true
[2024-09-29 11:32:50.394] [main] [INFO] FSNamesystem.<init> - KeyProvider: null
[2024-09-29 11:32:50.394] [main] [INFO] FSNamesystem.<init> - fsLock is fair: true
[2024-09-29 11:32:50.395] [main] [INFO] FSNamesystem.<init> - Detailed lock hold time metrics enabled: false
[2024-09-29 11:32:50.396] [main] [INFO] DatanodeManager.<init> - dfs.block.invalidate.limit=1000
[2024-09-29 11:32:50.396] [main] [INFO] DatanodeManager.<init> - dfs.namenode.datanode.registration.ip-hostname-check=true
[2024-09-29 11:32:50.396] [main] [INFO] BlockManager.printBlockDeletionTime - dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
[2024-09-29 11:32:50.396] [main] [INFO] BlockManager.printBlockDeletionTime - The block deletion will start around 2024 Sep 29 11:32:50
[2024-09-29 11:32:50.396] [main] [INFO] GSet.computeCapacity - Computing capacity for map BlocksMap
[2024-09-29 11:32:50.396] [main] [INFO] GSet.computeCapacity - VM type       = 64-bit
[2024-09-29 11:32:50.397] [main] [INFO] GSet.computeCapacity - 2.0% max memory 4.4 GB = 91.0 MB
[2024-09-29 11:32:50.397] [main] [INFO] GSet.computeCapacity - capacity      = 2^24 = 16777216 entries
[2024-09-29 11:32:50.398] [main] [INFO] BlockManager.createBlockTokenSecretManager - dfs.block.access.token.enable=false
[2024-09-29 11:32:50.398] [main] [INFO] BlockManager.<init> - defaultReplication         = 1
[2024-09-29 11:32:50.399] [main] [INFO] BlockManager.<init> - maxReplication             = 512
[2024-09-29 11:32:50.399] [main] [INFO] BlockManager.<init> - minReplication             = 1
[2024-09-29 11:32:50.399] [main] [INFO] BlockManager.<init> - maxReplicationStreams      = 2
[2024-09-29 11:32:50.399] [main] [INFO] BlockManager.<init> - replicationRecheckInterval = 3000
[2024-09-29 11:32:50.399] [main] [INFO] BlockManager.<init> - encryptDataTransfer        = false
[2024-09-29 11:32:50.399] [main] [INFO] BlockManager.<init> - maxNumBlocksToLog          = 1000
[2024-09-29 11:32:50.399] [main] [INFO] FSNamesystem.<init> - fsOwner             = runner (auth:SIMPLE)
[2024-09-29 11:32:50.399] [main] [INFO] FSNamesystem.<init> - supergroup          = supergroup
[2024-09-29 11:32:50.399] [main] [INFO] FSNamesystem.<init> - isPermissionEnabled = true
[2024-09-29 11:32:50.399] [main] [INFO] FSNamesystem.<init> - HA Enabled: false
[2024-09-29 11:32:50.399] [main] [INFO] FSNamesystem.<init> - Append Enabled: true
[2024-09-29 11:32:50.400] [main] [INFO] GSet.computeCapacity - Computing capacity for map INodeMap
[2024-09-29 11:32:50.400] [main] [INFO] GSet.computeCapacity - VM type       = 64-bit
[2024-09-29 11:32:50.400] [main] [INFO] GSet.computeCapacity - 1.0% max memory 4.4 GB = 45.5 MB
[2024-09-29 11:32:50.400] [main] [INFO] GSet.computeCapacity - capacity      = 2^23 = 8388608 entries
[2024-09-29 11:32:50.401] [main] [INFO] FSDirectory.<init> - ACLs enabled? false
[2024-09-29 11:32:50.401] [main] [INFO] FSDirectory.<init> - XAttrs enabled? true
[2024-09-29 11:32:50.402] [main] [INFO] NameNode.<init> - Caching file names occurring more than 10 times
[2024-09-29 11:32:50.402] [main] [INFO] GSet.computeCapacity - Computing capacity for map cachedBlocks
[2024-09-29 11:32:50.402] [main] [INFO] GSet.computeCapacity - VM type       = 64-bit
[2024-09-29 11:32:50.402] [main] [INFO] GSet.computeCapacity - 0.25% max memory 4.4 GB = 11.4 MB
[2024-09-29 11:32:50.402] [main] [INFO] GSet.computeCapacity - capacity      = 2^21 = 2097152 entries
[2024-09-29 11:32:50.403] [main] [INFO] FSNamesystem.<init> - dfs.namenode.safemode.threshold-pct = 0.9990000128746033
[2024-09-29 11:32:50.403] [main] [INFO] FSNamesystem.<init> - dfs.namenode.safemode.min.datanodes = 0
[2024-09-29 11:32:50.403] [main] [INFO] FSNamesystem.<init> - dfs.namenode.safemode.extension     = 0
[2024-09-29 11:32:50.403] [main] [INFO] TopMetrics.logConf - NNTop conf: dfs.namenode.top.window.num.buckets = 10
[2024-09-29 11:32:50.403] [main] [INFO] TopMetrics.logConf - NNTop conf: dfs.namenode.top.num.users = 10
[2024-09-29 11:32:50.403] [main] [INFO] TopMetrics.logConf - NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
[2024-09-29 11:32:50.403] [main] [INFO] FSNamesystem.initRetryCache - Retry cache on namenode is enabled
[2024-09-29 11:32:50.403] [main] [INFO] FSNamesystem.initRetryCache - Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
[2024-09-29 11:32:50.403] [main] [INFO] GSet.computeCapacity - Computing capacity for map NameNodeRetryCache
[2024-09-29 11:32:50.403] [main] [INFO] GSet.computeCapacity…63, partitionIdsSize=5}	return{requireBufferId=15}
[2024-09-29 11:33:06.421] [nioEventLoopGroup-196-1] [INFO] SHUFFLE_SERVER_RPC_AUDIT_LOG.close - cmd=sendShuffleData	statusCode=SUCCESS	from=/10.1.0.5:41658	executionTimeUs=1495	appId=local-1727609585090_1727609585064	shuffleId=0	args{requireBufferId=14, requireSize=4543717, isPreAllocated=true, requireBlocksSize=2271786, stageAttemptNumber=0}
[2024-09-29 11:33:06.421] [nioEventLoopGroup-196-1] [INFO] SHUFFLE_SERVER_RPC_AUDIT_LOG.close - cmd=sendShuffleData	statusCode=SUCCESS	from=/10.1.0.5:41658	executionTimeUs=285	appId=local-1727609585090_1727609585064	shuffleId=0	args{requireBufferId=15, requireSize=618163, isPreAllocated=true, requireBlocksSize=308857, stageAttemptNumber=0}
[2024-09-29 11:33:06.421] [Executor task launch worker for task 0.0 in stage 0.0 (TID 0)] [INFO] RssShuffleWriter.writeImpl - Finish write shuffle for appId[local-1727609585090_1727609585064], shuffleId[0], taskId[0_0] with write 945 ms, include checkSendResult[3], commit[0], WriteBufferManager cost copyTime[2], writeTime[941], serializeTime[784], compressTime[56], estimateTime[0], requireMemoryTime[0], uncompressedDataLen[21567207]
[2024-09-29 11:33:06.422] [Grpc-0] [INFO] ShuffleServerGrpcService.reportShuffleResult - Accepted blockIds report for 10 blocks across 5 partitions as shuffle result for task appId[local-1727609585090_1727609585064], shuffleId[0], taskAttemptId[0]
[2024-09-29 11:33:06.422] [Grpc-0] [INFO] SHUFFLE_SERVER_RPC_AUDIT_LOG.close - cmd=reportShuffleResult	statusCode=SUCCESS	from=/10.1.0.5:34804	executionTimeUs=183	appId=local-1727609585090_1727609585064	shuffleId=0	args{taskAttemptId=0, bitmapNum=1, partitionToBlockIdsSize=5}	context{updatedBlockCount=10, expectedBlockCount=10}
[2024-09-29 11:33:06.422] [Executor task launch worker for task 0.0 in stage 0.0 (TID 0)] [INFO] ShuffleWriteClientImpl.reportShuffleResult - Report shuffle result to ShuffleServerInfo{host[10.1.0.5], grpc port[20013], netty port[21005]} for appId[local-1727609585090_1727609585064], shuffleId[0] successfully
[2024-09-29 11:33:06.422] [Executor task launch worker for task 0.0 in stage 0.0 (TID 0)] [INFO] RssShuffleWriter.stop - Report shuffle result for task[0] with bitmapNum[1] cost 1 ms
[2024-09-29 11:33:06.423] [Executor task launch worker for task 0.0 in stage 0.0 (TID 0)] [INFO] Executor.logInfo - Finished task 0.0 in stage 0.0 (TID 0). 1624 bytes result sent to driver
[2024-09-29 11:33:06.423] [task-result-getter-2] [INFO] TaskSetManager.logInfo - Finished task 0.0 in stage 0.0 (TID 0) in 1247 ms on fv-az1023-204.cxk2etxb0jje1nvwrtfsy0dgjd.bx.internal.cloudapp.net (executor driver) (3/4)
[2024-09-29 11:33:06.427] [Grpc-2] [INFO] SHUFFLE_SERVER_RPC_AUDIT_LOG.close - cmd=requireBuffer	statusCode=SUCCESS	from=/10.1.0.5:34804	executionTimeUs=67	appId=local-1727609585090_1727609585064	shuffleId=0	args{requireSize=4543091, partitionIdsSize=1}	return{requireBufferId=16}
[2024-09-29 11:33:06.431] [nioEventLoopGroup-196-1] [INFO] SHUFFLE_SERVER_RPC_AUDIT_LOG.close - cmd=sendShuffleData	statusCode=SUCCESS	from=/10.1.0.5:41658	executionTimeUs=1486	appId=local-1727609585090_1727609585064	shuffleId=0	args{requireBufferId=16, requireSize=4543091, isPreAllocated=true, requireBlocksSize=2271473, stageAttemptNumber=0}
[2024-09-29 11:33:06.438] [Grpc-4] [INFO] SHUFFLE_SERVER_RPC_AUDIT_LOG.close - cmd=requireBuffer	statusCode=SUCCESS	from=/10.1.0.5:34804	executionTimeUs=46	appId=local-1727609585090_1727609585064	shuffleId=0	args{requireSize=4542375, partitionIdsSize=1}	return{requireBufferId=17}
[2024-09-29 11:33:06.443] [nioEventLoopGroup-196-2] [INFO] SHUFFLE_SERVER_RPC_AUDIT_LOG.close - cmd=sendShuffleData	statusCode=SUCCESS	from=/10.1.0.5:41674	executionTimeUs=1311	appId=local-1727609585090_1727609585064	shuffleId=0	args{requireBufferId=17, requireSize=4542375, isPreAllocated=true, requireBlocksSize=2271115, stageAttemptNumber=0}
[2024-09-29 11:33:06.449] [Grpc-6] [INFO] SHUFFLE_SERVER_RPC_AUDIT_LOG.close - cmd=requireBuffer	statusCode=SUCCESS	from=/10.1.0.5:34804	executionTimeUs=57	appId=local-1727609585090_1727609585064	shuffleId=0	args{requireSize=4542959, partitionIdsSize=1}	return{requireBufferId=18}
[2024-09-29 11:33:06.452] [nioEventLoopGroup-196-1] [INFO] SHUFFLE_SERVER_RPC_AUDIT_LOG.close - cmd=sendShuffleData	statusCode=SUCCESS	from=/10.1.0.5:41658	executionTimeUs=817	appId=local-1727609585090_1727609585064	shuffleId=0	args{requireBufferId=18, requireSize=4542959, isPreAllocated=true, requireBlocksSize=2271407, stageAttemptNumber=0}
[2024-09-29 11:33:06.468] [Executor task launch worker for task 1.0 in stage 0.0 (TID 1)] [INFO] WriteBufferManager.clear - Flush total buffer for shuffleId[0] with allocated[11272192], dataSize[583213], memoryUsed[1310720], number of blocks[5], flush ratio[1.0]
[2024-09-29 11:33:06.468] [Grpc-8] [INFO] SHUFFLE_SERVER_RPC_AUDIT_LOG.close - cmd=requireBuffer	statusCode=SUCCESS	from=/10.1.0.5:34804	executionTimeUs=51	appId=local-1727609585090_1727609585064	shuffleId=0	args{requireSize=605577, partitionIdsSize=5}	return{requireBufferId=19}
[2024-09-29 11:33:06.469] [nioEventLoopGroup-196-1] [INFO] SHUFFLE_SERVER_RPC_AUDIT_LOG.close - cmd=sendShuffleData	statusCode=SUCCESS	from=/10.1.0.5:41658	executionTimeUs=232	appId=local-1727609585090_1727609585064	shuffleId=0	args{requireBufferId=19, requireSize=605577, isPreAllocated=true, requireBlocksSize=302564, stageAttemptNumber=0}
[2024-09-29 11:33:06.469] [Executor task launch worker for task 1.0 in stage 0.0 (TID 1)] [INFO] RssShuffleWriter.writeImpl - Finish write shuffle for appId[local-1727609585090_1727609585064], shuffleId[0], taskId[1_0] with write 1004 ms, include checkSendResult[1], commit[0], WriteBufferManager cost copyTime[3], writeTime[1001], serializeTime[849], compressTime[48], estimateTime[0], requireMemoryTime[0], uncompressedDataLen[21554261]
[2024-09-29 11:33:06.469] [Grpc-0] [INFO] ShuffleServerGrpcService.reportShuffleResult - Accepted blockIds report for 10 blocks across 5 partitions as shuffle result for task appId[local-1727609585090_1727609585064], shuffleId[0], taskAttemptId[4]
[2024-09-29 11:33:06.470] [Grpc-0] [INFO] SHUFFLE_SERVER_RPC_AUDIT_LOG.close - cmd=reportShuffleResult	statusCode=SUCCESS	from=/10.1.0.5:34804	executionTimeUs=142	appId=local-1727609585090_1727609585064	shuffleId=0	args{taskAttemptId=4, bitmapNum=1, partitionToBlockIdsSize=5}	context{updatedBlockCount=10, expectedBlockCount=10}
[2024-09-29 11:33:06.470] [Executor task launch worker for task 1.0 in stage 0.0 (TID 1)] [INFO] ShuffleWriteClientImpl.reportShuffleResult - Report shuffle result to ShuffleServerInfo{host[10.1.0.5], grpc port[20013], netty port[21005]} for appId[local-1727609585090_1727609585064], shuffleId[0] successfully
[2024-09-29 11:33:06.470] [Executor task launch worker for task 1.0 in stage 0.0 (TID 1)] [INFO] RssShuffleWriter.stop - Report shuffle result for task[4] with bitmapNum[1] cost 1 ms
[2024-09-29 11:33:06.470] [Executor task launch worker for task 1.0 in stage 0.0 (TID 1)] [INFO] Executor.logInfo - Finished task 1.0 in stage 0.0 (TID 1). 1624 bytes result sent to driver
[2024-09-29 11:33:06.471] [task-result-getter-3] [INFO] TaskSetManager.logInfo - Finished task 1.0 in stage 0.0 (TID 1) in 1294 ms on fv-az1023-204.cxk2etxb0jje1nvwrtfsy0dgjd.bx.internal.cloudapp.net (executor driver) (4/4)
[2024-09-29 11:33:06.471] [task-result-getter-3] [INFO] TaskSchedulerImpl.logInfo - Removed TaskSet 0.0, whose tasks have all completed, from pool 
[2024-09-29 11:33:06.471] [dag-scheduler-event-loop] [INFO] DAGScheduler.logInfo - ShuffleMapStage 0 (repartition at RepartitionTest.java:97) finished in 1.303 s
[2024-09-29 11:33:06.471] [dag-scheduler-event-loop] [INFO] DAGScheduler.logInfo - looking for newly runnable stages
[2024-09-29 11:33:06.471] [dag-scheduler-event-loop] [INFO] DAGScheduler.logInfo - running: Set()
[2024-09-29 11:33:06.471] [dag-scheduler-event-loop] [INFO] DAGScheduler.logInfo - waiting: Set(ShuffleMapStage 1, ResultStage 2)
[2024-09-29 11:33:06.471] [dag-scheduler-event-loop] [INFO] DAGScheduler.logInfo - failed: Set()
[2024-09-29 11:33:06.471] [dag-scheduler-event-loop] [INFO] DAGScheduler.logInfo - Submitting ShuffleMapStage 1 (MapPartitionsRDD[10] at repartition at RepartitionTest.java:97), which has no missing parents
[2024-09-29 11:33:06.472] [dag-scheduler-event-loop] [INFO] MemoryStore.logInfo - Block broadcast_4 stored as values in memory (estimated size 7.3 KiB, free 2.5 GiB)
[2024-09-29 11:33:06.473] [dag-scheduler-event-loop] [INFO] MemoryStore.logInfo - Block broadcast_4_piece0 stored as bytes in memory (estimated size 4.1 KiB, free 2.5 GiB)
[2024-09-29 11:33:06.473] [dispatcher-BlockManagerMaster] [INFO] BlockManagerInfo.logInfo - Added broadcast_4_piece0 in memory on fv-az1023-204.cxk2etxb0jje1nvwrtfsy0dgjd.bx.internal.cloudapp.net:35515 (size: 4.1 KiB, free: 2.5 GiB)
[2024-09-29 11:33:06.474] [dag-scheduler-event-loop] [INFO] SparkContext.logInfo - Created broadcast 4 from broadcast at DAGScheduler.scala:1427
[2024-09-29 11:33:06.474] [dag-scheduler-event-loop] [INFO] DAGScheduler.logInfo - Submitting 5 missing tasks from ShuffleMapStage 1 (MapPartitionsRDD[10] at repartition at RepartitionTest.java:97) (first 15 tasks are for partitions Vector(0, 1, 2, 3, 4))
[2024-09-29 11:33:06.474] [dag-scheduler-event-loop] [INFO] TaskSchedulerImpl.logInfo - Adding task set 1.0 with 5 tasks resource profile 0
[2024-09-29 11:33:06.474] [dispatcher-event-loop-3] [INFO] TaskSetManager.logInfo - Starting task 0.0 in stage 1.0 (TID 4) (fv-az1023-204.cxk2etxb0jje1nvwrtfsy0dgjd.bx.internal.cloudapp.net, executor driver, partition 0, ANY, 4536 bytes) taskResourceAssignments Map()
[2024-09-29 11:33:06.474] [dispatcher-event-loop-3] [INFO] TaskSetManager.logInfo - Starting task 1.0 in stage 1.0 (TID 5) (fv-az1023-204.cxk2etxb0jje1nvwrtfsy0dgjd.bx.internal.cloudapp.net, executor driver, partition 1, ANY, 4536 bytes) taskResourceAssignments Map()
[2024-09-29 11:33:06.475] [dispatcher-event-loop-3] [INFO] TaskSetManager.logInfo - Starting task 2.0 in stage 1.0 (TID 6) (fv-az1023-204.cxk2etxb0jje1nvwrtfsy0dgjd.bx.internal.cloudapp.net, executor driver, partition 2, ANY, 4536 bytes) taskResourceAssignments Map()
[2024-09-29 11:33:06.475] [dispatcher-event-loop-3] [INFO] TaskSetManager.logInfo - Starting task 3.0 in stage 1.0 (TID 7) (fv-az1023-204.cxk2etxb0jje1nvwrtfsy0dgjd.bx.internal.cloudapp.net, executor driver, partition 3, ANY, 4536 bytes) taskResourceAssignments Map()
[2024-09-29 11:33:06.475] [Executor task launch worker for task 0.0 in stage 1.0 (TID 4)] [INFO] Executor.logInfo - Running task 0.0 in stage 1.0 (TID 4)
[2024-09-29 11:33:06.475] [Executor task launch worker for task 1.0 in stage 1.0 (TID 5)] [INFO] Executor.logInfo - Running task 1.0 in stage 1.0 (TID 5)
[2024-09-29 11:33:06.475] [Executor task launch worker for task 2.0 in stage 1.0 (TID 6)] [INFO] Executor.logInfo - Running task 2.0 in stage 1.0 (TID 6)
[2024-09-29 11:33:06.476] [Executor task launch worker for task 3.0 in stage 1.0 (TID 7)] [INFO] Executor.logInfo - Running task 3.0 in stage 1.0 (TID 7)
[2024-09-29 11:33:06.477] [Executor task launch worker for task 0.0 in stage 1.0 (TID 4)] [INFO] RssShuffleWriter.<init> - RssShuffle start write taskAttemptId[0] data with RssHandle[appId local-1727609585090_1727609585064, shuffleId 1].
[2024-09-29 11:33:06.477] [Executor task launch worker for task 2.0 in stage 1.0 (TID 6)] [INFO] RssShuffleWriter.<init> - RssShuffle start write taskAttemptId[8] data with RssHandle[appId local-1727609585090_1727609585064, shuffleId 1].
[2024-09-29 11:33:06.477] [Executor task launch worker for task 3.0 in stage 1.0 (TID 7)] [INFO] RssShuffleWriter.<init> - RssShuffle start write taskAttemptId[12] data with RssHandle[appId local-1727609585090_1727609585064, shuffleId 1].
[2024-09-29 11:33:06.477] [Executor task launch worker for task 1.0 in stage 1.0 (TID 5)] [INFO] RssShuffleWriter.<init> - RssShuffle start write taskAttemptId[4] data with RssHandle[appId local-1727609585090_1727609585064, shuffleId 1].
[2024-09-29 11:33:06.478] [Executor task launch worker for task 0.0 in stage 1.0 (TID 4)] [INFO] RssShuffleManager.getReader - Get taskId cost 0 ms, and request expected blockIds from 4 tasks for shuffleId[0], partitionId[0, 1]
[2024-09-29 11:33:06.478] [Executor task launch worker for task 2.0 in stage 1.0 (TID 6)] [INFO] RssShuffleManager.getReader - Get taskId cost 0 ms, and request expected blockIds from 4 tasks for shuffleId[0], partitionId[2, 3]
[2024-09-29 11:33:06.479] [Executor task launch worker for task 1.0 in stage 1.0 (TID 5)] [INFO] RssShuffleManager.getReader - Get taskId cost 0 ms, and request expected blockIds from 4 tasks for shuffleId[0], partitionId[1, 2]
[2024-09-29 11:33:06.480] [Grpc-4] [INFO] SHUFFLE_SERVER_RPC_AUDIT_LOG.close - cmd=getShuffleResultForMultiPart	statusCode=SUCCESS	from=/10.1.0.5:34804	executionTimeUs=189	appId=local-1727609585090_1727609585064	shuffleId=0	args{partitionsListSize=1, blockIdLayout=blockIdLayout[seq: 21 bits, part: 20 bits, task: 22 bits]}	return{serializedBlockIdsBytes=59}	context{bitmap[0].<size,byte>=<35,310>, partitionBlockCount=7}
[2024-09-29 11:33:06.480] [Grpc-5] [INFO] SHUFFLE_SERVER_RPC_AUDIT_LOG.close - cmd=getShuffleResultForMultiPart	statusCode=SUCCESS	from=/10.1.0.5:34804	executionTimeUs=189	appId=local-1727609585090_1727609585064	shuffleId=0	args{partitionsListSize=1, blockIdLayout=blockIdLayout[seq: 21 bits, part: 20 bits, task: 22 bits]}	return{serializedBlockIdsBytes=59}	context{bitmap[0].<size,byte>=<35,310>, partitionBlockCount=7}
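Note that the getShuffleResultForMultiPart audit lines above record only partitionsListSize, not the partition ids themselves; that is the effect of the second commit, which reverts printing the full partitions list because it can contain many partitions. A one-function sketch of that trade-off (illustrative name, not Uniffle code):

```java
// Illustrative helper: summarize a potentially large partition list for an
// audit line instead of printing every id (cf. "partitionsListSize=1" above).
static String describePartitions(java.util.List<Integer> partitionsList) {
  return "partitionsListSize=" + partitionsList.size();
}
```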
[2024-09-29 11:33:06.480] [Grpc-6] [ERROR] ShuffleServerGrpcService.getShuffleResultForMultiPart - Error happened when get shuffle result for appId[local-1727609585090_1727609585064], shuffleId[0], partitions[1]
java.lang.ArrayIndexOutOfBoundsException: 1
	at org.roaringbitmap.longlong.Roaring64NavigableMap.ensureOne(Roaring64NavigableMap.java:676)
	at org.roaringbitmap.longlong.Roaring64NavigableMap.ensureCumulatives(Roaring64NavigableMap.java:567)
	at org.roaringbitmap.longlong.Roaring64NavigableMap.getLongCardinality(Roaring64NavigableMap.java:278)
	at org.apache.uniffle.server.ShuffleTaskManager.lambda$getFinishedBlockIds$9(ShuffleTaskManager.java:657)
	at java.util.Optional.ifPresent(Optional.java:159)
	at org.apache.uniffle.server.ShuffleTaskManager.getFinishedBlockIds(ShuffleTaskManager.java:652)
	at org.apache.uniffle.server.ShuffleServerGrpcService.getShuffleResultForMultiPart(ShuffleServerGrpcService.java:985)
	at org.apache.uniffle.proto.ShuffleServerGrpc$MethodHandlers.invoke(ShuffleServerGrpc.java:1180)
	at io.grpc.stub.ServerCalls$UnaryServerCallHandler$UnaryServerCallListener.onHalfClose(ServerCalls.java:182)
	at io.grpc.PartialForwardingServerCallListener.onHalfClose(PartialForwardingServerCallListener.java:35)
	at io.grpc.ForwardingServerCallListener.onHalfClose(ForwardingServerCallListener.java:23)
	at io.grpc.ForwardingServerCallListener$SimpleForwardingServerCallListener.onHalfClose(ForwardingServerCallListener.java:40)
	at org.apache.uniffle.common.rpc.ClientContextServerInterceptor$1.onHalfClose(ClientContextServerInterceptor.java:63)
	at io.grpc.PartialForwardingServerCallListener.onHalfClose(PartialForwardingServerCallListener.java:35)
	at io.grpc.ForwardingServerCallListener.onHalfClose(ForwardingServerCallListener.java:23)
	at io.grpc.internal.ServerCallImpl$ServerStreamListenerImpl.halfClosed(ServerCallImpl.java:356)
	at io.grpc.internal.ServerImpl$JumpToApplicationThreadServerStreamListener$1HalfClosed.runInContext(ServerImpl.java:861)
	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:750)
[2024-09-29 11:33:06.480] [Grpc-6] [INFO] SHUFFLE_SERVER_RPC_AUDIT_LOG.close - cmd=getShuffleResultForMultiPart	statusCode=INTERNAL_ERROR	from=/10.1.0.5:34804	executionTimeUs=492	appId=local-1727609585090_1727609585064	shuffleId=0	args{partitionsListSize=1, blockIdLayout=blockIdLayout[seq: 21 bits, part: 20 bits, task: 22 bits]}	return{serializedBlockIdsBytes=0}
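The ArrayIndexOutOfBoundsException above originates inside Roaring64NavigableMap, which lazily rebuilds a cumulative-cardinality cache on read: even a call like getLongCardinality() mutates internal arrays via ensureCumulatives/ensureOne. Without external synchronization, a reader racing a writer (or another reader) can index past that cache and fail exactly like this, which would also explain why only 1 of 10 runs hit it. Below is a minimal sketch of a lock-guarded wrapper that avoids the race; it assumes nothing about Uniffle's actual ShuffleTaskManager internals, and the class name is hypothetical.

```java
import org.roaringbitmap.longlong.Roaring64NavigableMap;

// Roaring64NavigableMap is not thread-safe, and "read" operations like
// getLongCardinality() lazily mutate an internal cumulative-cardinality
// cache (ensureCumulatives/ensureOne in the trace above). Guarding every
// access with one lock removes the race; names here are illustrative.
public final class GuardedBlockIds {
  private final Roaring64NavigableMap blockIds = new Roaring64NavigableMap();
  private final Object lock = new Object();

  public void add(long blockId) {
    synchronized (lock) {
      blockIds.addLong(blockId);
    }
  }

  public long cardinality() {
    synchronized (lock) {
      // The lazy cache rebuild now runs while holding the lock.
      return blockIds.getLongCardinality();
    }
  }

  public Roaring64NavigableMap snapshot() {
    synchronized (lock) {
      // Copy under the lock so callers can serialize/iterate race-free.
      Roaring64NavigableMap copy = new Roaring64NavigableMap();
      copy.or(blockIds);
      return copy;
    }
  }
}
```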
[2024-09-29 11:33:06.480] [Executor task launch worker for task 3.0 in stage 1.0 (TID 7)] [INFO] RssShuffleManager.getReader - Get taskId cost 0 ms, and request expected blockIds from 4 tasks for shuffleId[0], partitionId[3, 4]
[2024-09-29 11:33:06.480] [Executor task launch worker for task 0.0 in stage 1.0 (TID 4)] [INFO] RssShuffleManager.getReaderImpl - Get shuffle blockId cost 2 ms, and get 7 blockIds for shuffleId[0], startPartition[0], endPartition[1]
[2024-09-29 11:33:06.480] [Executor task launch worker for task 2.0 in stage 1.0 (TID 6)] [INFO] RssShuffleManager.getReaderImpl - Get shuffle blockId cost 2 ms, and get 7 blockIds for shuffleId[0], startPartition[2], endPartition[3]
[2024-09-29 11:33:06.480] [Executor task launch worker for task 0.0 in stage 1.0 (TID 4)] [INFO] RssShuffleManager.getReaderImpl - Shuffle reader using remote storage hdfs://localhost:40315/rss/test,empty conf
[2024-09-29 11:33:06.480] [Executor task launch worker for task 2.0 in stage 1.0 (TID 6)] [INFO] RssShuffleManager.getReaderImpl - Shuffle reader using remote storage hdfs://localhost:40315/rss/test,empty conf
[2024-09-29 11:33:06.481] [Executor task launch worker for task 1.0 in stage 1.0 (TID 5)] [ERROR] ShuffleServerGrpcClient.getShuffleResultForMultiPart - Can't get shuffle result from 10.1.0.5:20013 for [appId=local-1727609585090_1727609585064, shuffleId=0, errorMsg:1
[2024-09-29 11:33:06.482] [Grpc-2] [INFO] SHUFFLE_SERVER_RPC_AUDIT_LOG.close - cmd=getShuffleResultForMultiPart	statusCode=SUCCESS	from=/10.1.0.5:34804	executionTimeUs=113	appId=local-1727609585090_1727609585064	shuffleId=0	args{partitionsListSize=1, blockIdLayout=blockIdLayout[seq: 21 bits, part: 20 bits, task: 22 bits]}	return{serializedBlockIdsBytes=59}	context{bitmap[0].<size,byte>=<35,310>, partitionBlockCount=7}
[2024-09-29 11:33:06.482] [Executor task launch worker for task 1.0 in stage 1.0 (TID 5)] [WARN] ShuffleWriteClientImpl.getShuffleResultForMultiPart - Get shuffle result is failed from ShuffleServerInfo{host[10.1.0.5], grpc port[20013], netty port[21005]} for appId[local-1727609585090_1727609585064], shuffleId[0], requestPartitions[1]
org.apache.uniffle.common.exception.RssFetchFailedException: Can't get shuffle result from 10.1.0.5:20013 for [appId=local-1727609585090_1727609585064, shuffleId=0, errorMsg:1
	at org.apache.uniffle.client.impl.grpc.ShuffleServerGrpcClient.getShuffleResultForMultiPart(ShuffleServerGrpcClient.java:913)
	at org.apache.uniffle.client.impl.ShuffleWriteClientImpl.getShuffleResultForMultiPart(ShuffleWriteClientImpl.java:860)
	at org.apache.spark.shuffle.RssShuffleManager.getShuffleResultForMultiPart(RssShuffleManager.java:1031)
	at org.apache.spark.shuffle.RssShuffleManager.getReaderImpl(RssShuffleManager.java:653)
	at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:586)
	at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:558)
	at org.apache.spark.rdd.ShuffledRDD.compute(ShuffledRDD.scala:106)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
	at org.apache.spark.rdd.CoalescedRDD.$anonfun$compute$1(CoalescedRDD.scala:99)
	at scala.collection.Iterator$$anon$11.nextCur(Iterator.scala:486)
	at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:492)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
	at org.apache.spark.shuffle.writer.RssShuffleWriter.writeImpl(RssShuffleWriter.java:316)
	at org.apache.spark.shuffle.writer.RssShuffleWriter.write(RssShuffleWriter.java:287)
	at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:52)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:506)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1462)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:509)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:750)
[2024-09-29 11:33:06.482] [Executor task launch worker for task 3.0 in stage 1.0 (TID 7)] [INFO] RssShuffleManager.getReaderImpl - Get shuffle blockId cost 2 ms, and get 7 blockIds for shuffleId[0], startPartition[3], endPartition[4]
[2024-09-29 11:33:06.482] [Executor task launch worker for task 3.0 in stage 1.0 (TID 7)] [INFO] RssShuffleManager.getReaderImpl - Shuffle reader using remote storage hdfs://localhost:40315/rss/test,empty conf
[2024-09-29 11:33:06.482] [Executor task launch worker for task 1.0 in stage 1.0 (TID 5)] [ERROR] ShuffleWriteClientImpl.getShuffleResultForMultiPart - Failed to meet replica requirement: PartitionDataReplicaRequirementTracking{shuffleId=0, inventory={0={0=[ShuffleServerInfo{host[10.1.0.5], grpc port[20013], netty port[21005]}]}, 1={0=[ShuffleServerInfo{host[10.1.0.5], grpc port[20013], netty port[21005]}]}, 2={0=[ShuffleServerInfo{host[10.1.0.5], grpc port[20013], netty port[21005]}]}, 3={0=[ShuffleServerInfo{host[10.1.0.5], grpc port[20013], netty port[21005]}]}, 4={0=[ShuffleServerInfo{host[10.1.0.5], grpc port[20013], netty port[21005]}]}}, succeedList={}}
[2024-09-29 11:33:06.482] [Executor task launch worker for task 1.0 in stage 1.0 (TID 5)] [INFO] RssShuffleManager.markFailedTask - Mark the task: 5_0 failed.
[2024-09-29 11:33:06.483] [Executor task launch worker for task 1.0 in stage 1.0 (TID 5)] [ERROR] Executor.logError - Exception in task 1.0 in stage 1.0 (TID 5)
org.apache.uniffle.common.exception.RssFetchFailedException: Get shuffle result is failed for appId[local-1727609585090_1727609585064], shuffleId[0]
	at org.apache.uniffle.client.impl.ShuffleWriteClientImpl.getShuffleResultForMultiPart(ShuffleWriteClientImpl.java:889)
	at org.apache.spark.shuffle.RssShuffleManager.getShuffleResultForMultiPart(RssShuffleManager.java:1031)
	at org.apache.spark.shuffle.RssShuffleManager.getReaderImpl(RssShuffleManager.java:653)
	at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:586)
	at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:558)
	at org.apache.spark.rdd.ShuffledRDD.compute(ShuffledRDD.scala:106)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
	at org.apache.spark.rdd.CoalescedRDD.$anonfun$compute$1(CoalescedRDD.scala:99)
	at scala.collection.Iterator$$anon$11.nextCur(Iterator.scala:486)
	at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:492)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
	at org.apache.spark.shuffle.writer.RssShuffleWriter.writeImpl(RssShuffleWriter.java:316)
	at org.apache.spark.shuffle.writer.RssShuffleWriter.write(RssShuffleWriter.java:287)
	at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:52)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:506)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1462)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:509)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:750)
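Note on the "Failed to meet replica requirement" ERROR above: the tracking dump shows every partition assigned to one replica on one server, but succeedList={} — no replica of any partition produced a usable block-id report, so the client aborts the fetch rather than retrying another replica. A minimal sketch of that kind of check, using simplified hypothetical types rather than the real PartitionDataReplicaRequirementTracking API:

import java.util.List;
import java.util.Map;
import java.util.Set;

// Hypothetical, simplified replica check: for each partition we track which
// servers (per replica index) are assigned and which replica indexes have
// successfully reported block IDs; the fetch may proceed only when every
// partition has at least `minReplica` successful replicas.
public class ReplicaRequirementCheck {
  // partitionId -> replicaIndex -> servers assigned to that replica
  private final Map<Integer, Map<Integer, List<String>>> inventory;
  // partitionId -> replica indexes that reported successfully
  private final Map<Integer, Set<Integer>> succeeded;
  private final int minReplica;

  public ReplicaRequirementCheck(Map<Integer, Map<Integer, List<String>>> inventory,
                                 Map<Integer, Set<Integer>> succeeded,
                                 int minReplica) {
    this.inventory = inventory;
    this.succeeded = succeeded;
    this.minReplica = minReplica;
  }

  // Returns true only if every tracked partition met the replica requirement.
  public boolean isSatisfied() {
    for (Integer partition : inventory.keySet()) {
      int ok = succeeded.getOrDefault(partition, Set.of()).size();
      if (ok < minReplica) {
        return false; // mirrors the log above: succeedList={} fails the check
      }
    }
    return true;
  }
}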
[2024-09-29 11:33:06.485] [dispatcher-event-loop-1] [INFO] TaskSetManager.logInfo - Starting task 4.0 in stage 1.0 (TID 8) (fv-az1023-204.cxk2etxb0jje1nvwrtfsy0dgjd.bx.internal.cloudapp.net, executor driver, partition 4, ANY, 4536 bytes) taskResourceAssignments Map()
[2024-09-29 11:33:06.485] [task-result-getter-0] [WARN] TaskSetManager.logWarning - Lost task 1.0 in stage 1.0 (TID 5) (fv-az1023-204.cxk2etxb0jje1nvwrtfsy0dgjd.bx.internal.cloudapp.net executor driver): org.apache.uniffle.common.exception.RssFetchFailedException: Get shuffle result is failed for appId[local-1727609585090_1727609585064], shuffleId[0]
	at org.apache.uniffle.client.impl.ShuffleWriteClientImpl.getShuffleResultForMultiPart(ShuffleWriteClientImpl.java:889)
	at org.apache.spark.shuffle.RssShuffleManager.getShuffleResultForMultiPart(RssShuffleManager.java:1031)
	at org.apache.spark.shuffle.RssShuffleManager.getReaderImpl(RssShuffleManager.java:653)
	at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:586)
	at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:558)
	at org.apache.spark.rdd.ShuffledRDD.compute(ShuffledRDD.scala:106)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
	at org.apache.spark.rdd.CoalescedRDD.$anonfun$compute$1(CoalescedRDD.scala:99)
	at scala.collection.Iterator$$anon$11.nextCur(Iterator.scala:486)
	at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:492)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
	at org.apache.spark.shuffle.writer.RssShuffleWriter.writeImpl(RssShuffleWriter.java:316)
	at org.apache.spark.shuffle.writer.RssShuffleWriter.write(RssShuffleWriter.java:287)
	at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:52)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:506)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1462)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:509)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:750)

[2024-09-29 11:33:06.485] [Executor task launch worker for task 4.0 in stage 1.0 (TID 8)] [INFO] Executor.logInfo - Running task 4.0 in stage 1.0 (TID 8)
[2024-09-29 11:33:06.486] [task-result-getter-0] [ERROR] TaskSetManager.logError - Task 1 in stage 1.0 failed 1 times; aborting job
[2024-09-29 11:33:06.486] [Executor task launch worker for task 4.0 in stage 1.0 (TID 8)] [INFO] RssShuffleWriter.<init> - RssShuffle start write taskAttemptId[16] data with RssHandle[appId local-1727609585090_1727609585064, shuffleId 1].
[2024-09-29 11:33:06.488] [dag-scheduler-event-loop] [INFO] TaskSchedulerImpl.logInfo - Cancelling stage 1
[2024-09-29 11:33:06.488] [dag-scheduler-event-loop] [INFO] TaskSchedulerImpl.logInfo - Killing all running tasks in stage 1: Stage cancelled
[2024-09-29 11:33:06.489] [Executor task launch worker for task 4.0 in stage 1.0 (TID 8)] [INFO] RssShuffleManager.getReader - Get taskId cost 0 ms, and request expected blockIds from 4 tasks for shuffleId[0], partitionId[4, 5]
[2024-09-29 11:33:06.489] [Grpc-4] [INFO] SHUFFLE_SERVER_RPC_AUDIT_LOG.close - cmd=getShuffleResultForMultiPart	statusCode=SUCCESS	from=/10.1.0.5:34804	executionTimeUs=138	appId=local-1727609585090_1727609585064	shuffleId=0	args{partitionsListSize=1, blockIdLayout=blockIdLayout[seq: 21 bits, part: 20 bits, task: 22 bits]}	return{serializedBlockIdsBytes=59}	context{bitmap[0].<size,byte>=<35,310>, partitionBlockCount=7}
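The audit entry above also records the block-id layout: seq 21 bits, part 20 bits, task 22 bits (63 bits of a long). Assuming the conventional high-to-low field order (sequence, partition, task attempt), a small illustrative sketch of packing and unpacking under that layout:

// Packs/unpacks a block ID under the layout from the audit log above:
// seq: 21 bits, part: 20 bits, task: 22 bits. The high-to-low field order
// (seq | part | task) is an assumption here, not taken from the log.
public final class BlockIdLayoutSketch {
  static final int SEQ_BITS = 21, PART_BITS = 20, TASK_BITS = 22;
  static final long SEQ_MAX = (1L << SEQ_BITS) - 1;
  static final long PART_MAX = (1L << PART_BITS) - 1;
  static final long TASK_MAX = (1L << TASK_BITS) - 1;

  static long pack(long seq, long part, long task) {
    if (seq > SEQ_MAX || part > PART_MAX || task > TASK_MAX) {
      throw new IllegalArgumentException("field out of range for layout");
    }
    return (seq << (PART_BITS + TASK_BITS)) | (part << TASK_BITS) | task;
  }

  static long sequence(long blockId)  { return (blockId >>> (PART_BITS + TASK_BITS)) & SEQ_MAX; }
  static long partition(long blockId) { return (blockId >>> TASK_BITS) & PART_MAX; }
  static long task(long blockId)      { return blockId & TASK_MAX; }

  public static void main(String[] args) {
    // Under this ordering, blockId=16777216 from the getMemoryShuffleData
    // entries later in this log decodes to seq=0, part=4, task=0
    // (16777216 == 4 << 22), matching its partitionId=4.
    long id = 16777216L;
    System.out.printf("seq=%d part=%d task=%d%n", sequence(id), partition(id), task(id));
  }
}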
[2024-09-29 11:33:06.490] [Executor task launch worker for task 4.0 in stage 1.0 (TID 8)] [INFO] RssShuffleManager.getReaderImpl - Get shuffle blockId cost 1 ms, and get 7 blockIds for shuffleId[0], startPartition[4], endPartition[5]
[2024-09-29 11:33:06.490] [Executor task launch worker for task 4.0 in stage 1.0 (TID 8)] [INFO] RssShuffleManager.getReaderImpl - Shuffle reader using remote storage hdfs://localhost:40315/rss/test,empty conf
[2024-09-29 11:33:06.491] [dispatcher-event-loop-0] [INFO] Executor.logInfo - Executor is trying to kill task 2.0 in stage 1.0 (TID 6), reason: Stage cancelled
[2024-09-29 11:33:06.491] [dispatcher-event-loop-3] [INFO] Executor.logInfo - Executor is trying to kill task 3.0 in stage 1.0 (TID 7), reason: Stage cancelled
[2024-09-29 11:33:06.491] [dispatcher-event-loop-1] [INFO] Executor.logInfo - Executor is trying to kill task 0.0 in stage 1.0 (TID 4), reason: Stage cancelled
[2024-09-29 11:33:06.491] [dispatcher-event-loop-2] [INFO] Executor.logInfo - Executor is trying to kill task 4.0 in stage 1.0 (TID 8), reason: Stage cancelled
[2024-09-29 11:33:06.492] [dag-scheduler-event-loop] [INFO] TaskSchedulerImpl.logInfo - Stage 1 was cancelled
[2024-09-29 11:33:06.493] [dag-scheduler-event-loop] [INFO] DAGScheduler.logInfo - ShuffleMapStage 1 (repartition at RepartitionTest.java:97) failed in 0.020 s due to Job aborted due to stage failure: Task 1 in stage 1.0 failed 1 times, most recent failure: Lost task 1.0 in stage 1.0 (TID 5) (fv-az1023-204.cxk2etxb0jje1nvwrtfsy0dgjd.bx.internal.cloudapp.net executor driver): org.apache.uniffle.common.exception.RssFetchFailedException: Get shuffle result is failed for appId[local-1727609585090_1727609585064], shuffleId[0]
	at org.apache.uniffle.client.impl.ShuffleWriteClientImpl.getShuffleResultForMultiPart(ShuffleWriteClientImpl.java:889)
	at org.apache.spark.shuffle.RssShuffleManager.getShuffleResultForMultiPart(RssShuffleManager.java:1031)
	at org.apache.spark.shuffle.RssShuffleManager.getReaderImpl(RssShuffleManager.java:653)
	at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:586)
	at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:558)
	at org.apache.spark.rdd.ShuffledRDD.compute(ShuffledRDD.scala:106)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
	at org.apache.spark.rdd.CoalescedRDD.$anonfun$compute$1(CoalescedRDD.scala:99)
	at scala.collection.Iterator$$anon$11.nextCur(Iterator.scala:486)
	at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:492)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
	at org.apache.spark.shuffle.writer.RssShuffleWriter.writeImpl(RssShuffleWriter.java:316)
	at org.apache.spark.shuffle.writer.RssShuffleWriter.write(RssShuffleWriter.java:287)
	at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:52)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:506)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1462)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:509)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:750)

Driver stacktrace:
[2024-09-29 11:33:06.494] [main] [INFO] DAGScheduler.logInfo - Job 0 failed: sortByKey at RepartitionTest.java:99, took 1.333738 s

Check failure on line 0 in org.apache.uniffle.test.RepartitionWithMemoryRssTest

1 out of 10 runs with error: resultCompareTest (org.apache.uniffle.test.RepartitionWithMemoryRssTest)

artifacts/integration-reports-spark3.2.0/integration-test/spark-common/target/surefire-reports/TEST-org.apache.uniffle.test.RepartitionWithMemoryRssTest.xml [took 16s]
Raw output
Job aborted due to stage failure: Task 2 in stage 6.0 failed 1 times, most recent failure: Lost task 2.0 in stage 6.0 (TID 21) (fv-az1023-204.cxk2etxb0jje1nvwrtfsy0dgjd.bx.internal.cloudapp.net executor driver): org.apache.uniffle.common.exception.RssFetchFailedException: Get shuffle result is failed for appId[local-1727609600949_1727609600899], shuffleId[2]
 at org.apache.uniffle.client.impl.ShuffleWriteClientImpl.getShuffleResultForMultiPart(ShuffleWriteClientImpl.java:889)
 at org.apache.spark.shuffle.RssShuffleManager.getShuffleResultForMultiPart(RssShuffleManager.java:1031)
 at org.apache.spark.shuffle.RssShuffleManager.getReaderImpl(RssShuffleManager.java:653)
 at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:586)
 at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:558)
 at org.apache.spark.rdd.ShuffledRDD.compute(ShuffledRDD.scala:106)
 at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
 at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
 at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
 at org.apache.spark.scheduler.Task.run(Task.scala:131)
 at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:506)
 at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1462)
 at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:509)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 at java.lang.Thread.run(Thread.java:750)

Driver stacktrace:
org.apache.spark.SparkException: 
Job aborted due to stage failure: Task 2 in stage 6.0 failed 1 times, most recent failure: Lost task 2.0 in stage 6.0 (TID 21) (fv-az1023-204.cxk2etxb0jje1nvwrtfsy0dgjd.bx.internal.cloudapp.net executor driver): org.apache.uniffle.common.exception.RssFetchFailedException: Get shuffle result is failed for appId[local-1727609600949_1727609600899], shuffleId[2]
	at org.apache.uniffle.client.impl.ShuffleWriteClientImpl.getShuffleResultForMultiPart(ShuffleWriteClientImpl.java:889)
	at org.apache.spark.shuffle.RssShuffleManager.getShuffleResultForMultiPart(RssShuffleManager.java:1031)
	at org.apache.spark.shuffle.RssShuffleManager.getReaderImpl(RssShuffleManager.java:653)
	at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:586)
	at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:558)
	at org.apache.spark.rdd.ShuffledRDD.compute(ShuffledRDD.scala:106)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:506)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1462)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:509)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:750)

Driver stacktrace:
	at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2403)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2352)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2351)
	at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
	at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
	at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2351)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1109)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1109)
	at scala.Option.foreach(Option.scala:407)
	at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1109)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2591)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2533)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2522)
	at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
	at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:898)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2214)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2235)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2254)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2279)
	at org.apache.spark.rdd.RDD.$anonfun$collect$1(RDD.scala:1030)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
	at org.apache.spark.rdd.RDD.withScope(RDD.scala:414)
	at org.apache.spark.rdd.RDD.collect(RDD.scala:1029)
	at org.apache.spark.rdd.PairRDDFunctions.$anonfun$collectAsMap$1(PairRDDFunctions.scala:737)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
	at org.apache.spark.rdd.RDD.withScope(RDD.scala:414)
	at org.apache.spark.rdd.PairRDDFunctions.collectAsMap(PairRDDFunctions.scala:736)
	at org.apache.spark.api.java.JavaPairRDD.collectAsMap(JavaPairRDD.scala:663)
	at org.apache.uniffle.test.RepartitionTest.repartitionApp(RepartitionTest.java:99)
	at org.apache.uniffle.test.RepartitionTest.runTest(RepartitionTest.java:49)
	at org.apache.uniffle.test.SparkIntegrationTestBase.runSparkApp(SparkIntegrationTestBase.java:102)
	at org.apache.uniffle.test.SparkIntegrationTestBase.run(SparkIntegrationTestBase.java:68)
	at org.apache.uniffle.test.RepartitionTest.resultCompareTest(RepartitionTest.java:44)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:725)
	at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
	at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
	at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149)
	at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140)
	at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84)
	at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115)
	at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105)
	at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
	at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
	at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
	at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
	at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104)
	at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98)
	at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$7(TestMethodTestDescriptor.java:214)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:210)
	at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:135)
	at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:66)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:151)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
	at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
	at java.util.ArrayList.forEach(ArrayList.java:1259)
	at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:41)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:155)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
	at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
	at java.util.ArrayList.forEach(ArrayList.java:1259)
	at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:41)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:155)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
	at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
	at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:35)
	at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57)
	at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:54)
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:107)
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88)
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54)
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67)
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52)
	at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:114)
	at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:86)
	at org.junit.platform.launcher.core.DefaultLauncherSession$DelegatingLauncher.execute(DefaultLauncherSession.java:86)
	at org.junit.platform.launcher.core.SessionPerRequestLauncher.execute(SessionPerRequestLauncher.java:53)
	at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:150)
	at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:124)
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
	at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
Caused by: org.apache.uniffle.common.exception.RssFetchFailedException: Get shuffle result is failed for appId[local-1727609600949_1727609600899], shuffleId[2]
	at org.apache.uniffle.client.impl.ShuffleWriteClientImpl.getShuffleResultForMultiPart(ShuffleWriteClientImpl.java:889)
	at org.apache.spark.shuffle.RssShuffleManager.getShuffleResultForMultiPart(RssShuffleManager.java:1031)
	at org.apache.spark.shuffle.RssShuffleManager.getReaderImpl(RssShuffleManager.java:653)
	at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:586)
	at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:558)
	at org.apache.spark.rdd.ShuffledRDD.compute(ShuffledRDD.scala:106)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:506)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1462)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:509)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:750)
[2024-09-29 11:33:06.910] [main] [INFO] MiniDFSCluster.<init> - starting cluster: numNameNodes=1, numDataNodes=1
Formatting using clusterid: testClusterID
[2024-09-29 11:33:06.911] [main] [INFO] FSEditLog.newInstance - Edit logging is async:true
[2024-09-29 11:33:06.912] [main] [INFO] FSNamesystem.<init> - KeyProvider: null
[2024-09-29 11:33:06.912] [main] [INFO] FSNamesystem.<init> - fsLock is fair: true
[2024-09-29 11:33:06.912] [main] [INFO] FSNamesystem.<init> - Detailed lock hold time metrics enabled: false
[2024-09-29 11:33:06.913] [main] [INFO] DatanodeManager.<init> - dfs.block.invalidate.limit=1000
[2024-09-29 11:33:06.913] [main] [INFO] DatanodeManager.<init> - dfs.namenode.datanode.registration.ip-hostname-check=true
[2024-09-29 11:33:06.913] [main] [INFO] BlockManager.printBlockDeletionTime - dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
[2024-09-29 11:33:06.913] [main] [INFO] BlockManager.printBlockDeletionTime - The block deletion will start around 2024 Sep 29 11:33:06
[2024-09-29 11:33:06.913] [main] [INFO] GSet.computeCapacity - Computing capacity for map BlocksMap
[2024-09-29 11:33:06.914] [main] [INFO] GSet.computeCapacity - VM type       = 64-bit
[2024-09-29 11:33:06.914] [main] [INFO] GSet.computeCapacity - 2.0% max memory 4.4 GB = 91.0 MB
[2024-09-29 11:33:06.914] [main] [INFO] GSet.computeCapacity - capacity      = 2^24 = 16777216 entries
[2024-09-29 11:33:06.915] [main] [INFO] BlockManager.createBlockTokenSecretManager - dfs.block.access.token.enable=false
[2024-09-29 11:33:06.916] [main] [INFO] BlockManager.<init> - defaultReplication         = 1
[2024-09-29 11:33:06.916] [main] [INFO] BlockManager.<init> - maxReplication             = 512
[2024-09-29 11:33:06.916] [main] [INFO] BlockManager.<init> - minReplication             = 1
[2024-09-29 11:33:06.916] [main] [INFO] BlockManager.<init> - maxReplicationStreams      = 2
[2024-09-29 11:33:06.916] [main] [INFO] BlockManager.<init> - replicationRecheckInterval = 3000
[2024-09-29 11:33:06.916] [main] [INFO] BlockManager.<init> - encryptDataTransfer        = false
[2024-09-29 11:33:06.917] [main] [INFO] BlockManager.<init> - maxNumBlocksToLog          = 1000
[2024-09-29 11:33:06.917] [main] [INFO] FSNamesystem.<init> - fsOwner             = runner (auth:SIMPLE)
[2024-09-29 11:33:06.917] [main] [INFO] FSNamesystem.<init> - supergroup          = supergroup
[2024-09-29 11:33:06.917] [main] [INFO] FSNamesystem.<init> - isPermissionEnabled = true
[2024-09-29 11:33:06.917] [main] [INFO] FSNamesystem.<init> - HA Enabled: false
[2024-09-29 11:33:06.917] [main] [INFO] FSNamesystem.<init> - Append Enabled: true
[2024-09-29 11:33:06.918] [main] [INFO] GSet.computeCapacity - Computing capacity for map INodeMap
[2024-09-29 11:33:06.918] [main] [INFO] GSet.computeCapacity - VM type       = 64-bit
[2024-09-29 11:33:06.918] [main] [INFO] GSet.computeCapacity - 1.0% max memory 4.4 GB = 45.5 MB
[2024-09-29 11:33:06.918] [main] [INFO] GSet.computeCapacity - capacity      = 2^23 = 8388608 entries
[2024-09-29 11:33:06.919] [main] [INFO] FSDirectory.<init> - ACLs enabled? false
[2024-09-29 11:33:06.919] [main] [INFO] FSDirectory.<init> - XAttrs enabled? true
[2024-09-29 11:33:06.919] [main] [INFO] NameNode.<init> - Caching file names occurring more than 10 times
[2024-09-29 11:33:06.919] [main] [INFO] GSet.computeCapacity - Computing capacity for map cachedBlocks
[2024-09-29 11:33:06.919] [main] [INFO] GSet.computeCapacity - VM type       = 64-bit
[2024-09-29 11:33:06.920] [main] [INFO] GSet.computeCapacity - 0.25% max memory 4.4 GB = 11.4 MB
[2024-09-29 11:33:06.920] [main] [INFO] GSet.computeCapacity - capacity      = 2^21 = 2097152 entries
[2024-09-29 11:33:06.920] [main] [INFO] FSNamesystem.<init> - dfs.namenode.safemode.threshold-pct = 0.9990000128746033
[2024-09-29 11:33:06.920] [main] [INFO] FSNamesystem.<init> - dfs.namenode.safemode.min.datanodes = 0
[2024-09-29 11:33:06.921] [main] [INFO] FSNamesystem.<init> - dfs.namenode.safemode.extension     = 0
[2024-09-29 11:33:06.921] [main] [INFO] TopMetrics.logConf - NNTop conf: dfs.namenode.top.window.num.buckets = 10
[2024-09-29 11:33:06.921] [main] [INFO] TopMetrics.logConf - NNTop conf: dfs.namenode.top.num.users = 10
[2024-09-29 11:33:06.921] [main] [INFO] TopMetrics.logConf - NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
[2024-09-29 11:33:06.921] [main] [INFO] FSNamesystem.initRetryCache - Retry cache on namenode is enabled
[2024-09-29 11:33:06.921] [main] [INFO] FSNamesystem.initRetryCache - Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
[2024-09-29 11:33:06.921] [main] [INFO] GSet.computeCapacity - Computing capacity for map NameNodeRetryCache
[2024-09-29 11:33:06.922] [main] [INFO] GSet.computeCapacity - VM type       = 64-bit
[2024-09-29 11:33:06.922] [main] [INFO] GSet.computeCapacity - 0.029999999329447746% max memory 4.4 GB = 1.4 MB
[2024-09-29 11:33:06.922] [main] [INFO] GSet.computeCapacity - capacity      = 2^17 = 131072 entries
[2024-09-29 11:33:06.922] [main] [INFO] FSImage.format - Allocated new BlockPoolId: BP-1687994710-127.0.0.1-1727609586922
[2024-09-29 11:33:06.925] [main] [INFO] Storage.format - Storage directory /home/runner/work/incubator-uniffle/incubator-uniffle/integration-test/spark-common/target/tmp/junit4459524610964316179/name1 has been successfully formatted.
[2024-09-29 11:33:06.927] [main] [INFO] Storage.format - Storage directory /home/runner/work/incubator-uniffle/incubator-uniffle/integration-test/spark-common/target/tmp/junit4459524610964316179/name2 has been successfully formatted.
[2024-09-29 11:33:06.927] [FSImageSaver for /home/runner/work/incubator-uniffle/incubator-uniffle/integration-test/spark-common/target/tmp/junit4459524610964316179/name1 of type IMAGE_AND_EDITS] [INFO] FSImageFormatProtobuf.save - Saving image file /home/runner/work/incubator-uniffle/incubator-uniffle/integration-test/spark-common/target/tmp/junit4459524610964316179/name1/current/fsimage.ckpt_0000000000000000000 using no compression
[2024-09-29 11:33:06.929] [FSImageSaver for /home/runner/work/incubator-uniffle/incubator-uniffle/integration-test/spark-common/target/tmp/junit4459524610964316179/name2 of type IMAGE_AND_EDITS] [INFO] FSImageFormatProtobuf.save - Saving image file /home/runner/work/incubator-uniffle/incubator-uniffle/integration-test/spark-common/target/tmp/junit4459524610964316179/name2/current/fsimage.ckpt_0000000000000000000 using no compression
[2024-09-29 11:33:06.935] [FSImageSaver for /home/runner/work/incubator-uniffle/incubator-uniffle/integration-test/spark-common/target/tmp/junit4459524610964316179/name2 of type IMAGE_AND_EDITS] [INFO] FSImageFormatProtobuf.save - Image file /home/runner/work/incubator-uniffle/incubator-uniffle/integration-test/spark-common/target/tmp/junit4459524610964316179/name2/current/fsimage.ckpt_0000000000000000000 of size 322 bytes saved in 0 seconds.
[2024-09-29 11:33:06.937] [FSImageSaver for /home/runner/work/incubator-uniffle/incubator-uniffle/integration-test/spark-common/target/tmp/junit4459524610964316179/name1 of type IMAGE_AND_EDITS] [INFO] FSImageFormatProtobuf.save - Image file /home/runner/work/incubator-uniffle/incubator-uniffle/integration-test/spark-common/target/tmp/junit4459524610964316179/name1/current/fsimage.ckpt_0000000000000000000 of size 322 bytes saved in 0 seconds.
[2024-09-29 11:33:06.938] [main] [INFO] NNStorageRetentionManager.getImageTxIdToRetain - Going to retain 1 images with txid >= 0
[2024-09-29 11:33:06.939] [main] [INFO] NameNode.createNameNode - createNameNode []
[2024-09-29 11:33:06.940] [main] [WARN] MetricsConfig.loadFirst - Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
[2024-09-29 11:33:06.941] [main] [INFO] MetricsSystemImpl.startTimer - Scheduled Metric snapshot period at 10 second(s).
[2024-09-29 11:33:06.941] [main] [INFO] MetricsSystemImpl.start - NameNode metrics system started
[2024-09-29 11:33:06.942] [main] [INFO] NameNode.setClientNamenodeAddress - fs.defaultFS is hdfs://127.0.0.1:0
[2024-09-29 11:33:06.945] [org.apache.hadoop.util.JvmPauseMonitor$Monitor@28aedf6e] [INFO] JvmPauseMonitor.run - Starting JVM pause monitor
[2024-09-29 11:33:06.945] [main] [INFO] DFSUtil.httpServerTemplateForNNAndJN - Starting Web-server for hdfs at: http://localhost:0
[2024-09-29 11:33:06.946] [main] [INFO] AuthenticationFilter.constructSecretProvider - Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
[2024-09-29 11:33:06.946] [main] [WARN] HttpRequestLog.getRequestLog - Jetty request log can only be enabled using Log4j
[2024-09-29 11:33:06.947] [main] [INFO] HttpServer2.addGlobalFilter - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
[2024-09-29 11:33:06.947] [main] [INFO] HttpServer2.addFilter - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
[2024-09-29 11:33:06.947] [main] [INFO] HttpServer2.addFilter - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
[2024-09-29 11:33:06.947] [main] [INFO] HttpServer2.initWebHdfs - Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
[2024-09-29 11:33:06.948] [main] [INFO] HttpServer2.addJerseyResourcePackage - addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
[2024-09-29 11:33:06.948] [main] [INFO] HttpServer2.openListeners - Jetty bound to port 41043
[2024-09-29 11:33:06.948] [main] [INFO] log.info - jetty-6.1.26
[2024-09-29 11:33:06.951] [main] [INFO] log.info - Extract jar:file:/home/runner/.m2/repository/org/apache/hadoop/hadoop-hdfs/2.8.5/hadoop-hdfs-2.8.5-tests.jar!/webapps/hdfs to /home/runner/work/incubator-uniffle/incubator-uniffle/integration-test/spark-common/target/tmp/Jetty_localhost_41043_hdfs____f1l96k/webapp
[2024-09-29 11:33:07.015] [main] [INFO] log.info - Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41043
[2024-09-29 11:33:07.016] [main] [INFO] FSEditLog.newInstance - Edit logging is async:true
[2024-09-29 11:33:07.016] [main] [INFO] FSNamesystem.<init> - KeyProvider: null
[2024-09-29 11:33:07.016] [main] [INFO] FSNamesystem.<init> - fsLock is fair: true
[2024-09-29 11:33:07.016] [main] [INFO] FSNamesystem.<init> - Detailed lock hold time metrics enabled: false
[2024-09-29 11:33:07.016] [main] [INFO] DatanodeManager.<init> - dfs.block.invalidate.limit=1000
[2024-09-29 11:33:07.016] [main] [INFO] DatanodeManager.<init> - dfs.namenode.datanode.registration.ip-hostname-check=true
[2024-09-29 11:33:07.016] [main] [INFO] BlockManager.printBlockDeletionTime - dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
[2024-09-29 11:33:07.017] [main] [INFO] BlockManager.printBlockDeletionTime - The block deletion will start around 2024 Sep 29 11:33:07
[2024-09-29 11:33:07.017] [main] [INFO] GSet.computeCapacity - Computing capacity for map BlocksMap
[2024-09-29 11:33:07.017] [main] [INFO] GSet.computeCapacity - VM type       = 64-bit
[2024-09-29 11:33:07.017] [main] [INFO] GSet.computeCapacity - 2.0% max memory 4.4 GB = 91.0 MB
[2024-09-29 11:33:07.017] [main] [INFO] GSet.computeCapacity - capacity      = 2^24 = 16777216 entries
[2024-09-29 11:33:07.019] [main] [INFO] BlockManager.createBlockTokenSecretManager - dfs.block.access.token.enable=false
[2024-09-29 11:33:07.019] [main] [INFO] BlockManager.<init> - defaultReplication         = 1
[2024-09-29 11:33:07.019] [main] [INFO] BlockManager.<init> - maxReplication             = 512
[2024-09-29 11:33:07.019] [main] [INFO] BlockManager.<init> - minReplication             = 1
[2024-09-29 11:33:07.019] [main] [INFO] BlockManager.<init> - maxReplicationStreams      = 2
[2024-09-29 11:33:07.019] [main] [INFO] BlockManager.<init> - replicationRecheckInterval = 3000
[2024-09-29 11:33:07.020] [main] [INFO] BlockManager.<init> - encryptDataTransfer        = false
[2024-09-29 11:33:07.020] [main] [INFO] BlockManager.<init> - maxNumBlocksToLog          = 1000
[2024-09-29 11:33:07.020] [main] [INFO] FSNamesystem.<init> - fsOwner             = runner (auth:SIMPLE)
[2024-09-29 11:33:07.020] [main] [INFO] FSNamesystem.<init> - supergroup          = supergroup
[2024-09-29 11:33:07.020] [main] [INFO] FSNamesystem.<init> - isPermissionEnabled = true
[2024-09-29 11:33:07.020] [main] [INFO] FSNamesystem.<init> - HA Enabled: false
[2024-09-29 11:33:07.020] [main] [INFO] FSNamesystem.<init> - Append Enabled: true
[2024-09-29 11:33:07.021] [main] [INFO] GSet.computeCapacity - Computing capacity for map INodeMap
[2024-09-29 11:33:07.021] [main] [INFO] GSet.computeCapacity - VM type       = 64-bit
[2024-09-29 11:33:07.021] [main] [INFO] GSet.computeCapacity - 1.0% max memory 4.4 GB = 45.5 MB
[2024-09-29 11:33:07.021] [main] [INFO] GSet.computeCapacity - capacity      = 2^23 = 8388608 entries
[2024-09-29 11:33:07.023] [main] [INFO] FSDirectory.<init> - ACLs enabled? false
[2024-09-29 11:33:07.023] [main] [INFO] FSDirectory.<init> - XAttrs enabled? true
[2024-09-29 11:33:07.023] [main] [INFO] NameNode.<init> - Caching file names occurring more than 10 times
[2024-09-29 11:33:07.023] [main] [INFO] GSet.computeCapacity - Computing capacity for map cachedBlocks
[2024-09-29 11:33:07.023] [main] [INFO] GSet.computeCapacity - VM type       = 64-bit
[2024-09-29 11:33:07.023] [main] [INFO] GSet.computeCapacity - 0.25% max memory 4.4 GB = 11.4 MB
[2024-09-29 11:33:07.024] [main] [INFO] GSet.computeCapacity - capacity      = 2^21 = 2097152 entries
[2024-09-29 11:33:07.024] [main] [INFO] FSNamesystem.<init> - dfs.namenode.safemode.threshold-pct = 0.9990000128746033
[2024-09-29 11:33:07.024] [main] [INFO] FSNamesystem.<init> - dfs.namenode.safemode.min.datanodes = 0
[2024-09-29 11:33:07.024] [main] [INFO] FSNamesystem.<init> - dfs.namenode.safemode.extension     = 0
[2024-09-29 11:33:07.024] [main] [INFO] TopMetrics.logConf - NNTop conf: dfs.namenode.top.window.num.buckets = 10
[2024-09-29 11:33:07.024] [main] [INFO] TopMetrics.logConf - NNTop conf: dfs.namenode.top.num.users = 10
[2024-09-29 11:33:07.025] [main] [INFO] TopMetrics.logConf - NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
[2024-09-29 11:33:07.025] [main] [INFO] FSNamesystem.initRetryCache - Retry cache on namenode is enabled
[2024-09-29 11:33:07.025] [main] [INFO] FSNamesystem.initRetryCache - Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
[2024-09-29 11:33:07.025] [main] [INFO] GSet.computeCapacity - Computing capacity for map NameNodeRetryCache
[2024-09-29 11:33:07.025] [main] [INFO] GSet.computeCapacity - VM type       = 64-bit
[2024-09-29 11:33:07.025] [main] [INFO] GSet.computeCapacity - 0.029999999329447746% max memory 4.4 GB = 1.4 MB
[2024-09-29 11:33:07.025] [main] [INFO] GSet.computeCapacity - capacity      = 2^17 = 131072 entries
[2024-09-29 11:33:07.027] [main] [INFO] Storage.tryLock - Lock on /home/runner/work/incubator-uniffle/incubator-uniffle/integration-test/spark-common/target/tmp/junit4459524610964316179/name1/in_use.lock acquired by nodename 11838@action-host
[2024-09-29 11:33:07.027] [main] [INFO] Storage.tryLock - Lock on /home/runner/work/incubator-uniffle/incubator-uniffle/integration-test/spark-common/target/tmp/junit4459524610964316179/name2/in_use.lock acquired by nodename 11838@action-host
[2024-09-29 11:33:07.028] [main] [INFO] FileJournalManager.recoverUnfinalizedSegments - Recovering unfinalized segments in /home/runner/work/incubator-uniffle/incubator-uniffle/integration-test/spark-common/target/tmp/junit4459524610964316179/name1/current
[2024-09-29 11:33:07.028] [main] [INFO] FileJournalManager.recoverUnfinalizedSegments - Recovering unfinalized segments in /home/runner/work/incubator-uniffle/incubator-uniffle/integration-test/spark-common/target/tmp/junit4459524610964316179/name2/current
[2024-09-29 11:33:07.028] [main] [INFO] FSImage.loadFSImage - No edit log streams selected.
[2024-09-29 11:33:07.029] [main] [INFO] FSImage.loadFSImageFile - Planning to load image: FSImageFile(file=/home/runner/work/incubator-uniffle/incubator-uniffle/integration-test/spark-common/target/tmp/junit4459524610964316179/name1/current/fsimage_0000000000000000000, cpktTxId=0000000000000000000)
[2024-09-29 11:33:07.029] [main] [INFO] FSImageFormatPBINode.loadINodeSection - Loading 1 INodes.
[2024-09-29 11:33:07.029] [main] [INFO] FSImageFormatProtobuf.load - Loaded FSImage in 0 seconds.
[2024-09-29 11:33:07.029] [main] [INFO] FSImage.loadFSImage - Loaded image for txid 0 from /home/runner/work/incubator-uniffle/incubator-uniffle/integration-test/spark-common/target/tmp/junit4459524610964316179/name1/current/fsimage_0000000000000000000
[2024-09-29 11:33:07.030] [main] [INFO] FSNamesystem.loadFSImage - Need to save fs image? false (staleImage=false, haEnabled=false, isRollingUpgrade=false)
[2024-09-29 11:33:07.030] [main] [INFO] FSEditLog.startLogSegment - Starting log segment at 1
[2024-09-29 11:33:07.036] [main] [INFO] NameCache.initialized - initialized with 0 entries 0 lookups
[2024-09-29 11:33:07.0…d started:appId=local-1727609600949_1727609600899, shuffleId=1,taskId=18_0, partitions: [4, 5), maps: [0, 2147483647)
[2024-09-29 11:33:25.394] [nioEventLoopGroup-219-1] [INFO] SHUFFLE_SERVER_RPC_AUDIT_LOG.close - cmd=getMemoryShuffleData	statusCode=SUCCESS	from=/10.1.0.5:42970	executionTimeUs=558	appId=local-1727609600949_1727609600899	shuffleId=1	args{requestId=503, partitionId=4, blockId=-1, readBufferSize=1048576}	return{len=1132697, bufferSegments=2}
[2024-09-29 11:33:25.394] [Executor task launch worker for task 4.0 in stage 5.0 (TID 18)] [INFO] ShuffleServerGrpcNettyClient.getInMemoryShuffleData - GetInMemoryShuffleData size:1132697(bytes) from 10.1.0.5:21006 for appId[local-1727609600949_1727609600899], shuffleId[1], partitionId[4], lastBlockId[-1] cost:1(ms)
[2024-09-29 11:33:25.482] [nioEventLoopGroup-219-2] [INFO] SHUFFLE_SERVER_RPC_AUDIT_LOG.close - cmd=getMemoryShuffleData	statusCode=SUCCESS	from=/10.1.0.5:42986	executionTimeUs=537	appId=local-1727609600949_1727609600899	shuffleId=1	args{requestId=504, partitionId=0, blockId=4, readBufferSize=1048576}	return{len=1037809, bufferSegments=1}
[2024-09-29 11:33:25.482] [nioEventLoopGroup-219-1] [INFO] SHUFFLE_SERVER_RPC_AUDIT_LOG.close - cmd=getMemoryShuffleData	statusCode=SUCCESS	from=/10.1.0.5:42970	executionTimeUs=567	appId=local-1727609600949_1727609600899	shuffleId=1	args{requestId=505, partitionId=4, blockId=16777216, readBufferSize=1048576}	return{len=1133764, bufferSegments=2}
[2024-09-29 11:33:25.482] [Executor task launch worker for task 0.0 in stage 5.0 (TID 14)] [INFO] ShuffleServerGrpcNettyClient.getInMemoryShuffleData - GetInMemoryShuffleData size:1037809(bytes) from 10.1.0.5:21006 for appId[local-1727609600949_1727609600899], shuffleId[1], partitionId[0], lastBlockId[4] cost:1(ms)
[2024-09-29 11:33:25.483] [Executor task launch worker for task 4.0 in stage 5.0 (TID 18)] [INFO] ShuffleServerGrpcNettyClient.getInMemoryShuffleData - GetInMemoryShuffleData size:1133764(bytes) from 10.1.0.5:21006 for appId[local-1727609600949_1727609600899], shuffleId[1], partitionId[4], lastBlockId[16777216] cost:2(ms)
[2024-09-29 11:33:25.566] [nioEventLoopGroup-219-1] [INFO] SHUFFLE_SERVER_RPC_AUDIT_LOG.close - cmd=getMemoryShuffleData	statusCode=SUCCESS	from=/10.1.0.5:42970	executionTimeUs=375	appId=local-1727609600949_1727609600899	shuffleId=1	args{requestId=506, partitionId=4, blockId=16777220, readBufferSize=1048576}	return{len=566047, bufferSegments=1}
[2024-09-29 11:33:25.566] [Executor task launch worker for task 0.0 in stage 5.0 (TID 14)] [INFO] ShuffleReadClientImpl.logStatics - Metrics for shuffleId[1], partitionId[0], read data cost 5 ms, copy data cost 0 ms, crc check cost 1 ms
[2024-09-29 11:33:25.566] [Executor task launch worker for task 4.0 in stage 5.0 (TID 18)] [INFO] ShuffleServerGrpcNettyClient.getInMemoryShuffleData - GetInMemoryShuffleData size:566047(bytes) from 10.1.0.5:21006 for appId[local-1727609600949_1727609600899], shuffleId[1], partitionId[4], lastBlockId[16777220] cost:1(ms)
[2024-09-29 11:33:25.566] [Executor task launch worker for task 0.0 in stage 5.0 (TID 14)] [INFO] ComposedClientReadHandler.logConsumedBlockInfo - Client read 5 blocks from [ShuffleServerInfo{host[10.1.0.5], grpc port[20015], netty port[21006]}], Consumed[ hot:5 warm:0 cold:0 frozen:0 ], Skipped[ hot:0 warm:0 cold:0 frozen:0 ]
[2024-09-29 11:33:25.566] [Executor task launch worker for task 0.0 in stage 5.0 (TID 14)] [INFO] ComposedClientReadHandler.logConsumedBlockInfo - Client read 5193625 bytes from [ShuffleServerInfo{host[10.1.0.5], grpc port[20015], netty port[21006]}], Consumed[ hot:5193625 warm:0 cold:0 frozen:0 ], Skipped[ hot:0 warm:0 cold:0 frozen:0 ]
[2024-09-29 11:33:25.566] [Executor task launch worker for task 0.0 in stage 5.0 (TID 14)] [INFO] ComposedClientReadHandler.logConsumedBlockInfo - Client read 14008420 uncompressed bytes from [ShuffleServerInfo{host[10.1.0.5], grpc port[20015], netty port[21006]}], Consumed[ hot:14008420 warm:0 cold:0 frozen:0 ], Skipped[ hot:0 warm:0 cold:0 frozen:0 ]
[2024-09-29 11:33:25.566] [Executor task launch worker for task 0.0 in stage 5.0 (TID 14)] [INFO] RssShuffleDataIterator.hasNext - Fetch 5193625 bytes cost 9 ms and 2 ms to serialize, 8 ms to decompress with unCompressionLength[14008420]
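The getMemoryShuffleData entries above show cursor-style paging: the first request passes blockId=-1, and each following request passes the last block ID already received (-1 -> 4, -1 -> 16777216 -> 16777220) until the partition's in-memory data is drained. A sketch of that read loop against a hypothetical client interface (the real Netty client differs):

import java.util.List;

// Cursor-style paging as in the audit entries above: each request carries
// the last block ID already received (-1 for the first page) plus a read
// buffer size; the server returns the next batch of buffer segments.
// The client interface below is hypothetical, for illustration only.
interface MemoryShuffleClient {
  // Returns the next segments after lastBlockId, or an empty list when done.
  List<Segment> getInMemoryShuffleData(int partitionId, long lastBlockId, int readBufferSize);
}

record Segment(long blockId, byte[] data) {}

class MemoryShuffleReader {
  static long readAll(MemoryShuffleClient client, int partitionId) {
    final int readBufferSize = 1048576; // matches readBufferSize in the audit log
    long lastBlockId = -1;              // -1 means "start from the beginning"
    long totalBytes = 0;
    while (true) {
      List<Segment> segments =
          client.getInMemoryShuffleData(partitionId, lastBlockId, readBufferSize);
      if (segments.isEmpty()) {
        break; // no more in-memory data for this partition
      }
      for (Segment s : segments) {
        totalBytes += s.data().length;
      }
      lastBlockId = segments.get(segments.size() - 1).blockId(); // advance the cursor
    }
    return totalBytes;
  }
}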
[2024-09-29 11:33:25.567] [Executor task launch worker for task 0.0 in stage 5.0 (TID 14)] [INFO] WriteBufferManager.clear - Flush total buffer for shuffleId[2] with allocated[16777216], dataSize[738], memoryUsed[1310720], number of blocks[5], flush ratio[1.0]
[2024-09-29 11:33:25.568] [Grpc-1] [INFO] SHUFFLE_SERVER_RPC_AUDIT_LOG.close - cmd=requireBuffer	statusCode=SUCCESS	from=/10.1.0.5:47630	executionTimeUs=61	appId=local-1727609600949_1727609600899	shuffleId=2	args{requireSize=1985, partitionIdsSize=5}	return{requireBufferId=28}
[2024-09-29 11:33:25.568] [nioEventLoopGroup-219-1] [INFO] SHUFFLE_SERVER_RPC_AUDIT_LOG.close - cmd=sendShuffleData	statusCode=SUCCESS	from=/10.1.0.5:42970	executionTimeUs=113	appId=local-1727609600949_1727609600899	shuffleId=2	args{requireBufferId=28, requireSize=1985, isPreAllocated=true, requireBlocksSize=768, stageAttemptNumber=0}
[2024-09-29 11:33:25.568] [Executor task launch worker for task 0.0 in stage 5.0 (TID 14)] [INFO] RssShuffleWriter.writeImpl - Finish write shuffle for appId[local-1727609600949_1727609600899], shuffleId[2], taskId[14_0] with write 1 ms, include checkSendResult[1], commit[0], WriteBufferManager cost copyTime[0], writeTime[0], serializeTime[0], compressTime[0], estimateTime[0], requireMemoryTime[0], uncompressedDataLen[738]
[2024-09-29 11:33:25.569] [Grpc-2] [INFO] ShuffleServerGrpcService.reportShuffleResult - Accepted blockIds report for 5 blocks across 5 partitions as shuffle result for task appId[local-1727609600949_1727609600899], shuffleId[2], taskAttemptId[0]
[2024-09-29 11:33:25.569] [Grpc-2] [INFO] SHUFFLE_SERVER_RPC_AUDIT_LOG.close - cmd=reportShuffleResult	statusCode=SUCCESS	from=/10.1.0.5:47630	executionTimeUs=149	appId=local-1727609600949_1727609600899	shuffleId=2	args{taskAttemptId=0, bitmapNum=1, partitionToBlockIdsSize=5}	context{updatedBlockCount=5, expectedBlockCount=5}
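The reportShuffleResult entry above carries the context{...} section this change introduces (here updatedBlockCount and expectedBlockCount) next to the existing args{...} and return{...} sections. A minimal sketch of building such a tab-separated audit line; only the rendered shape is taken from the log, the builder API itself is hypothetical, not the actual Uniffle audit context class:

import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

// Minimal sketch of an audit line in the shape seen above:
// cmd=...<TAB>statusCode=...<TAB>...<TAB>args{...}<TAB>return{...}<TAB>context{...}
class RpcAuditContextSketch {
  private final StringBuilder line = new StringBuilder();
  private final Map<String, Object> context = new LinkedHashMap<>();

  RpcAuditContextSketch(String cmd) { line.append("cmd=").append(cmd); }

  RpcAuditContextSketch field(String key, Object value) {
    line.append('\t').append(key).append('=').append(value);
    return this;
  }

  // Extra context is collected separately and rendered last, so handlers can
  // attach details (e.g. updatedBlockCount) as they execute.
  RpcAuditContextSketch context(String key, Object value) {
    context.put(key, value);
    return this;
  }

  String close() {
    if (!context.isEmpty()) {
      line.append("\tcontext{")
          .append(context.entrySet().stream()
              .map(e -> e.getKey() + "=" + e.getValue())
              .collect(Collectors.joining(", ")))
          .append('}');
    }
    return line.toString();
  }
}

For example, new RpcAuditContextSketch("reportShuffleResult").field("statusCode", "SUCCESS").context("updatedBlockCount", 5).context("expectedBlockCount", 5).close() renders the head and tail of a line like the one above.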
[2024-09-29 11:33:25.569] [Executor task launch worker for task 0.0 in stage 5.0 (TID 14)] [INFO] ShuffleWriteClientImpl.reportShuffleResult - Report shuffle result to ShuffleServerInfo{host[10.1.0.5], grpc port[20015], netty port[21006]} for appId[local-1727609600949_1727609600899], shuffleId[2] successfully
[2024-09-29 11:33:25.569] [Executor task launch worker for task 0.0 in stage 5.0 (TID 14)] [INFO] RssShuffleWriter.stop - Report shuffle result for task[0] with bitmapNum[1] cost 1 ms
[2024-09-29 11:33:25.569] [Executor task launch worker for task 0.0 in stage 5.0 (TID 14)] [INFO] Executor.logInfo - Finished task 0.0 in stage 5.0 (TID 14). 1380 bytes result sent to driver
[2024-09-29 11:33:25.570] [task-result-getter-1] [INFO] TaskSetManager.logInfo - Finished task 0.0 in stage 5.0 (TID 14) in 701 ms on fv-az1023-204.cxk2etxb0jje1nvwrtfsy0dgjd.bx.internal.cloudapp.net (executor driver) (4/5)
[2024-09-29 11:33:25.597] [dispatcher-BlockManagerMaster] [INFO] BlockManagerInfo.logInfo - Removed broadcast_5_piece0 on fv-az1023-204.cxk2etxb0jje1nvwrtfsy0dgjd.bx.internal.cloudapp.net:43265 in memory (size: 4.5 KiB, free: 2.5 GiB)
[2024-09-29 11:33:25.628] [Executor task launch worker for task 4.0 in stage 5.0 (TID 18)] [INFO] ShuffleReadClientImpl.logStatics - Metrics for shuffleId[1], partitionId[4], read data cost 4 ms, copy data cost 0 ms, crc check cost 1 ms
[2024-09-29 11:33:25.628] [Executor task launch worker for task 4.0 in stage 5.0 (TID 18)] [INFO] ComposedClientReadHandler.logConsumedBlockInfo - Client read 5 blocks from [ShuffleServerInfo{host[10.1.0.5], grpc port[20015], netty port[21006]}], Consumed[ hot:5 warm:0 cold:0 frozen:0 ], Skipped[ hot:0 warm:0 cold:0 frozen:0 ]
[2024-09-29 11:33:25.628] [Executor task launch worker for task 4.0 in stage 5.0 (TID 18)] [INFO] ComposedClientReadHandler.logConsumedBlockInfo - Client read 2832508 bytes from [ShuffleServerInfo{host[10.1.0.5], grpc port[20015], netty port[21006]}], Consumed[ hot:2832508 warm:0 cold:0 frozen:0 ], Skipped[ hot:0 warm:0 cold:0 frozen:0 ]
[2024-09-29 11:33:25.628] [Executor task launch worker for task 4.0 in stage 5.0 (TID 18)] [INFO] ComposedClientReadHandler.logConsumedBlockInfo - Client read 7834244 uncompressed bytes from [ShuffleServerInfo{host[10.1.0.5], grpc port[20015], netty port[21006]}], Consumed[ hot:7834244 warm:0 cold:0 frozen:0 ], Skipped[ hot:0 warm:0 cold:0 frozen:0 ]
[2024-09-29 11:33:25.628] [Executor task launch worker for task 4.0 in stage 5.0 (TID 18)] [INFO] RssShuffleDataIterator.hasNext - Fetch 2832508 bytes cost 5 ms and 1 ms to serialize, 2 ms to decompress with unCompressionLength[7834244]
[2024-09-29 11:33:25.628] [Executor task launch worker for task 4.0 in stage 5.0 (TID 18)] [INFO] WriteBufferManager.clear - Flush total buffer for shuffleId[2] with allocated[16777216], dataSize[495], memoryUsed[1310720], number of blocks[5], flush ratio[1.0]
[2024-09-29 11:33:25.629] [Grpc-3] [INFO] SHUFFLE_SERVER_RPC_AUDIT_LOG.close - cmd=requireBuffer	statusCode=SUCCESS	from=/10.1.0.5:47630	executionTimeUs=53	appId=local-1727609600949_1727609600899	shuffleId=2	args{requireSize=1611, partitionIdsSize=5}	return{requireBufferId=29}
[2024-09-29 11:33:25.629] [nioEventLoopGroup-219-1] [INFO] SHUFFLE_SERVER_RPC_AUDIT_LOG.close - cmd=sendShuffleData	statusCode=SUCCESS	from=/10.1.0.5:42970	executionTimeUs=117	appId=local-1727609600949_1727609600899	shuffleId=2	args{requireBufferId=29, requireSize=1611, isPreAllocated=true, requireBlocksSize=581, stageAttemptNumber=0}
[2024-09-29 11:33:25.629] [Executor task launch worker for task 4.0 in stage 5.0 (TID 18)] [INFO] RssShuffleWriter.writeImpl - Finish write shuffle for appId[local-1727609600949_1727609600899], shuffleId[2], taskId[18_0] with write 1 ms, include checkSendResult[1], commit[0], WriteBufferManager cost copyTime[0], writeTime[0], serializeTime[0], compressTime[0], estimateTime[0], requireMemoryTime[0], uncompressedDataLen[495]
[2024-09-29 11:33:25.630] [Grpc-6] [INFO] ShuffleServerGrpcService.reportShuffleResult - Accepted blockIds report for 5 blocks across 5 partitions as shuffle result for task appId[local-1727609600949_1727609600899], shuffleId[2], taskAttemptId[16]
[2024-09-29 11:33:25.630] [Grpc-6] [INFO] SHUFFLE_SERVER_RPC_AUDIT_LOG.close - cmd=reportShuffleResult	statusCode=SUCCESS	from=/10.1.0.5:47630	executionTimeUs=137	appId=local-1727609600949_1727609600899	shuffleId=2	args{taskAttemptId=16, bitmapNum=1, partitionToBlockIdsSize=5}	context{updatedBlockCount=5, expectedBlockCount=5}
[2024-09-29 11:33:25.630] [Executor task launch worker for task 4.0 in stage 5.0 (TID 18)] [INFO] ShuffleWriteClientImpl.reportShuffleResult - Report shuffle result to ShuffleServerInfo{host[10.1.0.5], grpc port[20015], netty port[21006]} for appId[local-1727609600949_1727609600899], shuffleId[2] successfully
[2024-09-29 11:33:25.630] [Executor task launch worker for task 4.0 in stage 5.0 (TID 18)] [INFO] RssShuffleWriter.stop - Report shuffle result for task[16] with bitmapNum[1] cost 1 ms
[2024-09-29 11:33:25.630] [Executor task launch worker for task 4.0 in stage 5.0 (TID 18)] [INFO] Executor.logInfo - Finished task 4.0 in stage 5.0 (TID 18). 1423 bytes result sent to driver
[2024-09-29 11:33:25.631] [task-result-getter-2] [INFO] TaskSetManager.logInfo - Finished task 4.0 in stage 5.0 (TID 18) in 283 ms on fv-az1023-204.cxk2etxb0jje1nvwrtfsy0dgjd.bx.internal.cloudapp.net (executor driver) (5/5)
[2024-09-29 11:33:25.631] [task-result-getter-2] [INFO] TaskSchedulerImpl.logInfo - Removed TaskSet 5.0, whose tasks have all completed, from pool 
[2024-09-29 11:33:25.631] [dag-scheduler-event-loop] [INFO] DAGScheduler.logInfo - ShuffleMapStage 5 (reduceByKey at RepartitionTest.java:98) finished in 0.764 s
[2024-09-29 11:33:25.631] [dag-scheduler-event-loop] [INFO] DAGScheduler.logInfo - looking for newly runnable stages
[2024-09-29 11:33:25.631] [dag-scheduler-event-loop] [INFO] DAGScheduler.logInfo - running: Set()
[2024-09-29 11:33:25.631] [dag-scheduler-event-loop] [INFO] DAGScheduler.logInfo - waiting: Set(ResultStage 6)
[2024-09-29 11:33:25.631] [dag-scheduler-event-loop] [INFO] DAGScheduler.logInfo - failed: Set()
[2024-09-29 11:33:25.632] [dag-scheduler-event-loop] [INFO] DAGScheduler.logInfo - Submitting ResultStage 6 (ShuffledRDD[14] at sortByKey at RepartitionTest.java:99), which has no missing parents
[2024-09-29 11:33:25.632] [dag-scheduler-event-loop] [INFO] MemoryStore.logInfo - Block broadcast_8 stored as values in memory (estimated size 6.2 KiB, free 2.5 GiB)
[2024-09-29 11:33:25.633] [dag-scheduler-event-loop] [INFO] MemoryStore.logInfo - Block broadcast_8_piece0 stored as bytes in memory (estimated size 3.7 KiB, free 2.5 GiB)
[2024-09-29 11:33:25.633] [dispatcher-BlockManagerMaster] [INFO] BlockManagerInfo.logInfo - Added broadcast_8_piece0 in memory on fv-az1023-204.cxk2etxb0jje1nvwrtfsy0dgjd.bx.internal.cloudapp.net:43265 (size: 3.7 KiB, free: 2.5 GiB)
[2024-09-29 11:33:25.634] [dag-scheduler-event-loop] [INFO] SparkContext.logInfo - Created broadcast 8 from broadcast at DAGScheduler.scala:1427
[2024-09-29 11:33:25.634] [dag-scheduler-event-loop] [INFO] DAGScheduler.logInfo - Submitting 5 missing tasks from ResultStage 6 (ShuffledRDD[14] at sortByKey at RepartitionTest.java:99) (first 15 tasks are for partitions Vector(0, 1, 2, 3, 4))
[2024-09-29 11:33:25.634] [dag-scheduler-event-loop] [INFO] TaskSchedulerImpl.logInfo - Adding task set 6.0 with 5 tasks resource profile 0
[2024-09-29 11:33:25.634] [dispatcher-event-loop-2] [INFO] TaskSetManager.logInfo - Starting task 0.0 in stage 6.0 (TID 19) (fv-az1023-204.cxk2etxb0jje1nvwrtfsy0dgjd.bx.internal.cloudapp.net, executor driver, partition 0, ANY, 4271 bytes) taskResourceAssignments Map()
[2024-09-29 11:33:25.634] [dispatcher-event-loop-2] [INFO] TaskSetManager.logInfo - Starting task 1.0 in stage 6.0 (TID 20) (fv-az1023-204.cxk2etxb0jje1nvwrtfsy0dgjd.bx.internal.cloudapp.net, executor driver, partition 1, ANY, 4271 bytes) taskResourceAssignments Map()
[2024-09-29 11:33:25.635] [dispatcher-event-loop-2] [INFO] TaskSetManager.logInfo - Starting task 2.0 in stage 6.0 (TID 21) (fv-az1023-204.cxk2etxb0jje1nvwrtfsy0dgjd.bx.internal.cloudapp.net, executor driver, partition 2, ANY, 4271 bytes) taskResourceAssignments Map()
[2024-09-29 11:33:25.635] [dispatcher-event-loop-2] [INFO] TaskSetManager.logInfo - Starting task 3.0 in stage 6.0 (TID 22) (fv-az1023-204.cxk2etxb0jje1nvwrtfsy0dgjd.bx.internal.cloudapp.net, executor driver, partition 3, ANY, 4271 bytes) taskResourceAssignments Map()
[2024-09-29 11:33:25.635] [Executor task launch worker for task 1.0 in stage 6.0 (TID 20)] [INFO] Executor.logInfo - Running task 1.0 in stage 6.0 (TID 20)
[2024-09-29 11:33:25.635] [Executor task launch worker for task 2.0 in stage 6.0 (TID 21)] [INFO] Executor.logInfo - Running task 2.0 in stage 6.0 (TID 21)
[2024-09-29 11:33:25.635] [Executor task launch worker for task 0.0 in stage 6.0 (TID 19)] [INFO] Executor.logInfo - Running task 0.0 in stage 6.0 (TID 19)
[2024-09-29 11:33:25.636] [Executor task launch worker for task 1.0 in stage 6.0 (TID 20)] [INFO] RssShuffleManager.getReader - Get taskId cost 0 ms, and request expected blockIds from 5 tasks for shuffleId[2], partitionId[1, 2]
[2024-09-29 11:33:25.636] [Executor task launch worker for task 2.0 in stage 6.0 (TID 21)] [INFO] RssShuffleManager.getReader - Get taskId cost 0 ms, and request expected blockIds from 5 tasks for shuffleId[2], partitionId[2, 3]
[2024-09-29 11:33:25.636] [Executor task launch worker for task 0.0 in stage 6.0 (TID 19)] [INFO] RssShuffleManager.getReader - Get taskId cost 0 ms, and request expected blockIds from 5 tasks for shuffleId[2], partitionId[0, 1]
[2024-09-29 11:33:25.636] [Executor task launch worker for task 3.0 in stage 6.0 (TID 22)] [INFO] Executor.logInfo - Running task 3.0 in stage 6.0 (TID 22)
[2024-09-29 11:33:25.636] [Grpc-2] [INFO] SHUFFLE_SERVER_RPC_AUDIT_LOG.close - cmd=getShuffleResultForMultiPart	statusCode=SUCCESS	from=/10.1.0.5:47630	executionTimeUs=144	appId=local-1727609600949_1727609600899	shuffleId=2	args{partitionsListSize=1, blockIdLayout=blockIdLayout[seq: 21 bits, part: 20 bits, task: 22 bits]}	return{serializedBlockIdsBytes=35}	context{bitmap[0].<size,byte>=<25,172>, partitionBlockCount=5}
[2024-09-29 11:33:25.637] [Grpc-1] [INFO] SHUFFLE_SERVER_RPC_AUDIT_LOG.close - cmd=getShuffleResultForMultiPart	statusCode=SUCCESS	from=/10.1.0.5:47630	executionTimeUs=167	appId=local-1727609600949_1727609600899	shuffleId=2	args{partitionsListSize=1, blockIdLayout=blockIdLayout[seq: 21 bits, part: 20 bits, task: 22 bits]}	return{serializedBlockIdsBytes=35}	context{bitmap[0].<size,byte>=<25,172>, partitionBlockCount=5}
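The two successful audit entries above show the full rpc audit line layout: fixed fields (cmd, statusCode, from, executionTimeUs, appId, shuffleId) followed by tab-separated args{...}, return{...}, and context{...} sections, with context carrying the extra server-side detail this change exercises (per-bitmap <size,byte> pairs and the partition block count). A minimal sketch of how such a line could be assembled; the class and method names are illustrative, not Uniffle's actual audit classes:

    public class RpcAuditLine {
        private final StringBuilder sb = new StringBuilder();

        public RpcAuditLine(String cmd, String statusCode, String from,
                            long executionTimeUs, String appId, int shuffleId) {
            sb.append("cmd=").append(cmd)
              .append("\tstatusCode=").append(statusCode)
              .append("\tfrom=").append(from)
              .append("\texecutionTimeUs=").append(executionTimeUs)
              .append("\tappId=").append(appId)
              .append("\tshuffleId=").append(shuffleId);
        }

        public RpcAuditLine withArgs(String args) {
            sb.append("\targs{").append(args).append('}');
            return this;
        }

        public RpcAuditLine withReturnValue(String ret) {
            sb.append("\treturn{").append(ret).append('}');
            return this;
        }

        // The context{...} section holds request-scoped server-side detail,
        // e.g. "bitmap[0].<size,byte>=<25,172>, partitionBlockCount=5" above.
        public RpcAuditLine withContext(String context) {
            sb.append("\tcontext{").append(context).append('}');
            return this;
        }

        @Override
        public String toString() {
            return sb.toString();
        }
    }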
[2024-09-29 11:33:25.636] [Grpc-8] [ERROR] ShuffleServerGrpcService.getShuffleResultForMultiPart - Error happened when get shuffle result for appId[local-1727609600949_1727609600899], shuffleId[2], partitions[2]
java.lang.ArrayIndexOutOfBoundsException: 0
	at org.roaringbitmap.longlong.Roaring64NavigableMap.ensureOne(Roaring64NavigableMap.java:676)
	at org.roaringbitmap.longlong.Roaring64NavigableMap.ensureCumulatives(Roaring64NavigableMap.java:567)
	at org.roaringbitmap.longlong.Roaring64NavigableMap.getLongCardinality(Roaring64NavigableMap.java:278)
	at org.apache.uniffle.server.ShuffleTaskManager.lambda$getFinishedBlockIds$9(ShuffleTaskManager.java:657)
	at java.util.Optional.ifPresent(Optional.java:159)
	at org.apache.uniffle.server.ShuffleTaskManager.getFinishedBlockIds(ShuffleTaskManager.java:652)
	at org.apache.uniffle.server.ShuffleServerGrpcService.getShuffleResultForMultiPart(ShuffleServerGrpcService.java:985)
	at org.apache.uniffle.proto.ShuffleServerGrpc$MethodHandlers.invoke(ShuffleServerGrpc.java:1180)
	at io.grpc.stub.ServerCalls$UnaryServerCallHandler$UnaryServerCallListener.onHalfClose(ServerCalls.java:182)
	at io.grpc.PartialForwardingServerCallListener.onHalfClose(PartialForwardingServerCallListener.java:35)
	at io.grpc.ForwardingServerCallListener.onHalfClose(ForwardingServerCallListener.java:23)
	at io.grpc.ForwardingServerCallListener$SimpleForwardingServerCallListener.onHalfClose(ForwardingServerCallListener.java:40)
	at org.apache.uniffle.common.rpc.ClientContextServerInterceptor$1.onHalfClose(ClientContextServerInterceptor.java:63)
	at io.grpc.PartialForwardingServerCallListener.onHalfClose(PartialForwardingServerCallListener.java:35)
	at io.grpc.ForwardingServerCallListener.onHalfClose(ForwardingServerCallListener.java:23)
	at io.grpc.internal.ServerCallImpl$ServerStreamListenerImpl.halfClosed(ServerCallImpl.java:356)
	at io.grpc.internal.ServerImpl$JumpToApplicationThreadServerStreamListener$1HalfClosed.runInContext(ServerImpl.java:861)
	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:750)
[2024-09-29 11:33:25.637] [Grpc-8] [INFO] SHUFFLE_SERVER_RPC_AUDIT_LOG.close - cmd=getShuffleResultForMultiPart	statusCode=INTERNAL_ERROR	from=/10.1.0.5:47630	executionTimeUs=375	appId=local-1727609600949_1727609600899	shuffleId=2	args{partitionsListSize=1, blockIdLayout=blockIdLayout[seq: 21 bits, part: 20 bits, task: 22 bits]}	return{serializedBlockIdsBytes=0}
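The ArrayIndexOutOfBoundsException above surfaces inside Roaring64NavigableMap.ensureCumulatives, which is consistent with Roaring64NavigableMap not being thread-safe: getLongCardinality() lazily rebuilds internal cumulative arrays, so a concurrent mutation from another rpc thread can leave them inconsistent. A hedged sketch of the kind of guard that avoids this, assuming (not confirmed by this log) that the server shares one bitmap per partition across handler threads:

    import org.roaringbitmap.longlong.Roaring64NavigableMap;

    public class SynchronizedBlockIdBitmap {
        private final Roaring64NavigableMap bitmap = Roaring64NavigableMap.bitmapOf();

        // Roaring64NavigableMap is not thread-safe; serialize writers and the
        // lazy cardinality recomputation behind one monitor so ensureOne()
        // never observes half-updated internal arrays.
        public synchronized void add(long blockId) {
            bitmap.addLong(blockId);
        }

        public synchronized long cardinality() {
            return bitmap.getLongCardinality();
        }
    }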
[2024-09-29 11:33:25.637] [Executor task launch worker for task 3.0 in stage 6.0 (TID 22)] [INFO] RssShuffleManager.getReader - Get taskId cost 0 ms, and request expected blockIds from 5 tasks for shuffleId[2], partitionId[3, 4]
[2024-09-29 11:33:25.637] [Executor task launch worker for task 0.0 in stage 6.0 (TID 19)] [INFO] RssShuffleManager.getReaderImpl - Get shuffle blockId cost 1 ms, and get 5 blockIds for shuffleId[2], startPartition[0], endPartition[1]
[2024-09-29 11:33:25.637] [Executor task launch worker for task 1.0 in stage 6.0 (TID 20)] [INFO] RssShuffleManager.getReaderImpl - Get shuffle blockId cost 1 ms, and get 5 blockIds for shuffleId[2], startPartition[1], endPartition[2]
[2024-09-29 11:33:25.637] [Executor task launch worker for task 0.0 in stage 6.0 (TID 19)] [INFO] RssShuffleManager.getReaderImpl - Shuffle reader using remote storage Empty Remote Storage
[2024-09-29 11:33:25.637] [Executor task launch worker for task 1.0 in stage 6.0 (TID 20)] [INFO] RssShuffleManager.getReaderImpl - Shuffle reader using remote storage Empty Remote Storage
[2024-09-29 11:33:25.639] [Grpc-9] [INFO] SHUFFLE_SERVER_RPC_AUDIT_LOG.close - cmd=getShuffleResultForMultiPart	statusCode=SUCCESS	from=/10.1.0.5:47630	executionTimeUs=93	appId=local-1727609600949_1727609600899	shuffleId=2	args{partitionsListSize=1, blockIdLayout=blockIdLayout[seq: 21 bits, part: 20 bits, task: 22 bits]}	return{serializedBlockIdsBytes=35}	context{bitmap[0].<size,byte>=<25,172>, partitionBlockCount=5}
[2024-09-29 11:33:25.640] [Executor task launch worker for task 2.0 in stage 6.0 (TID 21)] [ERROR] ShuffleServerGrpcClient.getShuffleResultForMultiPart - Can't get shuffle result from 10.1.0.5:20015 for [appId=local-1727609600949_1727609600899, shuffleId=2, errorMsg:0
[2024-09-29 11:33:25.641] [Executor task launch worker for task 3.0 in stage 6.0 (TID 22)] [INFO] RssShuffleManager.getReaderImpl - Get shuffle blockId cost 4 ms, and get 5 blockIds for shuffleId[2], startPartition[3], endPartition[4]
[2024-09-29 11:33:25.641] [Executor task launch worker for task 2.0 in stage 6.0 (TID 21)] [WARN] ShuffleWriteClientImpl.getShuffleResultForMultiPart - Get shuffle result is failed from ShuffleServerInfo{host[10.1.0.5], grpc port[20015], netty port[21006]} for appId[local-1727609600949_1727609600899], shuffleId[2], requestPartitions[2]
org.apache.uniffle.common.exception.RssFetchFailedException: Can't get shuffle result from 10.1.0.5:20015 for [appId=local-1727609600949_1727609600899, shuffleId=2, errorMsg:0
	at org.apache.uniffle.client.impl.grpc.ShuffleServerGrpcClient.getShuffleResultForMultiPart(ShuffleServerGrpcClient.java:913)
	at org.apache.uniffle.client.impl.ShuffleWriteClientImpl.getShuffleResultForMultiPart(ShuffleWriteClientImpl.java:860)
	at org.apache.spark.shuffle.RssShuffleManager.getShuffleResultForMultiPart(RssShuffleManager.java:1031)
	at org.apache.spark.shuffle.RssShuffleManager.getReaderImpl(RssShuffleManager.java:653)
	at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:586)
	at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:558)
	at org.apache.spark.rdd.ShuffledRDD.compute(ShuffledRDD.scala:106)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:506)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1462)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:509)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:750)
[2024-09-29 11:33:25.641] [Executor task launch worker for task 3.0 in stage 6.0 (TID 22)] [INFO] RssShuffleManager.getReaderImpl - Shuffle reader using remote storage Empty Remote Storage
[2024-09-29 11:33:25.641] [Executor task launch worker for task 2.0 in stage 6.0 (TID 21)] [ERROR] ShuffleWriteClientImpl.getShuffleResultForMultiPart - Failed to meet replica requirement: PartitionDataReplicaRequirementTracking{shuffleId=2, inventory={0={0=[ShuffleServerInfo{host[10.1.0.5], grpc port[20015], netty port[21006]}]}, 1={0=[ShuffleServerInfo{host[10.1.0.5], grpc port[20015], netty port[21006]}]}, 2={0=[ShuffleServerInfo{host[10.1.0.5], grpc port[20015], netty port[21006]}]}, 3={0=[ShuffleServerInfo{host[10.1.0.5], grpc port[20015], netty port[21006]}]}, 4={0=[ShuffleServerInfo{host[10.1.0.5], grpc port[20015], netty port[21006]}]}}, succeedList={}}
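The replica-requirement failure above is the client-side aggregation step: with replica=1, every partition in the inventory maps to a single server, and succeedList={} records that no server returned a usable shuffle result for partition 2, so the fetch is abandoned. A minimal sketch of that check, using illustrative names rather than the real PartitionDataReplicaRequirementTracking API:

    import java.util.List;
    import java.util.Set;

    public class ReplicaRequirementCheck {
        // The fetch for one partition succeeds only when at least replicaRead
        // of its assigned replica servers answered; here replicaRead=1 and zero
        // servers succeeded, so RssFetchFailedException is thrown.
        public static boolean isSatisfied(List<String> assignedServers,
                                          Set<String> succeededServers,
                                          int replicaRead) {
            long ok = assignedServers.stream().filter(succeededServers::contains).count();
            return ok >= replicaRead;
        }
    }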
[2024-09-29 11:33:25.641] [Executor task launch worker for task 2.0 in stage 6.0 (TID 21)] [ERROR] Executor.logError - Exception in task 2.0 in stage 6.0 (TID 21)
org.apache.uniffle.common.exception.RssFetchFailedException: Get shuffle result is failed for appId[local-1727609600949_1727609600899], shuffleId[2]
	at org.apache.uniffle.client.impl.ShuffleWriteClientImpl.getShuffleResultForMultiPart(ShuffleWriteClientImpl.java:889)
	at org.apache.spark.shuffle.RssShuffleManager.getShuffleResultForMultiPart(RssShuffleManager.java:1031)
	at org.apache.spark.shuffle.RssShuffleManager.getReaderImpl(RssShuffleManager.java:653)
	at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:586)
	at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:558)
	at org.apache.spark.rdd.ShuffledRDD.compute(ShuffledRDD.scala:106)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:506)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1462)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:509)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:750)
[2024-09-29 11:33:25.642] [dispatcher-event-loop-1] [INFO] TaskSetManager.logInfo - Starting task 4.0 in stage 6.0 (TID 23) (fv-az1023-204.cxk2etxb0jje1nvwrtfsy0dgjd.bx.internal.cloudapp.net, executor driver, partition 4, ANY, 4271 bytes) taskResourceAssignments Map()
[2024-09-29 11:33:25.642] [task-result-getter-3] [WARN] TaskSetManager.logWarning - Lost task 2.0 in stage 6.0 (TID 21) (fv-az1023-204.cxk2etxb0jje1nvwrtfsy0dgjd.bx.internal.cloudapp.net executor driver): org.apache.uniffle.common.exception.RssFetchFailedException: Get shuffle result is failed for appId[local-1727609600949_1727609600899], shuffleId[2]
	at org.apache.uniffle.client.impl.ShuffleWriteClientImpl.getShuffleResultForMultiPart(ShuffleWriteClientImpl.java:889)
	at org.apache.spark.shuffle.RssShuffleManager.getShuffleResultForMultiPart(RssShuffleManager.java:1031)
	at org.apache.spark.shuffle.RssShuffleManager.getReaderImpl(RssShuffleManager.java:653)
	at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:586)
	at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:558)
	at org.apache.spark.rdd.ShuffledRDD.compute(ShuffledRDD.scala:106)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:506)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1462)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:509)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:750)

[2024-09-29 11:33:25.642] [task-result-getter-3] [ERROR] TaskSetManager.logError - Task 2 in stage 6.0 failed 1 times; aborting job
[2024-09-29 11:33:25.643] [Executor task launch worker for task 4.0 in stage 6.0 (TID 23)] [INFO] Executor.logInfo - Running task 4.0 in stage 6.0 (TID 23)
[2024-09-29 11:33:25.644] [Executor task launch worker for task 4.0 in stage 6.0 (TID 23)] [INFO] RssShuffleManager.getReader - Get taskId cost 0 ms, and request expected blockIds from 5 tasks for shuffleId[2], partitionId[4, 5]
[2024-09-29 11:33:25.644] [Grpc-0] [INFO] SHUFFLE_SERVER_RPC_AUDIT_LOG.close - cmd=getShuffleResultForMultiPart	statusCode=SUCCESS	from=/10.1.0.5:47630	executionTimeUs=99	appId=local-1727609600949_1727609600899	shuffleId=2	args{partitionsListSize=1, blockIdLayout=blockIdLayout[seq: 21 bits, part: 20 bits, task: 22 bits]}	return{serializedBlockIdsBytes=35}	context{bitmap[0].<size,byte>=<25,172>, partitionBlockCount=5}
[2024-09-29 11:33:25.645] [Executor task launch worker for task 4.0 in stage 6.0 (TID 23)] [INFO] RssShuffleManager.getReaderImpl - Get shuffle blockId cost 0 ms, and get 5 blockIds for shuffleId[2], startPartition[4], endPartition[5]
[2024-09-29 11:33:25.645] [Executor task launch worker for task 4.0 in stage 6.0 (TID 23)] [INFO] RssShuffleManager.getReaderImpl - Shuffle reader using remote storage Empty Remote Storage
[2024-09-29 11:33:25.648] [dag-scheduler-event-loop] [INFO] TaskSchedulerImpl.logInfo - Cancelling stage 6
[2024-09-29 11:33:25.648] [dag-scheduler-event-loop] [INFO] TaskSchedulerImpl.logInfo - Killing all running tasks in stage 6: Stage cancelled
[2024-09-29 11:33:25.648] [dispatcher-event-loop-3] [INFO] Executor.logInfo - Executor is trying to kill task 0.0 in stage 6.0 (TID 19), reason: Stage cancelled
[2024-09-29 11:33:25.648] [dag-scheduler-event-loop] [INFO] TaskSchedulerImpl.logInfo - Stage 6 was cancelled
[2024-09-29 11:33:25.648] [dispatcher-event-loop-2] [INFO] Executor.logInfo - Executor is trying to kill task 1.0 in stage 6.0 (TID 20), reason: Stage cancelled
[2024-09-29 11:33:25.648] [dispatcher-event-loop-2] [INFO] Executor.logInfo - Executor is trying to kill task 3.0 in stage 6.0 (TID 22), reason: Stage cancelled
[2024-09-29 11:33:25.648] [dispatcher-event-loop-2] [INFO] Executor.logInfo - Executor is trying to kill task 4.0 in stage 6.0 (TID 23), reason: Stage cancelled
[2024-09-29 11:33:25.649] [dag-scheduler-event-loop] [INFO] DAGScheduler.logInfo - ResultStage 6 (collectAsMap at RepartitionTest.java:99) failed in 0.016 s due to Job aborted due to stage failure: Task 2 in stage 6.0 failed 1 times, most recent failure: Lost task 2.0 in stage 6.0 (TID 21) (fv-az1023-204.cxk2etxb0jje1nvwrtfsy0dgjd.bx.internal.cloudapp.net executor driver): org.apache.uniffle.common.exception.RssFetchFailedException: Get shuffle result is failed for appId[local-1727609600949_1727609600899], shuffleId[2]
	at org.apache.uniffle.client.impl.ShuffleWriteClientImpl.getShuffleResultForMultiPart(ShuffleWriteClientImpl.java:889)
	at org.apache.spark.shuffle.RssShuffleManager.getShuffleResultForMultiPart(RssShuffleManager.java:1031)
	at org.apache.spark.shuffle.RssShuffleManager.getReaderImpl(RssShuffleManager.java:653)
	at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:586)
	at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:558)
	at org.apache.spark.rdd.ShuffledRDD.compute(ShuffledRDD.scala:106)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:506)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1462)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:509)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:750)

Driver stacktrace:
[2024-09-29 11:33:25.649] [main] [INFO] DAGScheduler.logInfo - Job 1 failed: collectAsMap at RepartitionTest.java:99, took 0.787544 s

Check failure on line 0 in org.apache.uniffle.test.RepartitionWithMemoryRssTest

1 out of 10 runs with error: testMemoryRelease (org.apache.uniffle.test.RepartitionWithMemoryRssTest)

artifacts/integration-reports-spark3/integration-test/spark-common/target/surefire-reports/TEST-org.apache.uniffle.test.RepartitionWithMemoryRssTest.xml [took 1m 10s]
Raw output
Job aborted due to stage failure: Task 2 in stage 2.0 failed 1 times, most recent failure: Lost task 2.0 in stage 2.0 (TID 12) (fv-az1981-442.j4jimiuknh1ebg1ycgmowphjbf.bx.internal.cloudapp.net executor driver): org.apache.uniffle.common.exception.RssFetchFailedException: Get shuffle result is failed for appId[local-1727609627604_1727609627577], shuffleId[1]
 at org.apache.uniffle.client.impl.ShuffleWriteClientImpl.getShuffleResultForMultiPart(ShuffleWriteClientImpl.java:889)
 at org.apache.spark.shuffle.RssShuffleManager.getShuffleResultForMultiPart(RssShuffleManager.java:1031)
 at org.apache.spark.shuffle.RssShuffleManager.getReaderImpl(RssShuffleManager.java:653)
 at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:586)
 at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:558)
 at org.apache.spark.rdd.ShuffledRDD.compute(ShuffledRDD.scala:106)
 at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
 at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
 at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
 at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
 at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
 at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
 at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
 at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
 at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
 at org.apache.spark.scheduler.Task.run(Task.scala:131)
 at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:497)
 at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1439)
 at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:500)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 at java.lang.Thread.run(Thread.java:750)

Driver stacktrace:
org.apache.spark.SparkException: 
Job aborted due to stage failure: Task 2 in stage 2.0 failed 1 times, most recent failure: Lost task 2.0 in stage 2.0 (TID 12) (fv-az1981-442.j4jimiuknh1ebg1ycgmowphjbf.bx.internal.cloudapp.net executor driver): org.apache.uniffle.common.exception.RssFetchFailedException: Get shuffle result is failed for appId[local-1727609627604_1727609627577], shuffleId[1]
	at org.apache.uniffle.client.impl.ShuffleWriteClientImpl.getShuffleResultForMultiPart(ShuffleWriteClientImpl.java:889)
	at org.apache.spark.shuffle.RssShuffleManager.getShuffleResultForMultiPart(RssShuffleManager.java:1031)
	at org.apache.spark.shuffle.RssShuffleManager.getReaderImpl(RssShuffleManager.java:653)
	at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:586)
	at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:558)
	at org.apache.spark.rdd.ShuffledRDD.compute(ShuffledRDD.scala:106)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:497)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1439)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:500)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:750)

Driver stacktrace:
	at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2258)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2207)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2206)
	at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
	at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
	at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2206)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1079)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1079)
	at scala.Option.foreach(Option.scala:407)
	at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1079)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2445)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2387)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2376)
	at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
	at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:868)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2196)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2217)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2236)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2261)
	at org.apache.spark.rdd.RDD.$anonfun$collect$1(RDD.scala:1030)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
	at org.apache.spark.rdd.RDD.withScope(RDD.scala:414)
	at org.apache.spark.rdd.RDD.collect(RDD.scala:1029)
	at org.apache.spark.RangePartitioner$.sketch(Partitioner.scala:304)
	at org.apache.spark.RangePartitioner.<init>(Partitioner.scala:171)
	at org.apache.spark.RangePartitioner.<init>(Partitioner.scala:151)
	at org.apache.spark.rdd.OrderedRDDFunctions.$anonfun$sortByKey$1(OrderedRDDFunctions.scala:63)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
	at org.apache.spark.rdd.RDD.withScope(RDD.scala:414)
	at org.apache.spark.rdd.OrderedRDDFunctions.sortByKey(OrderedRDDFunctions.scala:62)
	at org.apache.spark.api.java.JavaPairRDD.sortByKey(JavaPairRDD.scala:927)
	at org.apache.spark.api.java.JavaPairRDD.sortByKey(JavaPairRDD.scala:897)
	at org.apache.spark.api.java.JavaPairRDD.sortByKey(JavaPairRDD.scala:887)
	at org.apache.uniffle.test.RepartitionTest.repartitionApp(RepartitionTest.java:99)
	at org.apache.uniffle.test.RepartitionTest.runTest(RepartitionTest.java:49)
	at org.apache.uniffle.test.SparkIntegrationTestBase.runSparkApp(SparkIntegrationTestBase.java:102)
	at org.apache.uniffle.test.RepartitionWithMemoryRssTest.testMemoryRelease(RepartitionWithMemoryRssTest.java:72)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:725)
	at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
	at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
	at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149)
	at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140)
	at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84)
	at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115)
	at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105)
	at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
	at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
	at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
	at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
	at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104)
	at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98)
	at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$7(TestMethodTestDescriptor.java:214)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:210)
	at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:135)
	at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:66)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:151)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
	at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
	at java.util.ArrayList.forEach(ArrayList.java:1259)
	at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:41)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:155)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
	at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
	at java.util.ArrayList.forEach(ArrayList.java:1259)
	at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:41)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:155)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
	at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
	at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:35)
	at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57)
	at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:54)
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:107)
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88)
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54)
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67)
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52)
	at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:114)
	at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:86)
	at org.junit.platform.launcher.core.DefaultLauncherSession$DelegatingLauncher.execute(DefaultLauncherSession.java:86)
	at org.junit.platform.launcher.core.SessionPerRequestLauncher.execute(SessionPerRequestLauncher.java:53)
	at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:150)
	at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:124)
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
	at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
Caused by: org.apache.uniffle.common.exception.RssFetchFailedException: Get shuffle result is failed for appId[local-1727609627604_1727609627577], shuffleId[1]
	at org.apache.uniffle.client.impl.ShuffleWriteClientImpl.getShuffleResultForMultiPart(ShuffleWriteClientImpl.java:889)
	at org.apache.spark.shuffle.RssShuffleManager.getShuffleResultForMultiPart(RssShuffleManager.java:1031)
	at org.apache.spark.shuffle.RssShuffleManager.getReaderImpl(RssShuffleManager.java:653)
	at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:586)
	at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:558)
	at org.apache.spark.rdd.ShuffledRDD.compute(ShuffledRDD.scala:106)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:497)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1439)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:500)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:750)
[2024-09-29 11:33:36.858] [main] [INFO] RepartitionTest.generateTextFile - Create file:/home/runner/work/incubator-uniffle/incubator-uniffle/integration-test/spark-common/target/tmp/junit1144292427118993793/wordcount.txt
[2024-09-29 11:33:38.914] [DynamicClientConfService-0] [WARN] DynamicClientConfService.refreshClientConf - Error when update client conf with hdfs://localhost:36541/test/client_conf.
java.io.IOException: Filesystem closed
	at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:466)
	at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1628)
	at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1614)
	at org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:900)
	at org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:114)
	at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:964)
	at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:961)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:961)
	at org.apache.uniffle.coordinator.conf.DynamicClientConfService.refreshClientConf(DynamicClientConfService.java:115)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:750)
[2024-09-29 11:33:38.975] [ApplicationManager-0] [INFO] ApplicationManager.statusCheck - Start to check status for 1 applications.
[2024-09-29 11:33:41.475] [ApplicationManager-0] [INFO] ApplicationManager.statusCheck - Start to check status for 1 applications.
[2024-09-29 11:33:41.475] [ApplicationManager-0] [INFO] ApplicationManager.statusCheck - Remove expired application : local-1727609612043_1727609612016.
[2024-09-29 11:33:41.956] [DynamicClientConfService-0] [WARN] DynamicClientConfService.refreshClientConf - Error when update client conf with hdfs://localhost:36541/test/client_conf.
java.io.IOException: Filesystem closed
	at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:466)
	at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1628)
	at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1614)
	at org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:900)
	at org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:114)
	at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:964)
	at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:961)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:961)
	at org.apache.uniffle.coordinator.conf.DynamicClientConfService.refreshClientConf(DynamicClientConfService.java:115)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:750)
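Both refreshClientConf warnings above fail at DFSClient.checkOpen with "Filesystem closed", the classic signature of Hadoop's shared FileSystem cache: FileSystem.get() returns a JVM-wide cached instance, so once another component (here, presumably teardown of the previous test app) closes it, every later caller through the cache breaks. A hedged sketch of the usual remedy, taking a private handle via FileSystem.newInstance() instead of the cache:

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ClientConfLister {
        // newInstance() bypasses the shared FileSystem cache, so this handle is
        // owned (and closed) here and cannot be invalidated by other callers.
        public static void listClientConf(Configuration conf, String dir) throws Exception {
            try (FileSystem fs = FileSystem.newInstance(URI.create(dir), conf)) {
                for (FileStatus status : fs.listStatus(new Path(dir))) {
                    System.out.println(status.getPath());
                }
            }
        }
    }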
[2024-09-29 11:33:43.975] [ApplicationManager-0] [INFO] ApplicationManager.statusCheck - Start to check status for 0 applications.
[2024-09-29 11:33:46.475] [ApplicationManager-0] [INFO] ApplicationManager.statusCheck - Start to check status for 0 applications.
[2024-09-29 11:33:47.557] [main] [INFO] RepartitionTest.generateTextFile - finish test data for word count file:/home/runner/work/incubator-uniffle/incubator-uniffle/integration-test/spark-common/target/tmp/junit1144292427118993793/wordcount.txt
[2024-09-29 11:33:47.557] [main] [INFO] SparkContext.logInfo - SparkContext already stopped.
[2024-09-29 11:33:47.558] [main] [INFO] SparkContext.logInfo - Running Spark version 3.1.2
[2024-09-29 11:33:47.558] [main] [INFO] ResourceUtils.logInfo - ==============================================================
[2024-09-29 11:33:47.558] [main] [INFO] ResourceUtils.logInfo - No custom resources configured for spark.driver.
[2024-09-29 11:33:47.558] [main] [INFO] ResourceUtils.logInfo - ==============================================================
[2024-09-29 11:33:47.558] [main] [INFO] SparkContext.logInfo - Submitted application: RepartitionWithMemoryRssTest
[2024-09-29 11:33:47.559] [main] [INFO] ResourceProfile.logInfo - Default ResourceProfile created, executor resources: Map(cores -> name: cores, amount: 1, script: , vendor: , memory -> name: memory, amount: 500, script: , vendor: , offHeap -> name: offHeap, amount: 0, script: , vendor: ), task resources: Map(cpus -> name: cpus, amount: 1.0)
[2024-09-29 11:33:47.559] [main] [INFO] ResourceProfile.logInfo - Limiting resource is cpu
[2024-09-29 11:33:47.559] [main] [INFO] ResourceProfileManager.logInfo - Added ResourceProfile id: 0
[2024-09-29 11:33:47.559] [main] [INFO] SecurityManager.logInfo - Changing view acls to: runner
[2024-09-29 11:33:47.559] [main] [INFO] SecurityManager.logInfo - Changing modify acls to: runner
[2024-09-29 11:33:47.559] [main] [INFO] SecurityManager.logInfo - Changing view acls groups to: 
[2024-09-29 11:33:47.560] [main] [INFO] SecurityManager.logInfo - Changing modify acls groups to: 
[2024-09-29 11:33:47.560] [main] [INFO] SecurityManager.logInfo - SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(runner); groups with view permissions: Set(); users  with modify permissions: Set(runner); groups with modify permissions: Set()
[2024-09-29 11:33:47.576] [main] [INFO] Utils.logInfo - Successfully started service 'sparkDriver' on port 34335.
[2024-09-29 11:33:47.577] [main] [INFO] SparkEnv.logInfo - Registering MapOutputTracker
[2024-09-29 11:33:47.577] [main] [INFO] RssShuffleManagerBase.<init> - Uniffle org.apache.spark.shuffle.RssShuffleManager version: 0.10.0-SNAPSHOT-1685cf7b
[2024-09-29 11:33:47.577] [main] [INFO] CoordinatorClientFactory.createCoordinatorClient - Start to create coordinator clients from 10.1.0.19:19999
[2024-09-29 11:33:47.578] [main] [INFO] CoordinatorGrpcClient.<init> - Created CoordinatorGrpcClient, host:10.1.0.19, port:19999, maxRetryAttempts:3, usePlaintext:true
[2024-09-29 11:33:47.578] [main] [INFO] CoordinatorClientFactory.createCoordinatorClient - Add coordinator client Coordinator grpc client ref to 10.1.0.19:19999
[2024-09-29 11:33:47.578] [main] [INFO] CoordinatorClientFactory.createCoordinatorClient - Finish create coordinator clients Coordinator grpc client ref to 10.1.0.19:19999
[2024-09-29 11:33:47.580] [Grpc-4] [INFO] COORDINATOR_RPC_AUDIT_LOG.close - cmd=fetchClientConfV2	statusCode=SUCCESS	from=/10.1.0.19:49560	executionTimeUs=57	appId=N/A
[2024-09-29 11:33:47.581] [main] [INFO] CoordinatorGrpcRetryableClient.lambda$fetchClientConf$6 - Success to get conf from Coordinator grpc client ref to 10.1.0.19:19999
[2024-09-29 11:33:47.581] [main] [INFO] RssSparkShuffleUtils.applyDynamicClientConf - Use dynamic conf spark.rss.storage.type = MEMORY_LOCALFILE
[2024-09-29 11:33:47.582] [main] [INFO] RssShuffleManager.<init> - Check quorum config [1:1:1:true]
[2024-09-29 11:33:47.582] [main] [INFO] RssShuffleManager.<init> - Disable external shuffle service in RssShuffleManager.
[2024-09-29 11:33:47.582] [main] [INFO] RssShuffleManager.<init> - Disable local shuffle reader in RssShuffleManager.
[2024-09-29 11:33:47.582] [main] [INFO] RssShuffleManager.<init> - Disable shuffle data locality in RssShuffleManager.
[2024-09-29 11:33:47.583] [main] [INFO] RssShuffleManager.registerCoordinator - Start Registering coordinators 10.1.0.19:19999
[2024-09-29 11:33:47.583] [main] [INFO] CoordinatorClientFactory.createCoordinatorClient - Start to create coordinator clients from 10.1.0.19:19999
[2024-09-29 11:33:47.583] [main] [INFO] CoordinatorGrpcClient.<init> - Created CoordinatorGrpcClient, host:10.1.0.19, port:19999, maxRetryAttempts:3, usePlaintext:true
[2024-09-29 11:33:47.583] [main] [INFO] CoordinatorClientFactory.createCoordinatorClient - Add coordinator client Coordinator grpc client ref to 10.1.0.19:19999
[2024-09-29 11:33:47.583] [main] [INFO] CoordinatorClientFactory.createCoordinatorClient - Finish create coordinator clients Coordinator grpc client ref to 10.1.0.19:19999
[2024-09-29 11:33:47.583] [main] [INFO] RssShuffleManager.<init> - Rss data pusher is starting...
[2024-09-29 11:33:47.584] [main] [INFO] SparkEnv.logInfo - Registering BlockManagerMaster
[2024-09-29 11:33:47.584] [main] [INFO] BlockManagerMasterEndpoint.logInfo - Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
[2024-09-29 11:33:47.585] [main] [INFO] BlockManagerMasterEndpoint.logInfo - BlockManagerMasterEndpoint up
[2024-09-29 11:33:47.585] [main] [INFO] SparkEnv.logInfo - Registering BlockManagerMasterHeartbeat
[2024-09-29 11:33:47.585] [main] [INFO] DiskBlockManager.logInfo - Created local directory at /home/runner/work/incubator-uniffle/incubator-uniffle/integration-test/spark-common/target/tmp/blockmgr-0e5d6cb0-4678-48a9-bd05-3745fe115039
[2024-09-29 11:33:47.585] [main] [INFO] MemoryStore.logInfo - MemoryStore started with capacity 2.5 GiB
[2024-09-29 11:33:47.587] [main] [INFO] SparkEnv.logInfo - Registering OutputCommitCoordinator
[2024-09-29 11:33:47.605] [main] [INFO] Executor.logInfo - Starting executor ID driver on host fv-az1981-442.j4jimiuknh1ebg1ycgmowphjbf.bx.internal.cloudapp.net
[2024-09-29 11:33:47.606] [main] [INFO] Utils.logInfo - Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 44921.
[2024-09-29 11:33:47.606] [main] [INFO] NettyBlockTransferService.init - Server created on fv-az1981-442.j4jimiuknh1ebg1ycgmowphjbf.bx.internal.cloudapp.net:44921
[2024-09-29 11:33:47.606] [main] [INFO] BlockManager.logInfo - Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
[2024-09-29 11:33:47.607] [main] [INFO] BlockManagerMaster.logInfo - Registering BlockManager BlockManagerId(driver, fv-az1981-442.j4jimiuknh1ebg1ycgmowphjbf.bx.internal.cloudapp.net, 44921, None)
[2024-09-29 11:33:47.607] [dispatcher-BlockManagerMaster] [INFO] BlockManagerMasterEndpoint.logInfo - Registering block manager fv-az1981-442.j4jimiuknh1ebg1ycgmowphjbf.bx.internal.cloudapp.net:44921 with 2.5 GiB RAM, BlockManagerId(driver, fv-az1981-442.j4jimiuknh1ebg1ycgmowphjbf.bx.internal.cloudapp.net, 44921, None)
[2024-09-29 11:33:47.607] [main] [INFO] BlockManagerMaster.logInfo - Registered BlockManager BlockManagerId(driver, fv-az1981-442.j4jimiuknh1ebg1ycgmowphjbf.bx.internal.cloudapp.net, 44921, None)
[2024-09-29 11:33:47.608] [main] [INFO] BlockManager.logInfo - Initialized BlockManager: BlockManagerId(driver, fv-az1981-442.j4jimiuknh1ebg1ycgmowphjbf.bx.internal.cloudapp.net, 44921, None)
[2024-09-29 11:33:47.610] [main] [INFO] SharedState.logInfo - Setting hive.metastore.warehouse.dir ('null') to the value of spark.sql.warehouse.dir ('file:/home/runner/work/incubator-uniffle/incubator-uniffle/integration-test/spark-common/spark-warehouse').
[2024-09-29 11:33:47.610] [main] [INFO] SharedState.logInfo - Warehouse path is 'file:/home/runner/work/incubator-uniffle/incubator-uniffle/integration-test/spark-common/spark-warehouse'.
[2024-09-29 11:33:47.613] [main] [INFO] InMemoryFileIndex.logInfo - It took 0 ms to list leaf files for 1 paths.
[2024-09-29 11:33:47.626] [main] [INFO] FileSourceStrategy.logInfo - Pushed Filters: 
[2024-09-29 11:33:47.627] [main] [INFO] FileSourceStrategy.logInfo - Post-Scan Filters: 
[2024-09-29 11:33:47.627] [main] [INFO] FileSourceStrategy.logInfo - Output Data Schema: struct<value: string>
[2024-09-29 11:33:47.628] [main] [INFO] MemoryStore.logInfo - Block broadcast_0 stored as values in memory (estimated size 338.7 KiB, free 2.5 GiB)
[2024-09-29 11:33:47.635] [main] [INFO] MemoryStore.logInfo - Block broadcast_0_piece0 stored as bytes in memory (estimated size 29.2 KiB, free 2.5 GiB)
[2024-09-29 11:33:47.635] [dispatcher-BlockManagerMaster] [INFO] BlockManagerInfo.logInfo - Added broadcast_0_piece0 in memory on fv-az1981-442.j4jimiuknh1ebg1ycgmowphjbf.bx.internal.cloudapp.net:44921 (size: 29.2 KiB, free: 2.5 GiB)
[2024-09-29 11:33:47.635] [main] [INFO] SparkContext.logInfo - Created broadcast 0 from javaRDD at RepartitionTest.java:95
[2024-09-29 11:33:47.636] [main] [INFO] FileSourceScanExec.logInfo - Planning scan with bin packing, max size: 134217728 bytes, open cost is considered as scanning 4194304 bytes.
[2024-09-29 11:33:47.642] [main] [INFO] RssShuffleManager.registerShuffle - Generate application id used in rss: local-1727609627604_1727609627577
[2024-09-29 11:33:47.644] [Grpc-7] [INFO] CoordinatorGrpcService.getShuffleAssignments - Request of getShuffleAssignments for appId[local-1727609627604_1727609627577], shuffleId[0], partitionNum[5], partitionNumPerRange[1], replica[1], requiredTags[[ss_v5, GRPC]], requiredShuffleServerNumber[-1], faultyServerIds[0], stageId[-1], stageAttemptNumber[0], isReassign[false]
[2024-09-29 11:33:47.644] [Grpc-7] [WARN] PartitionBalanceAssignmentStrategy.assign - Can't get expected servers [9] and found only [1]
[2024-09-29 11:33:47.644] [Grpc-7] [INFO] CoordinatorGrpcService.logAssignmentResult - Shuffle Servers of assignment for appId[local-1727609627604_1727609627577], shuffleId[0] are [10.1.0.19-20014]
[2024-09-29 11:33:47.644] [Grpc-7] [INFO] COORDINATOR_RPC_AUDIT_LOG.close - cmd=getShuffleAssignments	statusCode=SUCCESS	from=/10.1.0.19:49562	executionTimeUs=445	appId=local-1727609627604_1727609627577	args{shuffleId=0, partitionNum=5, partitionNumPerRange=1, replica=1, requiredTags=[ss_v5, GRPC], requiredShuffleServerNumber=-1, faultyServerIds=[], stageId=-1, stageAttemptNumber=0, isReassign=false}
[2024-09-29 11:33:47.645] [main] [INFO] CoordinatorGrpcRetryableClient.lambda$getShuffleAssignments$4 - Success to get shuffle server assignment from Coordinator grpc client ref to 10.1.0.19:19999
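The warning "Can't get expected servers [9] and found only [1]" shows the assignment strategy degrading gracefully: it would prefer to balance the five single-partition ranges across more servers, but this test cluster registers only one shuffle server, so every range lands on 10.1.0.19-20014. A sketch of such a round-robin fallback, purely illustrative and not PartitionBalanceAssignmentStrategy's real code:

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class AssignmentFallback {
        // Spread partitions over however many servers actually registered;
        // with a single server, all partitions are assigned to it.
        public static Map<String, List<Integer>> assign(List<String> servers, int partitionNum) {
            Map<String, List<Integer>> assignment = new HashMap<>();
            for (int p = 0; p < partitionNum; p++) {
                String server = servers.get(p % servers.size());
                assignment.computeIfAbsent(server, k -> new ArrayList<>()).add(p);
            }
            return assignment;
        }
    }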
[2024-09-29 11:33:47.645] [main] [INFO] RssShuffleManagerBase.registerShuffleServers - Start to register shuffleId 0
[2024-09-29 11:33:47.646] [Grpc-0] [INFO] ShuffleServerGrpcService.registerShuffle - Get register request for appId[local-1727609627604_1727609627577], shuffleId[0], remoteStorage[] with 5 partition ranges. User: runner
[2024-09-29 11:33:47.646] [Grpc-0] [INFO] SHUFFLE_SERVER_RPC_AUDIT_LOG.close - cmd=registerShuffle	statusCode=SUCCESS	from=/10.1.0.19:47260	executionTimeUs=240	appId=local-1727609627604_1727609627577	shuffleId=0	args{remoteStoragePath=, user=runner, stageAttemptNumber=0}
[2024-09-29 11:33:47.646] [main] [INFO] RssShuffleManagerBase.registerShuffleServers - Finish register shuffleId 0 with 1 ms
[2024-09-29 11:33:47.647] [Grpc-9] [INFO] ApplicationManager.registerApplicationInfo - New application is registered: local-1727609627604_1727609627577
[2024-09-29 11:33:47.647] [Grpc-9] [INFO] COORDINATOR_RPC_AUDIT_LOG.close - cmd=regis…77	shuffleId=1	args{requireBufferId=629, timestamp=1727609686179, stageAttemptNumber=0, shuffleDataSize=1}
[2024-09-29 11:34:46.284] [Grpc-4] [INFO] ShuffleServerGrpcService.getLocalShuffleData - Successfully getShuffleData cost 0 ms for shuffle data with appId[local-1727609627604_1727609627577], shuffleId[0], partitionId[4]offset[125890316]length[2162554]
[2024-09-29 11:34:46.303] [Grpc-4] [INFO] SHUFFLE_SERVER_RPC_AUDIT_LOG.close - cmd=getLocalShuffleData	statusCode=SUCCESS	from=/10.1.0.19:47260	executionTimeUs=18885	appId=local-1727609627604_1727609627577	shuffleId=0	args{partitionId=4, partitionNumPerRange=1, partitionNum=5, offset=125890316, length=2162554}	return{len=2162554}
[2024-09-29 11:34:46.305] [Executor task launch worker for task 4.0 in stage 1.0 (TID 9)] [INFO] ShuffleServerGrpcClient.getShuffleData - GetShuffleData from 10.1.0.19:20014 for appId[local-1727609627604_1727609627577], shuffleId[0], partitionId[4] cost 21 ms
[2024-09-29 11:34:46.459] [Grpc-7] [INFO] ShuffleServerGrpcService.getLocalShuffleData - Successfully getShuffleData cost 0 ms for shuffle data with appId[local-1727609627604_1727609627577], shuffleId[0], partitionId[4]offset[128052870]length[2162773]
[2024-09-29 11:34:46.460] [Grpc-7] [INFO] SHUFFLE_SERVER_RPC_AUDIT_LOG.close - cmd=getLocalShuffleData	statusCode=SUCCESS	from=/10.1.0.19:47260	executionTimeUs=1386	appId=local-1727609627604_1727609627577	shuffleId=0	args{partitionId=4, partitionNumPerRange=1, partitionNum=5, offset=128052870, length=2162773}	return{len=2162773}
[2024-09-29 11:34:46.462] [Executor task launch worker for task 4.0 in stage 1.0 (TID 9)] [INFO] ShuffleServerGrpcClient.getShuffleData - GetShuffleData from 10.1.0.19:20014 for appId[local-1727609627604_1727609627577], shuffleId[0], partitionId[4] cost 3 ms
[2024-09-29 11:34:46.475] [ApplicationManager-0] [INFO] ApplicationManager.statusCheck - Start to check status for 1 applications.
[2024-09-29 11:34:46.617] [Grpc-9] [INFO] SHUFFLE_SERVER_RPC_AUDIT_LOG.close - cmd=requireBuffer	statusCode=SUCCESS	from=/10.1.0.19:47260	executionTimeUs=37	appId=local-1727609627604_1727609627577	shuffleId=1	args{requireSize=1555511, partitionIdsSize=1}	return{requireBufferId=630}
[2024-09-29 11:34:46.620] [Grpc-6] [INFO] SHUFFLE_SERVER_RPC_AUDIT_LOG.close - cmd=sendShuffleData	statusCode=SUCCESS	from=/10.1.0.19:47260	executionTimeUs=603	appId=local-1727609627604_1727609627577	shuffleId=1	args{requireBufferId=630, timestamp=1727609686617, stageAttemptNumber=0, shuffleDataSize=1}
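The requireBuffer/sendShuffleData pairs above (e.g. requireBufferId=630) trace the two-step write protocol: the writer first reserves server-side buffer space for requireSize bytes and gets back a requireBufferId, then ships the blocks tagged with that id, letting the server apply memory back-pressure before any data moves. A minimal sketch of that contract with illustrative signatures, not Uniffle's actual grpc service definition:

    public interface ShuffleServerWriteApi {
        // Reserve `requireSize` bytes of buffer for the given partitions;
        // returns a requireBufferId the follow-up send must present.
        long requireBuffer(String appId, int shuffleId, long requireSize, int[] partitionIds);

        // Ship blocks under a previously granted reservation; the server can
        // reject ids it never issued or that have expired.
        void sendShuffleData(String appId, int shuffleId, long requireBufferId, byte[][] blocks);
    }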
[2024-09-29 11:34:46.622] [Grpc-4] [INFO] ShuffleServerGrpcService.getLocalShuffleData - Successfully getShuffleData cost 0 ms for shuffle data with appId[local-1727609627604_1727609627577], shuffleId[0], partitionId[4]offset[130215643]length[2163406]
[2024-09-29 11:34:46.623] [Grpc-4] [INFO] SHUFFLE_SERVER_RPC_AUDIT_LOG.close - cmd=getLocalShuffleData	statusCode=SUCCESS	from=/10.1.0.19:47260	executionTimeUs=1455	appId=local-1727609627604_1727609627577	shuffleId=0	args{partitionId=4, partitionNumPerRange=1, partitionNum=5, offset=130215643, length=2163406}	return{len=2163406}
[2024-09-29 11:34:46.624] [Executor task launch worker for task 4.0 in stage 1.0 (TID 9)] [INFO] ShuffleServerGrpcClient.getShuffleData - GetShuffleData from 10.1.0.19:20014 for appId[local-1727609627604_1727609627577], shuffleId[0], partitionId[4] cost 3 ms
[2024-09-29 11:34:46.647] [Grpc-7] [INFO] ShuffleServerGrpcService.appHeartbeat - Get heartbeat from local-1727609627604_1727609627577
[2024-09-29 11:34:46.648] [client-heartbeat-3] [INFO] CoordinatorGrpcRetryableClient.lambda$sendAppHeartBeat$0 - Successfully send heartbeat to Coordinator grpc client ref to 10.1.0.19:19999
[2024-09-29 11:34:46.648] [rss-heartbeat-0] [INFO] RssShuffleManager.lambda$startHeartbeat$2 - Finish send heartbeat to coordinator and servers
[2024-09-29 11:34:46.779] [Grpc-9] [INFO] ShuffleServerGrpcService.getLocalShuffleData - Successfully getShuffleData cost 0 ms for shuffle data with appId[local-1727609627604_1727609627577], shuffleId[0], partitionId[4]offset[132379049]length[2162215]
[2024-09-29 11:34:46.781] [Grpc-9] [INFO] SHUFFLE_SERVER_RPC_AUDIT_LOG.close - cmd=getLocalShuffleData	statusCode=SUCCESS	from=/10.1.0.19:47260	executionTimeUs=2639	appId=local-1727609627604_1727609627577	shuffleId=0	args{partitionId=4, partitionNumPerRange=1, partitionNum=5, offset=132379049, length=2162215}	return{len=2162215}
[2024-09-29 11:34:46.783] [Executor task launch worker for task 4.0 in stage 1.0 (TID 9)] [INFO] ShuffleServerGrpcClient.getShuffleData - GetShuffleData from 10.1.0.19:20014 for appId[local-1727609627604_1727609627577], shuffleId[0], partitionId[4] cost 5 ms
[2024-09-29 11:34:46.884] [Grpc-3] [INFO] SHUFFLE_SERVER_RPC_AUDIT_LOG.close - cmd=requireBuffer	statusCode=SUCCESS	from=/10.1.0.19:47260	executionTimeUs=33	appId=local-1727609627604_1727609627577	shuffleId=1	args{requireSize=1509022, partitionIdsSize=1}	return{requireBufferId=631}
[2024-09-29 11:34:46.886] [Grpc-1] [INFO] SHUFFLE_SERVER_RPC_AUDIT_LOG.close - cmd=sendShuffleData	statusCode=SUCCESS	from=/10.1.0.19:47260	executionTimeUs=577	appId=local-1727609627604_1727609627577	shuffleId=1	args{requireBufferId=631, timestamp=1727609686884, stageAttemptNumber=0, shuffleDataSize=1}
[2024-09-29 11:34:46.911] [Grpc-7] [INFO] SHUFFLE_SERVER_RPC_AUDIT_LOG.close - cmd=requireBuffer	statusCode=SUCCESS	from=/10.1.0.19:47260	executionTimeUs=26	appId=local-1727609627604_1727609627577	shuffleId=1	args{requireSize=1513539, partitionIdsSize=1}	return{requireBufferId=632}
[2024-09-29 11:34:46.913] [Grpc-0] [INFO] SHUFFLE_SERVER_RPC_AUDIT_LOG.close - cmd=sendShuffleData	statusCode=SUCCESS	from=/10.1.0.19:47260	executionTimeUs=566	appId=local-1727609627604_1727609627577	shuffleId=1	args{requireBufferId=632, timestamp=1727609686911, stageAttemptNumber=0, shuffleDataSize=1}
[2024-09-29 11:34:46.957] [Executor task launch worker for task 4.0 in stage 1.0 (TID 9)] [INFO] ShuffleReadClientImpl.logStatics - Metrics for shuffleId[0], partitionId[4], read data cost 305 ms, copy data cost 0 ms, crc check cost 17 ms
[2024-09-29 11:34:46.957] [Executor task launch worker for task 4.0 in stage 1.0 (TID 9)] [INFO] ComposedClientReadHandler.logConsumedBlockInfo - Client read 78 blocks from [ShuffleServerInfo{host[10.1.0.19], grpc port[20014]}], Consumed[ hot:16 warm:62 cold:0 frozen:0 ], Skipped[ hot:0 warm:0 cold:0 frozen:0 ]
[2024-09-29 11:34:46.957] [Executor task launch worker for task 4.0 in stage 1.0 (TID 9)] [INFO] ComposedClientReadHandler.logConsumedBlockInfo - Client read 161936581 bytes from [ShuffleServerInfo{host[10.1.0.19], grpc port[20014]}], Consumed[ hot:27395317 warm:134541264 cold:0 frozen:0 ], Skipped[ hot:0 warm:0 cold:0 frozen:0 ]
[2024-09-29 11:34:46.957] [Executor task launch worker for task 4.0 in stage 1.0 (TID 9)] [INFO] ComposedClientReadHandler.logConsumedBlockInfo - Client read 312939986 uncompressed bytes from [ShuffleServerInfo{host[10.1.0.19], grpc port[20014]}], Consumed[ hot:52899953 warm:260040033 cold:0 frozen:0 ], Skipped[ hot:0 warm:0 cold:0 frozen:0 ]
[2024-09-29 11:34:46.957] [Executor task launch worker for task 4.0 in stage 1.0 (TID 9)] [INFO] RssShuffleDataIterator.hasNext - Fetch 161936581 bytes cost 322 ms and 4 ms to serialize, 245 ms to decompress with unCompressionLength[312939986]
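The ComposedClientReadHandler summary above splits the read by storage tier (hot/warm/cold/frozen); with the MEMORY_LOCALFILE storage type this run uses, hot plausibly maps to server memory and warm to local files, which matches the zero cold/frozen counters. A minimal sketch of a tier-composed read under that assumption, names illustrative:

    import java.util.List;
    import java.util.Optional;
    import java.util.function.Supplier;

    public class ComposedRead {
        // Try handlers in tier order (hot, warm, cold, frozen); the first tier
        // holding the block wins, which is why consumption is counted per tier.
        public static Optional<byte[]> read(List<Supplier<Optional<byte[]>>> tiers) {
            for (Supplier<Optional<byte[]>> tier : tiers) {
                Optional<byte[]> data = tier.get();
                if (data.isPresent()) {
                    return data;
                }
            }
            return Optional.empty();
        }
    }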
[2024-09-29 11:34:46.966] [Executor task launch worker for task 4.0 in stage 1.0 (TID 9)] [INFO] WriteBufferManager.clear - Flush total buffer for shuffleId[1] with allocated[23068672], dataSize[6228667], memoryUsed[6815744], number of blocks[5], flush ratio[1.0]
[2024-09-29 11:34:46.966] [Grpc-2] [INFO] SHUFFLE_SERVER_RPC_AUDIT_LOG.close - cmd=requireBuffer	statusCode=SUCCESS	from=/10.1.0.19:47260	executionTimeUs=36	appId=local-1727609627604_1727609627577	shuffleId=1	args{requireSize=2280357, partitionIdsSize=5}	return{requireBufferId=633}
[2024-09-29 11:34:46.969] [Grpc-3] [INFO] SHUFFLE_SERVER_RPC_AUDIT_LOG.close - cmd=sendShuffleData	statusCode=SUCCESS	from=/10.1.0.19:47260	executionTimeUs=878	appId=local-1727609627604_1727609627577	shuffleId=1	args{requireBufferId=633, timestamp=1727609686966, stageAttemptNumber=0, shuffleDataSize=5}
[2024-09-29 11:34:46.969] [Executor task launch worker for task 4.0 in stage 1.0 (TID 9)] [INFO] RssShuffleWriter.writeImpl - Finish write shuffle for appId[local-1727609627604_1727609627577], shuffleId[1], taskId[9_0] with write 5050 ms, include checkSendResult[3], commit[0], WriteBufferManager cost copyTime[8], writeTime[5038], serializeTime[3956], compressTime[220], estimateTime[0], requireMemoryTime[0], uncompressedDataLen[173998407]
[2024-09-29 11:34:46.969] [Grpc-1] [INFO] ShuffleServerGrpcService.reportShuffleResult - Accepted blockIds report for 45 blocks across 5 partitions as shuffle result for task appId[local-1727609627604_1727609627577], shuffleId[1], taskAttemptId[16]
[2024-09-29 11:34:46.969] [Grpc-1] [INFO] SHUFFLE_SERVER_RPC_AUDIT_LOG.close - cmd=reportShuffleResult	statusCode=SUCCESS	from=/10.1.0.19:47260	executionTimeUs=134	appId=local-1727609627604_1727609627577	shuffleId=1	args{taskAttemptId=16, bitmapNum=1, partitionToBlockIdsSize=5}	context{updatedBlockCount=45, expectedBlockCount=45}
[2024-09-29 11:34:46.969] [Executor task launch worker for task 4.0 in stage 1.0 (TID 9)] [INFO] ShuffleWriteClientImpl.reportShuffleResult - Report shuffle result to ShuffleServerInfo{host[10.1.0.19], grpc port[20014]} for appId[local-1727609627604_1727609627577], shuffleId[1] successfully
[2024-09-29 11:34:46.969] [Executor task launch worker for task 4.0 in stage 1.0 (TID 9)] [INFO] RssShuffleWriter.stop - Report shuffle result for task[16] with bitmapNum[1] cost 0 ms
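[Editor's note] The .close suffix on SHUFFLE_SERVER_RPC_AUDIT_LOG.close in the entries above suggests the audit entry is an AutoCloseable context: it is opened when the RPC starts, the handler attaches optional args{...}/return{...}/context{...} key-values (e.g. updatedBlockCount and expectedBlockCount for reportShuffleResult), and one tab-separated line is emitted on close. Below is a minimal sketch of that pattern; the class and method names (RpcAuditContext, withArg, withReturn, withContext) are hypothetical illustrations, not Uniffle's actual API.

import java.io.Closeable;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

// Hypothetical sketch of an AutoCloseable RPC audit context: it captures the
// command, caller, and start time on creation, lets the handler attach
// args/return/context key-values, and emits one tab-separated audit line on close.
public final class RpcAuditContext implements Closeable {
    private final String cmd;
    private final String from;
    private final long startNanos = System.nanoTime();
    private String statusCode = "SUCCESS";
    private final Map<String, Object> args = new LinkedHashMap<>();
    private final Map<String, Object> ret = new LinkedHashMap<>();
    private final Map<String, Object> context = new LinkedHashMap<>();

    public RpcAuditContext(String cmd, String from) {
        this.cmd = cmd;
        this.from = from;
    }

    public RpcAuditContext withArg(String k, Object v) { args.put(k, v); return this; }
    public RpcAuditContext withReturn(String k, Object v) { ret.put(k, v); return this; }
    public RpcAuditContext withContext(String k, Object v) { context.put(k, v); return this; }
    public void setStatusCode(String statusCode) { this.statusCode = statusCode; }

    // Renders a segment such as "\targs{requireBufferId=632, timestamp=...}";
    // empty segments (e.g. context on a failed call) are omitted entirely.
    private static String render(String name, Map<String, Object> kv) {
        if (kv.isEmpty()) return "";
        return "\t" + name + kv.entrySet().stream()
                .map(e -> e.getKey() + "=" + e.getValue())
                .collect(Collectors.joining(", ", "{", "}"));
    }

    @Override
    public void close() {
        long micros = (System.nanoTime() - startNanos) / 1000;
        System.out.println("cmd=" + cmd + "\tstatusCode=" + statusCode
                + "\tfrom=" + from + "\texecutionTimeUs=" + micros
                + render("args", args) + render("return", ret) + render("context", context));
    }
}

A handler would wrap its body in try (RpcAuditContext audit = new RpcAuditContext("reportShuffleResult", "/10.1.0.19:47260")) { ... } so the entry is emitted even on failure paths, like the INTERNAL_ERROR line further below.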
[2024-09-29 11:34:46.970] [Executor task launch worker for task 4.0 in stage 1.0 (TID 9)] [INFO] Executor.logInfo - Finished task 4.0 in stage 1.0 (TID 9). 1293 bytes result sent to driver
[2024-09-29 11:34:46.970] [task-result-getter-1] [INFO] TaskSetManager.logInfo - Finished task 4.0 in stage 1.0 (TID 9) in 12278 ms on fv-az1981-442.j4jimiuknh1ebg1ycgmowphjbf.bx.internal.cloudapp.net (executor driver) (5/5)
[2024-09-29 11:34:46.970] [task-result-getter-1] [INFO] TaskSchedulerImpl.logInfo - Removed TaskSet 1.0, whose tasks have all completed, from pool 
[2024-09-29 11:34:46.970] [dag-scheduler-event-loop] [INFO] DAGScheduler.logInfo - ShuffleMapStage 1 (repartition at RepartitionTest.java:97) finished in 35.165 s
[2024-09-29 11:34:46.970] [dag-scheduler-event-loop] [INFO] DAGScheduler.logInfo - looking for newly runnable stages
[2024-09-29 11:34:46.970] [dag-scheduler-event-loop] [INFO] DAGScheduler.logInfo - running: Set()
[2024-09-29 11:34:46.970] [dag-scheduler-event-loop] [INFO] DAGScheduler.logInfo - waiting: Set(ResultStage 2)
[2024-09-29 11:34:46.970] [dag-scheduler-event-loop] [INFO] DAGScheduler.logInfo - failed: Set()
[2024-09-29 11:34:46.971] [dag-scheduler-event-loop] [INFO] DAGScheduler.logInfo - Submitting ResultStage 2 (MapPartitionsRDD[13] at sortByKey at RepartitionTest.java:99), which has no missing parents
[2024-09-29 11:34:46.971] [dag-scheduler-event-loop] [INFO] MemoryStore.logInfo - Block broadcast_5 stored as values in memory (estimated size 7.5 KiB, free 2.5 GiB)
[2024-09-29 11:34:46.973] [dag-scheduler-event-loop] [INFO] MemoryStore.logInfo - Block broadcast_5_piece0 stored as bytes in memory (estimated size 4.0 KiB, free 2.5 GiB)
[2024-09-29 11:34:46.973] [dispatcher-BlockManagerMaster] [INFO] BlockManagerInfo.logInfo - Added broadcast_5_piece0 in memory on fv-az1981-442.j4jimiuknh1ebg1ycgmowphjbf.bx.internal.cloudapp.net:44921 (size: 4.0 KiB, free: 2.5 GiB)
[2024-09-29 11:34:46.973] [dag-scheduler-event-loop] [INFO] SparkContext.logInfo - Created broadcast 5 from broadcast at DAGScheduler.scala:1388
[2024-09-29 11:34:46.973] [dag-scheduler-event-loop] [INFO] DAGScheduler.logInfo - Submitting 5 missing tasks from ResultStage 2 (MapPartitionsRDD[13] at sortByKey at RepartitionTest.java:99) (first 15 tasks are for partitions Vector(0, 1, 2, 3, 4))
[2024-09-29 11:34:46.973] [dag-scheduler-event-loop] [INFO] TaskSchedulerImpl.logInfo - Adding task set 2.0 with 5 tasks resource profile 0
[2024-09-29 11:34:46.974] [dispatcher-event-loop-0] [INFO] TaskSetManager.logInfo - Starting task 0.0 in stage 2.0 (TID 10) (fv-az1981-442.j4jimiuknh1ebg1ycgmowphjbf.bx.internal.cloudapp.net, executor driver, partition 0, ANY, 4271 bytes) taskResourceAssignments Map()
[2024-09-29 11:34:46.974] [dispatcher-event-loop-0] [INFO] TaskSetManager.logInfo - Starting task 1.0 in stage 2.0 (TID 11) (fv-az1981-442.j4jimiuknh1ebg1ycgmowphjbf.bx.internal.cloudapp.net, executor driver, partition 1, ANY, 4271 bytes) taskResourceAssignments Map()
[2024-09-29 11:34:46.974] [dispatcher-event-loop-0] [INFO] TaskSetManager.logInfo - Starting task 2.0 in stage 2.0 (TID 12) (fv-az1981-442.j4jimiuknh1ebg1ycgmowphjbf.bx.internal.cloudapp.net, executor driver, partition 2, ANY, 4271 bytes) taskResourceAssignments Map()
[2024-09-29 11:34:46.974] [dispatcher-event-loop-0] [INFO] TaskSetManager.logInfo - Starting task 3.0 in stage 2.0 (TID 13) (fv-az1981-442.j4jimiuknh1ebg1ycgmowphjbf.bx.internal.cloudapp.net, executor driver, partition 3, ANY, 4271 bytes) taskResourceAssignments Map()
[2024-09-29 11:34:46.974] [Executor task launch worker for task 0.0 in stage 2.0 (TID 10)] [INFO] Executor.logInfo - Running task 0.0 in stage 2.0 (TID 10)
[2024-09-29 11:34:46.974] [Executor task launch worker for task 1.0 in stage 2.0 (TID 11)] [INFO] Executor.logInfo - Running task 1.0 in stage 2.0 (TID 11)
[2024-09-29 11:34:46.974] [Executor task launch worker for task 2.0 in stage 2.0 (TID 12)] [INFO] Executor.logInfo - Running task 2.0 in stage 2.0 (TID 12)
[2024-09-29 11:34:46.975] [Executor task launch worker for task 2.0 in stage 2.0 (TID 12)] [INFO] RssShuffleManager.getReader - Get taskId cost 0 ms, and request expected blockIds from 5 tasks for shuffleId[1], partitionId[2, 3]
[2024-09-29 11:34:46.975] [Executor task launch worker for task 0.0 in stage 2.0 (TID 10)] [INFO] RssShuffleManager.getReader - Get taskId cost 0 ms, and request expected blockIds from 5 tasks for shuffleId[1], partitionId[0, 1]
[2024-09-29 11:34:46.975] [Executor task launch worker for task 1.0 in stage 2.0 (TID 11)] [INFO] RssShuffleManager.getReader - Get taskId cost 0 ms, and request expected blockIds from 5 tasks for shuffleId[1], partitionId[1, 2]
[2024-09-29 11:34:46.975] [Executor task launch worker for task 3.0 in stage 2.0 (TID 13)] [INFO] Executor.logInfo - Running task 3.0 in stage 2.0 (TID 13)
[2024-09-29 11:34:46.976] [Grpc-9] [INFO] SHUFFLE_SERVER_RPC_AUDIT_LOG.close - cmd=getShuffleResultForMultiPart	statusCode=SUCCESS	from=/10.1.0.19:47260	executionTimeUs=249	appId=local-1727609627604_1727609627577	shuffleId=1	args{partitionsListSize=1, blockIdLayout=blockIdLayout[seq: 21 bits, part: 20 bits, task: 22 bits]}	return{serializedBlockIdsBytes=425}	context{bitmap[0].<size,byte>=<225,1812>, partitionBlockCount=70}
[2024-09-29 11:34:46.976] [Grpc-0] [INFO] SHUFFLE_SERVER_RPC_AUDIT_LOG.close - cmd=getShuffleResultForMultiPart	statusCode=SUCCESS	from=/10.1.0.19:47260	executionTimeUs=255	appId=local-1727609627604_1727609627577	shuffleId=1	args{partitionsListSize=1, blockIdLayout=blockIdLayout[seq: 21 bits, part: 20 bits, task: 22 bits]}	return{serializedBlockIdsBytes=215}	context{bitmap[0].<size,byte>=<225,1812>, partitionBlockCount=35}
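[Editor's note] The blockIdLayout[seq: 21 bits, part: 20 bits, task: 22 bits] printed in these audit entries describes how the three components share the 63 usable bits of a long block id. Assuming the high-to-low packing order implied by that printout (sequence number, then partition id, then task attempt id), the arithmetic looks like the sketch below; the ordering and helper names are assumptions for illustration, not a verified Uniffle contract.

// Sketch of packing/unpacking a 63-bit block id under the layout logged above.
// The high-to-low field order is an assumption based on the printed layout.
public final class BlockIdLayoutSketch {
    static final int SEQ_BITS = 21, PART_BITS = 20, TASK_BITS = 22;

    static long pack(long seq, long part, long task) {
        return (seq << (PART_BITS + TASK_BITS)) | (part << TASK_BITS) | task;
    }

    static long partitionId(long blockId) {
        return (blockId >> TASK_BITS) & ((1L << PART_BITS) - 1);
    }

    public static void main(String[] args) {
        long id = pack(3, 4, 16); // sequence 3, partition 4, task attempt 16
        System.out.println(id + " -> partition " + partitionId(id));
    }
}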
[2024-09-29 11:34:46.976] [Grpc-5] [ERROR] ShuffleServerGrpcService.getShuffleResultForMultiPart - Error happened when get shuffle result for appId[local-1727609627604_1727609627577], shuffleId[1], partitions[2]
java.lang.ArrayIndexOutOfBoundsException: 7
	at org.roaringbitmap.longlong.Roaring64NavigableMap.ensureOne(Roaring64NavigableMap.java:676)
	at org.roaringbitmap.longlong.Roaring64NavigableMap.ensureCumulatives(Roaring64NavigableMap.java:567)
	at org.roaringbitmap.longlong.Roaring64NavigableMap.getLongCardinality(Roaring64NavigableMap.java:278)
	at org.apache.uniffle.server.ShuffleTaskManager.lambda$getFinishedBlockIds$9(ShuffleTaskManager.java:657)
	at java.util.Optional.ifPresent(Optional.java:159)
	at org.apache.uniffle.server.ShuffleTaskManager.getFinishedBlockIds(ShuffleTaskManager.java:652)
	at org.apache.uniffle.server.ShuffleServerGrpcService.getShuffleResultForMultiPart(ShuffleServerGrpcService.java:985)
	at org.apache.uniffle.proto.ShuffleServerGrpc$MethodHandlers.invoke(ShuffleServerGrpc.java:1180)
	at io.grpc.stub.ServerCalls$UnaryServerCallHandler$UnaryServerCallListener.onHalfClose(ServerCalls.java:182)
	at io.grpc.PartialForwardingServerCallListener.onHalfClose(PartialForwardingServerCallListener.java:35)
	at io.grpc.ForwardingServerCallListener.onHalfClose(ForwardingServerCallListener.java:23)
	at io.grpc.ForwardingServerCallListener$SimpleForwardingServerCallListener.onHalfClose(ForwardingServerCallListener.java:40)
	at org.apache.uniffle.common.rpc.ClientContextServerInterceptor$1.onHalfClose(ClientContextServerInterceptor.java:63)
	at io.grpc.PartialForwardingServerCallListener.onHalfClose(PartialForwardingServerCallListener.java:35)
	at io.grpc.ForwardingServerCallListener.onHalfClose(ForwardingServerCallListener.java:23)
	at io.grpc.internal.ServerCallImpl$ServerStreamListenerImpl.halfClosed(ServerCallImpl.java:356)
	at io.grpc.internal.ServerImpl$JumpToApplicationThreadServerStreamListener$1HalfClosed.runInContext(ServerImpl.java:861)
	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:750)
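[Editor's note] This ArrayIndexOutOfBoundsException surfaces inside Roaring64NavigableMap's internal index while getLongCardinality rebuilds its cumulative counts. Roaring64NavigableMap is documented as not thread-safe, so a cardinality query racing with concurrent addLong calls can observe a partially updated index and fail exactly this way. The sketch below reproduces that class of race, assuming org.roaringbitmap is on the classpath; whether this is the precise race in ShuffleTaskManager.getFinishedBlockIds is an assumption the log alone cannot confirm.

import org.roaringbitmap.longlong.Roaring64NavigableMap;

// Minimal sketch of the unsynchronized read/write pattern that can make
// Roaring64NavigableMap throw from ensureOne/ensureCumulatives: the bitmap is
// not thread-safe, so a cardinality query racing with adds may observe a
// partially updated internal index.
public final class BitmapRaceSketch {
    public static void main(String[] args) throws InterruptedException {
        Roaring64NavigableMap bitmap = new Roaring64NavigableMap();
        Thread writer = new Thread(() -> {
            for (long i = 0; i < 10_000_000L; i++) {
                bitmap.addLong(i << 20); // spread values across internal buckets
            }
        });
        Thread reader = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    bitmap.getLongCardinality(); // may throw without external locking
                } catch (RuntimeException e) {
                    System.err.println("race observed: " + e);
                    return;
                }
            }
        });
        writer.start();
        reader.start();
        writer.join();
        reader.interrupt();
    }
}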
[2024-09-29 11:34:46.976] [Executor task launch worker for task 3.0 in stage 2.0 (TID 13)] [INFO] RssShuffleManager.getReader - Get taskId cost 0 ms, and request expected blockIds from 5 tasks for shuffleId[1], partitionId[3, 4]
[2024-09-29 11:34:46.976] [Grpc-5] [INFO] SHUFFLE_SERVER_RPC_AUDIT_LOG.close - cmd=getShuffleResultForMultiPart	statusCode=INTERNAL_ERROR	from=/10.1.0.19:47260	executionTimeUs=580	appId=local-1727609627604_1727609627577	shuffleId=1	args{partitionsListSize=1, blockIdLayout=blockIdLayout[seq: 21 bits, part: 20 bits, task: 22 bits]}	return{serializedBlockIdsBytes=0}
[2024-09-29 11:34:46.976] [Executor task launch worker for task 1.0 in stage 2.0 (TID 11)] [INFO] RssShuffleManager.getReaderImpl - Get shuffle blockId cost 1 ms, and get 35 blockIds for shuffleId[1], startPartition[1], endPartition[2]
[2024-09-29 11:34:46.976] [Executor task launch worker for task 0.0 in stage 2.0 (TID 10)] [INFO] RssShuffleManager.getReaderImpl - Get shuffle blockId cost 1 ms, and get 70 blockIds for shuffleId[1], startPartition[0], endPartition[1]
[2024-09-29 11:34:46.976] [Executor task launch worker for task 1.0 in stage 2.0 (TID 11)] [INFO] RssShuffleManager.getReaderImpl - Shuffle reader using remote storage Empty Remote Storage
[2024-09-29 11:34:46.976] [Executor task launch worker for task 0.0 in stage 2.0 (TID 10)] [INFO] RssShuffleManager.getReaderImpl - Shuffle reader using remote storage Empty Remote Storage
[2024-09-29 11:34:46.976] [Executor task launch worker for task 2.0 in stage 2.0 (TID 12)] [ERROR] ShuffleServerGrpcClient.getShuffleResultForMultiPart - Can't get shuffle result from 10.1.0.19:20014 for [appId=local-1727609627604_1727609627577, shuffleId=1, errorMsg:7
[2024-09-29 11:34:46.977] [Executor task launch worker for task 2.0 in stage 2.0 (TID 12)] [WARN] ShuffleWriteClientImpl.getShuffleResultForMultiPart - Get shuffle result is failed from ShuffleServerInfo{host[10.1.0.19], grpc port[20014]} for appId[local-1727609627604_1727609627577], shuffleId[1], requestPartitions[2]
org.apache.uniffle.common.exception.RssFetchFailedException: Can't get shuffle result from 10.1.0.19:20014 for [appId=local-1727609627604_1727609627577, shuffleId=1, errorMsg:7
	at org.apache.uniffle.client.impl.grpc.ShuffleServerGrpcClient.getShuffleResultForMultiPart(ShuffleServerGrpcClient.java:913)
	at org.apache.uniffle.client.impl.ShuffleWriteClientImpl.getShuffleResultForMultiPart(ShuffleWriteClientImpl.java:860)
	at org.apache.spark.shuffle.RssShuffleManager.getShuffleResultForMultiPart(RssShuffleManager.java:1031)
	at org.apache.spark.shuffle.RssShuffleManager.getReaderImpl(RssShuffleManager.java:653)
	at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:586)
	at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:558)
	at org.apache.spark.rdd.ShuffledRDD.compute(ShuffledRDD.scala:106)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:497)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1439)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:500)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:750)
[2024-09-29 11:34:46.977] [Executor task launch worker for task 2.0 in stage 2.0 (TID 12)] [ERROR] ShuffleWriteClientImpl.getShuffleResultForMultiPart - Failed to meet replica requirement: PartitionDataReplicaRequirementTracking{shuffleId=1, inventory={0={0=[ShuffleServerInfo{host[10.1.0.19], grpc port[20014]}]}, 1={0=[ShuffleServerInfo{host[10.1.0.19], grpc port[20014]}]}, 2={0=[ShuffleServerInfo{host[10.1.0.19], grpc port[20014]}]}, 3={0=[ShuffleServerInfo{host[10.1.0.19], grpc port[20014]}]}, 4={0=[ShuffleServerInfo{host[10.1.0.19], grpc port[20014]}]}}, succeedList={}}
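[Editor's note] The tracker dump shows a one-replica inventory for partitions 0 through 4 and an empty succeedList: the single getShuffleResultForMultiPart attempt for partition 2 failed, so no server satisfied the read requirement. A hedged sketch of that decision rule follows; the names and shapes here are illustrative, not Uniffle's PartitionDataReplicaRequirementTracking API.

import java.util.List;
import java.util.Map;
import java.util.Set;

// Hedged sketch of the replica-requirement decision implied by the dump above:
// a partition's read succeeds only if at least `replicaRead` of its assigned
// servers returned a shuffle result.
final class ReplicaRequirementSketch {
    static boolean isSatisfied(
            Map<Integer, List<String>> inventory, // partition -> assigned servers
            Map<Integer, Set<String>> succeeded,  // partition -> servers that answered
            int partition,
            int replicaRead) {
        Set<String> ok = succeeded.getOrDefault(partition, Set.of());
        long hits = inventory.getOrDefault(partition, List.of()).stream()
                .filter(ok::contains)
                .count();
        return hits >= replicaRead;
    }
}

With replicaRead = 1 and an empty succeeded map, isSatisfied returns false for partition 2, matching the "Failed to meet replica requirement" line above.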
[2024-09-29 11:34:46.977] [Executor task launch worker for task 2.0 in stage 2.0 (TID 12)] [ERROR] Executor.logError - Exception in task 2.0 in stage 2.0 (TID 12)
org.apache.uniffle.common.exception.RssFetchFailedException: Get shuffle result is failed for appId[local-1727609627604_1727609627577], shuffleId[1]
	at org.apache.uniffle.client.impl.ShuffleWriteClientImpl.getShuffleResultForMultiPart(ShuffleWriteClientImpl.java:889)
	at org.apache.spark.shuffle.RssShuffleManager.getShuffleResultForMultiPart(RssShuffleManager.java:1031)
	at org.apache.spark.shuffle.RssShuffleManager.getReaderImpl(RssShuffleManager.java:653)
	at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:586)
	at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:558)
	at org.apache.spark.rdd.ShuffledRDD.compute(ShuffledRDD.scala:106)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:497)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1439)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:500)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:750)
[2024-09-29 11:34:46.979] [dispatcher-event-loop-2] [INFO] TaskSetManager.logInfo - Starting task 4.0 in stage 2.0 (TID 14) (fv-az1981-442.j4jimiuknh1ebg1ycgmowphjbf.bx.internal.cloudapp.net, executor driver, partition 4, ANY, 4271 bytes) taskResourceAssignments Map()
[2024-09-29 11:34:46.979] [task-result-getter-2] [WARN] TaskSetManager.logWarning - Lost task 2.0 in stage 2.0 (TID 12) (fv-az1981-442.j4jimiuknh1ebg1ycgmowphjbf.bx.internal.cloudapp.net executor driver): org.apache.uniffle.common.exception.RssFetchFailedException: Get shuffle result is failed for appId[local-1727609627604_1727609627577], shuffleId[1]
	at org.apache.uniffle.client.impl.ShuffleWriteClientImpl.getShuffleResultForMultiPart(ShuffleWriteClientImpl.java:889)
	at org.apache.spark.shuffle.RssShuffleManager.getShuffleResultForMultiPart(RssShuffleManager.java:1031)
	at org.apache.spark.shuffle.RssShuffleManager.getReaderImpl(RssShuffleManager.java:653)
	at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:586)
	at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:558)
	at org.apache.spark.rdd.ShuffledRDD.compute(ShuffledRDD.scala:106)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:497)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1439)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:500)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:750)

[2024-09-29 11:34:46.980] [Grpc-1] [INFO] SHUFFLE_SERVER_RPC_AUDIT_LOG.close - cmd=getShuffleResultForMultiPart	statusCode=SUCCESS	from=/10.1.0.19:47260	executionTimeUs=141	appId=local-1727609627604_1727609627577	shuffleId=1	args{partitionsListSize=1, blockIdLayout=blockIdLayout[seq: 21 bits, part: 20 bits, task: 22 bits]}	return{serializedBlockIdsBytes=245}	context{bitmap[0].<size,byte>=<225,1812>, partitionBlockCount=40}
[2024-09-29 11:34:46.980] [task-result-getter-2] [ERROR] TaskSetManager.logError - Task 2 in stage 2.0 failed 1 times; aborting job
[2024-09-29 11:34:46.980] [Executor task launch worker for task 3.0 in stage 2.0 (TID 13)] [INFO] RssShuffleManager.getReaderImpl - Get shuffle blockId cost 4 ms, and get 40 blockIds for shuffleId[1], startPartition[3], endPartition[4]
[2024-09-29 11:34:46.980] [Executor task launch worker for task 3.0 in stage 2.0 (TID 13)] [INFO] RssShuffleManager.getReaderImpl - Shuffle reader using remote storage Empty Remote Storage
[2024-09-29 11:34:46.986] [dag-scheduler-event-loop] [INFO] TaskSchedulerImpl.logInfo - Cancelling stage 2
[2024-09-29 11:34:46.986] [Executor task launch worker for task 4.0 in stage 2.0 (TID 14)] [INFO] Executor.logInfo - Running task 4.0 in stage 2.0 (TID 14)
[2024-09-29 11:34:46.986] [dag-scheduler-event-loop] [INFO] TaskSchedulerImpl.logInfo - Killing all running tasks in stage 2: Stage cancelled
[2024-09-29 11:34:46.987] [Executor task launch worker for task 4.0 in stage 2.0 (TID 14)] [INFO] RssShuffleManager.getReader - Get taskId cost 0 ms, and request expected blockIds from 5 tasks for shuffleId[1], partitionId[4, 5]
[2024-09-29 11:34:46.988] [Grpc-9] [INFO] SHUFFLE_SERVER_RPC_AUDIT_LOG.close - cmd=getShuffleResultForMultiPart	statusCode=SUCCESS	from=/10.1.0.19:47260	executionTimeUs=160	appId=local-1727609627604_1727609627577	shuffleId=1	args{partitionsListSize=1, blockIdLayout=blockIdLayout[seq: 21 bits, part: 20 bits, task: 22 bits]}	return{serializedBlockIdsBytes=245}	context{bitmap[0].<size,byte>=<225,1812>, partitionBlockCount=40}
[2024-09-29 11:34:46.988] [Executor task launch worker for task 4.0 in stage 2.0 (TID 14)] [INFO] RssShuffleManager.getReaderImpl - Get shuffle blockId cost 1 ms, and get 40 blockIds for shuffleId[1], startPartition[4], endPartition[5]
[2024-09-29 11:34:46.988] [Executor task launch worker for task 4.0 in stage 2.0 (TID 14)] [INFO] RssShuffleManager.getReaderImpl - Shuffle reader using remote storage Empty Remote Storage
[2024-09-29 11:34:46.990] [dag-scheduler-event-loop] [INFO] TaskSchedulerImpl.logInfo - Stage 2 was cancelled
[2024-09-29 11:34:46.990] [dispatcher-event-loop-3] [INFO] Executor.logInfo - Executor is trying to kill task 3.0 in stage 2.0 (TID 13), reason: Stage cancelled
[2024-09-29 11:34:46.991] [dispatcher-event-loop-3] [INFO] Executor.logInfo - Executor is trying to kill task 0.0 in stage 2.0 (TID 10), reason: Stage cancelled
[2024-09-29 11:34:46.991] [dispatcher-event-loop-3] [INFO] Executor.logInfo - Executor is trying to kill task 4.0 in stage 2.0 (TID 14), reason: Stage cancelled
[2024-09-29 11:34:46.991] [dispatcher-event-loop-3] [INFO] Executor.logInfo - Executor is trying to kill task 1.0 in stage 2.0 (TID 11), reason: Stage cancelled
[2024-09-29 11:34:46.991] [dag-scheduler-event-loop] [INFO] DAGScheduler.logInfo - ResultStage 2 (sortByKey at RepartitionTest.java:99) failed in 0.020 s due to Job aborted due to stage failure: Task 2 in stage 2.0 failed 1 times, most recent failure: Lost task 2.0 in stage 2.0 (TID 12) (fv-az1981-442.j4jimiuknh1ebg1ycgmowphjbf.bx.internal.cloudapp.net executor driver): org.apache.uniffle.common.exception.RssFetchFailedException: Get shuffle result is failed for appId[local-1727609627604_1727609627577], shuffleId[1]
	at org.apache.uniffle.client.impl.ShuffleWriteClientImpl.getShuffleResultForMultiPart(ShuffleWriteClientImpl.java:889)
	at org.apache.spark.shuffle.RssShuffleManager.getShuffleResultForMultiPart(RssShuffleManager.java:1031)
	at org.apache.spark.shuffle.RssShuffleManager.getReaderImpl(RssShuffleManager.java:653)
	at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:586)
	at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:558)
	at org.apache.spark.rdd.ShuffledRDD.compute(ShuffledRDD.scala:106)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:497)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1439)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:500)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:750)

Driver stacktrace:
[2024-09-29 11:34:46.995] [main] [INFO] DAGScheduler.logInfo - Job 0 failed: sortByKey at RepartitionTest.java:99, took 59.339903 s

Check failure on line 0 in org.apache.uniffle.test.RepartitionWithHadoopHybridStorageRssTest

2 out of 10 runs with error: resultCompareTest (org.apache.uniffle.test.RepartitionWithHadoopHybridStorageRssTest)

artifacts/integration-reports-spark3.4/integration-test/spark-common/target/surefire-reports/TEST-org.apache.uniffle.test.RepartitionWithHadoopHybridStorageRssTest.xml [took 22s]
artifacts/integration-reports-spark3.5-scala2.13/integration-test/spark-common/target/surefire-reports/TEST-org.apache.uniffle.test.RepartitionWithHadoopHybridStorageRssTest.xml [took 36s]
Raw output
Job aborted due to stage failure: Task 3 in stage 1.0 failed 1 times, most recent failure: Lost task 3.0 in stage 1.0 (TID 7) (fv-az889-678.0uloxy4xllhuhla4wvomrgtxqg.cx.internal.cloudapp.net executor driver): org.apache.uniffle.common.exception.RssFetchFailedException: Get shuffle result is failed for appId[local-1727609399086_1727609399030], shuffleId[0]
 at org.apache.uniffle.client.impl.ShuffleWriteClientImpl.getShuffleResultForMultiPart(ShuffleWriteClientImpl.java:889)
 at org.apache.spark.shuffle.RssShuffleManager.getShuffleResultForMultiPart(RssShuffleManager.java:1031)
 at org.apache.spark.shuffle.RssShuffleManager.getReaderImpl(RssShuffleManager.java:653)
 at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:586)
 at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:558)
 at org.apache.spark.rdd.ShuffledRDD.compute(ShuffledRDD.scala:106)
 at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:364)
 at org.apache.spark.rdd.RDD.iterator(RDD.scala:328)
 at org.apache.spark.rdd.CoalescedRDD.$anonfun$compute$1(CoalescedRDD.scala:99)
 at scala.collection.Iterator$$anon$11.nextCur(Iterator.scala:486)
 at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:492)
 at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
 at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
 at org.apache.spark.shuffle.writer.RssShuffleWriter.writeImpl(RssShuffleWriter.java:316)
 at org.apache.spark.shuffle.writer.RssShuffleWriter.write(RssShuffleWriter.java:287)
 at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
 at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:101)
 at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
 at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:161)
 at org.apache.spark.scheduler.Task.run(Task.scala:139)
 at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:554)
 at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1529)
 at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:557)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 at java.lang.Thread.run(Thread.java:750)

Driver stacktrace:
org.apache.spark.SparkException: 
Job aborted due to stage failure: Task 3 in stage 1.0 failed 1 times, most recent failure: Lost task 3.0 in stage 1.0 (TID 7) (fv-az889-678.0uloxy4xllhuhla4wvomrgtxqg.cx.internal.cloudapp.net executor driver): org.apache.uniffle.common.exception.RssFetchFailedException: Get shuffle result is failed for appId[local-1727609399086_1727609399030], shuffleId[0]
	at org.apache.uniffle.client.impl.ShuffleWriteClientImpl.getShuffleResultForMultiPart(ShuffleWriteClientImpl.java:889)
	at org.apache.spark.shuffle.RssShuffleManager.getShuffleResultForMultiPart(RssShuffleManager.java:1031)
	at org.apache.spark.shuffle.RssShuffleManager.getReaderImpl(RssShuffleManager.java:653)
	at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:586)
	at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:558)
	at org.apache.spark.rdd.ShuffledRDD.compute(ShuffledRDD.scala:106)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:364)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:328)
	at org.apache.spark.rdd.CoalescedRDD.$anonfun$compute$1(CoalescedRDD.scala:99)
	at scala.collection.Iterator$$anon$11.nextCur(Iterator.scala:486)
	at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:492)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
	at org.apache.spark.shuffle.writer.RssShuffleWriter.writeImpl(RssShuffleWriter.java:316)
	at org.apache.spark.shuffle.writer.RssShuffleWriter.write(RssShuffleWriter.java:287)
	at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:101)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
	at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:161)
	at org.apache.spark.scheduler.Task.run(Task.scala:139)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:554)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1529)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:557)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:750)

Driver stacktrace:
	at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2785)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2721)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2720)
	at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
	at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
	at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2720)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1206)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1206)
	at scala.Option.foreach(Option.scala:407)
	at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1206)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2984)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2923)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2912)
	at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
	at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:971)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2263)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2284)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2303)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2328)
	at org.apache.spark.rdd.RDD.$anonfun$collect$1(RDD.scala:1019)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
	at org.apache.spark.rdd.RDD.withScope(RDD.scala:405)
	at org.apache.spark.rdd.RDD.collect(RDD.scala:1018)
	at org.apache.spark.RangePartitioner$.sketch(Partitioner.scala:320)
	at org.apache.spark.RangePartitioner.<init>(Partitioner.scala:187)
	at org.apache.spark.RangePartitioner.<init>(Partitioner.scala:167)
	at org.apache.spark.rdd.OrderedRDDFunctions.$anonfun$sortByKey$1(OrderedRDDFunctions.scala:64)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
	at org.apache.spark.rdd.RDD.withScope(RDD.scala:405)
	at org.apache.spark.rdd.OrderedRDDFunctions.sortByKey(OrderedRDDFunctions.scala:63)
	at org.apache.spark.api.java.JavaPairRDD.sortByKey(JavaPairRDD.scala:927)
	at org.apache.spark.api.java.JavaPairRDD.sortByKey(JavaPairRDD.scala:897)
	at org.apache.spark.api.java.JavaPairRDD.sortByKey(JavaPairRDD.scala:887)
	at org.apache.uniffle.test.RepartitionTest.repartitionApp(RepartitionTest.java:99)
	at org.apache.uniffle.test.RepartitionTest.runTest(RepartitionTest.java:49)
	at org.apache.uniffle.test.SparkIntegrationTestBase.runSparkApp(SparkIntegrationTestBase.java:102)
	at org.apache.uniffle.test.SparkIntegrationTestBase.run(SparkIntegrationTestBase.java:62)
	at org.apache.uniffle.test.RepartitionTest.resultCompareTest(RepartitionTest.java:44)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:725)
	at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
	at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
	at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149)
	at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140)
	at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84)
	at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115)
	at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105)
	at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
	at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
	at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
	at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
	at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104)
	at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98)
	at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$7(TestMethodTestDescriptor.java:214)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:210)
	at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:135)
	at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:66)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:151)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
	at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
	at java.util.ArrayList.forEach(ArrayList.java:1259)
	at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:41)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:155)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
	at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
	at java.util.ArrayList.forEach(ArrayList.java:1259)
	at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:41)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:155)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
	at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
	at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:35)
	at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57)
	at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:54)
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:107)
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88)
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54)
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67)
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52)
	at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:114)
	at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:86)
	at org.junit.platform.launcher.core.DefaultLauncherSession$DelegatingLauncher.execute(DefaultLauncherSession.java:86)
	at org.junit.platform.launcher.core.SessionPerRequestLauncher.execute(SessionPerRequestLauncher.java:53)
	at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:150)
	at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:124)
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
	at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
Caused by: org.apache.uniffle.common.exception.RssFetchFailedException: Get shuffle result is failed for appId[local-1727609399086_1727609399030], shuffleId[0]
	at org.apache.uniffle.client.impl.ShuffleWriteClientImpl.getShuffleResultForMultiPart(ShuffleWriteClientImpl.java:889)
	at org.apache.spark.shuffle.RssShuffleManager.getShuffleResultForMultiPart(RssShuffleManager.java:1031)
	at org.apache.spark.shuffle.RssShuffleManager.getReaderImpl(RssShuffleManager.java:653)
	at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:586)
	at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:558)
	at org.apache.spark.rdd.ShuffledRDD.compute(ShuffledRDD.scala:106)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:364)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:328)
	at org.apache.spark.rdd.CoalescedRDD.$anonfun$compute$1(CoalescedRDD.scala:99)
	at scala.collection.Iterator$$anon$11.nextCur(Iterator.scala:486)
	at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:492)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
	at org.apache.spark.shuffle.writer.RssShuffleWriter.writeImpl(RssShuffleWriter.java:316)
	at org.apache.spark.shuffle.writer.RssShuffleWriter.write(RssShuffleWriter.java:287)
	at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:101)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
	at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:161)
	at org.apache.spark.scheduler.Task.run(Task.scala:139)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:554)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1529)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:557)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:750)
[2024-09-29 11:29:40.232] [main] [INFO] RssUtils.getHostIp - ip fe80:0:0:0:20d:3aff:fe03:daa%eth0 was filtered, because it's just a ipv6 address
[2024-09-29 11:29:40.237] [main] [INFO] RssUtils.getHostIp - ip 10.1.0.12 was candidate, if there is no better choice, we will choose it
[2024-09-29 11:29:40.343] [main] [INFO] MiniDFSCluster.<init> - starting cluster: numNameNodes=1, numDataNodes=1
[2024-09-29 11:29:40.702] [main] [WARN] NativeCodeLoader.<clinit> - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Formatting using clusterid: testClusterID
[2024-09-29 11:29:40.728] [main] [INFO] FSEditLog.newInstance - Edit logging is async:true
[2024-09-29 11:29:40.734] [main] [INFO] FSNamesystem.<init> - KeyProvider: null
[2024-09-29 11:29:40.735] [main] [INFO] FSNamesystem.<init> - fsLock is fair: true
[2024-09-29 11:29:40.737] [main] [INFO] FSNamesystem.<init> - Detailed lock hold time metrics enabled: false
[2024-09-29 11:29:40.762] [main] [INFO] deprecation.logDeprecation - hadoop.configured.node.mapping is deprecated. Instead, use net.topology.configured.node.mapping
[2024-09-29 11:29:40.762] [main] [INFO] DatanodeManager.<init> - dfs.block.invalidate.limit=1000
[2024-09-29 11:29:40.762] [main] [INFO] DatanodeManager.<init> - dfs.namenode.datanode.registration.ip-hostname-check=true
[2024-09-29 11:29:40.763] [main] [INFO] BlockManager.printBlockDeletionTime - dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
[2024-09-29 11:29:40.763] [main] [INFO] BlockManager.printBlockDeletionTime - The block deletion will start around 2024 Sep 29 11:29:40
[2024-09-29 11:29:40.765] [main] [INFO] GSet.computeCapacity - Computing capacity for map BlocksMap
[2024-09-29 11:29:40.765] [main] [INFO] GSet.computeCapacity - VM type       = 64-bit
[2024-09-29 11:29:40.766] [main] [INFO] GSet.computeCapacity - 2.0% max memory 4.4 GB = 91.0 MB
[2024-09-29 11:29:40.767] [main] [INFO] GSet.computeCapacity - capacity      = 2^24 = 16777216 entries
[2024-09-29 11:29:40.785] [main] [INFO] BlockManager.createBlockTokenSecretManager - dfs.block.access.token.enable=false
[2024-09-29 11:29:40.787] [main] [INFO] BlockManager.<init> - defaultReplication         = 1
[2024-09-29 11:29:40.787] [main] [INFO] BlockManager.<init> - maxReplication             = 512
[2024-09-29 11:29:40.788] [main] [INFO] BlockManager.<init> - minReplication             = 1
[2024-09-29 11:29:40.788] [main] [INFO] BlockManager.<init> - maxReplicationStreams      = 2
[2024-09-29 11:29:40.788] [main] [INFO] BlockManager.<init> - replicationRecheckInterval = 3000
[2024-09-29 11:29:40.788] [main] [INFO] BlockManager.<init> - encryptDataTransfer        = false
[2024-09-29 11:29:40.788] [main] [INFO] BlockManager.<init> - maxNumBlocksToLog          = 1000
[2024-09-29 11:29:40.794] [main] [INFO] FSNamesystem.<init> - fsOwner             = runner (auth:SIMPLE)
[2024-09-29 11:29:40.794] [main] [INFO] FSNamesystem.<init> - supergroup          = supergroup
[2024-09-29 11:29:40.794] [main] [INFO] FSNamesystem.<init> - isPermissionEnabled = true
[2024-09-29 11:29:40.794] [main] [INFO] FSNamesystem.<init> - HA Enabled: false
[2024-09-29 11:29:40.796] [main] [INFO] FSNamesystem.<init> - Append Enabled: true
[2024-09-29 11:29:40.852] [main] [INFO] GSet.computeCapacity - Computing capacity for map INodeMap
[2024-09-29 11:29:40.852] [main] [INFO] GSet.computeCapacity - VM type       = 64-bit
[2024-09-29 11:29:40.853] [main] [INFO] GSet.computeCapacity - 1.0% max memory 4.4 GB = 45.5 MB
[2024-09-29 11:29:40.853] [main] [INFO] GSet.computeCapacity - capacity      = 2^23 = 8388608 entries
[2024-09-29 11:29:40.858] [main] [INFO] FSDirectory.<init> - ACLs enabled? false
[2024-09-29 11:29:40.859] [main] [INFO] FSDirectory.<init> - XAttrs enabled? true
[2024-09-29 11:29:40.859] [main] [INFO] NameNode.<init> - Caching file names occurring more than 10 times
[2024-09-29 11:29:40.866] [main] [INFO] GSet.computeCapacity - Computing capacity for map cachedBlocks
[2024-09-29 11:29:40.866] [main] [INFO] GSet.computeCapacity - VM type       = 64-bit
[2024-09-29 11:29:40.866] [main] [INFO] GSet.computeCapacity - 0.25% max memory 4.4 GB = 11.4 MB
[2024-09-29 11:29:40.866] [main] [INFO] GSet.computeCapacity - capacity      = 2^21 = 2097152 entries
[2024-09-29 11:29:40.868] [main] [INFO] FSNamesystem.<init> - dfs.namenode.safemode.threshold-pct = 0.9990000128746033
[2024-09-29 11:29:40.868] [main] [INFO] FSNamesystem.<init> - dfs.namenode.safemode.min.datanodes = 0
[2024-09-29 11:29:40.868] [main] [INFO] FSNamesystem.<init> - dfs.namenode.safemode.extension     = 0
[2024-09-29 11:29:40.872] [main] [INFO] TopMetrics.logConf - NNTop conf: dfs.namenode.top.window.num.buckets = 10
[2024-09-29 11:29:40.872] [main] [INFO] TopMetrics.logConf - NNTop conf: dfs.namenode.top.num.users = 10
[2024-09-29 11:29:40.872] [main] [INFO] TopMetrics.logConf - NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
[2024-09-29 11:29:40.876] [main] [INFO] FSNamesystem.initRetryCache - Retry cache on namenode is enabled
[2024-09-29 11:29:40.876] [main] [INFO] FSNamesystem.initRetryCache - Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
[2024-09-29 11:29:40.878] [main] [INFO] GSet.computeCapacity - Computing capacity for map NameNodeRetryCache
[2024-09-29 11:29:40.878] [main] [INFO] GSet.computeCapacity - VM type       = 64-bit
[2024-09-29 11:29:40.878] [main] [INFO] GSet.computeCapacity - 0.029999999329447746% max memory 4.4 GB = 1.4 MB
[2024-09-29 11:29:40.879] [main] [INFO] GSet.computeCapacity - capacity      = 2^17 = 131072 entries
[2024-09-29 11:29:40.896] [main] [INFO] FSImage.format - Allocated new BlockPoolId: BP-773150727-127.0.0.1-1727609380888
[2024-09-29 11:29:40.903] [main] [INFO] Storage.format - Storage directory /home/runner/work/incubator-uniffle/incubator-uniffle/integration-test/spark-common/target/tmp/junit7306607909744417576/name1 has been successfully formatted.
[2024-09-29 11:29:40.905] [main] [INFO] Storage.format - Storage directory /home/runner/work/incubator-uniffle/incubator-uniffle/integration-test/spark-common/target/tmp/junit7306607909744417576/name2 has been successfully formatted.
[2024-09-29 11:29:40.913] [FSImageSaver for /home/runner/work/incubator-uniffle/incubator-uniffle/integration-test/spark-common/target/tmp/junit7306607909744417576/name1 of type IMAGE_AND_EDITS] [INFO] FSImageFormatProtobuf.save - Saving image file /home/runner/work/incubator-uniffle/incubator-uniffle/integration-test/spark-common/target/tmp/junit7306607909744417576/name1/current/fsimage.ckpt_0000000000000000000 using no compression
[2024-09-29 11:29:40.913] [FSImageSaver for /home/runner/work/incubator-uniffle/incubator-uniffle/integration-test/spark-common/target/tmp/junit7306607909744417576/name2 of type IMAGE_AND_EDITS] [INFO] FSImageFormatProtobuf.save - Saving image file /home/runner/work/incubator-uniffle/incubator-uniffle/integration-test/spark-common/target/tmp/junit7306607909744417576/name2/current/fsimage.ckpt_0000000000000000000 using no compression
[2024-09-29 11:29:40.987] [FSImageSaver for /home/runner/work/incubator-uniffle/incubator-uniffle/integration-test/spark-common/target/tmp/junit7306607909744417576/name1 of type IMAGE_AND_EDITS] [INFO] FSImageFormatProtobuf.save - Image file /home/runner/work/incubator-uniffle/incubator-uniffle/integration-test/spark-common/target/tmp/junit7306607909744417576/name1/current/fsimage.ckpt_0000000000000000000 of size 323 bytes saved in 0 seconds.
[2024-09-29 11:29:40.987] [FSImageSaver for /home/runner/work/incubator-uniffle/incubator-uniffle/integration-test/spark-common/target/tmp/junit7306607909744417576/name2 of type IMAGE_AND_EDITS] [INFO] FSImageFormatProtobuf.save - Image file /home/runner/work/incubator-uniffle/incubator-uniffle/integration-test/spark-common/target/tmp/junit7306607909744417576/name2/current/fsimage.ckpt_0000000000000000000 of size 323 bytes saved in 0 seconds.
[2024-09-29 11:29:40.999] [main] [INFO] NNStorageRetentionManager.getImageTxIdToRetain - Going to retain 1 images with txid >= 0
[2024-09-29 11:29:41.001] [main] [INFO] NameNode.createNameNode - createNameNode []
[2024-09-29 11:29:41.024] [main] [WARN] MetricsConfig.loadFirst - Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
[2024-09-29 11:29:41.036] [main] [INFO] MetricsSystemImpl.startTimer - Scheduled Metric snapshot period at 10 second(s).
[2024-09-29 11:29:41.037] [main] [INFO] MetricsSystemImpl.start - NameNode metrics system started
[2024-09-29 11:29:41.059] [main] [INFO] NameNode.setClientNamenodeAddress - fs.defaultFS is hdfs://127.0.0.1:0
[2024-09-29 11:29:41.074] [org.apache.hadoop.util.JvmPauseMonitor$Monitor@25243bc1] [INFO] JvmPauseMonitor.run - Starting JVM pause monitor
[2024-09-29 11:29:41.084] [main] [INFO] DFSUtil.httpServerTemplateForNNAndJN - Starting Web-server for hdfs at: http://localhost:0
[2024-09-29 11:29:41.129] [main] [INFO] log.info - Logging to org.apache.logging.slf4j.Log4jLogger@4943defe via org.mortbay.log.Slf4jLog
[2024-09-29 11:29:41.138] [main] [INFO] AuthenticationFilter.constructSecretProvider - Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
[2024-09-29 11:29:41.144] [main] [WARN] HttpRequestLog.getRequestLog - Jetty request log can only be enabled using Log4j
[2024-09-29 11:29:41.149] [main] [INFO] HttpServer2.addGlobalFilter - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
[2024-09-29 11:29:41.151] [main] [INFO] HttpServer2.addFilter - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
[2024-09-29 11:29:41.151] [main] [INFO] HttpServer2.addFilter - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
[2024-09-29 11:29:41.256] [main] [INFO] HttpServer2.initWebHdfs - Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
[2024-09-29 11:29:41.257] [main] [INFO] HttpServer2.addJerseyResourcePackage - addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
[2024-09-29 11:29:41.269] [main] [INFO] HttpServer2.openListeners - Jetty bound to port 39549
[2024-09-29 11:29:41.269] [main] [INFO] log.info - jetty-6.1.26
[2024-09-29 11:29:41.288] [main] [INFO] log.info - Extract jar:file:/home/runner/.m2/repository/org/apache/hadoop/hadoop-hdfs/2.8.5/hadoop-hdfs-2.8.5-tests.jar!/webapps/hdfs to /home/runner/work/incubator-uniffle/incubator-uniffle/integration-test/spark-common/target/tmp/Jetty_localhost_39549_hdfs____.6uf08y/webapp
[2024-09-29 11:29:41.387] [main] [INFO] log.info - Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39549
[2024-09-29 11:29:41.392] [main] [INFO] FSEditLog.newInstance - Edit logging is async:true
[2024-09-29 11:29:41.392] [main] [INFO] FSNamesystem.<init> - KeyProvider: null
[2024-09-29 11:29:41.392] [main] [INFO] FSNamesystem.<init> - fsLock is fair: true
[2024-09-29 11:29:41.392] [main] [INFO] FSNamesystem.<init> - Detailed lock hold time metrics enabled: false
[2024-09-29 11:29:41.393] [main] [INFO] DatanodeManager.<init> - dfs.block.invalidate.limit=1000
[2024-09-29 11:29:41.393] [main] [INFO] DatanodeManager.<init> - dfs.namenode.datanode.registration.ip-hostname-check=true
[2024-09-29 11:29:41.394] [main] [INFO] BlockManager.printBlockDeletionTime - dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
[2024-09-29 11:29:41.394] [main] [INFO] BlockManager.printBlockDeletionTime - The block deletion will start around 2024 Sep 29 11:29:41
[2024-09-29 11:29:41.394] [main] [INFO] GSet.computeCapacity - Computing capacity for map BlocksMap
[2024-09-29 11:29:41.394] [main] [INFO] GSet.computeCapacity - VM type       = 64-bit
[2024-09-29 11:29:41.394] [main] [INFO] GSet.computeCapacity - 2.0% max memory 4.4 GB = 91.0 MB
[2024-09-29 11:29:41.395] [main] [INFO] GSet.computeCapacity - capacity      = 2^24 = 16777216 entries
[2024-09-29 11:29:41.411] [main] [INFO] BlockManager.createBlockTokenSecretManager - dfs.block.access.token.enable=false
[2024-09-29 11:29:41.412] [main] [INFO] BlockManager.<init> - defaultReplication         = 1
[2024-09-29 11:29:41.412] [main] [INFO] BlockManager.<init> - maxReplication             = 512
[2024-09-29 11:29:41.412] [main] [INFO] BlockManager.<init> - minReplication             = 1
[2024-09-29 11:29:41.412] [main] [INFO] BlockManager.<init> - maxReplicationStreams      = 2
[2024-09-29 11:29:41.413] [main] [INFO] BlockManager.<init> - replicationRecheckInterval = 3000
[2024-09-29 11:29:41.413] [main] [INFO] BlockManager.<init> - encryptDataTransfer        = false
[2024-09-29 11:29:41.413] [main] [INFO] BlockManager.<init> - maxNumBlocksToLog          = 1000
[2024-09-29 11:29:41.413] [main] [INFO] FSNamesystem.<init> - fsOwner             = runner (auth:SIMPLE)
[2024-09-29 11:29:41.414] [main] [INFO] FSNamesystem.<init> - supergroup          = supergroup
[2024-09-29 11:29:41.414] [main] [INFO] FSNamesystem.<init> - isPermissionEnabled = true
[2024-09-29 11:29:41.414] [main] [INFO] FSNamesystem.<init> - HA Enabled: false
[2024-09-29 11:29:41.414] [main] [INFO] FSNamesystem.<init> - Append Enabled: true
[2024-09-29 11:29:41.415] [main] [INFO] GSet.computeCapacity - Computing capacity for map INodeMap
[2024-09-29 11:29:41.415] [main] [INFO] GSet.computeCapacity - VM type       = 64-bit
[2024-09-29 11:29:41.415] [main] [INFO] GSet.computeCapacity - 1.0% max memory 4.4 GB = 45.5 MB
[2024-09-29 11:29:41.416] [main] [INFO] GSet.computeCapacity - capacity      = 2^23 = 8388608 entries
[2024-09-29 11:29:41.416] [main] [INFO] FSDirectory.<init> - ACLs enabled? false
[2024-09-29 11:29:41.417] [main] [INFO] FSDirectory.<init> - XAttrs enabled? true
[2024-09-29 11:29:41.417] [main] [INFO] NameNode.<init> - Caching file names occurring more than 10 times
[2024-09-29 11:29:41.417] [main] [INFO] GSet.computeCapacity - Computing capacity for map cachedBlocks
[2024-09-29 11:29:41.417] [main] [INFO] GSet.computeCapacity - VM type       = 64-bit
[2024-09-29 11:29:41.418] [main] [INFO] GSet.computeCapacity - 0.25% max memory 4.4 GB = 11.4 MB
[2024-09-29 11:29:41.418] [main] [INFO] GSet.computeCapacity - capacity      = 2^21 = 2097152 entries
[2024-09-29 11:29:41.418] [main] [INFO] FSNamesystem.<init> - dfs.namenode.safemode.threshold-pct = 0.999…
…Set()
[2024-09-29 11:30:07.471] [dag-scheduler-event-loop] [INFO] DAGScheduler.logInfo - Submitting ShuffleMapStage 1 (MapPartitionsRDD[10] at repartition at RepartitionTest.java:97), which has no missing parents
[2024-09-29 11:30:07.474] [dag-scheduler-event-loop] [INFO] MemoryStore.logInfo - Block broadcast_4 stored as values in memory (estimated size 7.3 KiB, free 2.5 GiB)
[2024-09-29 11:30:07.476] [dag-scheduler-event-loop] [INFO] MemoryStore.logInfo - Block broadcast_4_piece0 stored as bytes in memory (estimated size 4.1 KiB, free 2.5 GiB)
[2024-09-29 11:30:07.476] [dispatcher-BlockManagerMaster] [INFO] BlockManagerInfo.logInfo - Added broadcast_4_piece0 in memory on fv-az889-678.0uloxy4xllhuhla4wvomrgtxqg.cx.internal.cloudapp.net:45963 (size: 4.1 KiB, free: 2.5 GiB)
[2024-09-29 11:30:07.477] [dag-scheduler-event-loop] [INFO] SparkContext.logInfo - Created broadcast 4 from broadcast at DAGScheduler.scala:1535
[2024-09-29 11:30:07.477] [dag-scheduler-event-loop] [INFO] DAGScheduler.logInfo - Submitting 5 missing tasks from ShuffleMapStage 1 (MapPartitionsRDD[10] at repartition at RepartitionTest.java:97) (first 15 tasks are for partitions Vector(0, 1, 2, 3, 4))
[2024-09-29 11:30:07.477] [dag-scheduler-event-loop] [INFO] TaskSchedulerImpl.logInfo - Adding task set 1.0 with 5 tasks resource profile 0
[2024-09-29 11:30:07.478] [dispatcher-event-loop-0] [INFO] TaskSetManager.logInfo - Starting task 0.0 in stage 1.0 (TID 4) (fv-az889-678.0uloxy4xllhuhla4wvomrgtxqg.cx.internal.cloudapp.net, executor driver, partition 0, ANY, 7446 bytes) 
[2024-09-29 11:30:07.479] [dispatcher-event-loop-0] [INFO] TaskSetManager.logInfo - Starting task 1.0 in stage 1.0 (TID 5) (fv-az889-678.0uloxy4xllhuhla4wvomrgtxqg.cx.internal.cloudapp.net, executor driver, partition 1, ANY, 7446 bytes) 
[2024-09-29 11:30:07.479] [dispatcher-event-loop-0] [INFO] TaskSetManager.logInfo - Starting task 2.0 in stage 1.0 (TID 6) (fv-az889-678.0uloxy4xllhuhla4wvomrgtxqg.cx.internal.cloudapp.net, executor driver, partition 2, ANY, 7446 bytes) 
[2024-09-29 11:30:07.479] [dispatcher-event-loop-0] [INFO] TaskSetManager.logInfo - Starting task 3.0 in stage 1.0 (TID 7) (fv-az889-678.0uloxy4xllhuhla4wvomrgtxqg.cx.internal.cloudapp.net, executor driver, partition 3, ANY, 7446 bytes) 
[2024-09-29 11:30:07.479] [Executor task launch worker for task 0.0 in stage 1.0 (TID 4)] [INFO] Executor.logInfo - Running task 0.0 in stage 1.0 (TID 4)
[2024-09-29 11:30:07.479] [Executor task launch worker for task 1.0 in stage 1.0 (TID 5)] [INFO] Executor.logInfo - Running task 1.0 in stage 1.0 (TID 5)
[2024-09-29 11:30:07.480] [Executor task launch worker for task 2.0 in stage 1.0 (TID 6)] [INFO] Executor.logInfo - Running task 2.0 in stage 1.0 (TID 6)
[2024-09-29 11:30:07.480] [Executor task launch worker for task 3.0 in stage 1.0 (TID 7)] [INFO] Executor.logInfo - Running task 3.0 in stage 1.0 (TID 7)
[2024-09-29 11:30:07.482] [Executor task launch worker for task 3.0 in stage 1.0 (TID 7)] [INFO] RssShuffleWriter.<init> - RssShuffle start write taskAttemptId[12] data with RssHandle[appId local-1727609399086_1727609399030, shuffleId 1].
[2024-09-29 11:30:07.482] [Executor task launch worker for task 2.0 in stage 1.0 (TID 6)] [INFO] RssShuffleWriter.<init> - RssShuffle start write taskAttemptId[8] data with RssHandle[appId local-1727609399086_1727609399030, shuffleId 1].
[2024-09-29 11:30:07.482] [Executor task launch worker for task 1.0 in stage 1.0 (TID 5)] [INFO] RssShuffleWriter.<init> - RssShuffle start write taskAttemptId[4] data with RssHandle[appId local-1727609399086_1727609399030, shuffleId 1].
[2024-09-29 11:30:07.483] [Executor task launch worker for task 0.0 in stage 1.0 (TID 4)] [INFO] RssShuffleWriter.<init> - RssShuffle start write taskAttemptId[0] data with RssHandle[appId local-1727609399086_1727609399030, shuffleId 1].
[2024-09-29 11:30:07.490] [Executor task launch worker for task 1.0 in stage 1.0 (TID 5)] [INFO] RssShuffleManager.getReader - Get taskId cost 1 ms, and request expected blockIds from 4 tasks for shuffleId[0], partitionId[1, 2]
[2024-09-29 11:30:07.493] [Executor task launch worker for task 2.0 in stage 1.0 (TID 6)] [INFO] RssShuffleManager.getReader - Get taskId cost 0 ms, and request expected blockIds from 4 tasks for shuffleId[0], partitionId[2, 3]
[2024-09-29 11:30:07.495] [Executor task launch worker for task 0.0 in stage 1.0 (TID 4)] [INFO] RssShuffleManager.getReader - Get taskId cost 0 ms, and request expected blockIds from 4 tasks for shuffleId[0], partitionId[0, 1]
[2024-09-29 11:30:07.496] [Executor task launch worker for task 3.0 in stage 1.0 (TID 7)] [INFO] RssShuffleManager.getReader - Get taskId cost 0 ms, and request expected blockIds from 4 tasks for shuffleId[0], partitionId[3, 4]
[2024-09-29 11:30:07.499] [Grpc-2] [ERROR] ShuffleServerGrpcService.getShuffleResultForMultiPart - Error happened when get shuffle result for appId[local-1727609399086_1727609399030], shuffleId[0], partitions[0]
java.lang.ArrayIndexOutOfBoundsException: 0
	at org.roaringbitmap.longlong.Roaring64NavigableMap.ensureOne(Roaring64NavigableMap.java:676)
	at org.roaringbitmap.longlong.Roaring64NavigableMap.ensureCumulatives(Roaring64NavigableMap.java:567)
	at org.roaringbitmap.longlong.Roaring64NavigableMap.getLongCardinality(Roaring64NavigableMap.java:278)
	at org.apache.uniffle.server.ShuffleTaskManager.lambda$getFinishedBlockIds$9(ShuffleTaskManager.java:657)
	at java.util.Optional.ifPresent(Optional.java:159)
	at org.apache.uniffle.server.ShuffleTaskManager.getFinishedBlockIds(ShuffleTaskManager.java:652)
	at org.apache.uniffle.server.ShuffleServerGrpcService.getShuffleResultForMultiPart(ShuffleServerGrpcService.java:985)
	at org.apache.uniffle.proto.ShuffleServerGrpc$MethodHandlers.invoke(ShuffleServerGrpc.java:1180)
	at io.grpc.stub.ServerCalls$UnaryServerCallHandler$UnaryServerCallListener.onHalfClose(ServerCalls.java:182)
	at io.grpc.PartialForwardingServerCallListener.onHalfClose(PartialForwardingServerCallListener.java:35)
	at io.grpc.ForwardingServerCallListener.onHalfClose(ForwardingServerCallListener.java:23)
	at io.grpc.ForwardingServerCallListener$SimpleForwardingServerCallListener.onHalfClose(ForwardingServerCallListener.java:40)
	at org.apache.uniffle.common.rpc.ClientContextServerInterceptor$1.onHalfClose(ClientContextServerInterceptor.java:63)
	at io.grpc.PartialForwardingServerCallListener.onHalfClose(PartialForwardingServerCallListener.java:35)
	at io.grpc.ForwardingServerCallListener.onHalfClose(ForwardingServerCallListener.java:23)
	at io.grpc.internal.ServerCallImpl$ServerStreamListenerImpl.halfClosed(ServerCallImpl.java:356)
	at io.grpc.internal.ServerImpl$JumpToApplicationThreadServerStreamListener$1HalfClosed.runInContext(ServerImpl.java:861)
	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:750)
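[Editor's note] The ArrayIndexOutOfBoundsException above is thrown while Roaring64NavigableMap lazily rebuilds its cumulative-cardinality index (ensureCumulatives/ensureOne). The class is documented as not thread-safe, and even a nominally read-only call such as getLongCardinality() mutates that internal index, so unsynchronized concurrent access to the per-partition bitmap on the shuffle server is a plausible cause. A hedged sketch of the standard defense, serializing every access behind one lock — illustrative only, not the fix adopted by Uniffle:

```java
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;
import org.roaringbitmap.longlong.Roaring64NavigableMap;

// Illustrative only: Roaring64NavigableMap is not thread-safe, and even
// getLongCardinality() mutates lazily-built internal cumulative arrays.
// Guarding every access with one lock rules out the race suspected above.
class GuardedBlockIdBitmap {
  private final Roaring64NavigableMap bitmap = Roaring64NavigableMap.bitmapOf();
  private final ReadWriteLock lock = new ReentrantReadWriteLock();

  void add(long blockId) {
    lock.writeLock().lock();
    try {
      bitmap.addLong(blockId);
    } finally {
      lock.writeLock().unlock();
    }
  }

  long cardinality() {
    // Write lock, not read lock: getLongCardinality() may mutate internal state.
    lock.writeLock().lock();
    try {
      return bitmap.getLongCardinality();
    } finally {
      lock.writeLock().unlock();
    }
  }
}
```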
[2024-09-29 11:30:07.499] [Grpc-3] [ERROR] ShuffleServerGrpcService.getShuffleResultForMultiPart - Error happened when get shuffle result for appId[local-1727609399086_1727609399030], shuffleId[0], partitions[3]
java.lang.ArrayIndexOutOfBoundsException: 0
	at org.roaringbitmap.longlong.Roaring64NavigableMap.ensureOne(Roaring64NavigableMap.java:676)
	at org.roaringbitmap.longlong.Roaring64NavigableMap.ensureCumulatives(Roaring64NavigableMap.java:567)
	at org.roaringbitmap.longlong.Roaring64NavigableMap.getLongCardinality(Roaring64NavigableMap.java:278)
	at org.apache.uniffle.server.ShuffleTaskManager.lambda$getFinishedBlockIds$9(ShuffleTaskManager.java:657)
	at java.util.Optional.ifPresent(Optional.java:159)
	at org.apache.uniffle.server.ShuffleTaskManager.getFinishedBlockIds(ShuffleTaskManager.java:652)
	at org.apache.uniffle.server.ShuffleServerGrpcService.getShuffleResultForMultiPart(ShuffleServerGrpcService.java:985)
	at org.apache.uniffle.proto.ShuffleServerGrpc$MethodHandlers.invoke(ShuffleServerGrpc.java:1180)
	at io.grpc.stub.ServerCalls$UnaryServerCallHandler$UnaryServerCallListener.onHalfClose(ServerCalls.java:182)
	at io.grpc.PartialForwardingServerCallListener.onHalfClose(PartialForwardingServerCallListener.java:35)
	at io.grpc.ForwardingServerCallListener.onHalfClose(ForwardingServerCallListener.java:23)
	at io.grpc.ForwardingServerCallListener$SimpleForwardingServerCallListener.onHalfClose(ForwardingServerCallListener.java:40)
	at org.apache.uniffle.common.rpc.ClientContextServerInterceptor$1.onHalfClose(ClientContextServerInterceptor.java:63)
	at io.grpc.PartialForwardingServerCallListener.onHalfClose(PartialForwardingServerCallListener.java:35)
	at io.grpc.ForwardingServerCallListener.onHalfClose(ForwardingServerCallListener.java:23)
	at io.grpc.internal.ServerCallImpl$ServerStreamListenerImpl.halfClosed(ServerCallImpl.java:356)
	at io.grpc.internal.ServerImpl$JumpToApplicationThreadServerStreamListener$1HalfClosed.runInContext(ServerImpl.java:861)
	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:750)
[2024-09-29 11:30:07.501] [Grpc-3] [INFO] SHUFFLE_SERVER_RPC_AUDIT_LOG.close - cmd=getShuffleResultForMultiPart	statusCode=INTERNAL_ERROR	from=/10.1.0.12:37144	executionTimeUs=3647	appId=local-1727609399086_1727609399030	shuffleId=0	args{partitionsListSize=1, blockIdLayout=blockIdLayout[seq: 21 bits, part: 20 bits, task: 22 bits]}	return{serializedBlockIdsBytes=0}
[2024-09-29 11:30:07.501] [Grpc-2] [INFO] SHUFFLE_SERVER_RPC_AUDIT_LOG.close - cmd=getShuffleResultForMultiPart	statusCode=INTERNAL_ERROR	from=/10.1.0.12:37144	executionTimeUs=3743	appId=local-1727609399086_1727609399030	shuffleId=0	args{partitionsListSize=1, blockIdLayout=blockIdLayout[seq: 21 bits, part: 20 bits, task: 22 bits]}	return{serializedBlockIdsBytes=0}
[2024-09-29 11:30:07.502] [Grpc-4] [INFO] SHUFFLE_SERVER_RPC_AUDIT_LOG.close - cmd=getShuffleResultForMultiPart	statusCode=SUCCESS	from=/10.1.0.12:37144	executionTimeUs=4563	appId=local-1727609399086_1727609399030	shuffleId=0	args{partitionsListSize=1, blockIdLayout=blockIdLayout[seq: 21 bits, part: 20 bits, task: 22 bits]}	return{serializedBlockIdsBytes=59}	context{bitmap[0].<size,byte>=<35,310>, partitionBlockCount=7}
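[Editor's note] The three SHUFFLE_SERVER_RPC_AUDIT_LOG lines above show the format this PR extends: tab-separated key=value fields, an args{...} section describing the request (note partitionsListSize rather than the full partition list, per the "Revert print partitionsList" commit), a return{...} section describing the response, and, on success, the new context{...} section carrying per-bitmap sizes and block counts. The layout blockIdLayout[seq: 21 bits, part: 20 bits, task: 22 bits] accounts for 21 + 20 + 22 = 63 bits, so a block ID packs into a non-negative Java long. A small hedged parser for one such line — field names taken from the log itself; this class is not part of Uniffle's API:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Splits one tab-separated audit line into key/value fields. Illustrative only.
public class AuditLineParser {
  public static Map<String, String> parse(String line) {
    Map<String, String> fields = new LinkedHashMap<>();
    for (String token : line.split("\t")) {
      int brace = token.indexOf('{');
      if (brace > 0 && token.endsWith("}")) {
        // args{...}, return{...}, context{...}: keep the inner text as the value
        fields.put(token.substring(0, brace), token.substring(brace + 1, token.length() - 1));
      } else {
        int eq = token.indexOf('=');
        if (eq > 0) {
          fields.put(token.substring(0, eq), token.substring(eq + 1));
        }
      }
    }
    return fields;
  }

  public static void main(String[] args) {
    String line = "cmd=getShuffleResultForMultiPart\tstatusCode=SUCCESS\tfrom=/10.1.0.12:37144"
        + "\texecutionTimeUs=4563\tappId=local-1727609399086_1727609399030\tshuffleId=0"
        + "\targs{partitionsListSize=1, blockIdLayout=blockIdLayout[seq: 21 bits, part: 20 bits, task: 22 bits]}"
        + "\treturn{serializedBlockIdsBytes=59}"
        + "\tcontext{bitmap[0].<size,byte>=<35,310>, partitionBlockCount=7}";
    parse(line).forEach((k, v) -> System.out.println(k + " -> " + v));
  }
}
```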
[2024-09-29 11:30:07.502] [Executor task launch worker for task 3.0 in stage 1.0 (TID 7)] [ERROR] ShuffleServerGrpcClient.getShuffleResultForMultiPart - Can't get shuffle result from 10.1.0.12:20001 for [appId=local-1727609399086_1727609399030, shuffleId=0, errorMsg:0
[2024-09-29 11:30:07.502] [Executor task launch worker for task 3.0 in stage 1.0 (TID 7)] [WARN] ShuffleWriteClientImpl.getShuffleResultForMultiPart - Get shuffle result is failed from ShuffleServerInfo{host[10.1.0.12], grpc port[20001]} for appId[local-1727609399086_1727609399030], shuffleId[0], requestPartitions[3]
org.apache.uniffle.common.exception.RssFetchFailedException: Can't get shuffle result from 10.1.0.12:20001 for [appId=local-1727609399086_1727609399030, shuffleId=0, errorMsg:0
	at org.apache.uniffle.client.impl.grpc.ShuffleServerGrpcClient.getShuffleResultForMultiPart(ShuffleServerGrpcClient.java:913)
	at org.apache.uniffle.client.impl.ShuffleWriteClientImpl.getShuffleResultForMultiPart(ShuffleWriteClientImpl.java:860)
	at org.apache.spark.shuffle.RssShuffleManager.getShuffleResultForMultiPart(RssShuffleManager.java:1031)
	at org.apache.spark.shuffle.RssShuffleManager.getReaderImpl(RssShuffleManager.java:653)
	at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:586)
	at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:558)
	at org.apache.spark.rdd.ShuffledRDD.compute(ShuffledRDD.scala:106)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:364)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:328)
	at org.apache.spark.rdd.CoalescedRDD.$anonfun$compute$1(CoalescedRDD.scala:99)
	at scala.collection.Iterator$$anon$11.nextCur(Iterator.scala:486)
	at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:492)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
	at org.apache.spark.shuffle.writer.RssShuffleWriter.writeImpl(RssShuffleWriter.java:316)
	at org.apache.spark.shuffle.writer.RssShuffleWriter.write(RssShuffleWriter.java:287)
	at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:101)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
	at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:161)
	at org.apache.spark.scheduler.Task.run(Task.scala:139)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:554)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1529)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:557)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:750)
[2024-09-29 11:30:07.503] [Grpc-7] [INFO] SHUFFLE_SERVER_RPC_AUDIT_LOG.close - cmd=getShuffleResultForMultiPart	statusCode=SUCCESS	from=/10.1.0.12:37144	executionTimeUs=259	appId=local-1727609399086_1727609399030	shuffleId=0	args{partitionsListSize=1, blockIdLayout=blockIdLayout[seq: 21 bits, part: 20 bits, task: 22 bits]}	return{serializedBlockIdsBytes=59}	context{bitmap[0].<size,byte>=<35,310>, partitionBlockCount=7}
[2024-09-29 11:30:07.503] [Executor task launch worker for task 1.0 in stage 1.0 (TID 5)] [INFO] RssShuffleManager.getReaderImpl - Get shuffle blockId cost 12 ms, and get 7 blockIds for shuffleId[0], startPartition[1], endPartition[2]
[2024-09-29 11:30:07.503] [Executor task launch worker for task 3.0 in stage 1.0 (TID 7)] [ERROR] ShuffleWriteClientImpl.getShuffleResultForMultiPart - Failed to meet replica requirement: PartitionDataReplicaRequirementTracking{shuffleId=0, inventory={0={0=[ShuffleServerInfo{host[10.1.0.12], grpc port[20001]}]}, 1={0=[ShuffleServerInfo{host[10.1.0.12], grpc port[20001]}]}, 2={0=[ShuffleServerInfo{host[10.1.0.12], grpc port[20001]}]}, 3={0=[ShuffleServerInfo{host[10.1.0.12], grpc port[20001]}]}, 4={0=[ShuffleServerInfo{host[10.1.0.12], grpc port[20001]}]}}, succeedList={}}
[2024-09-29 11:30:07.503] [Executor task launch worker for task 2.0 in stage 1.0 (TID 6)] [INFO] RssShuffleManager.getReaderImpl - Get shuffle blockId cost 9 ms, and get 7 blockIds for shuffleId[0], startPartition[2], endPartition[3]
[2024-09-29 11:30:07.503] [Executor task launch worker for task 0.0 in stage 1.0 (TID 4)] [ERROR] ShuffleServerGrpcClient.getShuffleResultForMultiPart - Can't get shuffle result from 10.1.0.12:20001 for [appId=local-1727609399086_1727609399030, shuffleId=0, errorMsg:0
[2024-09-29 11:30:07.503] [Executor task launch worker for task 2.0 in stage 1.0 (TID 6)] [INFO] RssShuffleManager.getReaderImpl - Shuffle reader using remote storage hdfs://localhost:39857/rss/test,empty conf
[2024-09-29 11:30:07.503] [Executor task launch worker for task 1.0 in stage 1.0 (TID 5)] [INFO] RssShuffleManager.getReaderImpl - Shuffle reader using remote storage hdfs://localhost:39857/rss/test,empty conf
[2024-09-29 11:30:07.504] [Executor task launch worker for task 3.0 in stage 1.0 (TID 7)] [INFO] RssShuffleManager.markFailedTask - Mark the task: 7_0 failed.
[2024-09-29 11:30:07.504] [Executor task launch worker for task 0.0 in stage 1.0 (TID 4)] [WARN] ShuffleWriteClientImpl.getShuffleResultForMultiPart - Get shuffle result is failed from ShuffleServerInfo{host[10.1.0.12], grpc port[20001]} for appId[local-1727609399086_1727609399030], shuffleId[0], requestPartitions[0]
org.apache.uniffle.common.exception.RssFetchFailedException: Can't get shuffle result from 10.1.0.12:20001 for [appId=local-1727609399086_1727609399030, shuffleId=0, errorMsg:0
	at org.apache.uniffle.client.impl.grpc.ShuffleServerGrpcClient.getShuffleResultForMultiPart(ShuffleServerGrpcClient.java:913)
	at org.apache.uniffle.client.impl.ShuffleWriteClientImpl.getShuffleResultForMultiPart(ShuffleWriteClientImpl.java:860)
	at org.apache.spark.shuffle.RssShuffleManager.getShuffleResultForMultiPart(RssShuffleManager.java:1031)
	at org.apache.spark.shuffle.RssShuffleManager.getReaderImpl(RssShuffleManager.java:653)
	at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:586)
	at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:558)
	at org.apache.spark.rdd.ShuffledRDD.compute(ShuffledRDD.scala:106)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:364)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:328)
	at org.apache.spark.rdd.CoalescedRDD.$anonfun$compute$1(CoalescedRDD.scala:99)
	at scala.collection.Iterator$$anon$11.nextCur(Iterator.scala:486)
	at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:492)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
	at org.apache.spark.shuffle.writer.RssShuffleWriter.writeImpl(RssShuffleWriter.java:316)
	at org.apache.spark.shuffle.writer.RssShuffleWriter.write(RssShuffleWriter.java:287)
	at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:101)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
	at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:161)
	at org.apache.spark.scheduler.Task.run(Task.scala:139)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:554)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1529)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:557)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:750)
[2024-09-29 11:30:07.504] [Executor task launch worker for task 0.0 in stage 1.0 (TID 4)] [ERROR] ShuffleWriteClientImpl.getShuffleResultForMultiPart - Failed to meet replica requirement: PartitionDataReplicaRequirementTracking{shuffleId=0, inventory={0={0=[ShuffleServerInfo{host[10.1.0.12], grpc port[20001]}]}, 1={0=[ShuffleServerInfo{host[10.1.0.12], grpc port[20001]}]}, 2={0=[ShuffleServerInfo{host[10.1.0.12], grpc port[20001]}]}, 3={0=[ShuffleServerInfo{host[10.1.0.12], grpc port[20001]}]}, 4={0=[ShuffleServerInfo{host[10.1.0.12], grpc port[20001]}]}}, succeedList={}}
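[Editor's note] The two PartitionDataReplicaRequirementTracking dumps above make the failure mode concrete: each of partitions 0-4 has a single replica group holding only 10.1.0.12:20001, and with that server's RPC failing the succeedList stays empty, so no partition reaches its read-replica requirement of one and the client escalates to RssFetchFailedException. A simplified sketch of that bookkeeping — a hypothetical shape inferred from the toString() output; Uniffle's real class tracks more state:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch inferred from the PartitionDataReplicaRequirementTracking
// dumps in the log: a partition's fetch is satisfied once at least `required`
// replica groups have returned its shuffle result.
class ReplicaRequirementTracking {
  private final int required;                                           // read replicas, 1 in this test
  private final Map<Integer, Set<Integer>> succeeded = new HashMap<>(); // partition -> replica indexes

  ReplicaRequirementTracking(int required) {
    this.required = required;
  }

  void markSucceeded(int partition, int replicaIndex) {
    succeeded.computeIfAbsent(partition, p -> new HashSet<>()).add(replicaIndex);
  }

  boolean isSatisfied(int partition) {
    // With succeedList={} in the log, this is false for every partition.
    return succeeded.getOrDefault(partition, Collections.emptySet()).size() >= required;
  }
}
```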
[2024-09-29 11:30:07.504] [Executor task launch worker for task 0.0 in stage 1.0 (TID 4)] [INFO] RssShuffleManager.markFailedTask - Mark the task: 4_0 failed.
[2024-09-29 11:30:07.506] [Executor task launch worker for task 3.0 in stage 1.0 (TID 7)] [ERROR] Executor.logError - Exception in task 3.0 in stage 1.0 (TID 7)
org.apache.uniffle.common.exception.RssFetchFailedException: Get shuffle result is failed for appId[local-1727609399086_1727609399030], shuffleId[0]
	at org.apache.uniffle.client.impl.ShuffleWriteClientImpl.getShuffleResultForMultiPart(ShuffleWriteClientImpl.java:889)
	at org.apache.spark.shuffle.RssShuffleManager.getShuffleResultForMultiPart(RssShuffleManager.java:1031)
	at org.apache.spark.shuffle.RssShuffleManager.getReaderImpl(RssShuffleManager.java:653)
	at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:586)
	at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:558)
	at org.apache.spark.rdd.ShuffledRDD.compute(ShuffledRDD.scala:106)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:364)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:328)
	at org.apache.spark.rdd.CoalescedRDD.$anonfun$compute$1(CoalescedRDD.scala:99)
	at scala.collection.Iterator$$anon$11.nextCur(Iterator.scala:486)
	at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:492)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
	at org.apache.spark.shuffle.writer.RssShuffleWriter.writeImpl(RssShuffleWriter.java:316)
	at org.apache.spark.shuffle.writer.RssShuffleWriter.write(RssShuffleWriter.java:287)
	at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:101)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
	at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:161)
	at org.apache.spark.scheduler.Task.run(Task.scala:139)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:554)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1529)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:557)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:750)
[2024-09-29 11:30:07.514] [Executor task launch worker for task 0.0 in stage 1.0 (TID 4)] [ERROR] Executor.logError - Exception in task 0.0 in stage 1.0 (TID 4)
org.apache.uniffle.common.exception.RssFetchFailedException: Get shuffle result is failed for appId[local-1727609399086_1727609399030], shuffleId[0]
	at org.apache.uniffle.client.impl.ShuffleWriteClientImpl.getShuffleResultForMultiPart(ShuffleWriteClientImpl.java:889)
	at org.apache.spark.shuffle.RssShuffleManager.getShuffleResultForMultiPart(RssShuffleManager.java:1031)
	at org.apache.spark.shuffle.RssShuffleManager.getReaderImpl(RssShuffleManager.java:653)
	at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:586)
	at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:558)
	at org.apache.spark.rdd.ShuffledRDD.compute(ShuffledRDD.scala:106)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:364)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:328)
	at org.apache.spark.rdd.CoalescedRDD.$anonfun$compute$1(CoalescedRDD.scala:99)
	at scala.collection.Iterator$$anon$11.nextCur(Iterator.scala:486)
	at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:492)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
	at org.apache.spark.shuffle.writer.RssShuffleWriter.writeImpl(RssShuffleWriter.java:316)
	at org.apache.spark.shuffle.writer.RssShuffleWriter.write(RssShuffleWriter.java:287)
	at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:101)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
	at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:161)
	at org.apache.spark.scheduler.Task.run(Task.scala:139)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:554)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1529)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:557)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:750)
[2024-09-29 11:30:07.517] [dispatcher-event-loop-2] [INFO] TaskSetManager.logInfo - Starting task 4.0 in stage 1.0 (TID 8) (fv-az889-678.0uloxy4xllhuhla4wvomrgtxqg.cx.internal.cloudapp.net, executor driver, partition 4, ANY, 7446 bytes) 
[2024-09-29 11:30:07.519] [task-result-getter-0] [WARN] TaskSetManager.logWarning - Lost task 3.0 in stage 1.0 (TID 7) (fv-az889-678.0uloxy4xllhuhla4wvomrgtxqg.cx.internal.cloudapp.net executor driver): org.apache.uniffle.common.exception.RssFetchFailedException: Get shuffle result is failed for appId[local-1727609399086_1727609399030], shuffleId[0]
	at org.apache.uniffle.client.impl.ShuffleWriteClientImpl.getShuffleResultForMultiPart(ShuffleWriteClientImpl.java:889)
	at org.apache.spark.shuffle.RssShuffleManager.getShuffleResultForMultiPart(RssShuffleManager.java:1031)
	at org.apache.spark.shuffle.RssShuffleManager.getReaderImpl(RssShuffleManager.java:653)
	at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:586)
	at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:558)
	at org.apache.spark.rdd.ShuffledRDD.compute(ShuffledRDD.scala:106)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:364)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:328)
	at org.apache.spark.rdd.CoalescedRDD.$anonfun$compute$1(CoalescedRDD.scala:99)
	at scala.collection.Iterator$$anon$11.nextCur(Iterator.scala:486)
	at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:492)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
	at org.apache.spark.shuffle.writer.RssShuffleWriter.writeImpl(RssShuffleWriter.java:316)
	at org.apache.spark.shuffle.writer.RssShuffleWriter.write(RssShuffleWriter.java:287)
	at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:101)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
	at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:161)
	at org.apache.spark.scheduler.Task.run(Task.scala:139)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:554)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1529)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:557)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:750)

[2024-09-29 11:30:07.520] [task-result-getter-0] [ERROR] TaskSetManager.logError - Task 3 in stage 1.0 failed 1 times; aborting job
[2024-09-29 11:30:07.521] [Executor task launch worker for task 4.0 in stage 1.0 (TID 8)] [INFO] Executor.logInfo - Running task 4.0 in stage 1.0 (TID 8)
[2024-09-29 11:30:07.522] [task-result-getter-1] [INFO] TaskSetManager.logInfo - Lost task 0.0 in stage 1.0 (TID 4) on fv-az889-678.0uloxy4xllhuhla4wvomrgtxqg.cx.internal.cloudapp.net, executor driver: org.apache.uniffle.common.exception.RssFetchFailedException (Get shuffle result is failed for appId[local-1727609399086_1727609399030], shuffleId[0]) [duplicate 1]
[2024-09-29 11:30:07.522] [Executor task launch worker for task 4.0 in stage 1.0 (TID 8)] [INFO] RssShuffleWriter.<init> - RssShuffle start write taskAttemptId[16] data with RssHandle[appId local-1727609399086_1727609399030, shuffleId 1].
[2024-09-29 11:30:07.523] [dag-scheduler-event-loop] [INFO] TaskSchedulerImpl.logInfo - Cancelling stage 1
[2024-09-29 11:30:07.524] [dag-scheduler-event-loop] [INFO] TaskSchedulerImpl.logInfo - Killing all running tasks in stage 1: Stage cancelled
[2024-09-29 11:30:07.526] [dispatcher-event-loop-2] [INFO] Executor.logInfo - Executor is trying to kill task 1.0 in stage 1.0 (TID 5), reason: Stage cancelled
[2024-09-29 11:30:07.526] [dispatcher-event-loop-2] [INFO] Executor.logInfo - Executor is trying to kill task 2.0 in stage 1.0 (TID 6), reason: Stage cancelled
[2024-09-29 11:30:07.526] [dispatcher-event-loop-2] [INFO] Executor.logInfo - Executor is trying to kill task 4.0 in stage 1.0 (TID 8), reason: Stage cancelled
[2024-09-29 11:30:07.527] [dag-scheduler-event-loop] [INFO] TaskSchedulerImpl.logInfo - Stage 1 was cancelled
[2024-09-29 11:30:07.528] [dag-scheduler-event-loop] [INFO] DAGScheduler.logInfo - ShuffleMapStage 1 (repartition at RepartitionTest.java:97) failed in 0.055 s due to Job aborted due to stage failure: Task 3 in stage 1.0 failed 1 times, most recent failure: Lost task 3.0 in stage 1.0 (TID 7) (fv-az889-678.0uloxy4xllhuhla4wvomrgtxqg.cx.internal.cloudapp.net executor driver): org.apache.uniffle.common.exception.RssFetchFailedException: Get shuffle result is failed for appId[local-1727609399086_1727609399030], shuffleId[0]
	at org.apache.uniffle.client.impl.ShuffleWriteClientImpl.getShuffleResultForMultiPart(ShuffleWriteClientImpl.java:889)
	at org.apache.spark.shuffle.RssShuffleManager.getShuffleResultForMultiPart(RssShuffleManager.java:1031)
	at org.apache.spark.shuffle.RssShuffleManager.getReaderImpl(RssShuffleManager.java:653)
	at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:586)
	at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:558)
	at org.apache.spark.rdd.ShuffledRDD.compute(ShuffledRDD.scala:106)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:364)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:328)
	at org.apache.spark.rdd.CoalescedRDD.$anonfun$compute$1(CoalescedRDD.scala:99)
	at scala.collection.Iterator$$anon$11.nextCur(Iterator.scala:486)
	at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:492)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
	at org.apache.spark.shuffle.writer.RssShuffleWriter.writeImpl(RssShuffleWriter.java:316)
	at org.apache.spark.shuffle.writer.RssShuffleWriter.write(RssShuffleWriter.java:287)
	at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:101)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
	at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:161)
	at org.apache.spark.scheduler.Task.run(Task.scala:139)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:554)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1529)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:557)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:750)

Driver stacktrace:
[2024-09-29 11:30:07.529] [Executor task launch worker for task 4.0 in stage 1.0 (TID 8)] [INFO] RssShuffleManager.getReader - Get taskId cost 0 ms, and request expected blockIds from 4 tasks for shuffleId[0], partitionId[4, 5]
[2024-09-29 11:30:07.530] [main] [INFO] DAGScheduler.logInfo - Job 0 failed: sortByKey at RepartitionTest.java:99, took 8.140526 s
[2024-09-29 11:30:07.534] [Grpc-0] [INFO] SHUFFLE_SERVER_RPC_AUDIT_LOG.close - cmd=getShuffleResultForMultiPart	statusCode=SUCCESS	from=/10.1.0.12:37144	executionTimeUs=338	appId=local-1727609399086_1727609399030	shuffleId=0	args{partitionsListSize=1, blockIdLayout=blockIdLayout[seq: 21 bits, part: 20 bits, task: 22 bits]}	return{serializedBlockIdsBytes=59}	context{bitmap[0].<size,byte>=<35,310>, partitionBlockCount=7}
[2024-09-29 11:30:07.535] [Executor task launch worker for task 4.0 in stage 1.0 (TID 8)] [INFO] RssShuffleManager.getReaderImpl - Get shuffle blockId cost 6 ms, and get 7 blockIds for shuffleId[0], startPartition[4], endPartition[5]
[2024-09-29 11:30:07.535] [Executor task launch worker for task 4.0 in stage 1.0 (TID 8)] [INFO] RssShuffleManager.getReaderImpl - Shuffle reader using remote storage hdfs://localhost:39857/rss/test,empty conf

Check failure on line 0 in org.apache.uniffle.test.RepartitionWithLocalFileRssTest

See this annotation in the file changed.

@github-actions github-actions / Test Results

1 out of 10 runs with error: resultCompareTest (org.apache.uniffle.test.RepartitionWithLocalFileRssTest)

artifacts/integration-reports-spark3.5/integration-test/spark-common/target/surefire-reports/TEST-org.apache.uniffle.test.RepartitionWithLocalFileRssTest.xml [took 1m 2s]
Raw output
Job aborted due to stage failure: Task 1 in stage 6.0 failed 1 times, most recent failure: Lost task 1.0 in stage 6.0 (TID 20) (fv-az1775-801.c4xsp1s2bbhedknm5zgr0kmooe.cx.internal.cloudapp.net executor driver): org.apache.uniffle.common.exception.RssFetchFailedException: Get shuffle result is failed for appId[local-1727609493963_1727609493938], shuffleId[2]
 at org.apache.uniffle.client.impl.ShuffleWriteClientImpl.getShuffleResultForMultiPart(ShuffleWriteClientImpl.java:889)
 at org.apache.spark.shuffle.RssShuffleManager.getShuffleResultForMultiPart(RssShuffleManager.java:1031)
 at org.apache.spark.shuffle.RssShuffleManager.getReaderImpl(RssShuffleManager.java:653)
 at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:586)
 at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:558)
 at org.apache.spark.rdd.ShuffledRDD.compute(ShuffledRDD.scala:106)
 at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:367)
 at org.apache.spark.rdd.RDD.iterator(RDD.scala:331)
 at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:93)
 at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:166)
 at org.apache.spark.scheduler.Task.run(Task.scala:141)
 at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$4(Executor.scala:620)
 at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally(SparkErrorUtils.scala:64)
 at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally$(SparkErrorUtils.scala:61)
 at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:94)
 at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:623)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 at java.lang.Thread.run(Thread.java:750)

Driver stacktrace:
org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 6.0 failed 1 times, most recent failure: Lost task 1.0 in stage 6.0 (TID 20) (fv-az1775-801.c4xsp1s2bbhedknm5zgr0kmooe.cx.internal.cloudapp.net executor driver): org.apache.uniffle.common.exception.RssFetchFailedException: Get shuffle result is failed for appId[local-1727609493963_1727609493938], shuffleId[2]
	at org.apache.uniffle.client.impl.ShuffleWriteClientImpl.getShuffleResultForMultiPart(ShuffleWriteClientImpl.java:889)
	at org.apache.spark.shuffle.RssShuffleManager.getShuffleResultForMultiPart(RssShuffleManager.java:1031)
	at org.apache.spark.shuffle.RssShuffleManager.getReaderImpl(RssShuffleManager.java:653)
	at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:586)
	at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:558)
	at org.apache.spark.rdd.ShuffledRDD.compute(ShuffledRDD.scala:106)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:367)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:331)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:93)
	at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:166)
	at org.apache.spark.scheduler.Task.run(Task.scala:141)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$4(Executor.scala:620)
	at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally(SparkErrorUtils.scala:64)
	at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally$(SparkErrorUtils.scala:61)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:94)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:623)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:750)

Driver stacktrace:
	at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2856)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2792)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2791)
	at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
	at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
	at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2791)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1247)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1247)
	at scala.Option.foreach(Option.scala:407)
	at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1247)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:3060)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2994)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2983)
	at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
	at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:989)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2398)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2419)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2438)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2463)
	at org.apache.spark.rdd.RDD.$anonfun$collect$1(RDD.scala:1049)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
	at org.apache.spark.rdd.RDD.withScope(RDD.scala:410)
	at org.apache.spark.rdd.RDD.collect(RDD.scala:1048)
	at org.apache.spark.rdd.PairRDDFunctions.$anonfun$collectAsMap$1(PairRDDFunctions.scala:738)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
	at org.apache.spark.rdd.RDD.withScope(RDD.scala:410)
	at org.apache.spark.rdd.PairRDDFunctions.collectAsMap(PairRDDFunctions.scala:737)
	at org.apache.spark.api.java.JavaPairRDD.collectAsMap(JavaPairRDD.scala:663)
	at org.apache.uniffle.test.RepartitionTest.repartitionApp(RepartitionTest.java:99)
	at org.apache.uniffle.test.RepartitionTest.runTest(RepartitionTest.java:49)
	at org.apache.uniffle.test.SparkIntegrationTestBase.runSparkApp(SparkIntegrationTestBase.java:102)
	at org.apache.uniffle.test.RepartitionWithLocalFileRssTest.run(RepartitionWithLocalFileRssTest.java:100)
	at org.apache.uniffle.test.RepartitionTest.resultCompareTest(RepartitionTest.java:44)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:725)
	at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
	at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
	at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149)
	at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140)
	at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84)
	at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115)
	at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105)
	at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
	at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
	at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
	at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
	at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104)
	at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98)
	at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$7(TestMethodTestDescriptor.java:214)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:210)
	at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:135)
	at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:66)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:151)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
	at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
	at java.util.ArrayList.forEach(ArrayList.java:1259)
	at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:41)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:155)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
	at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
	at java.util.ArrayList.forEach(ArrayList.java:1259)
	at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:41)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:155)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
	at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
	at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:35)
	at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57)
	at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:54)
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:107)
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88)
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54)
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67)
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52)
	at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:114)
	at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:86)
	at org.junit.platform.launcher.core.DefaultLauncherSession$DelegatingLauncher.execute(DefaultLauncherSession.java:86)
	at org.junit.platform.launcher.core.SessionPerRequestLauncher.execute(SessionPerRequestLauncher.java:53)
	at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:150)
	at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:124)
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
	at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
Caused by: org.apache.uniffle.common.exception.RssFetchFailedException: Get shuffle result is failed for appId[local-1727609493963_1727609493938], shuffleId[2]
	at org.apache.uniffle.client.impl.ShuffleWriteClientImpl.getShuffleResultForMultiPart(ShuffleWriteClientImpl.java:889)
	at org.apache.spark.shuffle.RssShuffleManager.getShuffleResultForMultiPart(RssShuffleManager.java:1031)
	at org.apache.spark.shuffle.RssShuffleManager.getReaderImpl(RssShuffleManager.java:653)
	at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:586)
	at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:558)
	at org.apache.spark.rdd.ShuffledRDD.compute(ShuffledRDD.scala:106)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:367)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:331)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:93)
	at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:166)
	at org.apache.spark.scheduler.Task.run(Task.scala:141)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$4(Executor.scala:620)
	at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally(SparkErrorUtils.scala:64)
	at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally$(SparkErrorUtils.scala:61)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:94)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:623)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:750)
[2024-09-29 11:30:37.699] [main] [INFO] MiniDFSCluster.<init> - starting cluster: numNameNodes=1, numDataNodes=1
Formatting using clusterid: testClusterID
[2024-09-29 11:30:37.700] [main] [INFO] FSEditLog.newInstance - Edit logging is async:true
[2024-09-29 11:30:37.700] [main] [INFO] FSNamesystem.<init> - KeyProvider: null
[2024-09-29 11:30:37.700] [main] [INFO] FSNamesystem.<init> - fsLock is fair: true
[2024-09-29 11:30:37.700] [main] [INFO] FSNamesystem.<init> - Detailed lock hold time metrics enabled: false
[2024-09-29 11:30:37.701] [main] [INFO] DatanodeManager.<init> - dfs.block.invalidate.limit=1000
[2024-09-29 11:30:37.701] [main] [INFO] DatanodeManager.<init> - dfs.namenode.datanode.registration.ip-hostname-check=true
[2024-09-29 11:30:37.701] [main] [INFO] BlockManager.printBlockDeletionTime - dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
[2024-09-29 11:30:37.701] [main] [INFO] BlockManager.printBlockDeletionTime - The block deletion will start around 2024 Sep 29 11:30:37
[2024-09-29 11:30:37.701] [main] [INFO] GSet.computeCapacity - Computing capacity for map BlocksMap
[2024-09-29 11:30:37.702] [main] [INFO] GSet.computeCapacity - VM type       = 64-bit
[2024-09-29 11:30:37.702] [main] [INFO] GSet.computeCapacity - 2.0% max memory 4.4 GB = 91.0 MB
[2024-09-29 11:30:37.702] [main] [INFO] GSet.computeCapacity - capacity      = 2^24 = 16777216 entries
[2024-09-29 11:30:37.703] [main] [INFO] BlockManager.createBlockTokenSecretManager - dfs.block.access.token.enable=false
[2024-09-29 11:30:37.704] [main] [INFO] BlockManager.<init> - defaultReplication         = 1
[2024-09-29 11:30:37.704] [main] [INFO] BlockManager.<init> - maxReplication             = 512
[2024-09-29 11:30:37.704] [main] [INFO] BlockManager.<init> - minReplication             = 1
[2024-09-29 11:30:37.704] [main] [INFO] BlockManager.<init> - maxReplicationStreams      = 2
[2024-09-29 11:30:37.704] [main] [INFO] BlockManager.<init> - replicationRecheckInterval = 3000
[2024-09-29 11:30:37.704] [main] [INFO] BlockManager.<init> - encryptDataTransfer        = false
[2024-09-29 11:30:37.704] [main] [INFO] BlockManager.<init> - maxNumBlocksToLog          = 1000
[2024-09-29 11:30:37.704] [main] [INFO] FSNamesystem.<init> - fsOwner             = runner (auth:SIMPLE)
[2024-09-29 11:30:37.705] [main] [INFO] FSNamesystem.<init> - supergroup          = supergroup
[2024-09-29 11:30:37.705] [main] [INFO] FSNamesystem.<init> - isPermissionEnabled = true
[2024-09-29 11:30:37.705] [main] [INFO] FSNamesystem.<init> - HA Enabled: false
[2024-09-29 11:30:37.705] [main] [INFO] FSNamesystem.<init> - Append Enabled: true
[2024-09-29 11:30:37.705] [main] [INFO] GSet.computeCapacity - Computing capacity for map INodeMap
[2024-09-29 11:30:37.705] [main] [INFO] GSet.computeCapacity - VM type       = 64-bit
[2024-09-29 11:30:37.706] [main] [INFO] GSet.computeCapacity - 1.0% max memory 4.4 GB = 45.5 MB
[2024-09-29 11:30:37.706] [main] [INFO] GSet.computeCapacity - capacity      = 2^23 = 8388608 entries
[2024-09-29 11:30:37.707] [main] [INFO] FSDirectory.<init> - ACLs enabled? false
[2024-09-29 11:30:37.707] [main] [INFO] FSDirectory.<init> - XAttrs enabled? true
[2024-09-29 11:30:37.707] [main] [INFO] NameNode.<init> - Caching file names occurring more than 10 times
[2024-09-29 11:30:37.707] [main] [INFO] GSet.computeCapacity - Computing capacity for map cachedBlocks
[2024-09-29 11:30:37.707] [main] [INFO] GSet.computeCapacity - VM type       = 64-bit
[2024-09-29 11:30:37.707] [main] [INFO] GSet.computeCapacity - 0.25% max memory 4.4 GB = 11.4 MB
[2024-09-29 11:30:37.707] [main] [INFO] GSet.computeCapacity - capacity      = 2^21 = 2097152 entries
[2024-09-29 11:30:37.708] [main] [INFO] FSNamesystem.<init> - dfs.namenode.safemode.threshold-pct = 0.9990000128746033
[2024-09-29 11:30:37.708] [main] [INFO] FSNamesystem.<init> - dfs.namenode.safemode.min.datanodes = 0
[2024-09-29 11:30:37.708] [main] [INFO] FSNamesystem.<init> - dfs.namenode.safemode.extension     = 0
[2024-09-29 11:30:37.708] [main] [INFO] TopMetrics.logConf - NNTop conf: dfs.namenode.top.window.num.buckets = 10
[2024-09-29 11:30:37.708] [main] [INFO] TopMetrics.logConf - NNTop conf: dfs.namenode.top.num.users = 10
[2024-09-29 11:30:37.709] [main] [INFO] TopMetrics.logConf - NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
[2024-09-29 11:30:37.709] [main] [INFO] FSNamesystem.initRetryCache - Retry cache on namenode is enabled
[2024-09-29 11:30:37.709] [main] [INFO] FSNamesystem.initRetryCache - Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
[2024-09-29 11:30:37.709] [main] [INFO] GSet.computeCapacity - Computing capacity for map NameNodeRetryCache
[2024-09-29 11:30:37.709] [main] [INFO] GSet.computeCapacity - VM type       = 64-bit
[2024-09-29 11:30:37.709] [main] [INFO] GSet.computeCapacity - 0.029999999329447746% max memory 4.4 GB = 1.4 MB
[2024-09-29 11:30:37.710] [main] [INFO] GSet.computeCapacity - capacity      = 2^17 = 131072 entries
[2024-09-29 11:30:37.710] [main] [INFO] FSImage.format - Allocated new BlockPoolId: BP-683435946-127.0.0.1-1727609437710
[2024-09-29 11:30:37.712] [main] [INFO] Storage.format - Storage directory /home/runner/work/incubator-uniffle/incubator-uniffle/integration-test/spark-common/target/tmp/junit203744527383740182/name1 has been successfully formatted.
[2024-09-29 11:30:37.714] [main] [INFO] Storage.format - Storage directory /home/runner/work/incubator-uniffle/incubator-uniffle/integration-test/spark-common/target/tmp/junit203744527383740182/name2 has been successfully formatted.
[2024-09-29 11:30:37.714] [FSImageSaver for /home/runner/work/incubator-uniffle/incubator-uniffle/integration-test/spark-common/target/tmp/junit203744527383740182/name1 of type IMAGE_AND_EDITS] [INFO] FSImageFormatProtobuf.save - Saving image file /home/runner/work/incubator-uniffle/incubator-uniffle/integration-test/spark-common/target/tmp/junit203744527383740182/name1/current/fsimage.ckpt_0000000000000000000 using no compression
[2024-09-29 11:30:37.714] [FSImageSaver for /home/runner/work/incubator-uniffle/incubator-uniffle/integration-test/spark-common/target/tmp/junit203744527383740182/name2 of type IMAGE_AND_EDITS] [INFO] FSImageFormatProtobuf.save - Saving image file /home/runner/work/incubator-uniffle/incubator-uniffle/integration-test/spark-common/target/tmp/junit203744527383740182/name2/current/fsimage.ckpt_0000000000000000000 using no compression
[2024-09-29 11:30:37.720] [FSImageSaver for /home/runner/work/incubator-uniffle/incubator-uniffle/integration-test/spark-common/target/tmp/junit203744527383740182/name2 of type IMAGE_AND_EDITS] [INFO] FSImageFormatProtobuf.save - Image file /home/runner/work/incubator-uniffle/incubator-uniffle/integration-test/spark-common/target/tmp/junit203744527383740182/name2/current/fsimage.ckpt_0000000000000000000 of size 323 bytes saved in 0 seconds.
[2024-09-29 11:30:37.721] [FSImageSaver for /home/runner/work/incubator-uniffle/incubator-uniffle/integration-test/spark-common/target/tmp/junit203744527383740182/name1 of type IMAGE_AND_EDITS] [INFO] FSImageFormatProtobuf.save - Image file /home/runner/work/incubator-uniffle/incubator-uniffle/integration-test/spark-common/target/tmp/junit203744527383740182/name1/current/fsimage.ckpt_0000000000000000000 of size 323 bytes saved in 0 seconds.
[2024-09-29 11:30:37.722] [main] [INFO] NNStorageRetentionManager.getImageTxIdToRetain - Going to retain 1 images with txid >= 0
[2024-09-29 11:30:37.723] [main] [INFO] NameNode.createNameNode - createNameNode []
[2024-09-29 11:30:37.724] [main] [WARN] MetricsConfig.loadFirst - Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
[2024-09-29 11:30:37.724] [main] [INFO] MetricsSystemImpl.startTimer - Scheduled Metric snapshot period at 10 second(s).
[2024-09-29 11:30:37.724] [main] [INFO] MetricsSystemImpl.start - NameNode metrics system started
[2024-09-29 11:30:37.725] [main] [INFO] NameNode.setClientNamenodeAddress - fs.defaultFS is hdfs://127.0.0.1:0
[2024-09-29 11:30:37.727] [org.apache.hadoop.util.JvmPauseMonitor$Monitor@58477355] [INFO] JvmPauseMonitor.run - Starting JVM pause monitor
[2024-09-29 11:30:37.727] [main] [INFO] DFSUtil.httpServerTemplateForNNAndJN - Starting Web-server for hdfs at: http://localhost:0
[2024-09-29 11:30:37.727] [main] [INFO] AuthenticationFilter.constructSecretProvider - Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
[2024-09-29 11:30:37.728] [main] [WARN] HttpRequestLog.getRequestLog - Jetty request log can only be enabled using Log4j
[2024-09-29 11:30:37.728] [main] [INFO] HttpServer2.addGlobalFilter - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
[2024-09-29 11:30:37.728] [main] [INFO] HttpServer2.addFilter - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
[2024-09-29 11:30:37.728] [main] [INFO] HttpServer2.addFilter - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
[2024-09-29 11:30:37.729] [main] [INFO] HttpServer2.initWebHdfs - Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
[2024-09-29 11:30:37.729] [main] [INFO] HttpServer2.addJerseyResourcePackage - addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
[2024-09-29 11:30:37.730] [main] [INFO] HttpServer2.openListeners - Jetty bound to port 36489
[2024-09-29 11:30:37.730] [main] [INFO] log.info - jetty-6.1.26
[2024-09-29 11:30:37.734] [main] [INFO] log.info - Extract jar:file:/home/runner/.m2/repository/org/apache/hadoop/hadoop-hdfs/2.8.5/hadoop-hdfs-2.8.5-tests.jar!/webapps/hdfs to /home/runner/work/incubator-uniffle/incubator-uniffle/integration-test/spark-common/target/tmp/Jetty_localhost_36489_hdfs____3gvyjg/webapp
[2024-09-29 11:30:37.800] [main] [INFO] log.info - Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36489
[2024-09-29 11:30:37.801] [main] [INFO] FSEditLog.newInstance - Edit logging is async:true
[2024-09-29 11:30:37.801] [main] [INFO] FSNamesystem.<init> - KeyProvider: null
[2024-09-29 11:30:37.801] [main] [INFO] FSNamesystem.<init> - fsLock is fair: true
[2024-09-29 11:30:37.801] [main] [INFO] FSNamesystem.<init> - Detailed lock hold time metrics enabled: false
[2024-09-29 11:30:37.802] [main] [INFO] DatanodeManager.<init> - dfs.block.invalidate.limit=1000
[2024-09-29 11:30:37.802] [main] [INFO] DatanodeManager.<init> - dfs.namenode.datanode.registration.ip-hostname-check=true
[2024-09-29 11:30:37.802] [main] [INFO] BlockManager.printBlockDeletionTime - dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
[2024-09-29 11:30:37.802] [main] [INFO] BlockManager.printBlockDeletionTime - The block deletion will start around 2024 Sep 29 11:30:37
[2024-09-29 11:30:37.802] [main] [INFO] GSet.computeCapacity - Computing capacity for map BlocksMap
[2024-09-29 11:30:37.803] [main] [INFO] GSet.computeCapacity - VM type       = 64-bit
[2024-09-29 11:30:37.803] [main] [INFO] GSet.computeCapacity - 2.0% max memory 4.4 GB = 91.0 MB
[2024-09-29 11:30:37.803] [main] [INFO] GSet.computeCapacity - capacity      = 2^24 = 16777216 entries
[2024-09-29 11:30:37.804] [main] [INFO] BlockManager.createBlockTokenSecretManager - dfs.block.access.token.enable=false
[2024-09-29 11:30:37.805] [main] [INFO] BlockManager.<init> - defaultReplication         = 1
[2024-09-29 11:30:37.805] [main] [INFO] BlockManager.<init> - maxReplication             = 512
[2024-09-29 11:30:37.805] [main] [INFO] BlockManager.<init> - minReplication             = 1
[2024-09-29 11:30:37.805] [main] [INFO] BlockManager.<init> - maxReplicationStreams      = 2
[2024-09-29 11:30:37.805] [main] [INFO] BlockManager.<init> - replicationRecheckInterval = 3000
[2024-09-29 11:30:37.805] [main] [INFO] BlockManager.<init> - encryptDataTransfer        = false
[2024-09-29 11:30:37.805] [main] [INFO] BlockManager.<init> - maxNumBlocksToLog          = 1000
[2024-09-29 11:30:37.805] [main] [INFO] FSNamesystem.<init> - fsOwner             = runner (auth:SIMPLE)
[2024-09-29 11:30:37.805] [main] [INFO] FSNamesystem.<init> - supergroup          = supergroup
[2024-09-29 11:30:37.805] [main] [INFO] FSNamesystem.<init> - isPermissionEnabled = true
[2024-09-29 11:30:37.805] [main] [INFO] FSNamesystem.<init> - HA Enabled: false
[2024-09-29 11:30:37.805] [main] [INFO] FSNamesystem.<init> - Append Enabled: true
[2024-09-29 11:30:37.806] [main] [INFO] GSet.computeCapacity - Computing capacity for map INodeMap
[2024-09-29 11:30:37.806] [main] [INFO] GSet.computeCapacity - VM type       = 64-bit
[2024-09-29 11:30:37.806] [main] [INFO] GSet.computeCapacity - 1.0% max memory 4.4 GB = 45.5 MB
[2024-09-29 11:30:37.806] [main] [INFO] GSet.computeCapacity - capacity      = 2^23 = 8388608 entries
[2024-09-29 11:30:37.807] [main] [INFO] FSDirectory.<init> - ACLs enabled? false
[2024-09-29 11:30:37.807] [main] [INFO] FSDirectory.<init> - XAttrs enabled? true
[2024-09-29 11:30:37.807] [main] [INFO] NameNode.<init> - Caching file names occurring more than 10 times
[2024-09-29 11:30:37.807] [main] [INFO] GSet.computeCapacity - Computing capacity for map cachedBlocks
[2024-09-29 11:30:37.807] [main] [INFO] GSet.computeCapacity - VM type       = 64-bit
[2024-09-29 11:30:37.807] [main] [INFO] GSet.computeCapacity - 0.25% max memory 4.4 GB = 11.4 MB
[2024-09-29 11:30:37.807] [main] [INFO] GSet.computeCapacity - capacity      = 2^21 = 2097152 entries
[2024-09-29 11:30:37.808] [main] [INFO] FSNamesystem.<init> - dfs.namenode.safemode.threshold-pct = 0.9990000128746033
[2024-09-29 11:30:37.808] [main] [INFO] FSNamesystem.<init> - dfs.namenode.safemode.min.datanodes = 0
[2024-09-29 11:30:37.808] [main] [INFO] FSNamesystem.<init> - dfs.namenode.safemode.extension     = 0
[2024-09-29 11:30:37.808] [main] [INFO] TopMetrics.logConf - NNTop conf: dfs.namenode.top.window.num.buckets = 10
[2024-09-29 11:30:37.808] [main] [INFO] TopMetrics.logConf - NNTop conf: dfs.namenode.top.num.users = 10
[2024-09-29 11:30:37.808] [main] [INFO] TopMetrics.logConf - NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
[2024-09-29 11:30:37.808] [main] [INFO] FSNamesystem.initRetryCache - Retry cache on namenode is enabled
[2024-09-29 11:30:37.808] [main] [INFO] FSNamesystem.initRetryCache - Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
[2024-09-29 11:30:37.808] [main] [INFO] GSet.computeCapacity - Computing capacity for map NameNodeRetryCache
[2024-09-29 11:30:37.808] [main] [INFO] GSet.computeCapacity - VM type       = 64-bit
[2024-09-29 11:30:37.808] [main] [INFO] GSet.computeCapacity - 0.029999999329447746% max memory 4.4 GB = 1.4 MB
[2024-09-29 11:30:37.808] [main] [INFO] GSet.computeCapacity - capacity      = 2^17 = 131072 entries
[2024-09-29 11:30:37.810] [main] [INFO] Storage.tryLock - Lock on /home/runner/work/incubator-uniffle/incubator-uniffle/integration-test/spark-common/target/tmp/junit203744527383740182/name1/in_use.lock acquired by nodename 11847@action-host
[2024-09-29 11:30:37.811] [main] [INFO] Storage.tryLock - Lock on /home/runner/work/incubator-uniffle/incubator-uniffle/integration-test/spark-common/target/tmp/junit203744527383740182/name2/in_use.lock acquired by nodename 11847@action-host
[2024-09-29 11:30:37.811] [main] [INFO] FileJournalManager.recoverUnfinalizedSegments - Recovering unfinalized segments in /home/runner/work/incubator-uniffle/incubator-uniffle/integration-test/spark-common/target/tmp/junit203744527383740182/name1/current
[2024-09-29 11:30:37.812] [main] [INFO] FileJournalManager.recoverUnfinalizedSegments - Recovering unfinalized segments in /home/runner/work/incubator-uniffle/incubator-uniffle/integration-test/spark-common/target/tmp/junit203744527383740182/name2/current
[2024-09-29 11:30:37.812] [main] [INFO] FSImage.loadFSImage - No edit log streams selected.
[2024-09-29 11:30:37.812] [main] [INFO] FSImage.loadFSImageFile - Planning to load image: FSImageFile(file=/home/runner/work/incubator-uniffle/incubator-uniffle/integration-test/spark-common/target/tmp/junit203744527383740182/name1/current/fsimage_0000000000000000000, cpktTxId=0000000000000000000)
[2024-09-29 11:30:37.812] [main] [INFO] FSImageFormatPBINode.loadINodeSec…ross 5 partitions as shuffle result for task appId[local-1727609493963_1727609493938], shuffleId[2], taskAttemptId[16]
[2024-09-29 11:31:43.107] [Grpc-6] [INFO] SHUFFLE_SERVER_RPC_AUDIT_LOG.close - cmd=reportShuffleResult	statusCode=SUCCESS	from=/10.1.0.22:48710	executionTimeUs=158	appId=local-1727609493963_1727609493938	shuffleId=2	args{taskAttemptId=16, bitmapNum=1, partitionToBlockIdsSize=5}	context{updatedBlockCount=5, expectedBlockCount=5}
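
The audit line above shows the record shape this change emits: tab-separated key=value fields followed by nested args{...} and context{...} groups, where context carries server-side observations (here the updated vs. expected block counts). Below is a minimal, hypothetical sketch of rendering that shape; the class and method names are illustrative, not the server's actual audit-log API.

import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

// Hypothetical sketch only: RpcAuditEntrySketch and its field/arg/context
// methods are illustrative, not the real server-side audit logger.
public class RpcAuditEntrySketch {
  private final Map<String, String> fields = new LinkedHashMap<>();
  private final Map<String, String> args = new LinkedHashMap<>();
  private final Map<String, String> contextMap = new LinkedHashMap<>();

  RpcAuditEntrySketch field(String k, String v) { fields.put(k, v); return this; }
  RpcAuditEntrySketch arg(String k, String v) { args.put(k, v); return this; }
  RpcAuditEntrySketch context(String k, String v) { contextMap.put(k, v); return this; }

  // Renders a named group such as args{taskAttemptId=16, bitmapNum=1}.
  private static String group(String name, Map<String, String> m) {
    return m.entrySet().stream()
        .map(e -> e.getKey() + "=" + e.getValue())
        .collect(Collectors.joining(", ", name + "{", "}"));
  }

  @Override
  public String toString() {
    StringBuilder sb = new StringBuilder();
    fields.forEach((k, v) -> sb.append(k).append('=').append(v).append('\t'));
    return sb.append(group("args", args)).append('\t')
             .append(group("context", contextMap)).toString();
  }

  public static void main(String[] unused) {
    // Reproduces the shape of the reportShuffleResult entry above.
    System.out.println(new RpcAuditEntrySketch()
        .field("cmd", "reportShuffleResult")
        .field("statusCode", "SUCCESS")
        .arg("taskAttemptId", "16")
        .arg("bitmapNum", "1")
        .context("updatedBlockCount", "5")
        .context("expectedBlockCount", "5"));
  }
}
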
[2024-09-29 11:31:43.107] [Executor task launch worker for task 4.0 in stage 5.0 (TID 18)] [INFO] ShuffleWriteClientImpl.reportShuffleResult - Report shuffle result to ShuffleServerInfo{host[10.1.0.22], grpc port[20004], netty port[21001]} for appId[local-1727609493963_1727609493938], shuffleId[2] successfully
[2024-09-29 11:31:43.107] [Executor task launch worker for task 4.0 in stage 5.0 (TID 18)] [INFO] RssShuffleWriter.stop - Report shuffle result for task[16] with bitmapNum[1] cost 1 ms
[2024-09-29 11:31:43.107] [Executor task launch worker for task 4.0 in stage 5.0 (TID 18)] [INFO] Executor.logInfo - Finished task 4.0 in stage 5.0 (TID 18). 1873 bytes result sent to driver
[2024-09-29 11:31:43.108] [task-result-getter-2] [INFO] TaskSetManager.logInfo - Finished task 4.0 in stage 5.0 (TID 18) in 1686 ms on fv-az1775-801.c4xsp1s2bbhedknm5zgr0kmooe.cx.internal.cloudapp.net (executor driver) (5/5)
[2024-09-29 11:31:43.108] [task-result-getter-2] [INFO] TaskSchedulerImpl.logInfo - Removed TaskSet 5.0, whose tasks have all completed, from pool 
[2024-09-29 11:31:43.108] [dag-scheduler-event-loop] [INFO] DAGScheduler.logInfo - ShuffleMapStage 5 (reduceByKey at RepartitionTest.java:98) finished in 2.396 s
[2024-09-29 11:31:43.108] [dag-scheduler-event-loop] [INFO] DAGScheduler.logInfo - looking for newly runnable stages
[2024-09-29 11:31:43.108] [dag-scheduler-event-loop] [INFO] DAGScheduler.logInfo - running: Set()
[2024-09-29 11:31:43.108] [dag-scheduler-event-loop] [INFO] DAGScheduler.logInfo - waiting: Set(ResultStage 6)
[2024-09-29 11:31:43.108] [dag-scheduler-event-loop] [INFO] DAGScheduler.logInfo - failed: Set()
[2024-09-29 11:31:43.108] [dag-scheduler-event-loop] [INFO] DAGScheduler.logInfo - Submitting ResultStage 6 (ShuffledRDD[14] at sortByKey at RepartitionTest.java:99), which has no missing parents
[2024-09-29 11:31:43.110] [dag-scheduler-event-loop] [INFO] MemoryStore.logInfo - Block broadcast_8 stored as values in memory (estimated size 6.2 KiB, free 2.5 GiB)
[2024-09-29 11:31:43.110] [dag-scheduler-event-loop] [INFO] MemoryStore.logInfo - Block broadcast_8_piece0 stored as bytes in memory (estimated size 3.7 KiB, free 2.5 GiB)
[2024-09-29 11:31:43.111] [dispatcher-BlockManagerMaster] [INFO] BlockManagerInfo.logInfo - Added broadcast_8_piece0 in memory on fv-az1775-801.c4xsp1s2bbhedknm5zgr0kmooe.cx.internal.cloudapp.net:37339 (size: 3.7 KiB, free: 2.5 GiB)
[2024-09-29 11:31:43.111] [dag-scheduler-event-loop] [INFO] SparkContext.logInfo - Created broadcast 8 from broadcast at DAGScheduler.scala:1585
[2024-09-29 11:31:43.111] [dag-scheduler-event-loop] [INFO] DAGScheduler.logInfo - Submitting 5 missing tasks from ResultStage 6 (ShuffledRDD[14] at sortByKey at RepartitionTest.java:99) (first 15 tasks are for partitions Vector(0, 1, 2, 3, 4))
[2024-09-29 11:31:43.111] [dag-scheduler-event-loop] [INFO] TaskSchedulerImpl.logInfo - Adding task set 6.0 with 5 tasks resource profile 0
[2024-09-29 11:31:43.112] [dispatcher-event-loop-3] [INFO] TaskSetManager.logInfo - Starting task 0.0 in stage 6.0 (TID 19) (fv-az1775-801.c4xsp1s2bbhedknm5zgr0kmooe.cx.internal.cloudapp.net, executor driver, partition 0, ANY, 7433 bytes) 
[2024-09-29 11:31:43.112] [dispatcher-event-loop-3] [INFO] TaskSetManager.logInfo - Starting task 1.0 in stage 6.0 (TID 20) (fv-az1775-801.c4xsp1s2bbhedknm5zgr0kmooe.cx.internal.cloudapp.net, executor driver, partition 1, ANY, 7433 bytes) 
[2024-09-29 11:31:43.112] [dispatcher-event-loop-3] [INFO] TaskSetManager.logInfo - Starting task 2.0 in stage 6.0 (TID 21) (fv-az1775-801.c4xsp1s2bbhedknm5zgr0kmooe.cx.internal.cloudapp.net, executor driver, partition 2, ANY, 7433 bytes) 
[2024-09-29 11:31:43.112] [dispatcher-event-loop-3] [INFO] TaskSetManager.logInfo - Starting task 3.0 in stage 6.0 (TID 22) (fv-az1775-801.c4xsp1s2bbhedknm5zgr0kmooe.cx.internal.cloudapp.net, executor driver, partition 3, ANY, 7433 bytes) 
[2024-09-29 11:31:43.113] [Executor task launch worker for task 0.0 in stage 6.0 (TID 19)] [INFO] Executor.logInfo - Running task 0.0 in stage 6.0 (TID 19)
[2024-09-29 11:31:43.113] [Executor task launch worker for task 2.0 in stage 6.0 (TID 21)] [INFO] Executor.logInfo - Running task 2.0 in stage 6.0 (TID 21)
[2024-09-29 11:31:43.113] [Executor task launch worker for task 3.0 in stage 6.0 (TID 22)] [INFO] Executor.logInfo - Running task 3.0 in stage 6.0 (TID 22)
[2024-09-29 11:31:43.113] [Executor task launch worker for task 1.0 in stage 6.0 (TID 20)] [INFO] Executor.logInfo - Running task 1.0 in stage 6.0 (TID 20)
[2024-09-29 11:31:43.114] [Executor task launch worker for task 1.0 in stage 6.0 (TID 20)] [INFO] RssShuffleManager.getReader - Get taskId cost 0 ms, and request expected blockIds from 5 tasks for shuffleId[2], partitionId[1, 2]
[2024-09-29 11:31:43.114] [Executor task launch worker for task 2.0 in stage 6.0 (TID 21)] [INFO] RssShuffleManager.getReader - Get taskId cost 0 ms, and request expected blockIds from 5 tasks for shuffleId[2], partitionId[2, 3]
[2024-09-29 11:31:43.114] [Executor task launch worker for task 3.0 in stage 6.0 (TID 22)] [INFO] RssShuffleManager.getReader - Get taskId cost 0 ms, and request expected blockIds from 5 tasks for shuffleId[2], partitionId[3, 4]
[2024-09-29 11:31:43.115] [Grpc-0] [INFO] SHUFFLE_SERVER_RPC_AUDIT_LOG.close - cmd=getShuffleResultForMultiPart	statusCode=SUCCESS	from=/10.1.0.22:48710	executionTimeUs=172	appId=local-1727609493963_1727609493938	shuffleId=2	args{partitionsListSize=1, blockIdLayout=blockIdLayout[seq: 21 bits, part: 20 bits, task: 22 bits]}	return{serializedBlockIdsBytes=35}	context{bitmap[0].<size,byte>=<25,172>, partitionBlockCount=5}
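
The blockIdLayout printed in the audit args describes how a 63-bit block id splits into sequence, partition, and task-attempt fields (21 + 20 + 22 = 63 bits). A small sketch of packing and unpacking under that layout follows; placing the sequence number in the high bits is an assumption made here for illustration.

// Sketch of the layout printed above:
// blockIdLayout[seq: 21 bits, part: 20 bits, task: 22 bits].
// Field order (sequence high, task-attempt low) is assumed for illustration.
public class BlockIdLayoutSketch {
  static final int SEQ_BITS = 21, PART_BITS = 20, TASK_BITS = 22;

  static long pack(long seq, long part, long task) {
    return (seq << (PART_BITS + TASK_BITS)) | (part << TASK_BITS) | task;
  }

  static long partitionOf(long blockId) {
    return (blockId >> TASK_BITS) & ((1L << PART_BITS) - 1);
  }

  public static void main(String[] args) {
    long id = pack(1, 3, 16); // seq=1, partition=3, taskAttempt=16
    System.out.println(partitionOf(id)); // prints 3
  }
}
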
[2024-09-29 11:31:43.115] [Executor task launch worker for task 0.0 in stage 6.0 (TID 19)] [INFO] RssShuffleManager.getReader - Get taskId cost 0 ms, and request expected blockIds from 5 tasks for shuffleId[2], partitionId[0, 1]
[2024-09-29 11:31:43.115] [Grpc-1] [ERROR] ShuffleServerGrpcService.getShuffleResultForMultiPart - Error happened when get shuffle result for appId[local-1727609493963_1727609493938], shuffleId[2], partitions[1]
java.lang.ArrayIndexOutOfBoundsException: 0
	at org.roaringbitmap.longlong.Roaring64NavigableMap.ensureOne(Roaring64NavigableMap.java:676)
	at org.roaringbitmap.longlong.Roaring64NavigableMap.ensureCumulatives(Roaring64NavigableMap.java:567)
	at org.roaringbitmap.longlong.Roaring64NavigableMap.getLongCardinality(Roaring64NavigableMap.java:278)
	at org.apache.uniffle.server.ShuffleTaskManager.lambda$getFinishedBlockIds$9(ShuffleTaskManager.java:657)
	at java.util.Optional.ifPresent(Optional.java:159)
	at org.apache.uniffle.server.ShuffleTaskManager.getFinishedBlockIds(ShuffleTaskManager.java:652)
	at org.apache.uniffle.server.ShuffleServerGrpcService.getShuffleResultForMultiPart(ShuffleServerGrpcService.java:985)
	at org.apache.uniffle.proto.ShuffleServerGrpc$MethodHandlers.invoke(ShuffleServerGrpc.java:1180)
	at io.grpc.stub.ServerCalls$UnaryServerCallHandler$UnaryServerCallListener.onHalfClose(ServerCalls.java:182)
	at io.grpc.PartialForwardingServerCallListener.onHalfClose(PartialForwardingServerCallListener.java:35)
	at io.grpc.ForwardingServerCallListener.onHalfClose(ForwardingServerCallListener.java:23)
	at io.grpc.ForwardingServerCallListener$SimpleForwardingServerCallListener.onHalfClose(ForwardingServerCallListener.java:40)
	at org.apache.uniffle.common.rpc.ClientContextServerInterceptor$1.onHalfClose(ClientContextServerInterceptor.java:63)
	at io.grpc.PartialForwardingServerCallListener.onHalfClose(PartialForwardingServerCallListener.java:35)
	at io.grpc.ForwardingServerCallListener.onHalfClose(ForwardingServerCallListener.java:23)
	at io.grpc.internal.ServerCallImpl$ServerStreamListenerImpl.halfClosed(ServerCallImpl.java:356)
	at io.grpc.internal.ServerImpl$JumpToApplicationThreadServerStreamListener$1HalfClosed.runInContext(ServerImpl.java:861)
	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:750)
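The Grpc-0 entries immediately before and after this failure answered the same shuffle's bitmap successfully within the same millisecond, which points at concurrent access: Roaring64NavigableMap is not thread-safe, and even the read-looking getLongCardinality() mutates state, lazily rebuilding its cumulative-cardinality index via ensureCumulatives()/ensureOne(). A minimal sketch of that hazard follows (probabilistic, so a single run may or may not throw; assumes org.roaringbitmap on the classpath).

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

import org.roaringbitmap.longlong.Roaring64NavigableMap;

// Sketch only: one writer invalidates the bitmap's lazy index while readers
// rebuild it, which can surface as ArrayIndexOutOfBoundsException in ensureOne.
public class BitmapCardinalityRaceSketch {
  public static void main(String[] args) throws InterruptedException {
    Roaring64NavigableMap blockIds = new Roaring64NavigableMap();
    ExecutorService pool = Executors.newFixedThreadPool(4);
    pool.submit(() -> {                      // writer: keeps dirtying the index
      for (long i = 0; i < 1_000_000; i++) {
        blockIds.addLong(i);
      }
    });
    for (int t = 0; t < 3; t++) {            // readers: rebuild the index concurrently
      pool.submit(() -> {
        for (int i = 0; i < 1_000_000; i++) {
          blockIds.getLongCardinality();     // may throw ArrayIndexOutOfBoundsException
        }
      });
    }
    pool.shutdown();
    pool.awaitTermination(1, TimeUnit.MINUTES);
    // One server-side mitigation is to serialize access per bitmap, e.g.
    // synchronized (blockIds) { long c = blockIds.getLongCardinality(); }
  }
}
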
[2024-09-29 11:31:43.115] [Grpc-0] [INFO] SHUFFLE_SERVER_RPC_AUDIT_LOG.close - cmd=getShuffleResultForMultiPart	statusCode=SUCCESS	from=/10.1.0.22:48710	executionTimeUs=95	appId=local-1727609493963_1727609493938	shuffleId=2	args{partitionsListSize=1, blockIdLayout=blockIdLayout[seq: 21 bits, part: 20 bits, task: 22 bits]}	return{serializedBlockIdsBytes=35}	context{bitmap[0].<size,byte>=<25,172>, partitionBlockCount=5}
[2024-09-29 11:31:43.115] [Grpc-1] [INFO] SHUFFLE_SERVER_RPC_AUDIT_LOG.close - cmd=getShuffleResultForMultiPart	statusCode=INTERNAL_ERROR	from=/10.1.0.22:48710	executionTimeUs=490	appId=local-1727609493963_1727609493938	shuffleId=2	args{partitionsListSize=1, blockIdLayout=blockIdLayout[seq: 21 bits, part: 20 bits, task: 22 bits]}	return{serializedBlockIdsBytes=0}
[2024-09-29 11:31:43.116] [Executor task launch worker for task 2.0 in stage 6.0 (TID 21)] [INFO] RssShuffleManager.getReaderImpl - Get shuffle blockId cost 2 ms, and get 5 blockIds for shuffleId[2], startPartition[2], endPartition[3]
[2024-09-29 11:31:43.116] [Executor task launch worker for task 2.0 in stage 6.0 (TID 21)] [INFO] RssShuffleManager.getReaderImpl - Shuffle reader using remote storage Empty Remote Storage
[2024-09-29 11:31:43.116] [Executor task launch worker for task 1.0 in stage 6.0 (TID 20)] [ERROR] ShuffleServerGrpcClient.getShuffleResultForMultiPart - Can't get shuffle result from 10.1.0.22:20004 for [appId=local-1727609493963_1727609493938, shuffleId=2, errorMsg:0
[2024-09-29 11:31:43.116] [Grpc-7] [INFO] SHUFFLE_SERVER_RPC_AUDIT_LOG.close - cmd=getShuffleResultForMultiPart	statusCode=SUCCESS	from=/10.1.0.22:48710	executionTimeUs=100	appId=local-1727609493963_1727609493938	shuffleId=2	args{partitionsListSize=1, blockIdLayout=blockIdLayout[seq: 21 bits, part: 20 bits, task: 22 bits]}	return{serializedBlockIdsBytes=35}	context{bitmap[0].<size,byte>=<25,172>, partitionBlockCount=5}
[2024-09-29 11:31:43.116] [Executor task launch worker for task 1.0 in stage 6.0 (TID 20)] [WARN] ShuffleWriteClientImpl.getShuffleResultForMultiPart - Get shuffle result is failed from ShuffleServerInfo{host[10.1.0.22], grpc port[20004], netty port[21001]} for appId[local-1727609493963_1727609493938], shuffleId[2], requestPartitions[1]
org.apache.uniffle.common.exception.RssFetchFailedException: Can't get shuffle result from 10.1.0.22:20004 for [appId=local-1727609493963_1727609493938, shuffleId=2, errorMsg:0
	at org.apache.uniffle.client.impl.grpc.ShuffleServerGrpcClient.getShuffleResultForMultiPart(ShuffleServerGrpcClient.java:913)
	at org.apache.uniffle.client.impl.ShuffleWriteClientImpl.getShuffleResultForMultiPart(ShuffleWriteClientImpl.java:860)
	at org.apache.spark.shuffle.RssShuffleManager.getShuffleResultForMultiPart(RssShuffleManager.java:1031)
	at org.apache.spark.shuffle.RssShuffleManager.getReaderImpl(RssShuffleManager.java:653)
	at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:586)
	at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:558)
	at org.apache.spark.rdd.ShuffledRDD.compute(ShuffledRDD.scala:106)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:367)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:331)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:93)
	at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:166)
	at org.apache.spark.scheduler.Task.run(Task.scala:141)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$4(Executor.scala:620)
	at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally(SparkErrorUtils.scala:64)
	at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally$(SparkErrorUtils.scala:61)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:94)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:623)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:750)
[2024-09-29 11:31:43.116] [Executor task launch worker for task 1.0 in stage 6.0 (TID 20)] [ERROR] ShuffleWriteClientImpl.getShuffleResultForMultiPart - Failed to meet replica requirement: PartitionDataReplicaRequirementTracking{shuffleId=2, inventory={0={0=[ShuffleServerInfo{host[10.1.0.22], grpc port[20004], netty port[21001]}]}, 1={0=[ShuffleServerInfo{host[10.1.0.22], grpc port[20004], netty port[21001]}]}, 2={0=[ShuffleServerInfo{host[10.1.0.22], grpc port[20004], netty port[21001]}]}, 3={0=[ShuffleServerInfo{host[10.1.0.22], grpc port[20004], netty port[21001]}]}, 4={0=[ShuffleServerInfo{host[10.1.0.22], grpc port[20004], netty port[21001]}]}}, succeedList={}}
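
The tracker dump above reads as inventory{partitionId -> {replicaIndex -> [servers]}} with succeedList={}: no server holding any replica of the requested partition returned a usable result, so the requirement fails. A toy sketch of that check under assumed semantics (a partition is satisfied once every server of at least one replica index has succeeded); all names here are hypothetical.

import java.util.List;
import java.util.Map;
import java.util.Set;

// Toy sketch of the replica requirement seen above; not Uniffle's actual
// PartitionDataReplicaRequirementTracking implementation.
public class ReplicaRequirementSketch {
  static boolean isSatisfied(
      Map<Integer, Map<Integer, List<String>>> inventory, // partition -> replica -> servers
      Map<Integer, Set<String>> succeedList,              // partition -> servers that succeeded
      int partitionId) {
    Map<Integer, List<String>> replicas = inventory.getOrDefault(partitionId, Map.of());
    Set<String> succeeded = succeedList.getOrDefault(partitionId, Set.of());
    // Satisfied if every server of at least one replica index has succeeded.
    return replicas.values().stream().anyMatch(succeeded::containsAll);
  }

  public static void main(String[] args) {
    Map<Integer, Map<Integer, List<String>>> inventory =
        Map.of(1, Map.of(0, List.of("10.1.0.22:20004")));
    // succeedList={} in the log, so partition 1 cannot meet the requirement:
    System.out.println(isSatisfied(inventory, Map.of(), 1)); // prints false
  }
}
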
[2024-09-29 11:31:43.119] [Executor task launch worker for task 3.0 in stage 6.0 (TID 22)] [INFO] RssShuffleManager.getReaderImpl - Get shuffle blockId cost 4 ms, and get 5 blockIds for shuffleId[2], startPartition[3], endPartition[4]
[2024-09-29 11:31:43.119] [Executor task launch worker for task 3.0 in stage 6.0 (TID 22)] [INFO] RssShuffleManager.getReaderImpl - Shuffle reader using remote storage Empty Remote Storage
[2024-09-29 11:31:43.119] [Executor task launch worker for task 1.0 in stage 6.0 (TID 20)] [ERROR] Executor.logError - Exception in task 1.0 in stage 6.0 (TID 20)
org.apache.uniffle.common.exception.RssFetchFailedException: Get shuffle result is failed for appId[local-1727609493963_1727609493938], shuffleId[2]
	at org.apache.uniffle.client.impl.ShuffleWriteClientImpl.getShuffleResultForMultiPart(ShuffleWriteClientImpl.java:889)
	at org.apache.spark.shuffle.RssShuffleManager.getShuffleResultForMultiPart(RssShuffleManager.java:1031)
	at org.apache.spark.shuffle.RssShuffleManager.getReaderImpl(RssShuffleManager.java:653)
	at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:586)
	at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:558)
	at org.apache.spark.rdd.ShuffledRDD.compute(ShuffledRDD.scala:106)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:367)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:331)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:93)
	at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:166)
	at org.apache.spark.scheduler.Task.run(Task.scala:141)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$4(Executor.scala:620)
	at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally(SparkErrorUtils.scala:64)
	at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally$(SparkErrorUtils.scala:61)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:94)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:623)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:750)
[2024-09-29 11:31:43.125] [Executor task launch worker for task 0.0 in stage 6.0 (TID 19)] [INFO] RssShuffleManager.getReaderImpl - Get shuffle blockId cost 10 ms, and get 5 blockIds for shuffleId[2], startPartition[0], endPartition[1]
[2024-09-29 11:31:43.125] [Executor task launch worker for task 0.0 in stage 6.0 (TID 19)] [INFO] RssShuffleManager.getReaderImpl - Shuffle reader using remote storage Empty Remote Storage
[2024-09-29 11:31:43.133] [dispatcher-event-loop-1] [INFO] TaskSetManager.logInfo - Starting task 4.0 in stage 6.0 (TID 23) (fv-az1775-801.c4xsp1s2bbhedknm5zgr0kmooe.cx.internal.cloudapp.net, executor driver, partition 4, ANY, 7433 bytes) 
[2024-09-29 11:31:43.133] [Executor task launch worker for task 4.0 in stage 6.0 (TID 23)] [INFO] Executor.logInfo - Running task 4.0 in stage 6.0 (TID 23)
[2024-09-29 11:31:43.135] [Executor task launch worker for task 4.0 in stage 6.0 (TID 23)] [INFO] RssShuffleManager.getReader - Get taskId cost 1 ms, and request expected blockIds from 5 tasks for shuffleId[2], partitionId[4, 5]
[2024-09-29 11:31:43.135] [task-result-getter-3] [WARN] TaskSetManager.logWarning - Lost task 1.0 in stage 6.0 (TID 20) (fv-az1775-801.c4xsp1s2bbhedknm5zgr0kmooe.cx.internal.cloudapp.net executor driver): org.apache.uniffle.common.exception.RssFetchFailedException: Get shuffle result is failed for appId[local-1727609493963_1727609493938], shuffleId[2]
	at org.apache.uniffle.client.impl.ShuffleWriteClientImpl.getShuffleResultForMultiPart(ShuffleWriteClientImpl.java:889)
	at org.apache.spark.shuffle.RssShuffleManager.getShuffleResultForMultiPart(RssShuffleManager.java:1031)
	at org.apache.spark.shuffle.RssShuffleManager.getReaderImpl(RssShuffleManager.java:653)
	at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:586)
	at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:558)
	at org.apache.spark.rdd.ShuffledRDD.compute(ShuffledRDD.scala:106)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:367)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:331)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:93)
	at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:166)
	at org.apache.spark.scheduler.Task.run(Task.scala:141)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$4(Executor.scala:620)
	at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally(SparkErrorUtils.scala:64)
	at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally$(SparkErrorUtils.scala:61)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:94)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:623)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:750)

[2024-09-29 11:31:43.135] [Grpc-0] [INFO] SHUFFLE_SERVER_RPC_AUDIT_LOG.close - cmd=getShuffleResultForMultiPart	statusCode=SUCCESS	from=/10.1.0.22:48710	executionTimeUs=170	appId=local-1727609493963_1727609493938	shuffleId=2	args{partitionsListSize=1, blockIdLayout=blockIdLayout[seq: 21 bits, part: 20 bits, task: 22 bits]}	return{serializedBlockIdsBytes=35}	context{bitmap[0].<size,byte>=<25,172>, partitionBlockCount=5}
[2024-09-29 11:31:43.136] [Executor task launch worker for task 4.0 in stage 6.0 (TID 23)] [INFO] RssShuffleManager.getReaderImpl - Get shuffle blockId cost 1 ms, and get 5 blockIds for shuffleId[2], startPartition[4], endPartition[5]
[2024-09-29 11:31:43.136] [task-result-getter-3] [ERROR] TaskSetManager.logError - Task 1 in stage 6.0 failed 1 times; aborting job
[2024-09-29 11:31:43.136] [Executor task launch worker for task 4.0 in stage 6.0 (TID 23)] [INFO] RssShuffleManager.getReaderImpl - Shuffle reader using remote storage Empty Remote Storage
[2024-09-29 11:31:43.138] [dag-scheduler-event-loop] [INFO] TaskSchedulerImpl.logInfo - Cancelling stage 6
[2024-09-29 11:31:43.138] [dag-scheduler-event-loop] [INFO] TaskSchedulerImpl.logInfo - Killing all running tasks in stage 6: Stage cancelled: Job aborted due to stage failure: Task 1 in stage 6.0 failed 1 times, most recent failure: Lost task 1.0 in stage 6.0 (TID 20) (fv-az1775-801.c4xsp1s2bbhedknm5zgr0kmooe.cx.internal.cloudapp.net executor driver): org.apache.uniffle.common.exception.RssFetchFailedException: Get shuffle result is failed for appId[local-1727609493963_1727609493938], shuffleId[2]
	at org.apache.uniffle.client.impl.ShuffleWriteClientImpl.getShuffleResultForMultiPart(ShuffleWriteClientImpl.java:889)
	at org.apache.spark.shuffle.RssShuffleManager.getShuffleResultForMultiPart(RssShuffleManager.java:1031)
	at org.apache.spark.shuffle.RssShuffleManager.getReaderImpl(RssShuffleManager.java:653)
	at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:586)
	at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:558)
	at org.apache.spark.rdd.ShuffledRDD.compute(ShuffledRDD.scala:106)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:367)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:331)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:93)
	at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:166)
	at org.apache.spark.scheduler.Task.run(Task.scala:141)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$4(Executor.scala:620)
	at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally(SparkErrorUtils.scala:64)
	at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally$(SparkErrorUtils.scala:61)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:94)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:623)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:750)

Driver stacktrace:
[2024-09-29 11:31:43.141] [dispatcher-event-loop-0] [INFO] Executor.logInfo - Executor is trying to kill task 0.0 in stage 6.0 (TID 19), reason: Stage cancelled: Job aborted due to stage failure: Task 1 in stage 6.0 failed 1 times, most recent failure: Lost task 1.0 in stage 6.0 (TID 20) (fv-az1775-801.c4xsp1s2bbhedknm5zgr0kmooe.cx.internal.cloudapp.net executor driver): org.apache.uniffle.common.exception.RssFetchFailedException: Get shuffle result is failed for appId[local-1727609493963_1727609493938], shuffleId[2]
	at org.apache.uniffle.client.impl.ShuffleWriteClientImpl.getShuffleResultForMultiPart(ShuffleWriteClientImpl.java:889)
	at org.apache.spark.shuffle.RssShuffleManager.getShuffleResultForMultiPart(RssShuffleManager.java:1031)
	at org.apache.spark.shuffle.RssShuffleManager.getReaderImpl(RssShuffleManager.java:653)
	at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:586)
	at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:558)
	at org.apache.spark.rdd.ShuffledRDD.compute(ShuffledRDD.scala:106)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:367)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:331)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:93)
	at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:166)
	at org.apache.spark.scheduler.Task.run(Task.scala:141)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$4(Executor.scala:620)
	at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally(SparkErrorUtils.scala:64)
	at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally$(SparkErrorUtils.scala:61)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:94)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:623)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:750)

Driver stacktrace:
[2024-09-29 11:31:43.141] [dispatcher-event-loop-0] [INFO] Executor.logInfo - Executor is trying to kill task 2.0 in stage 6.0 (TID 21), reason: Stage cancelled: Job aborted due to stage failure: Task 1 in stage 6.0 failed 1 times, most recent failure: Lost task 1.0 in stage 6.0 (TID 20) (fv-az1775-801.c4xsp1s2bbhedknm5zgr0kmooe.cx.internal.cloudapp.net executor driver): org.apache.uniffle.common.exception.RssFetchFailedException: Get shuffle result is failed for appId[local-1727609493963_1727609493938], shuffleId[2]
	at org.apache.uniffle.client.impl.ShuffleWriteClientImpl.getShuffleResultForMultiPart(ShuffleWriteClientImpl.java:889)
	at org.apache.spark.shuffle.RssShuffleManager.getShuffleResultForMultiPart(RssShuffleManager.java:1031)
	at org.apache.spark.shuffle.RssShuffleManager.getReaderImpl(RssShuffleManager.java:653)
	at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:586)
	at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:558)
	at org.apache.spark.rdd.ShuffledRDD.compute(ShuffledRDD.scala:106)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:367)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:331)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:93)
	at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:166)
	at org.apache.spark.scheduler.Task.run(Task.scala:141)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$4(Executor.scala:620)
	at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally(SparkErrorUtils.scala:64)
	at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally$(SparkErrorUtils.scala:61)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:94)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:623)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:750)

Driver stacktrace:
[2024-09-29 11:31:43.141] [dispatcher-event-loop-0] [INFO] Executor.logInfo - Executor is trying to kill task 3.0 in stage 6.0 (TID 22), reason: Stage cancelled: Job aborted due to stage failure: Task 1 in stage 6.0 failed 1 times, most recent failure: Lost task 1.0 in stage 6.0 (TID 20) (fv-az1775-801.c4xsp1s2bbhedknm5zgr0kmooe.cx.internal.cloudapp.net executor driver): org.apache.uniffle.common.exception.RssFetchFailedException: Get shuffle result is failed for appId[local-1727609493963_1727609493938], shuffleId[2]
	at org.apache.uniffle.client.impl.ShuffleWriteClientImpl.getShuffleResultForMultiPart(ShuffleWriteClientImpl.java:889)
	at org.apache.spark.shuffle.RssShuffleManager.getShuffleResultForMultiPart(RssShuffleManager.java:1031)
	at org.apache.spark.shuffle.RssShuffleManager.getReaderImpl(RssShuffleManager.java:653)
	at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:586)
	at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:558)
	at org.apache.spark.rdd.ShuffledRDD.compute(ShuffledRDD.scala:106)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:367)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:331)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:93)
	at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:166)
	at org.apache.spark.scheduler.Task.run(Task.scala:141)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$4(Executor.scala:620)
	at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally(SparkErrorUtils.scala:64)
	at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally$(SparkErrorUtils.scala:61)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:94)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:623)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:750)

Driver stacktrace:
[2024-09-29 11:31:43.141] [dispatcher-event-loop-0] [INFO] Executor.logInfo - Executor is trying to kill task 4.0 in stage 6.0 (TID 23), reason: Stage cancelled: Job aborted due to stage failure: Task 1 in stage 6.0 failed 1 times, most recent failure: Lost task 1.0 in stage 6.0 (TID 20) (fv-az1775-801.c4xsp1s2bbhedknm5zgr0kmooe.cx.internal.cloudapp.net executor driver): org.apache.uniffle.common.exception.RssFetchFailedException: Get shuffle result is failed for appId[local-1727609493963_1727609493938], shuffleId[2]
	at org.apache.uniffle.client.impl.ShuffleWriteClientImpl.getShuffleResultForMultiPart(ShuffleWriteClientImpl.java:889)
	at org.apache.spark.shuffle.RssShuffleManager.getShuffleResultForMultiPart(RssShuffleManager.java:1031)
	at org.apache.spark.shuffle.RssShuffleManager.getReaderImpl(RssShuffleManager.java:653)
	at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:586)
	at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:558)
	at org.apache.spark.rdd.ShuffledRDD.compute(ShuffledRDD.scala:106)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:367)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:331)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:93)
	at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:166)
	at org.apache.spark.scheduler.Task.run(Task.scala:141)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$4(Executor.scala:620)
	at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally(SparkErrorUtils.scala:64)
	at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally$(SparkErrorUtils.scala:61)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:94)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:623)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:750)

Driver stacktrace:
[2024-09-29 11:31:43.141] [dag-scheduler-event-loop] [INFO] TaskSchedulerImpl.logInfo - Stage 6 was cancelled
[2024-09-29 11:31:43.142] [dag-scheduler-event-loop] [INFO] DAGScheduler.logInfo - ResultStage 6 (collectAsMap at RepartitionTest.java:99) failed in 0.032 s due to Job aborted due to stage failure: Task 1 in stage 6.0 failed 1 times, most recent failure: Lost task 1.0 in stage 6.0 (TID 20) (fv-az1775-801.c4xsp1s2bbhedknm5zgr0kmooe.cx.internal.cloudapp.net executor driver): org.apache.uniffle.common.exception.RssFetchFailedException: Get shuffle result is failed for appId[local-1727609493963_1727609493938], shuffleId[2]
	at org.apache.uniffle.client.impl.ShuffleWriteClientImpl.getShuffleResultForMultiPart(ShuffleWriteClientImpl.java:889)
	at org.apache.spark.shuffle.RssShuffleManager.getShuffleResultForMultiPart(RssShuffleManager.java:1031)
	at org.apache.spark.shuffle.RssShuffleManager.getReaderImpl(RssShuffleManager.java:653)
	at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:586)
	at org.apache.spark.shuffle.RssShuffleManager.getReader(RssShuffleManager.java:558)
	at org.apache.spark.rdd.ShuffledRDD.compute(ShuffledRDD.scala:106)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:367)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:331)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:93)
	at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:166)
	at org.apache.spark.scheduler.Task.run(Task.scala:141)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$4(Executor.scala:620)
	at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally(SparkErrorUtils.scala:64)
	at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally$(SparkErrorUtils.scala:61)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:94)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:623)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:750)

Driver stacktrace:
[2024-09-29 11:31:43.143] [main] [INFO] DAGScheduler.logInfo - Job 1 failed: collectAsMap at RepartitionTest.java:99, took 2.437741 s

Check notice on line 0 in .github

2 skipped tests found

There are 2 skipped tests, see "Raw output" for the full list of skipped tests.
Raw output
org.apache.uniffle.test.AccessClusterTest ‑ org.apache.uniffle.test.AccessClusterTest
org.apache.uniffle.test.ShuffleServerGrpcTest ‑ rpcMetricsTest

Check notice on line 0 in .github

1027 tests found (test 1 to 734)

There are 1027 tests, see "Raw output" for the list of tests 1 to 734.
Raw output
org.apache.hadoop.mapred.SortWriteBufferManagerTest ‑ testCombineBuffer
org.apache.hadoop.mapred.SortWriteBufferManagerTest ‑ testCommitBlocksWhenMemoryShuffleDisabled
org.apache.hadoop.mapred.SortWriteBufferManagerTest ‑ testOnePartition
org.apache.hadoop.mapred.SortWriteBufferManagerTest ‑ testWriteException
org.apache.hadoop.mapred.SortWriteBufferManagerTest ‑ testWriteNormal
org.apache.hadoop.mapred.SortWriteBufferManagerTest ‑ testWriteNormalWithRemoteMerge
org.apache.hadoop.mapred.SortWriteBufferManagerTest ‑ testWriteNormalWithRemoteMergeAndCombine
org.apache.hadoop.mapred.SortWriteBufferTest ‑ testReadWrite
org.apache.hadoop.mapred.SortWriteBufferTest ‑ testSortBufferIterator
org.apache.hadoop.mapreduce.RssMRUtilsTest ‑ applyDynamicClientConfTest
org.apache.hadoop.mapreduce.RssMRUtilsTest ‑ baskAttemptIdTest
org.apache.hadoop.mapreduce.RssMRUtilsTest ‑ blockConvertTest
org.apache.hadoop.mapreduce.RssMRUtilsTest ‑ partitionIdConvertBlockTest
org.apache.hadoop.mapreduce.RssMRUtilsTest ‑ testEstimateTaskConcurrency
org.apache.hadoop.mapreduce.RssMRUtilsTest ‑ testGetRequiredShuffleServerNumber
org.apache.hadoop.mapreduce.RssMRUtilsTest ‑ testValidateRssClientConf
org.apache.hadoop.mapreduce.task.reduce.EventFetcherTest ‑ extraEventFetch
org.apache.hadoop.mapreduce.task.reduce.EventFetcherTest ‑ missingEventFetch
org.apache.hadoop.mapreduce.task.reduce.EventFetcherTest ‑ multiPassEventFetch
org.apache.hadoop.mapreduce.task.reduce.EventFetcherTest ‑ obsoletedAndTipFailedEventFetch
org.apache.hadoop.mapreduce.task.reduce.EventFetcherTest ‑ singlePassEventFetch
org.apache.hadoop.mapreduce.task.reduce.EventFetcherTest ‑ singlePassWithRepeatedSuccessEventFetch
org.apache.hadoop.mapreduce.task.reduce.FetcherTest ‑ testCodecIsDuplicated
org.apache.hadoop.mapreduce.task.reduce.FetcherTest ‑ writeAndReadDataMergeFailsTestWithRss
org.apache.hadoop.mapreduce.task.reduce.FetcherTest ‑ writeAndReadDataTestWithRss
org.apache.hadoop.mapreduce.task.reduce.FetcherTest ‑ writeAndReadDataTestWithoutRss
org.apache.hadoop.mapreduce.task.reduce.RMRssShuffleTest ‑ testReadShuffleWithCombine
org.apache.hadoop.mapreduce.task.reduce.RMRssShuffleTest ‑ testReadShuffleWithoutCombine
org.apache.hadoop.mapreduce.task.reduce.RssInMemoryRemoteMergerTest ‑ mergerTest{File}
org.apache.hadoop.mapreduce.task.reduce.RssRemoteMergeManagerTest ‑ mergerTest{File}
org.apache.spark.shuffle.DelegationRssShuffleManagerTest ‑ testCreateFallback
org.apache.spark.shuffle.DelegationRssShuffleManagerTest ‑ testCreateInDriver
org.apache.spark.shuffle.DelegationRssShuffleManagerTest ‑ testCreateInDriverDenied
org.apache.spark.shuffle.DelegationRssShuffleManagerTest ‑ testCreateInExecutor
org.apache.spark.shuffle.DelegationRssShuffleManagerTest ‑ testTryAccessCluster
org.apache.spark.shuffle.FunctionUtilsTests ‑ testOnceFunction0
org.apache.spark.shuffle.RssShuffleManagerTest ‑ testCreateShuffleManagerServer
org.apache.spark.shuffle.RssShuffleManagerTest ‑ testGetDataDistributionType
org.apache.spark.shuffle.RssShuffleManagerTest ‑ testRssShuffleManagerInterface
org.apache.spark.shuffle.RssShuffleManagerTest ‑ testRssShuffleManagerRegisterShuffle{int}[1]
org.apache.spark.shuffle.RssShuffleManagerTest ‑ testRssShuffleManagerRegisterShuffle{int}[2]
org.apache.spark.shuffle.RssShuffleManagerTest ‑ testRssShuffleManagerRegisterShuffle{int}[3]
org.apache.spark.shuffle.RssShuffleManagerTest ‑ testWithStageRetry
org.apache.spark.shuffle.RssSpark2ShuffleUtilsTest ‑ testCreateFetchFailedException
org.apache.spark.shuffle.RssSpark2ShuffleUtilsTest ‑ testIsStageResubmitSupported
org.apache.spark.shuffle.RssSpark3ShuffleUtilsTest ‑ testCreateFetchFailedException
org.apache.spark.shuffle.RssSpark3ShuffleUtilsTest ‑ testIsStageResubmitSupported
org.apache.spark.shuffle.RssSparkShuffleUtilsTest ‑ applyDynamicClientConfTest
org.apache.spark.shuffle.RssSparkShuffleUtilsTest ‑ odfsConfigurationTest
org.apache.spark.shuffle.RssSparkShuffleUtilsTest ‑ testAssignmentTags
org.apache.spark.shuffle.RssSparkShuffleUtilsTest ‑ testEstimateTaskConcurrency
org.apache.spark.shuffle.RssSparkShuffleUtilsTest ‑ testGetRequiredShuffleServerNumber
org.apache.spark.shuffle.RssSparkShuffleUtilsTest ‑ testValidateRssClientConf
org.apache.spark.shuffle.SparkVersionUtilsTest ‑ testSpark3Version
org.apache.spark.shuffle.SparkVersionUtilsTest ‑ testSparkVersion
org.apache.spark.shuffle.handle.MutableShuffleHandleInfoTest ‑ testCreatePartitionReplicaTracking
org.apache.spark.shuffle.handle.MutableShuffleHandleInfoTest ‑ testListAllPartitionAssignmentServers
org.apache.spark.shuffle.handle.MutableShuffleHandleInfoTest ‑ testUpdateAssignment
org.apache.spark.shuffle.reader.RssShuffleDataIteratorTest ‑ cleanup
org.apache.spark.shuffle.reader.RssShuffleDataIteratorTest ‑ readTest1{BlockIdLayout}[1]
org.apache.spark.shuffle.reader.RssShuffleDataIteratorTest ‑ readTest1{BlockIdLayout}[2]
org.apache.spark.shuffle.reader.RssShuffleDataIteratorTest ‑ readTest2
org.apache.spark.shuffle.reader.RssShuffleDataIteratorTest ‑ readTest3
org.apache.spark.shuffle.reader.RssShuffleDataIteratorTest ‑ readTest4
org.apache.spark.shuffle.reader.RssShuffleDataIteratorTest ‑ readTest5
org.apache.spark.shuffle.reader.RssShuffleDataIteratorTest ‑ readTest7
org.apache.spark.shuffle.reader.RssShuffleDataIteratorTest ‑ readTestUncompressedShuffle
org.apache.spark.shuffle.reader.RssShuffleReaderTest ‑ readTest
org.apache.spark.shuffle.writer.DataPusherTest ‑ testSendData
org.apache.spark.shuffle.writer.RssShuffleWriterTest ‑ blockFailureResendTest
org.apache.spark.shuffle.writer.RssShuffleWriterTest ‑ checkBlockSendResultTest
org.apache.spark.shuffle.writer.RssShuffleWriterTest ‑ dataConsistencyWhenSpillTriggeredTest
org.apache.spark.shuffle.writer.RssShuffleWriterTest ‑ postBlockEventTest
org.apache.spark.shuffle.writer.RssShuffleWriterTest ‑ reassignMultiTimesForOnePartitionIdTest
org.apache.spark.shuffle.writer.RssShuffleWriterTest ‑ refreshAssignmentTest
org.apache.spark.shuffle.writer.RssShuffleWriterTest ‑ writeTest
org.apache.spark.shuffle.writer.WriteBufferManagerTest ‑ addFirstRecordWithLargeSizeTest
org.apache.spark.shuffle.writer.WriteBufferManagerTest ‑ addHugeRecordTest
org.apache.spark.shuffle.writer.WriteBufferManagerTest ‑ addNullValueRecordTest
org.apache.spark.shuffle.writer.WriteBufferManagerTest ‑ addPartitionDataTest
org.apache.spark.shuffle.writer.WriteBufferManagerTest ‑ addRecordCompressedTest{BlockIdLayout}[1]
org.apache.spark.shuffle.writer.WriteBufferManagerTest ‑ addRecordCompressedTest{BlockIdLayout}[2]
org.apache.spark.shuffle.writer.WriteBufferManagerTest ‑ addRecordUnCompressedTest{BlockIdLayout}[1]
org.apache.spark.shuffle.writer.WriteBufferManagerTest ‑ addRecordUnCompressedTest{BlockIdLayout}[2]
org.apache.spark.shuffle.writer.WriteBufferManagerTest ‑ buildBlockEventsTest
org.apache.spark.shuffle.writer.WriteBufferManagerTest ‑ createBlockIdTest{BlockIdLayout}[1]
org.apache.spark.shuffle.writer.WriteBufferManagerTest ‑ createBlockIdTest{BlockIdLayout}[2]
org.apache.spark.shuffle.writer.WriteBufferManagerTest ‑ spillByOthersTest
org.apache.spark.shuffle.writer.WriteBufferManagerTest ‑ spillByOwnTest
org.apache.spark.shuffle.writer.WriteBufferManagerTest ‑ spillByOwnWithSparkTaskMemoryManagerTest
org.apache.spark.shuffle.writer.WriteBufferManagerTest ‑ spillPartial
org.apache.spark.shuffle.writer.WriteBufferManagerTest ‑ testClearWithSpillRatio
org.apache.spark.shuffle.writer.WriteBufferTest ‑ test
org.apache.tez.common.GetShuffleServerRequestTest ‑ testSerDe
org.apache.tez.common.GetShuffleServerResponseTest ‑ testSerDe
org.apache.tez.common.IdUtilsTest ‑ testConvertTezTaskAttemptID
org.apache.tez.common.InputContextUtilsTest ‑ testGetTezTaskAttemptID
org.apache.tez.common.RssTezUtilsTest ‑ attemptTaskIdTest
org.apache.tez.common.RssTezUtilsTest ‑ baskAttemptIdTest
org.apache.tez.common.RssTezUtilsTest ‑ blockConvertTest
org.apache.tez.common.RssTezUtilsTest ‑ testApplyDynamicClientConf
org.apache.tez.common.RssTezUtilsTest ‑ testComputeShuffleId
org.apache.tez.common.RssTezUtilsTest ‑ testEstimateTaskConcurrency
org.apache.tez.common.RssTezUtilsTest ‑ testFilterRssConf
org.apache.tez.common.RssTezUtilsTest ‑ testGetRequiredShuffleServerNumber
org.apache.tez.common.RssTezUtilsTest ‑ testParseDagId
org.apache.tez.common.RssTezUtilsTest ‑ testParseRssWorker
org.apache.tez.common.RssTezUtilsTest ‑ testPartitionIdConvertBlock
org.apache.tez.common.RssTezUtilsTest ‑ testTaskIdStrToTaskId
org.apache.tez.common.ShuffleAssignmentsInfoWritableTest ‑ testSerDe
org.apache.tez.common.TezIdHelperTest ‑ testTetTaskAttemptId
org.apache.tez.dag.app.RssDAGAppMasterTest ‑ testDagStateChangeCallback
org.apache.tez.dag.app.RssDAGAppMasterTest ‑ testFetchRemoteStorageFromCoordinator{String}[1]
org.apache.tez.dag.app.RssDAGAppMasterTest ‑ testFetchRemoteStorageFromCoordinator{String}[2]
org.apache.tez.dag.app.RssDAGAppMasterTest ‑ testFetchRemoteStorageFromDynamicConf{String}[1]
org.apache.tez.dag.app.RssDAGAppMasterTest ‑ testFetchRemoteStorageFromDynamicConf{String}[2]
org.apache.tez.dag.app.TezRemoteShuffleManagerTest ‑ testTezRemoteShuffleManager
org.apache.tez.dag.app.TezRemoteShuffleManagerTest ‑ testTezRemoteShuffleManagerSecure
org.apache.tez.runtime.library.common.shuffle.impl.RssShuffleManagerTest ‑ testFetchFailed
org.apache.tez.runtime.library.common.shuffle.impl.RssShuffleManagerTest ‑ testProgressWithEmptyPendingHosts
org.apache.tez.runtime.library.common.shuffle.impl.RssShuffleManagerTest ‑ testUseSharedExecutor
org.apache.tez.runtime.library.common.shuffle.impl.RssSimpleFetchedInputAllocatorTest ‑ testAllocate{File}
org.apache.tez.runtime.library.common.shuffle.impl.RssTezFetcherTest ‑ testReadWithDiskFetchedInput{File}
org.apache.tez.runtime.library.common.shuffle.impl.RssTezFetcherTest ‑ testReadWithRemoteFetchedInput{File}
org.apache.tez.runtime.library.common.shuffle.impl.RssTezFetcherTest ‑ writeAndReadDataTestWithoutRss
org.apache.tez.runtime.library.common.shuffle.orderedgrouped.RssInMemoryMergerTest ‑ mergerTest
org.apache.tez.runtime.library.common.shuffle.orderedgrouped.RssMergeManagerTest ‑ mergerTest
org.apache.tez.runtime.library.common.shuffle.orderedgrouped.RssShuffleSchedulerTest ‑ testPenalty
org.apache.tez.runtime.library.common.shuffle.orderedgrouped.RssShuffleSchedulerTest ‑ testProgressDuringGetHostWait
org.apache.tez.runtime.library.common.shuffle.orderedgrouped.RssShuffleSchedulerTest ‑ testReducerHealth1
org.apache.tez.runtime.library.common.shuffle.orderedgrouped.RssShuffleSchedulerTest ‑ testReducerHealth2
org.apache.tez.runtime.library.common.shuffle.orderedgrouped.RssShuffleSchedulerTest ‑ testReducerHealth3
org.apache.tez.runtime.library.common.shuffle.orderedgrouped.RssShuffleSchedulerTest ‑ testReducerHealth4
org.apache.tez.runtime.library.common.shuffle.orderedgrouped.RssShuffleSchedulerTest ‑ testReducerHealth5
org.apache.tez.runtime.library.common.shuffle.orderedgrouped.RssShuffleSchedulerTest ‑ testReducerHealth6
org.apache.tez.runtime.library.common.shuffle.orderedgrouped.RssShuffleSchedulerTest ‑ testReducerHealth7
org.apache.tez.runtime.library.common.shuffle.orderedgrouped.RssShuffleSchedulerTest ‑ testShutdownWithInterrupt
org.apache.tez.runtime.library.common.shuffle.orderedgrouped.RssShuffleTest ‑ testKillSelf
org.apache.tez.runtime.library.common.shuffle.orderedgrouped.RssShuffleTest ‑ testSchedulerTerminatesOnException
org.apache.tez.runtime.library.common.shuffle.orderedgrouped.RssTezBypassWriterTest ‑ testCalcChecksum
org.apache.tez.runtime.library.common.shuffle.orderedgrouped.RssTezBypassWriterTest ‑ testWrite
org.apache.tez.runtime.library.common.shuffle.orderedgrouped.RssTezBypassWriterTest ‑ testWriteDiskFetchInput{File}
org.apache.tez.runtime.library.common.shuffle.orderedgrouped.RssTezBypassWriterTest ‑ testWriteRemoteFetchInput
org.apache.tez.runtime.library.common.shuffle.orderedgrouped.RssTezShuffleDataFetcherTest ‑ testIteratorWithInMemoryReader
org.apache.tez.runtime.library.common.sort.buffer.WriteBufferManagerTest ‑ testCommitBlocksWhenMemoryShuffleDisabled{File}
org.apache.tez.runtime.library.common.sort.buffer.WriteBufferManagerTest ‑ testFailFastWhenFailedToSendBlocks{File}
org.apache.tez.runtime.library.common.sort.buffer.WriteBufferManagerTest ‑ testWriteException{File}
org.apache.tez.runtime.library.common.sort.buffer.WriteBufferManagerTest ‑ testWriteNormal{File}
org.apache.tez.runtime.library.common.sort.buffer.WriteBufferTest ‑ testReadWrite
org.apache.tez.runtime.library.common.sort.impl.RssSorterTest ‑ testCollectAndRecordsPerPartition
org.apache.tez.runtime.library.common.sort.impl.RssTezPerPartitionRecordTest ‑ testNumPartitions
org.apache.tez.runtime.library.common.sort.impl.RssTezPerPartitionRecordTest ‑ testRssTezIndexHasData
org.apache.tez.runtime.library.common.sort.impl.RssUnSorterTest ‑ testCollectAndRecordsPerPartition
org.apache.tez.runtime.library.input.RssOrderedGroupedKVInputTest ‑ testInterruptWhileAwaitingInput
org.apache.tez.runtime.library.input.RssSortedGroupedMergedInputTest ‑ testSimpleConcatenatedMergedKeyValueInput
org.apache.tez.runtime.library.input.RssSortedGroupedMergedInputTest ‑ testSimpleConcatenatedMergedKeyValuesInput
org.apache.tez.runtime.library.output.RssOrderedPartitionedKVOutputTest ‑ testClose
org.apache.tez.runtime.library.output.RssOrderedPartitionedKVOutputTest ‑ testNonStartedOutput
org.apache.tez.runtime.library.output.RssUnorderedKVOutputTest ‑ testClose
org.apache.tez.runtime.library.output.RssUnorderedKVOutputTest ‑ testNonStartedOutput
org.apache.tez.runtime.library.output.RssUnorderedPartitionedKVOutputTest ‑ testClose
org.apache.tez.runtime.library.output.RssUnorderedPartitionedKVOutputTest ‑ testNonStartedOutput
org.apache.uniffle.cli.AdminRestApiTest ‑ testRunRefreshAccessChecker
org.apache.uniffle.cli.CLIContentUtilsTest ‑ testTableFormat
org.apache.uniffle.cli.UniffleTestAdminCLI ‑ testAdminRefreshCLI
org.apache.uniffle.cli.UniffleTestAdminCLI ‑ testMissingClientCLI
org.apache.uniffle.cli.UniffleTestCLI ‑ testExampleCLI
org.apache.uniffle.cli.UniffleTestCLI ‑ testHelp
org.apache.uniffle.client.ClientUtilsTest ‑ testGenerateTaskIdBitMap
org.apache.uniffle.client.ClientUtilsTest ‑ testGetMaxAttemptNo
org.apache.uniffle.client.ClientUtilsTest ‑ testGetNumberOfSignificantBits
org.apache.uniffle.client.ClientUtilsTest ‑ testValidateClientType
org.apache.uniffle.client.ClientUtilsTest ‑ testWaitUntilDoneOrFail
org.apache.uniffle.client.PartitionDataReplicaRequirementTrackingTest ‑ testMultipleReplicaWithMultiServers
org.apache.uniffle.client.PartitionDataReplicaRequirementTrackingTest ‑ testMultipleReplicaWithSingleServer
org.apache.uniffle.client.PartitionDataReplicaRequirementTrackingTest ‑ testSingleReplicaWithMultiServers
org.apache.uniffle.client.PartitionDataReplicaRequirementTrackingTest ‑ testSingleReplicaWithSingleShuffleServer
org.apache.uniffle.client.factory.ShuffleManagerClientFactoryTest ‑ createShuffleManagerClient
org.apache.uniffle.client.impl.FailedBlockSendTrackerTest ‑ test
org.apache.uniffle.client.impl.ShuffleReadClientImplTest ‑ readTest1
org.apache.uniffle.client.impl.ShuffleReadClientImplTest ‑ readTest10
org.apache.uniffle.client.impl.ShuffleReadClientImplTest ‑ readTest11
org.apache.uniffle.client.impl.ShuffleReadClientImplTest ‑ readTest12
org.apache.uniffle.client.impl.ShuffleReadClientImplTest ‑ readTest13
org.apache.uniffle.client.impl.ShuffleReadClientImplTest ‑ readTest13b
org.apache.uniffle.client.impl.ShuffleReadClientImplTest ‑ readTest14
org.apache.uniffle.client.impl.ShuffleReadClientImplTest ‑ readTest15
org.apache.uniffle.client.impl.ShuffleReadClientImplTest ‑ readTest16
org.apache.uniffle.client.impl.ShuffleReadClientImplTest ‑ readTest2
org.apache.uniffle.client.impl.ShuffleReadClientImplTest ‑ readTest3
org.apache.uniffle.client.impl.ShuffleReadClientImplTest ‑ readTest4
org.apache.uniffle.client.impl.ShuffleReadClientImplTest ‑ readTest5
org.apache.uniffle.client.impl.ShuffleReadClientImplTest ‑ readTest7
org.apache.uniffle.client.impl.ShuffleReadClientImplTest ‑ readTest8
org.apache.uniffle.client.impl.ShuffleReadClientImplTest ‑ readTest9
org.apache.uniffle.client.impl.ShuffleWriteClientImplTest ‑ testAbandonEventWhenTaskFailed
org.apache.uniffle.client.impl.ShuffleWriteClientImplTest ‑ testGetShuffleResult{BlockIdLayout}[1]
org.apache.uniffle.client.impl.ShuffleWriteClientImplTest ‑ testGetShuffleResult{BlockIdLayout}[2]
org.apache.uniffle.client.impl.ShuffleWriteClientImplTest ‑ testRegisterAndUnRegisterShuffleServer
org.apache.uniffle.client.impl.ShuffleWriteClientImplTest ‑ testSendData
org.apache.uniffle.client.impl.ShuffleWriteClientImplTest ‑ testSendDataWithDefectiveServers
org.apache.uniffle.client.impl.ShuffleWriteClientImplTest ‑ testSettingRssClientConfigs
org.apache.uniffle.client.record.reader.BufferedSegmentTest ‑ testMergeResolvedSegmentWithHook
org.apache.uniffle.client.record.reader.RMRecordsReaderTest ‑ testNormalReadWithCombine{String}[1]
org.apache.uniffle.client.record.reader.RMRecordsReaderTest ‑ testNormalReadWithCombine{String}[2]
org.apache.uniffle.client.record.reader.RMRecordsReaderTest ‑ testNormalReadWithoutCombine{String}[1]
org.apache.uniffle.client.record.reader.RMRecordsReaderTest ‑ testNormalReadWithoutCombine{String}[2]
org.apache.uniffle.client.record.reader.RMRecordsReaderTest ‑ testReadMulitPartitionWithCombine{String}[1]
org.apache.uniffle.client.record.reader.RMRecordsReaderTest ‑ testReadMulitPartitionWithCombine{String}[2]
org.apache.uniffle.client.record.reader.RMRecordsReaderTest ‑ testReadMulitPartitionWithoutCombine{String}[1]
org.apache.uniffle.client.record.reader.RMRecordsReaderTest ‑ testReadMulitPartitionWithoutCombine{String}[2]
org.apache.uniffle.client.record.writer.RecordCollectionTest ‑ testSortAndSerializeRecords{String}[1]
org.apache.uniffle.client.record.writer.RecordCollectionTest ‑ testSortCombineAndSerializeRecords{String}[1]
org.apache.uniffle.client.shuffle.MRCombinerTest ‑ testMRCombiner
org.apache.uniffle.client.shuffle.RecordCollectorTest ‑ testRecordCollector
org.apache.uniffle.common.ArgumentsTest ‑ argEmptyTest
org.apache.uniffle.common.ArgumentsTest ‑ argTest
org.apache.uniffle.common.BufferSegmentTest ‑ testEquals
org.apache.uniffle.common.BufferSegmentTest ‑ testGetOffset
org.apache.uniffle.common.BufferSegmentTest ‑ testNotEquals{long, long, int, int, long, long}[1]
org.apache.uniffle.common.BufferSegmentTest ‑ testNotEquals{long, long, int, int, long, long}[2]
org.apache.uniffle.common.BufferSegmentTest ‑ testNotEquals{long, long, int, int, long, long}[3]
org.apache.uniffle.common.BufferSegmentTest ‑ testNotEquals{long, long, int, int, long, long}[4]
org.apache.uniffle.common.BufferSegmentTest ‑ testNotEquals{long, long, int, int, long, long}[5]
org.apache.uniffle.common.BufferSegmentTest ‑ testNotEquals{long, long, int, int, long, long}[6]
org.apache.uniffle.common.BufferSegmentTest ‑ testToString
org.apache.uniffle.common.PartitionRangeTest ‑ testCompareTo
org.apache.uniffle.common.PartitionRangeTest ‑ testEquals
org.apache.uniffle.common.PartitionRangeTest ‑ testHashCode
org.apache.uniffle.common.PartitionRangeTest ‑ testPartitionRange
org.apache.uniffle.common.PartitionRangeTest ‑ testToString
org.apache.uniffle.common.ReconfigurableConfManagerTest ‑ test
org.apache.uniffle.common.ReconfigurableConfManagerTest ‑ testWithoutInitialization
org.apache.uniffle.common.RemoteStorageInfoTest ‑ testEmptyStoragePath{String}[1]
org.apache.uniffle.common.RemoteStorageInfoTest ‑ testEmptyStoragePath{String}[2]
org.apache.uniffle.common.RemoteStorageInfoTest ‑ testEquals
org.apache.uniffle.common.RemoteStorageInfoTest ‑ testHashCode
org.apache.uniffle.common.RemoteStorageInfoTest ‑ testNotEquals{String}[1]
org.apache.uniffle.common.RemoteStorageInfoTest ‑ testNotEquals{String}[2]
org.apache.uniffle.common.RemoteStorageInfoTest ‑ testNotEquals{String}[3]
org.apache.uniffle.common.RemoteStorageInfoTest ‑ testRemoteStorageInfo{String, Map, String}[1]
org.apache.uniffle.common.RemoteStorageInfoTest ‑ testRemoteStorageInfo{String, Map, String}[2]
org.apache.uniffle.common.RemoteStorageInfoTest ‑ testRemoteStorageInfo{String, Map, String}[3]
org.apache.uniffle.common.RemoteStorageInfoTest ‑ testUncommonConfString{String}[1]
org.apache.uniffle.common.RemoteStorageInfoTest ‑ testUncommonConfString{String}[2]
org.apache.uniffle.common.RemoteStorageInfoTest ‑ testUncommonConfString{String}[3]
org.apache.uniffle.common.ServerStatusTest ‑ test
org.apache.uniffle.common.ShuffleBlockInfoTest ‑ testToString
org.apache.uniffle.common.ShuffleDataResultTest ‑ testEmpty
org.apache.uniffle.common.ShuffleIndexResultTest ‑ testEmpty
org.apache.uniffle.common.ShufflePartitionedBlockTest ‑ shufflePartitionedBlockTest
org.apache.uniffle.common.ShufflePartitionedBlockTest ‑ testEquals
org.apache.uniffle.common.ShufflePartitionedBlockTest ‑ testNotEquals{int, long, long, int}[1]
org.apache.uniffle.common.ShufflePartitionedBlockTest ‑ testNotEquals{int, long, long, int}[2]
org.apache.uniffle.common.ShufflePartitionedBlockTest ‑ testNotEquals{int, long, long, int}[3]
org.apache.uniffle.common.ShufflePartitionedBlockTest ‑ testNotEquals{int, long, long, int}[4]
org.apache.uniffle.common.ShufflePartitionedBlockTest ‑ testSize
org.apache.uniffle.common.ShufflePartitionedBlockTest ‑ testToString
org.apache.uniffle.common.ShufflePartitionedDataTest ‑ testToString
org.apache.uniffle.common.ShuffleRegisterInfoTest ‑ testEquals
org.apache.uniffle.common.ShuffleRegisterInfoTest ‑ testToString
org.apache.uniffle.common.ShuffleServerInfoTest ‑ testEquals
org.apache.uniffle.common.ShuffleServerInfoTest ‑ testToString
org.apache.uniffle.common.UnionKeyTest ‑ test
org.apache.uniffle.common.compression.CompressionTest ‑ checkDecompressBufferOffsets
org.apache.uniffle.common.compression.CompressionTest ‑ testCompression{int, Type}[10]
org.apache.uniffle.common.compression.CompressionTest ‑ testCompression{int, Type}[11]
org.apache.uniffle.common.compression.CompressionTest ‑ testCompression{int, Type}[12]
org.apache.uniffle.common.compression.CompressionTest ‑ testCompression{int, Type}[13]
org.apache.uniffle.common.compression.CompressionTest ‑ testCompression{int, Type}[14]
org.apache.uniffle.common.compression.CompressionTest ‑ testCompression{int, Type}[15]
org.apache.uniffle.common.compression.CompressionTest ‑ testCompression{int, Type}[16]
org.apache.uniffle.common.compression.CompressionTest ‑ testCompression{int, Type}[17]
org.apache.uniffle.common.compression.CompressionTest ‑ testCompression{int, Type}[18]
org.apache.uniffle.common.compression.CompressionTest ‑ testCompression{int, Type}[19]
org.apache.uniffle.common.compression.CompressionTest ‑ testCompression{int, Type}[1]
org.apache.uniffle.common.compression.CompressionTest ‑ testCompression{int, Type}[20]
org.apache.uniffle.common.compression.CompressionTest ‑ testCompression{int, Type}[21]
org.apache.uniffle.common.compression.CompressionTest ‑ testCompression{int, Type}[22]
org.apache.uniffle.common.compression.CompressionTest ‑ testCompression{int, Type}[23]
org.apache.uniffle.common.compression.CompressionTest ‑ testCompression{int, Type}[24]
org.apache.uniffle.common.compression.CompressionTest ‑ testCompression{int, Type}[2]
org.apache.uniffle.common.compression.CompressionTest ‑ testCompression{int, Type}[3]
org.apache.uniffle.common.compression.CompressionTest ‑ testCompression{int, Type}[4]
org.apache.uniffle.common.compression.CompressionTest ‑ testCompression{int, Type}[5]
org.apache.uniffle.common.compression.CompressionTest ‑ testCompression{int, Type}[6]
org.apache.uniffle.common.compression.CompressionTest ‑ testCompression{int, Type}[7]
org.apache.uniffle.common.compression.CompressionTest ‑ testCompression{int, Type}[8]
org.apache.uniffle.common.compression.CompressionTest ‑ testCompression{int, Type}[9]
org.apache.uniffle.common.config.ConfigOptionTest ‑ testBasicTypes
org.apache.uniffle.common.config.ConfigOptionTest ‑ testDeprecatedAndFallbackKeys
org.apache.uniffle.common.config.ConfigOptionTest ‑ testDeprecatedKeys
org.apache.uniffle.common.config.ConfigOptionTest ‑ testEnumType
org.apache.uniffle.common.config.ConfigOptionTest ‑ testFallbackKeys
org.apache.uniffle.common.config.ConfigOptionTest ‑ testListTypes
org.apache.uniffle.common.config.ConfigOptionTest ‑ testSetKVWithStringTypeDirectly
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToBoolean{Object, Boolean}[10]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToBoolean{Object, Boolean}[11]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToBoolean{Object, Boolean}[12]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToBoolean{Object, Boolean}[13]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToBoolean{Object, Boolean}[1]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToBoolean{Object, Boolean}[2]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToBoolean{Object, Boolean}[3]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToBoolean{Object, Boolean}[4]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToBoolean{Object, Boolean}[5]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToBoolean{Object, Boolean}[6]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToBoolean{Object, Boolean}[7]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToBoolean{Object, Boolean}[8]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToBoolean{Object, Boolean}[9]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToDouble{Object, Double}[10]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToDouble{Object, Double}[11]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToDouble{Object, Double}[12]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToDouble{Object, Double}[13]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToDouble{Object, Double}[1]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToDouble{Object, Double}[2]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToDouble{Object, Double}[3]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToDouble{Object, Double}[4]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToDouble{Object, Double}[5]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToDouble{Object, Double}[6]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToDouble{Object, Double}[7]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToDouble{Object, Double}[8]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToDouble{Object, Double}[9]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToFloat{Object, Float}[10]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToFloat{Object, Float}[11]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToFloat{Object, Float}[12]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToFloat{Object, Float}[13]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToFloat{Object, Float}[14]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToFloat{Object, Float}[15]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToFloat{Object, Float}[16]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToFloat{Object, Float}[17]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToFloat{Object, Float}[1]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToFloat{Object, Float}[2]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToFloat{Object, Float}[3]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToFloat{Object, Float}[4]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToFloat{Object, Float}[5]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToFloat{Object, Float}[6]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToFloat{Object, Float}[7]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToFloat{Object, Float}[8]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToFloat{Object, Float}[9]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToInt{Object, Integer}[1]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToInt{Object, Integer}[2]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToInt{Object, Integer}[3]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToInt{Object, Integer}[4]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToInt{Object, Integer}[5]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToInt{Object, Integer}[6]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToInt{Object, Integer}[7]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToInt{Object, Integer}[8]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToInt{Object, Integer}[9]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToLong{Object, Long}[10]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToLong{Object, Long}[11]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToLong{Object, Long}[12]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToLong{Object, Long}[13]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToLong{Object, Long}[1]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToLong{Object, Long}[2]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToLong{Object, Long}[3]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToLong{Object, Long}[4]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToLong{Object, Long}[5]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToLong{Object, Long}[6]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToLong{Object, Long}[7]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToLong{Object, Long}[8]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToLong{Object, Long}[9]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToSizeInBytes{Object, long}[10]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToSizeInBytes{Object, long}[11]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToSizeInBytes{Object, long}[12]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToSizeInBytes{Object, long}[13]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToSizeInBytes{Object, long}[14]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToSizeInBytes{Object, long}[1]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToSizeInBytes{Object, long}[2]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToSizeInBytes{Object, long}[3]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToSizeInBytes{Object, long}[4]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToSizeInBytes{Object, long}[5]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToSizeInBytes{Object, long}[6]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToSizeInBytes{Object, long}[7]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToSizeInBytes{Object, long}[8]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToSizeInBytes{Object, long}[9]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToString{Object, String}[1]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToString{Object, String}[2]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertToString{Object, String}[3]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertValueWithUnsupportedType
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertValue{Object, Class}[1]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertValue{Object, Class}[2]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertValue{Object, Class}[3]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertValue{Object, Class}[4]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertValue{Object, Class}[5]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertValue{Object, Class}[6]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testConvertValue{Object, Class}[7]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testGetAllConfigOptions
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testNonNegativeLongValidator{long}[1]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testNonNegativeLongValidator{long}[2]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testNonNegativeLongValidator{long}[3]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testNonNegativeLongValidator{long}[4]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testNonNegativeLongValidator{long}[5]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testNonNegativeLongValidator{long}[6]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testPercentageDoubleValidator{double}[1]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testPercentageDoubleValidator{double}[2]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testPercentageDoubleValidator{double}[3]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testPercentageDoubleValidator{double}[4]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testPercentageDoubleValidator{double}[5]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testPercentageDoubleValidator{double}[6]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testPercentageDoubleValidator{double}[7]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testPercentageDoubleValidator{double}[8]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testPercentageDoubleValidator{double}[9]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testPositiveIntegerValidator2{int}[1]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testPositiveIntegerValidator2{int}[2]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testPositiveIntegerValidator2{int}[3]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testPositiveIntegerValidator2{int}[4]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testPositiveIntegerValidator2{int}[5]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testPositiveIntegerValidator2{int}[6]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testPositiveIntegerValidator{long}[1]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testPositiveIntegerValidator{long}[2]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testPositiveIntegerValidator{long}[3]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testPositiveIntegerValidator{long}[4]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testPositiveIntegerValidator{long}[5]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testPositiveIntegerValidator{long}[6]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testPositiveIntegerValidator{long}[7]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testPositiveIntegerValidator{long}[8]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testPositiveLongValidator{long}[1]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testPositiveLongValidator{long}[2]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testPositiveLongValidator{long}[3]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testPositiveLongValidator{long}[4]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testPositiveLongValidator{long}[5]
org.apache.uniffle.common.config.ConfigUtilsTest ‑ testPositiveLongValidator{long}[6]
org.apache.uniffle.common.config.RssConfTest ‑ testOptionWithDefault
org.apache.uniffle.common.config.RssConfTest ‑ testOptionWithNoDefault
org.apache.uniffle.common.config.RssConfTest ‑ testSetStringAndGetConcreteType
org.apache.uniffle.common.filesystem.HadoopFilesystemProviderTest ‑ testGetSecuredFilesystem
org.apache.uniffle.common.filesystem.HadoopFilesystemProviderTest ‑ testGetSecuredFilesystemButNotInitializeHadoopSecurityContext
org.apache.uniffle.common.filesystem.HadoopFilesystemProviderTest ‑ testWriteAndReadBySecuredFilesystem
org.apache.uniffle.common.future.CompletableFutureExtensionTest ‑ timeoutExceptionTest
org.apache.uniffle.common.merger.MergerTest ‑ testMergeSegmentToFile{String, File}[1]
org.apache.uniffle.common.metrics.MetricReporterFactoryTest ‑ testGetMetricReporter
org.apache.uniffle.common.metrics.MetricsManagerTest ‑ testMetricsManager
org.apache.uniffle.common.metrics.prometheus.PrometheusPushGatewayMetricReporterTest ‑ test
org.apache.uniffle.common.metrics.prometheus.PrometheusPushGatewayMetricReporterTest ‑ testParseGroupingKey
org.apache.uniffle.common.metrics.prometheus.PrometheusPushGatewayMetricReporterTest ‑ testParseIncompleteGroupingKey
org.apache.uniffle.common.netty.EncoderAndDecoderTest ‑ test
org.apache.uniffle.common.netty.TransportFrameDecoderTest ‑ testShouldRpcRequestsToBeReleased
org.apache.uniffle.common.netty.TransportFrameDecoderTest ‑ testShouldRpcResponsesToBeReleased
org.apache.uniffle.common.netty.client.TransportClientFactoryTest ‑ testClientDiffPartition
org.apache.uniffle.common.netty.client.TransportClientFactoryTest ‑ testClientDiffServer
org.apache.uniffle.common.netty.client.TransportClientFactoryTest ‑ testClientReuse
org.apache.uniffle.common.netty.client.TransportClientFactoryTest ‑ testCreateClient
org.apache.uniffle.common.netty.protocol.NettyProtocolTest ‑ testGetLocalShuffleDataRequest
org.apache.uniffle.common.netty.protocol.NettyProtocolTest ‑ testGetLocalShuffleDataResponse
org.apache.uniffle.common.netty.protocol.NettyProtocolTest ‑ testGetLocalShuffleIndexRequest
org.apache.uniffle.common.netty.protocol.NettyProtocolTest ‑ testGetLocalShuffleIndexResponse
org.apache.uniffle.common.netty.protocol.NettyProtocolTest ‑ testGetMemoryShuffleDataRequest
org.apache.uniffle.common.netty.protocol.NettyProtocolTest ‑ testGetMemoryShuffleDataResponse
org.apache.uniffle.common.netty.protocol.NettyProtocolTest ‑ testRpcResponse
org.apache.uniffle.common.netty.protocol.NettyProtocolTest ‑ testSendShuffleDataRequest
org.apache.uniffle.common.records.RecordsReaderWriterTest ‑ testWriteAndReadRecordFile1{String, File}[1]
org.apache.uniffle.common.records.RecordsReaderWriterTest ‑ testWriteAndReadRecordFile1{String, File}[2]
org.apache.uniffle.common.records.RecordsReaderWriterTest ‑ testWriteAndReadRecordFile2{String, File}[1]
org.apache.uniffle.common.records.RecordsReaderWriterTest ‑ testWriteAndReadRecordFile2{String, File}[2]
org.apache.uniffle.common.records.RecordsReaderWriterTest ‑ testWriteAndReadRecordFile3{String, File}[1]
org.apache.uniffle.common.records.RecordsReaderWriterTest ‑ testWriteAndReadRecordFile3{String, File}[2]
org.apache.uniffle.common.records.RecordsReaderWriterTest ‑ testWriteAndReadRecordFile4{String, File}[1]
org.apache.uniffle.common.records.RecordsReaderWriterTest ‑ testWriteAndReadRecordFile4{String, File}[2]
org.apache.uniffle.common.rpc.GrpcServerTest ‑ testGrpcExecutorPool
org.apache.uniffle.common.rpc.GrpcServerTest ‑ testRandomPort
org.apache.uniffle.common.rpc.StatusCodeTest ‑ test
org.apache.uniffle.common.security.HadoopSecurityContextTest ‑ testCreateIllegalContext
org.apache.uniffle.common.security.HadoopSecurityContextTest ‑ testSecuredCallable
org.apache.uniffle.common.security.HadoopSecurityContextTest ‑ testSecuredDisableProxyUser
org.apache.uniffle.common.security.HadoopSecurityContextTest ‑ testWithOutKrb5Conf
org.apache.uniffle.common.security.SecurityContextFactoryTest ‑ testCreateHadoopSecurityContext
org.apache.uniffle.common.security.SecurityContextFactoryTest ‑ testDefaultSecurityContext
org.apache.uniffle.common.segment.FixedSizeSegmentSplitterTest ‑ testAvoidEOFException{int}[1]
org.apache.uniffle.common.segment.FixedSizeSegmentSplitterTest ‑ testAvoidEOFException{int}[2]
org.apache.uniffle.common.segment.FixedSizeSegmentSplitterTest ‑ testAvoidEOFException{int}[3]
org.apache.uniffle.common.segment.FixedSizeSegmentSplitterTest ‑ testSplit
org.apache.uniffle.common.segment.LocalOrderSegmentSplitterTest ‑ testConsistentWithFixSizeSplitterWhenNoSkew{int}[1]
org.apache.uniffle.common.segment.LocalOrderSegmentSplitterTest ‑ testConsistentWithFixSizeSplitterWhenNoSkew{int}[2]
org.apache.uniffle.common.segment.LocalOrderSegmentSplitterTest ‑ testConsistentWithFixSizeSplitterWhenNoSkew{int}[3]
org.apache.uniffle.common.segment.LocalOrderSegmentSplitterTest ‑ testConsistentWithFixSizeSplitterWhenNoSkew{int}[4]
org.apache.uniffle.common.segment.LocalOrderSegmentSplitterTest ‑ testConsistentWithFixSizeSplitterWhenNoSkew{int}[5]
org.apache.uniffle.common.segment.LocalOrderSegmentSplitterTest ‑ testConsistentWithFixSizeSplitterWhenNoSkew{int}[6]
org.apache.uniffle.common.segment.LocalOrderSegmentSplitterTest ‑ testDiscontinuousMapTaskIds
org.apache.uniffle.common.segment.LocalOrderSegmentSplitterTest ‑ testSplit
org.apache.uniffle.common.segment.LocalOrderSegmentSplitterTest ‑ testSplitForMergeContinuousSegments
org.apache.uniffle.common.serializer.PartialInputStreamTest ‑ testReadFileInputStream
org.apache.uniffle.common.serializer.PartialInputStreamTest ‑ testReadMemroyInputStream
org.apache.uniffle.common.serializer.PartialInputStreamTest ‑ testReadNullBytes
org.apache.uniffle.common.serializer.SerializerFactoryTest ‑ testGetSerializer
org.apache.uniffle.common.serializer.WritableSerializerTest ‑ testSerDeKeyValues1{String, File}[1]
org.apache.uniffle.common.serializer.WritableSerializerTest ‑ testSerDeKeyValues1{String, File}[2]
org.apache.uniffle.common.serializer.WritableSerializerTest ‑ testSerDeKeyValues2{String, File}[1]
org.apache.uniffle.common.serializer.WritableSerializerTest ‑ testSerDeKeyValues2{String, File}[2]
org.apache.uniffle.common.serializer.WritableSerializerTest ‑ testSerDeKeyValues3{String, File}[1]
org.apache.uniffle.common.serializer.WritableSerializerTest ‑ testSerDeKeyValues3{String, File}[2]
org.apache.uniffle.common.serializer.WritableSerializerTest ‑ testSerDeKeyValues4{String, File}[1]
org.apache.uniffle.common.serializer.WritableSerializerTest ‑ testSerDeKeyValues4{String, File}[2]
org.apache.uniffle.common.serializer.WritableSerializerTest ‑ testSerDeObject{Class}[1]
org.apache.uniffle.common.serializer.WritableSerializerTest ‑ testSerDeObject{Class}[2]
org.apache.uniffle.common.storage.StorageInfoUtilsTest ‑ testFromProto
org.apache.uniffle.common.storage.StorageInfoUtilsTest ‑ testToProto
org.apache.uniffle.common.util.BlockIdLayoutTest ‑ testEquals
org.apache.uniffle.common.util.BlockIdLayoutTest ‑ testFromLengths
org.apache.uniffle.common.util.BlockIdLayoutTest ‑ testFromLengthsErrors
org.apache.uniffle.common.util.BlockIdLayoutTest ‑ testLayoutGetBlockId{BlockIdLayout}[1]
org.apache.uniffle.common.util.BlockIdLayoutTest ‑ testLayoutGetBlockId{BlockIdLayout}[2]
org.apache.uniffle.common.util.BlockIdLayoutTest ‑ testLayoutGetBlockId{BlockIdLayout}[3]
org.apache.uniffle.common.util.BlockIdLayoutTest ‑ testLayoutGetBlockId{BlockIdLayout}[4]
org.apache.uniffle.common.util.BlockIdLayoutTest ‑ testLayoutGetBlockId{BlockIdLayout}[5]
org.apache.uniffle.common.util.BlockIdLayoutTest ‑ testLayoutGetBlockId{BlockIdLayout}[6]
org.apache.uniffle.common.util.BlockIdTest ‑ testEquals
org.apache.uniffle.common.util.BlockIdTest ‑ testToString
org.apache.uniffle.common.util.ByteBufUtilsTest ‑ test
org.apache.uniffle.common.util.ChecksumUtilsTest ‑ crc32ByteBufferTest
org.apache.uniffle.common.util.ChecksumUtilsTest ‑ crc32TestWithByte
org.apache.uniffle.common.util.ChecksumUtilsTest ‑ crc32TestWithByteBuff
org.apache.uniffle.common.util.ExitUtilsTest ‑ test
org.apache.uniffle.common.util.ExpiringCloseableSupplierTest ‑ stressingTestManySuppliers
org.apache.uniffle.common.util.ExpiringCloseableSupplierTest ‑ testAutoCloseable
org.apache.uniffle.common.util.ExpiringCloseableSupplierTest ‑ testCacheable
org.apache.uniffle.common.util.ExpiringCloseableSupplierTest ‑ testDelegateExtendClose
org.apache.uniffle.common.util.ExpiringCloseableSupplierTest ‑ testMultipleSupplierShouldNotInterfere
org.apache.uniffle.common.util.ExpiringCloseableSupplierTest ‑ testReClose
org.apache.uniffle.common.util.ExpiringCloseableSupplierTest ‑ testRenew
org.apache.uniffle.common.util.ExpiringCloseableSupplierTest ‑ testSerialization
org.apache.uniffle.common.util.JavaUtilsTest ‑ test
org.apache.uniffle.common.util.NettyUtilsTest ‑ test
org.apache.uniffle.common.util.RetryUtilsTest ‑ testRetry
org.apache.uniffle.common.util.RetryUtilsTest ‑ testRetryWithCondition
org.apache.uniffle.common.util.RssUtilsTest ‑ getMetricNameForHostNameTest
org.apache.uniffle.common.util.RssUtilsTest ‑ testCloneBitmap
org.apache.uniffle.common.util.RssUtilsTest ‑ testGenerateServerToPartitions
org.apache.uniffle.common.util.RssUtilsTest ‑ testGetConfiguredLocalDirs
org.apache.uniffle.common.util.RssUtilsTest ‑ testGetHostIp
org.apache.uniffle.common.util.RssUtilsTest ‑ testGetPropertiesFromFile
org.apache.uniffle.common.util.RssUtilsTest ‑ testLoadExtentions
org.apache.uniffle.common.util.RssUtilsTest ‑ testSerializeBitmap
org.apache.uniffle.common.util.RssUtilsTest ‑ testShuffleBitmapToPartitionBitmap{BlockIdLayout}[1]
org.apache.uniffle.common.util.RssUtilsTest ‑ testShuffleBitmapToPartitionBitmap{BlockIdLayout}[2]
org.apache.uniffle.common.util.RssUtilsTest ‑ testStartServiceOnPort
org.apache.uniffle.common.util.ThreadUtilsTest ‑ invokeAllTimeoutThreadPoolTest
org.apache.uniffle.common.util.ThreadUtilsTest ‑ shutdownThreadPoolTest
org.apache.uniffle.common.util.ThreadUtilsTest ‑ testExecuteTasksWithFutureHandler
org.apache.uniffle.common.util.ThreadUtilsTest ‑ testExecuteTasksWithFutureHandlerAndTimeout
org.apache.uniffle.common.util.UnitConverterTest ‑ testByteString{Long, String, ByteUnit}[10]
org.apache.uniffle.common.util.UnitConverterTest ‑ testByteString{Long, String, ByteUnit}[11]
org.apache.uniffle.common.util.UnitConverterTest ‑ testByteString{Long, String, ByteUnit}[12]
org.apache.uniffle.common.util.UnitConverterTest ‑ testByteString{Long, String, ByteUnit}[13]
org.apache.uniffle.common.util.UnitConverterTest ‑ testByteString{Long, String, ByteUnit}[14]
org.apache.uniffle.common.util.UnitConverterTest ‑ testByteString{Long, String, ByteUnit}[15]
org.apache.uniffle.common.util.UnitConverterTest ‑ testByteString{Long, String, ByteUnit}[16]
org.apache.uniffle.common.util.UnitConverterTest ‑ testByteString{Long, String, ByteUnit}[17]
org.apache.uniffle.common.util.UnitConverterTest ‑ testByteString{Long, String, ByteUnit}[18]
org.apache.uniffle.common.util.UnitConverterTest ‑ testByteString{Long, String, ByteUnit}[19]
org.apache.uniffle.common.util.UnitConverterTest ‑ testByteString{Long, String, ByteUnit}[1]
org.apache.uniffle.common.util.UnitConverterTest ‑ testByteString{Long, String, ByteUnit}[20]
org.apache.uniffle.common.util.UnitConverterTest ‑ testByteString{Long, String, ByteUnit}[21]
org.apache.uniffle.common.util.UnitConverterTest ‑ testByteString{Long, String, ByteUnit}[22]
org.apache.uniffle.common.util.UnitConverterTest ‑ testByteString{Long, String, ByteUnit}[23]
org.apache.uniffle.common.util.UnitConverterTest ‑ testByteString{Long, String, ByteUnit}[24]
org.apache.uniffle.common.util.UnitConverterTest ‑ testByteString{Long, String, ByteUnit}[25]
org.apache.uniffle.common.util.UnitConverterTest ‑ testByteString{Long, String, ByteUnit}[26]
org.apache.uniffle.common.util.UnitConverterTest ‑ testByteString{Long, String, ByteUnit}[27]
org.apache.uniffle.common.util.UnitConverterTest ‑ testByteString{Long, String, ByteUnit}[28]
org.apache.uniffle.common.util.UnitConverterTest ‑ testByteString{Long, String, ByteUnit}[2]
org.apache.uniffle.common.util.UnitConverterTest ‑ testByteString{Long, String, ByteUnit}[3]
org.apache.uniffle.common.util.UnitConverterTest ‑ testByteString{Long, String, ByteUnit}[4]
org.apache.uniffle.common.util.UnitConverterTest ‑ testByteString{Long, String, ByteUnit}[5]
org.apache.uniffle.common.util.UnitConverterTest ‑ testByteString{Long, String, ByteUnit}[6]
org.apache.uniffle.common.util.UnitConverterTest ‑ testByteString{Long, String, ByteUnit}[7]
org.apache.uniffle.common.util.UnitConverterTest ‑ testByteString{Long, String, ByteUnit}[8]
org.apache.uniffle.common.util.UnitConverterTest ‑ testByteString{Long, String, ByteUnit}[9]
org.apache.uniffle.common.util.UnitConverterTest ‑ testTimeString{Long, String, TimeUnit}[10]
org.apache.uniffle.common.util.UnitConverterTest ‑ testTimeString{Long, String, TimeUnit}[11]
org.apache.uniffle.common.util.UnitConverterTest ‑ testTimeString{Long, String, TimeUnit}[12]
org.apache.uniffle.common.util.UnitConverterTest ‑ testTimeString{Long, String, TimeUnit}[13]
org.apache.uniffle.common.util.UnitConverterTest ‑ testTimeString{Long, String, TimeUnit}[14]
org.apache.uniffle.common.util.UnitConverterTest ‑ testTimeString{Long, String, TimeUnit}[15]
org.apache.uniffle.common.util.UnitConverterTest ‑ testTimeString{Long, String, TimeUnit}[16]
org.apache.uniffle.common.util.UnitConverterTest ‑ testTimeString{Long, String, TimeUnit}[17]
org.apache.uniffle.common.util.UnitConverterTest ‑ testTimeString{Long, String, TimeUnit}[18]
org.apache.uniffle.common.util.UnitConverterTest ‑ testTimeString{Long, String, TimeUnit}[1]
org.apache.uniffle.common.util.UnitConverterTest ‑ testTimeString{Long, String, TimeUnit}[2]
org.apache.uniffle.common.util.UnitConverterTest ‑ testTimeString{Long, String, TimeUnit}[3]
org.apache.uniffle.common.util.UnitConverterTest ‑ testTimeString{Long, String, TimeUnit}[4]
org.apache.uniffle.common.util.UnitConverterTest ‑ testTimeString{Long, String, TimeUnit}[5]
org.apache.uniffle.common.util.UnitConverterTest ‑ testTimeString{Long, String, TimeUnit}[6]
org.apache.uniffle.common.util.UnitConverterTest ‑ testTimeString{Long, String, TimeUnit}[7]
org.apache.uniffle.common.util.UnitConverterTest ‑ testTimeString{Long, String, TimeUnit}[8]
org.apache.uniffle.common.util.UnitConverterTest ‑ testTimeString{Long, String, TimeUnit}[9]
org.apache.uniffle.common.web.JettyServerTest ‑ jettyServerStartTest
org.apache.uniffle.common.web.JettyServerTest ‑ jettyServerTest
org.apache.uniffle.coordinator.ApplicationManagerTest ‑ clearWithoutRemoteStorageTest
org.apache.uniffle.coordinator.ApplicationManagerTest ‑ refreshTest
org.apache.uniffle.coordinator.CoordinatorConfTest ‑ test
org.apache.uniffle.coordinator.CoordinatorServerTest ‑ test
org.apache.uniffle.coordinator.QuotaManagerTest ‑ testCheckQuota
org.apache.uniffle.coordinator.QuotaManagerTest ‑ testCheckQuotaMetrics
org.apache.uniffle.coordinator.QuotaManagerTest ‑ testCheckQuotaWithDefault
org.apache.uniffle.coordinator.QuotaManagerTest ‑ testDetectUserResource
org.apache.uniffle.coordinator.QuotaManagerTest ‑ testQuotaManagerWithoutAccessQuotaChecker
org.apache.uniffle.coordinator.ServerNodeTest ‑ compareTest
org.apache.uniffle.coordinator.ServerNodeTest ‑ testNettyPort
org.apache.uniffle.coordinator.ServerNodeTest ‑ testStorageInfoOfServerNode
org.apache.uniffle.coordinator.SimpleClusterManagerTest ‑ excludeNodesNoDelayTest
org.apache.uniffle.coordinator.SimpleClusterManagerTest ‑ getLostServerListTest
org.apache.uniffle.coordinator.SimpleClusterManagerTest ‑ getServerListForNettyTest
org.apache.uniffle.coordinator.SimpleClusterManagerTest ‑ getServerListTest
org.apache.uniffle.coordinator.SimpleClusterManagerTest ‑ getUnhealthyServerList
org.apache.uniffle.coordinator.SimpleClusterManagerTest ‑ heartbeatTimeoutTest
org.apache.uniffle.coordinator.SimpleClusterManagerTest ‑ startupSilentPeriodTest
org.apache.uniffle.coordinator.SimpleClusterManagerTest ‑ testGetCorrectServerNodesWhenOneNodeRemoved
org.apache.uniffle.coordinator.SimpleClusterManagerTest ‑ testGetCorrectServerNodesWhenOneNodeRemovedAndUnhealthyNodeFound
org.apache.uniffle.coordinator.SimpleClusterManagerTest ‑ updateExcludeNodesTest
org.apache.uniffle.coordinator.access.AccessManagerTest ‑ test
org.apache.uniffle.coordinator.checker.AccessCandidatesCheckerTest ‑ test{File}
org.apache.uniffle.coordinator.checker.AccessClusterLoadCheckerTest ‑ testAccessInfoRequiredShuffleServers
org.apache.uniffle.coordinator.checker.AccessClusterLoadCheckerTest ‑ testWhenAvailableServerThresholdSpecified
org.apache.uniffle.coordinator.checker.AccessQuotaCheckerTest ‑ testAccessInfoRequiredShuffleServers
org.apache.uniffle.coordinator.conf.DynamicClientConfServiceTest ‑ testByLegacyParser{File}
org.apache.uniffle.coordinator.conf.LegacyClientConfParserTest ‑ testParse
org.apache.uniffle.coordinator.conf.RssClientConfApplyManagerTest ‑ testBypassApply
org.apache.uniffle.coordinator.conf.RssClientConfApplyManagerTest ‑ testCustomizeApplyStrategy
org.apache.uniffle.coordinator.conf.YamlClientConfParserTest ‑ testFromFile
org.apache.uniffle.coordinator.conf.YamlClientConfParserTest ‑ testParse
org.apache.uniffle.coordinator.metric.CoordinatorMetricsTest ‑ testAllMetrics
org.apache.uniffle.coordinator.metric.CoordinatorMetricsTest ‑ testCoordinatorMetrics
org.apache.uniffle.coordinator.metric.CoordinatorMetricsTest ‑ testCoordinatorMetricsWithNames
org.apache.uniffle.coordinator.metric.CoordinatorMetricsTest ‑ testDynamicMetrics
org.apache.uniffle.coordinator.metric.CoordinatorMetricsTest ‑ testGrpcMetrics
org.apache.uniffle.coordinator.metric.CoordinatorMetricsTest ‑ testJvmMetrics
org.apache.uniffle.coordinator.strategy.assignment.BasicAssignmentStrategyTest ‑ testAssign
org.apache.uniffle.coordinator.strategy.assignment.BasicAssignmentStrategyTest ‑ testAssignWithDifferentNodeNum
org.apache.uniffle.coordinator.strategy.assignment.BasicAssignmentStrategyTest ‑ testAssignmentShuffleNodesNum
org.apache.uniffle.coordinator.strategy.assignment.BasicAssignmentStrategyTest ‑ testRandomAssign
org.apache.uniffle.coordinator.strategy.assignment.BasicAssignmentStrategyTest ‑ testWithContinuousSelectPartitionStrategy
org.apache.uniffle.coordinator.strategy.assignment.PartitionBalanceAssignmentStrategyTest ‑ testAssign
org.apache.uniffle.coordinator.strategy.assignment.PartitionBalanceAssignmentStrategyTest ‑ testAssignmentShuffleNodesNum
org.apache.uniffle.coordinator.strategy.assignment.PartitionBalanceAssignmentStrategyTest ‑ testAssignmentWithMustDiff
org.apache.uniffle.coordinator.strategy.assignment.PartitionBalanceAssignmentStrategyTest ‑ testAssignmentWithNone
org.apache.uniffle.coordinator.strategy.assignment.PartitionBalanceAssignmentStrategyTest ‑ testAssignmentWithPreferDiff
org.apache.uniffle.coordinator.strategy.assignment.PartitionBalanceAssignmentStrategyTest ‑ testWithContinuousSelectPartitionStrategy
org.apache.uniffle.coordinator.strategy.assignment.PartitionRangeAssignmentTest ‑ test
org.apache.uniffle.coordinator.strategy.assignment.PartitionRangeTest ‑ test
org.apache.uniffle.coordinator.strategy.partition.ContinuousSelectPartitionStrategyTest ‑ test
org.apache.uniffle.coordinator.strategy.storage.AppBalanceSelectStorageStrategyTest ‑ selectStorageTest
org.apache.uniffle.coordinator.strategy.storage.AppBalanceSelectStorageStrategyTest ‑ storageCounterMulThreadTest
org.apache.uniffle.coordinator.strategy.storage.LowestIOSampleCostSelectStorageStrategyTest ‑ selectStorageMulThreadTest
org.apache.uniffle.coordinator.strategy.storage.LowestIOSampleCostSelectStorageStrategyTest ‑ selectStorageTest
org.apache.uniffle.coordinator.util.CoordinatorUtilsTest ‑ testExtractClusterConf
org.apache.uniffle.coordinator.util.CoordinatorUtilsTest ‑ testGenerateRanges
org.apache.uniffle.coordinator.util.CoordinatorUtilsTest ‑ testGenerateRangesGroup
org.apache.uniffle.coordinator.util.CoordinatorUtilsTest ‑ testNextId
org.apache.uniffle.coordinator.web.UniffleServicesRESTTest ‑ testGetApplications
org.apache.uniffle.coordinator.web.UniffleServicesRESTTest ‑ testGetApplicationsPage
org.apache.uniffle.coordinator.web.UniffleServicesRESTTest ‑ testGetApplicationsWithAppRegex
org.apache.uniffle.coordinator.web.UniffleServicesRESTTest ‑ testGetApplicationsWithNoFilter
org.apache.uniffle.coordinator.web.UniffleServicesRESTTest ‑ testGetApplicationsWithNull
org.apache.uniffle.coordinator.web.UniffleServicesRESTTest ‑ testGetApplicationsWithStartTimeAndEndTime
org.apache.uniffle.dashboard.web.utils.DashboardUtilsTest ‑ testConvertToMap
org.apache.uniffle.server.HealthScriptCheckerTest ‑ checkIsHealthy
org.apache.uniffle.server.KerberizedShuffleTaskManagerTest ‑ removeShuffleDataWithHdfsTest
org.apache.uniffle.server.LocalSingleStorageTypeFromEnvProviderTest ‑ testJsonSourceParse
org.apache.uniffle.server.LocalSingleStorageTypeFromEnvProviderTest ‑ testMultipleMountPoints
org.apache.uniffle.server.LocalStorageCheckerTest ‑ testCheckingStorageHang{File}
org.apache.uniffle.server.LocalStorageCheckerTest ‑ testGetUniffleUsedSpace{File}
org.apache.uniffle.server.ShuffleFlushManagerOnKerberizedHadoopTest ‑ clearTest
org.apache.uniffle.server.ShuffleFlushManagerTest ‑ clearLocalTest{File}
org.apache.uniffle.server.ShuffleFlushManagerTest ‑ clearTest
org.apache.uniffle.server.ShuffleFlushManagerTest ‑ complexWriteTest
org.apache.uniffle.server.ShuffleFlushManagerTest ‑ concurrentWrite2HdfsWriteOfSinglePartition
org.apache.uniffle.server.ShuffleFlushManagerTest ‑ concurrentWrite2HdfsWriteOneByOne
org.apache.uniffle.server.ShuffleFlushManagerTest ‑ defaultFlushEventHandlerTest{File}
org.apache.uniffle.server.ShuffleFlushManagerTest ‑ fallbackWrittenWhenHybridStorageManagerEnableTest{File}
org.apache.uniffle.server.ShuffleFlushManagerTest ‑ hadoopConfTest
org.apache.uniffle.server.ShuffleFlushManagerTest ‑ localMetricsTest{File}
org.apache.uniffle.server.ShuffleFlushManagerTest ‑ testCreateWriteHandlerFailed{File}
org.apache.uniffle.server.ShuffleFlushManagerTest ‑ totalLocalFileWriteDataMetricTest
org.apache.uniffle.server.ShuffleFlushManagerTest ‑ writeTest
org.apache.uniffle.server.ShuffleServerConfTest ‑ confByStringTest
org.apache.uniffle.server.ShuffleServerConfTest ‑ confTest
org.apache.uniffle.server.ShuffleServerConfTest ‑ defaultConfTest
org.apache.uniffle.server.ShuffleServerConfTest ‑ envConfTest
org.apache.uniffle.server.ShuffleServerGrpcMetricsTest ‑ testLatencyMetrics
org.apache.uniffle.server.ShuffleServerMetricsTest ‑ testGrpcMetrics
org.apache.uniffle.server.ShuffleServerMetricsTest ‑ testHadoopStorageWriteDataSize
org.apache.uniffle.server.ShuffleServerMetricsTest ‑ testJvmMetrics
org.apache.uniffle.server.ShuffleServerMetricsTest ‑ testNettyMetrics
org.apache.uniffle.server.ShuffleServerMetricsTest ‑ testServerMetrics
org.apache.uniffle.server.ShuffleServerMetricsTest ‑ testServerMetricsConcurrently
org.apache.uniffle.server.ShuffleServerMetricsTest ‑ testStorageCounter
org.apache.uniffle.server.ShuffleServerTest ‑ decommissionTest{boolean}[1]
org.apache.uniffle.server.ShuffleServerTest ‑ decommissionTest{boolean}[2]
org.apache.uniffle.server.ShuffleServerTest ‑ nettyServerTest
org.apache.uniffle.server.ShuffleServerTest ‑ startTest
org.apache.uniffle.server.ShuffleTaskInfoTest ‑ hugePartitionConcurrentTest
org.apache.uniffle.server.ShuffleTaskInfoTest ‑ hugePartitionTest
org.apache.uniffle.server.ShuffleTaskInfoTest ‑ isHugePartitionTest
org.apache.uniffle.server.ShuffleTaskInfoTest ‑ partitionSizeSummaryTest
org.apache.uniffle.server.ShuffleTaskManagerTest ‑ appPurgeWithLocalfileTest
org.apache.uniffle.server.ShuffleTaskManagerTest ‑ checkAndClearLeakShuffleDataTest{File}
org.apache.uniffle.server.ShuffleTaskManagerTest ‑ clearMultiTimesTest
org.apache.uniffle.server.ShuffleTaskManagerTest ‑ clearTest
org.apache.uniffle.server.ShuffleTaskManagerTest ‑ getBlockIdsByMultiPartitionTest
org.apache.uniffle.server.ShuffleTaskManagerTest ‑ getBlockIdsByPartitionIdTest
org.apache.uniffle.server.ShuffleTaskManagerTest ‑ hugePartitionMemoryUsageLimitTest
org.apache.uniffle.server.ShuffleTaskManagerTest ‑ partitionDataSizeSummaryTest
org.apache.uniffle.server.ShuffleTaskManagerTest ‑ registerShuffleTest
org.apache.uniffle.server.ShuffleTaskManagerTest ‑ removeResourcesByShuffleIdsMultiTimesTest
org.apache.uniffle.server.ShuffleTaskManagerTest ‑ removeShuffleDataWithHdfsTest
org.apache.uniffle.server.ShuffleTaskManagerTest ‑ removeShuffleDataWithLocalfileTest
org.apache.uniffle.server.ShuffleTaskManagerTest ‑ testAddFinishedBlockIdsWithoutRegister
org.apache.uniffle.server.ShuffleTaskManagerTest ‑ testGetFinishedBlockIds
org.apache.uniffle.server.ShuffleTaskManagerTest ‑ testGetMaxConcurrencyWriting
org.apache.uniffle.server.ShuffleTaskManagerTest ‑ testRegisterShuffleAfterAppIsExpired
org.apache.uniffle.server.ShuffleTaskManagerTest ‑ testStorageRemoveResourceHang{File}
org.apache.uniffle.server.ShuffleTaskManagerTest ‑ writeProcessTest
org.apache.uniffle.server.StorageCheckerTest ‑ checkTest{File}
org.apache.uniffle.server.buffer.ShuffleBufferManagerTest ‑ blockSizeMetricsTest
org.apache.uniffle.server.buffer.ShuffleBufferManagerTest ‑ bufferManagerInitTest
org.apache.uniffle.server.buffer.ShuffleBufferManagerTest ‑ bufferSizeTest
org.apache.uniffle.server.buffer.ShuffleBufferManagerTest ‑ cacheShuffleDataTest
org.apache.uniffle.server.buffer.ShuffleBufferManagerTest ‑ cacheShuffleDataWithPreAllocationTest
org.apache.uniffle.server.buffer.ShuffleBufferManagerTest ‑ flushBufferTestWhenNotSelectedStorage{File}
org.apache.uniffle.server.buffer.ShuffleBufferManagerTest ‑ flushSingleBufferForHugePartitionTest{File}
org.apache.uniffle.server.buffer.ShuffleBufferManagerTest ‑ flushSingleBufferTest{File}
org.apache.uniffle.server.buffer.ShuffleBufferManagerTest ‑ getShuffleDataTest
org.apache.uniffle.server.buffer.ShuffleBufferManagerTest ‑ getShuffleDataWithExpectedTaskIdsTest
org.apache.uniffle.server.buffer.ShuffleBufferManagerTest ‑ registerBufferTest
org.apache.uniffle.server.buffer.ShuffleBufferManagerTest ‑ shuffleFlushThreshold
org.apache.uniffle.server.buffer.ShuffleBufferManagerTest ‑ shuffleIdToSizeTest
org.apache.uniffle.server.buffer.ShuffleBufferWithLinkedListTest ‑ appendMultiBlocksTest
org.apache.uniffle.server.buffer.ShuffleBufferWithLinkedListTest ‑ appendRepeatBlockTest
org.apache.uniffle.server.buffer.ShuffleBufferWithLinkedListTest ‑ appendTest
org.apache.uniffle.server.buffer.ShuffleBufferWithLinkedListTest ‑ getShuffleDataTest
org.apache.uniffle.server.buffer.ShuffleBufferWithLinkedListTest ‑ getShuffleDataWithExpectedTaskIdsFilterTest
org.apache.uniffle.server.buffer.ShuffleBufferWithLinkedListTest ‑ getShuffleDataWithLocalOrderTest
org.apache.uniffle.server.buffer.ShuffleBufferWithLinkedListTest ‑ toFlushEventTest
org.apache.uniffle.server.buffer.ShuffleBufferWithSkipListTest ‑ appendMultiBlocksTest

Check notice on line 0 in .github

@github-actions github-actions / Test Results

1027 tests found (test 735 to 1027)

There are 1027 tests; see "Raw output" for the list of tests 735 to 1027.
Raw output
org.apache.uniffle.server.buffer.ShuffleBufferWithSkipListTest ‑ appendRepeatBlockTest
org.apache.uniffle.server.buffer.ShuffleBufferWithSkipListTest ‑ appendTest
org.apache.uniffle.server.buffer.ShuffleBufferWithSkipListTest ‑ getShuffleDataWithExpectedTaskIdsFilterTest
org.apache.uniffle.server.buffer.ShuffleBufferWithSkipListTest ‑ toFlushEventTest
org.apache.uniffle.server.merge.BlockFlushFileReaderTest ‑ writeTestWithMergeWhenInterrupted{String, File}[1]
org.apache.uniffle.server.merge.BlockFlushFileReaderTest ‑ writeTestWithMerge{String, File}[1]
org.apache.uniffle.server.merge.BlockFlushFileReaderTest ‑ writeTestWithMerge{String, File}[2]
org.apache.uniffle.server.merge.BlockFlushFileReaderTest ‑ writeTestWithMerge{String, File}[3]
org.apache.uniffle.server.merge.MergedResultTest ‑ testMergeSegmentToMergeResult{String, File}[1]
org.apache.uniffle.server.merge.MergedResultTest ‑ testMergedResult
org.apache.uniffle.server.merge.ShuffleMergeManagerTest ‑ testMergerManager{String, File}[1]
org.apache.uniffle.server.storage.HadoopStorageManagerTest ‑ testRegisterRemoteStorage
org.apache.uniffle.server.storage.HadoopStorageManagerTest ‑ testRemoveExpiredResourcesWithOneReplica{File}
org.apache.uniffle.server.storage.HadoopStorageManagerTest ‑ testRemoveExpiredResourcesWithTwoReplicas{File}
org.apache.uniffle.server.storage.HadoopStorageManagerTest ‑ testRemoveResources
org.apache.uniffle.server.storage.HybridStorageManagerTest ‑ fallbackTestWhenLocalStorageCorrupted
org.apache.uniffle.server.storage.HybridStorageManagerTest ‑ selectStorageManagerTest
org.apache.uniffle.server.storage.HybridStorageManagerTest ‑ testStorageManagerSelectorOfPreferCold
org.apache.uniffle.server.storage.HybridStorageManagerTest ‑ underStorageManagerSelectionTest
org.apache.uniffle.server.storage.LocalStorageManagerTest ‑ testEnvStorageTypeProvider
org.apache.uniffle.server.storage.LocalStorageManagerTest ‑ testGetLocalStorageInfo
org.apache.uniffle.server.storage.LocalStorageManagerTest ‑ testInitLocalStorageManager
org.apache.uniffle.server.storage.LocalStorageManagerTest ‑ testInitializeLocalStorage
org.apache.uniffle.server.storage.LocalStorageManagerTest ‑ testNewAppWhileCheckLeak{ExtensionContext}
org.apache.uniffle.server.storage.LocalStorageManagerTest ‑ testStorageSelection
org.apache.uniffle.server.storage.LocalStorageManagerTest ‑ testStorageSelectionWhenReachingHighWatermark
org.apache.uniffle.server.storage.StorageManagerFallbackStrategyTest ‑ testDefaultFallbackStrategy
org.apache.uniffle.server.storage.StorageManagerFallbackStrategyTest ‑ testHadoopFallbackStrategy
org.apache.uniffle.server.storage.StorageManagerFallbackStrategyTest ‑ testLocalFallbackStrategy
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayoutInsufficientConfigException{Integer, Integer, Integer, String}[1]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayoutInsufficientConfigException{Integer, Integer, Integer, String}[2]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayoutInsufficientConfigException{Integer, Integer, Integer, String}[3]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayoutInsufficientConfigException{Integer, Integer, Integer, String}[4]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayoutInsufficientConfigException{Integer, Integer, Integer, String}[5]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayoutInsufficientConfigException{Integer, Integer, Integer, String}[6]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayoutMaxPartitionsValueException{String, int, boolean}[1]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayoutMaxPartitionsValueException{String, int, boolean}[2]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayoutMaxPartitionsValueException{String, int, boolean}[3]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayoutMaxPartitionsValueException{String, int, boolean}[4]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayoutMaxPartitionsValueException{String, int, boolean}[5]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayoutOverrides
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayoutUnsupportedMaxPartitions{String, int, boolean, String}[1]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayoutUnsupportedMaxPartitions{String, int, boolean, String}[2]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayoutUnsupportedMaxPartitions{String, int, boolean, String}[3]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayoutUnsupportedMaxPartitions{String, int, boolean, String}[4]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayoutUnsupportedMaxPartitions{String, int, boolean, String}[5]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayoutUnsupportedMaxPartitions{String, int, boolean, String}[6]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayout{String, Integer, Boolean, String, int, int, int}[10]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayout{String, Integer, Boolean, String, int, int, int}[11]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayout{String, Integer, Boolean, String, int, int, int}[12]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayout{String, Integer, Boolean, String, int, int, int}[13]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayout{String, Integer, Boolean, String, int, int, int}[14]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayout{String, Integer, Boolean, String, int, int, int}[15]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayout{String, Integer, Boolean, String, int, int, int}[16]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayout{String, Integer, Boolean, String, int, int, int}[17]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayout{String, Integer, Boolean, String, int, int, int}[18]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayout{String, Integer, Boolean, String, int, int, int}[19]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayout{String, Integer, Boolean, String, int, int, int}[1]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayout{String, Integer, Boolean, String, int, int, int}[20]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayout{String, Integer, Boolean, String, int, int, int}[21]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayout{String, Integer, Boolean, String, int, int, int}[22]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayout{String, Integer, Boolean, String, int, int, int}[23]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayout{String, Integer, Boolean, String, int, int, int}[24]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayout{String, Integer, Boolean, String, int, int, int}[25]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayout{String, Integer, Boolean, String, int, int, int}[26]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayout{String, Integer, Boolean, String, int, int, int}[27]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayout{String, Integer, Boolean, String, int, int, int}[28]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayout{String, Integer, Boolean, String, int, int, int}[29]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayout{String, Integer, Boolean, String, int, int, int}[2]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayout{String, Integer, Boolean, String, int, int, int}[30]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayout{String, Integer, Boolean, String, int, int, int}[31]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayout{String, Integer, Boolean, String, int, int, int}[3]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayout{String, Integer, Boolean, String, int, int, int}[4]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayout{String, Integer, Boolean, String, int, int, int}[5]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayout{String, Integer, Boolean, String, int, int, int}[6]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayout{String, Integer, Boolean, String, int, int, int}[7]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayout{String, Integer, Boolean, String, int, int, int}[8]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testConfigureBlockIdLayout{String, Integer, Boolean, String, int, int, int}[9]
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testFetchAndApplyDynamicConf
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testGetDefaultRemoteStorageInfo
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testGetTaskAttemptIdWithSpeculation
org.apache.uniffle.shuffle.manager.RssShuffleManagerBaseTest ‑ testGetTaskAttemptIdWithoutSpeculation
org.apache.uniffle.shuffle.manager.ShuffleManagerGrpcServiceTest ‑ testShuffleManagerGrpcService
org.apache.uniffle.shuffle.manager.ShuffleManagerServerFactoryTest ‑ testShuffleManagerServerType{ServerType}[1]
org.apache.uniffle.shuffle.manager.ShuffleManagerServerFactoryTest ‑ testShuffleManagerServerType{ServerType}[2]
org.apache.uniffle.storage.common.DefaultStorageMediaProviderTest ‑ getGetDeviceName
org.apache.uniffle.storage.common.DefaultStorageMediaProviderTest ‑ getGetFileStore{File}
org.apache.uniffle.storage.common.DefaultStorageMediaProviderTest ‑ testStorageProvider
org.apache.uniffle.storage.common.LocalStorageTest ‑ baseDirectoryInitTest
org.apache.uniffle.storage.common.LocalStorageTest ‑ canWriteTest
org.apache.uniffle.storage.common.LocalStorageTest ‑ canWriteTestWithDiskCapacityCheck
org.apache.uniffle.storage.common.LocalStorageTest ‑ diskStorageInfoTest
org.apache.uniffle.storage.common.LocalStorageTest ‑ getCapacityInitTest
org.apache.uniffle.storage.common.LocalStorageTest ‑ writeHandlerTest
org.apache.uniffle.storage.common.ShuffleFileInfoTest ‑ test
org.apache.uniffle.storage.handler.impl.HadoopClientReadHandlerTest ‑ test
org.apache.uniffle.storage.handler.impl.HadoopFileReaderTest ‑ createStreamAppendTest
org.apache.uniffle.storage.handler.impl.HadoopFileReaderTest ‑ createStreamTest
org.apache.uniffle.storage.handler.impl.HadoopFileReaderTest ‑ readDataTest
org.apache.uniffle.storage.handler.impl.HadoopFileWriterTest ‑ createStreamAppendTest
org.apache.uniffle.storage.handler.impl.HadoopFileWriterTest ‑ createStreamDirectory
org.apache.uniffle.storage.handler.impl.HadoopFileWriterTest ‑ createStreamFirstTest
org.apache.uniffle.storage.handler.impl.HadoopFileWriterTest ‑ createStreamTest
org.apache.uniffle.storage.handler.impl.HadoopFileWriterTest ‑ writeBufferArrayTest
org.apache.uniffle.storage.handler.impl.HadoopFileWriterTest ‑ writeBufferTest
org.apache.uniffle.storage.handler.impl.HadoopFileWriterTest ‑ writeSegmentTest
org.apache.uniffle.storage.handler.impl.HadoopHandlerTest ‑ initTest
org.apache.uniffle.storage.handler.impl.HadoopHandlerTest ‑ writeTest
org.apache.uniffle.storage.handler.impl.HadoopShuffleReadHandlerTest ‑ test
org.apache.uniffle.storage.handler.impl.HadoopShuffleReadHandlerTest ‑ testDataInconsistent
org.apache.uniffle.storage.handler.impl.KerberizedHadoopClientReadHandlerTest ‑ test
org.apache.uniffle.storage.handler.impl.KerberizedHadoopShuffleReadHandlerTest ‑ test
org.apache.uniffle.storage.handler.impl.LocalFileHandlerTest ‑ testReadIndex
org.apache.uniffle.storage.handler.impl.LocalFileHandlerTest ‑ writeBigDataTest{File}
org.apache.uniffle.storage.handler.impl.LocalFileHandlerTest ‑ writeTest{File}
org.apache.uniffle.storage.handler.impl.LocalFileServerReadHandlerTest ‑ testDataInconsistent
org.apache.uniffle.storage.handler.impl.PooledHadoopShuffleWriteHandlerTest ‑ concurrentWrite
org.apache.uniffle.storage.handler.impl.PooledHadoopShuffleWriteHandlerTest ‑ lazyInitializeWriterHandlerTest
org.apache.uniffle.storage.handler.impl.PooledHadoopShuffleWriteHandlerTest ‑ writeSameFileWhenNoRaceCondition
org.apache.uniffle.storage.util.ShuffleHadoopStorageUtilsTest ‑ testUploadFile{File}
org.apache.uniffle.storage.util.ShuffleKerberizedHadoopStorageUtilsTest ‑ testUploadFile{File}
org.apache.uniffle.storage.util.ShuffleStorageUtilsTest ‑ getPartitionRangeTest
org.apache.uniffle.storage.util.ShuffleStorageUtilsTest ‑ getShuffleDataPathWithRangeTest
org.apache.uniffle.storage.util.ShuffleStorageUtilsTest ‑ getStorageIndexTest
org.apache.uniffle.storage.util.ShuffleStorageUtilsTest ‑ mergeSegmentsTest
org.apache.uniffle.storage.util.StorageTypeTest ‑ commonTest
org.apache.uniffle.test.AQERepartitionTest ‑ resultCompareTest
org.apache.uniffle.test.AQESkewedJoinTest ‑ resultCompareTest
org.apache.uniffle.test.AQESkewedJoinWithLocalOrderTest ‑ resultCompareTest
org.apache.uniffle.test.AccessCandidatesCheckerHadoopTest ‑ test
org.apache.uniffle.test.AccessCandidatesCheckerKerberizedHadoopTest ‑ test
org.apache.uniffle.test.AccessClusterTest ‑ org.apache.uniffle.test.AccessClusterTest
org.apache.uniffle.test.AssignmentWithTagsTest ‑ testTags
org.apache.uniffle.test.AutoAccessTest ‑ test
org.apache.uniffle.test.CombineByKeyTest ‑ combineByKeyTest
org.apache.uniffle.test.ContinuousSelectPartitionStrategyTest ‑ resultCompareTest
org.apache.uniffle.test.CoordinatorAdminServiceTest ‑ test
org.apache.uniffle.test.CoordinatorAssignmentTest ‑ testAssignmentServerNodesNumber
org.apache.uniffle.test.CoordinatorAssignmentTest ‑ testGetReShuffleAssignments
org.apache.uniffle.test.CoordinatorAssignmentTest ‑ testSilentPeriod
org.apache.uniffle.test.CoordinatorGrpcServerTest ‑ testGrpcConnectionSize
org.apache.uniffle.test.CoordinatorGrpcTest ‑ appHeartbeatTest
org.apache.uniffle.test.CoordinatorGrpcTest ‑ getShuffleAssignmentsTest
org.apache.uniffle.test.CoordinatorGrpcTest ‑ getShuffleRegisterInfoTest
org.apache.uniffle.test.CoordinatorGrpcTest ‑ rpcMetricsTest
org.apache.uniffle.test.CoordinatorGrpcTest ‑ shuffleServerHeartbeatTest
org.apache.uniffle.test.CoordinatorGrpcTest ‑ testGetPartitionToServers
org.apache.uniffle.test.CoordinatorReconfigureNodeMaxTest ‑ testReconfigureNodeMax
org.apache.uniffle.test.DynamicClientConfServiceHadoopTest ‑ test
org.apache.uniffle.test.DynamicClientConfServiceKerberlizedHadoopTest ‑ testConfInHadoop
org.apache.uniffle.test.DynamicConfTest ‑ dynamicConfTest
org.apache.uniffle.test.DynamicFetchClientConfTest ‑ test
org.apache.uniffle.test.FailingTasksTest ‑ testFailedTasks
org.apache.uniffle.test.FetchClientConfTest ‑ testFetchRemoteStorageByApp{File}
org.apache.uniffle.test.FetchClientConfTest ‑ testFetchRemoteStorageByIO{File}
org.apache.uniffle.test.FetchClientConfTest ‑ test{File}
org.apache.uniffle.test.GetReaderTest ‑ test
org.apache.uniffle.test.GetShuffleReportForMultiPartTest ‑ resultCompareTest
org.apache.uniffle.test.GroupByKeyTest ‑ groupByTest
org.apache.uniffle.test.HadoopConfTest ‑ hadoopConfTest
org.apache.uniffle.test.HealthCheckCoordinatorGrpcTest ‑ healthCheckTest
org.apache.uniffle.test.HealthCheckTest ‑ buildInCheckerTest
org.apache.uniffle.test.HealthCheckTest ‑ checkTest
org.apache.uniffle.test.LargeSorterTest ‑ largeSorterTest
org.apache.uniffle.test.MapSideCombineTest ‑ resultCompareTest
org.apache.uniffle.test.NullOfKeyOrValueTest ‑ nullOfKeyOrValueTest
org.apache.uniffle.test.PartitionBalanceCoordinatorGrpcTest ‑ getShuffleAssignmentsTest
org.apache.uniffle.test.PartitionBlockDataReassignBasicTest ‑ resultCompareTest
org.apache.uniffle.test.PartitionBlockDataReassignMultiTimesTest ‑ resultCompareTest
org.apache.uniffle.test.QuorumTest ‑ case1
org.apache.uniffle.test.QuorumTest ‑ case10
org.apache.uniffle.test.QuorumTest ‑ case11
org.apache.uniffle.test.QuorumTest ‑ case12
org.apache.uniffle.test.QuorumTest ‑ case2
org.apache.uniffle.test.QuorumTest ‑ case3
org.apache.uniffle.test.QuorumTest ‑ case4
org.apache.uniffle.test.QuorumTest ‑ case5{File}
org.apache.uniffle.test.QuorumTest ‑ case6
org.apache.uniffle.test.QuorumTest ‑ case7
org.apache.uniffle.test.QuorumTest ‑ case8
org.apache.uniffle.test.QuorumTest ‑ case9
org.apache.uniffle.test.QuorumTest ‑ quorumConfigTest
org.apache.uniffle.test.QuorumTest ‑ rpcFailedTest
org.apache.uniffle.test.RMWordCountTest ‑ wordCountTest
org.apache.uniffle.test.RSSStageDynamicServerReWriteTest ‑ testRSSStageResubmit
org.apache.uniffle.test.RSSStageResubmitTest ‑ testRSSStageResubmit
org.apache.uniffle.test.ReassignAndStageRetryTest ‑ resultCompareTest
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTest ‑ remoteMergeWriteReadTestMultiPartitionWithCombine{String}[1]
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTest ‑ remoteMergeWriteReadTestMultiPartitionWithCombine{String}[2]
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTest ‑ remoteMergeWriteReadTestMultiPartition{String}[1]
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTest ‑ remoteMergeWriteReadTestMultiPartition{String}[2]
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTest ‑ remoteMergeWriteReadTestWithCombine{String}[1]
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTest ‑ remoteMergeWriteReadTestWithCombine{String}[2]
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTest ‑ remoteMergeWriteReadTest{String}[1]
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTest ‑ remoteMergeWriteReadTest{String}[2]
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTestWhenShuffleFlushed ‑ remoteMergeWriteReadTestMultiPartitionWithCombine{String}[1]
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTestWhenShuffleFlushed ‑ remoteMergeWriteReadTestMultiPartitionWithCombine{String}[2]
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTestWhenShuffleFlushed ‑ remoteMergeWriteReadTestMultiPartition{String}[1]
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTestWhenShuffleFlushed ‑ remoteMergeWriteReadTestMultiPartition{String}[2]
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTestWhenShuffleFlushed ‑ remoteMergeWriteReadTestWithCombine{String}[1]
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTestWhenShuffleFlushed ‑ remoteMergeWriteReadTestWithCombine{String}[2]
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTestWhenShuffleFlushed ‑ remoteMergeWriteReadTest{String}[1]
org.apache.uniffle.test.RemoteMergeShuffleWithRssClientTestWhenShuffleFlushed ‑ remoteMergeWriteReadTest{String}[2]
org.apache.uniffle.test.RepartitionWithHadoopHybridStorageRssTest ‑ resultCompareTest
org.apache.uniffle.test.RepartitionWithLocalFileRssTest ‑ resultCompareTest
org.apache.uniffle.test.RepartitionWithMemoryHybridStorageRssTest ‑ resultCompareTest
org.apache.uniffle.test.RepartitionWithMemoryRssTest ‑ resultCompareTest
org.apache.uniffle.test.RepartitionWithMemoryRssTest ‑ testMemoryRelease
org.apache.uniffle.test.RpcClientRetryTest ‑ testRpcRetryLogic{StorageType}[1]
org.apache.uniffle.test.RpcClientRetryTest ‑ testRpcRetryLogic{StorageType}[2]
org.apache.uniffle.test.RssShuffleManagerTest ‑ testRssShuffleManagerClientConfOverride{boolean}[1]
org.apache.uniffle.test.RssShuffleManagerTest ‑ testRssShuffleManagerClientConfOverride{boolean}[2]
org.apache.uniffle.test.RssShuffleManagerTest ‑ testRssShuffleManagerClientConf{BlockIdLayout}[1]
org.apache.uniffle.test.RssShuffleManagerTest ‑ testRssShuffleManagerClientConf{BlockIdLayout}[2]
org.apache.uniffle.test.RssShuffleManagerTest ‑ testRssShuffleManagerClientConf{BlockIdLayout}[3]
org.apache.uniffle.test.RssShuffleManagerTest ‑ testRssShuffleManagerDynamicClientConf{BlockIdLayout}[1]
org.apache.uniffle.test.RssShuffleManagerTest ‑ testRssShuffleManagerDynamicClientConf{BlockIdLayout}[2]
org.apache.uniffle.test.RssShuffleManagerTest ‑ testRssShuffleManagerDynamicClientConf{BlockIdLayout}[3]
org.apache.uniffle.test.RssShuffleManagerTest ‑ testRssShuffleManager{boolean}[1]
org.apache.uniffle.test.RssShuffleManagerTest ‑ testRssShuffleManager{boolean}[2]
org.apache.uniffle.test.SecondarySortTest ‑ secondarySortTest
org.apache.uniffle.test.ServletTest ‑ testDecommissionServlet
org.apache.uniffle.test.ServletTest ‑ testDecommissionSingleNode
org.apache.uniffle.test.ServletTest ‑ testDecommissionedNodeServlet
org.apache.uniffle.test.ServletTest ‑ testGetSingleNode
org.apache.uniffle.test.ServletTest ‑ testLostNodesServlet
org.apache.uniffle.test.ServletTest ‑ testNodesServlet
org.apache.uniffle.test.ServletTest ‑ testRequestWithWrongCredentials
org.apache.uniffle.test.ServletTest ‑ testUnhealthyNodesServlet
org.apache.uniffle.test.ShuffleServerConcurrentWriteOfHadoopTest ‑ testConcurrentWrite2Hadoop{int, int, boolean}[1]
org.apache.uniffle.test.ShuffleServerConcurrentWriteOfHadoopTest ‑ testConcurrentWrite2Hadoop{int, int, boolean}[2]
org.apache.uniffle.test.ShuffleServerConcurrentWriteOfHadoopTest ‑ testConcurrentWrite2Hadoop{int, int, boolean}[3]
org.apache.uniffle.test.ShuffleServerConcurrentWriteOfHadoopTest ‑ testConcurrentWrite2Hadoop{int, int, boolean}[4]
org.apache.uniffle.test.ShuffleServerGrpcTest ‑ clearResourceTest
org.apache.uniffle.test.ShuffleServerGrpcTest ‑ multipleShuffleResultTest{BlockIdLayout}[1]
org.apache.uniffle.test.ShuffleServerGrpcTest ‑ multipleShuffleResultTest{BlockIdLayout}[2]
org.apache.uniffle.test.ShuffleServerGrpcTest ‑ registerTest
org.apache.uniffle.test.ShuffleServerGrpcTest ‑ rpcMetricsTest
org.apache.uniffle.test.ShuffleServerGrpcTest ‑ sendDataWithoutRequirePreAllocation
org.apache.uniffle.test.ShuffleServerGrpcTest ‑ shuffleResultTest
org.apache.uniffle.test.ShuffleServerInternalGrpcTest ‑ decommissionTest
org.apache.uniffle.test.ShuffleServerOnRandomPortTest ‑ startGrpcServerOnRandomPort
org.apache.uniffle.test.ShuffleServerOnRandomPortTest ‑ startStreamServerOnRandomPort
org.apache.uniffle.test.ShuffleServerWithLocalOfExceptionTest ‑ testReadWhenConnectionFailedShouldThrowException
org.apache.uniffle.test.ShuffleServerWithMemLocalHadoopTest ‑ memoryLocalFileHadoopReadWithFilterTest{boolean, boolean}[1]
org.apache.uniffle.test.ShuffleServerWithMemLocalHadoopTest ‑ memoryLocalFileHadoopReadWithFilterTest{boolean, boolean}[2]
org.apache.uniffle.test.ShuffleServerWithMemLocalHadoopTest ‑ memoryLocalFileHadoopReadWithFilterTest{boolean, boolean}[3]
org.apache.uniffle.test.ShuffleServerWithMemLocalHadoopTest ‑ memoryLocalFileHadoopReadWithFilterTest{boolean, boolean}[4]
org.apache.uniffle.test.ShuffleUnregisterWithHadoopTest ‑ unregisterShuffleTest
org.apache.uniffle.test.ShuffleUnregisterWithLocalfileTest ‑ unregisterShuffleTest
org.apache.uniffle.test.ShuffleWithRssClientTest ‑ emptyTaskTest
org.apache.uniffle.test.ShuffleWithRssClientTest ‑ reportBlocksToShuffleServerIfNecessary
org.apache.uniffle.test.ShuffleWithRssClientTest ‑ reportMultipleServerTest
org.apache.uniffle.test.ShuffleWithRssClientTest ‑ rpcFailTest
org.apache.uniffle.test.ShuffleWithRssClientTest ‑ testRetryAssgin
org.apache.uniffle.test.ShuffleWithRssClientTest ‑ writeReadTest
org.apache.uniffle.test.SimpleShuffleServerManagerTest ‑ testClientAndServerConnections
org.apache.uniffle.test.SparkClientWithLocalTest ‑ readTest10{boolean}[1]
org.apache.uniffle.test.SparkClientWithLocalTest ‑ readTest10{boolean}[2]
org.apache.uniffle.test.SparkClientWithLocalTest ‑ readTest1{boolean}[1]
org.apache.uniffle.test.SparkClientWithLocalTest ‑ readTest1{boolean}[2]
org.apache.uniffle.test.SparkClientWithLocalTest ‑ readTest2{boolean}[1]
org.apache.uniffle.test.SparkClientWithLocalTest ‑ readTest2{boolean}[2]
org.apache.uniffle.test.SparkClientWithLocalTest ‑ readTest3{boolean}[1]
org.apache.uniffle.test.SparkClientWithLocalTest ‑ readTest3{boolean}[2]
org.apache.uniffle.test.SparkClientWithLocalTest ‑ readTest4{boolean}[1]
org.apache.uniffle.test.SparkClientWithLocalTest ‑ readTest4{boolean}[2]
org.apache.uniffle.test.SparkClientWithLocalTest ‑ readTest5{boolean}[1]
org.apache.uniffle.test.SparkClientWithLocalTest ‑ readTest5{boolean}[2]
org.apache.uniffle.test.SparkClientWithLocalTest ‑ readTest6{boolean}[1]
org.apache.uniffle.test.SparkClientWithLocalTest ‑ readTest6{boolean}[2]
org.apache.uniffle.test.SparkClientWithLocalTest ‑ readTest7{boolean}[1]
org.apache.uniffle.test.SparkClientWithLocalTest ‑ readTest7{boolean}[2]
org.apache.uniffle.test.SparkClientWithLocalTest ‑ readTest8{boolean}[1]
org.apache.uniffle.test.SparkClientWithLocalTest ‑ readTest8{boolean}[2]
org.apache.uniffle.test.SparkClientWithLocalTest ‑ readTest9{boolean}[1]
org.apache.uniffle.test.SparkClientWithLocalTest ‑ readTest9{boolean}[2]
org.apache.uniffle.test.SparkSQLWithDelegationShuffleManagerFallbackTest ‑ resultCompareTest
org.apache.uniffle.test.SparkSQLWithDelegationShuffleManagerTest ‑ resultCompareTest
org.apache.uniffle.test.SparkSQLWithMemoryLocalTest ‑ resultCompareTest
org.apache.uniffle.test.TezCartesianProductTest ‑ cartesianProductTest
org.apache.uniffle.test.TezHashJoinTest ‑ hashJoinDoBroadcastTest
org.apache.uniffle.test.TezHashJoinTest ‑ hashJoinTest
org.apache.uniffle.test.TezOrderedWordCountTest ‑ orderedWordCountTest
org.apache.uniffle.test.TezSimpleSessionExampleTest ‑ simpleSessionExampleTest
org.apache.uniffle.test.TezSortMergeJoinTest ‑ sortMergeJoinTest
org.apache.uniffle.test.TezWordCountTest ‑ wordCountTest
org.apache.uniffle.test.TezWordCountWithFailuresTest ‑ wordCountTestWithNodeUnhealthyWhenAvoidRecomputeDisable
org.apache.uniffle.test.TezWordCountWithFailuresTest ‑ wordCountTestWithNodeUnhealthyWhenAvoidRecomputeEnable
org.apache.uniffle.test.TezWordCountWithFailuresTest ‑ wordCountTestWithTaskFailureWhenAvoidRecomputeDisable
org.apache.uniffle.test.TezWordCountWithFailuresTest ‑ wordCountTestWithTaskFailureWhenAvoidRecomputeEnable
org.apache.uniffle.test.WordCountTest ‑ wordCountTest
org.apache.uniffle.test.WriteAndReadMetricsTest ‑ test