FP64 GEMM is already enabled on GPU: pytorch/pytorch#140677
The corresponding line in `apply_torch_pr.py` needs to be removed: https://github.com/intel/torch-xpu-ops/blob/5b5c3fce76a7d0946edfb858fcf05697e2c172b4/.github/scripts/apply_torch_pr.py#L13C10-L13C56
Using this patch makes it harder to find errors in the accuracy calculation, and it does not work for detectron2 models. This shows up as the warning `WARNING:common:current_device=xpu; error:dets should have the same type as scores`, which hides the `RuntimeError: dets should have the same type as scores` raised from detectron2. I have a fix for it in facebookresearch/detectron2#5479, but I doubt it will be merged, since the repository is not very active.
cc @chuanqi129 @mengfei25 @etaf