From 47a042f64c1b8c23bd230607fc309f234a608649 Mon Sep 17 00:00:00 2001
From: TR-3B <144127816+MagellaX@users.noreply.github.com>
Date: Mon, 25 Aug 2025 01:57:08 +0530
Subject: [PATCH] Normalize markdown spacing

---
 .editorconfig | 4 ++++
 docs/Index.md | 2 +-
 2 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/.editorconfig b/.editorconfig
index 8b5fc1e..062e52f 100644
--- a/.editorconfig
+++ b/.editorconfig
@@ -7,3 +7,7 @@ charset = utf-8
 end_of_line = lf
 insert_final_newline = true
 trim_trailing_whitespace = true
+
+[*.md]
+indent_style = space
+indent_size = 4
diff --git a/docs/Index.md b/docs/Index.md
index 00859ec..e35ce40 100644
--- a/docs/Index.md
+++ b/docs/Index.md
@@ -116,4 +116,4 @@ The Triton file implements a similar fused attention kernel entirely in Python u
 
 By using PyTorch’s distributed package in the same script, the Triton implementation shows how to scale the fused attention kernel across multiple GPUs with minimal boilerplate code(which is pretty good). This provides a very accessible route for researchers to experiment with and iterate on advanced GPU kernels without delving deeply into low-level CUDA programming.
 
-In short way, what i am actually doing is re-implementing a sophisticated, high-performance attention mechanism in a more maintainable and experiment-friendly environment and providing a research prototype that can serve as the basis for future production-grade attention mechanisms....
\ No newline at end of file
+In short way, what i am actually doing is re-implementing a sophisticated, high-performance attention mechanism in a more maintainable and experiment-friendly environment and providing a research prototype that can serve as the basis for future production-grade attention mechanisms....