Changes from 5 commits
6 changes: 5 additions & 1 deletion .github/workflows/main.yml
@@ -33,7 +33,11 @@ jobs:
run: |
sudo apt-get install ccache
sudo apt-get install lld


# install yaml library
- name: install yaml library
run: sudo apt-get install libyaml-dev

# setup LLVM
- name: install a specific version of LLVM
working-directory: ${{github.workspace}}
12 changes: 12 additions & 0 deletions .gitignore
Contributor:

Don't check in this file.

@@ -0,0 +1,12 @@
build/

.vscode/
*.swp
*.swo

generated-instructions.json

.DS_Store
Thumbs.db

/test/lit.cfg
4 changes: 3 additions & 1 deletion CMakeLists.txt
@@ -8,7 +8,7 @@ set(CMAKE_EXPORT_COMPILE_COMMANDS ON)
set(CMAKE_CXX_STANDARD 17)
set(CMAKE_CXX_STANDARD_REQUIRED YES)

add_compile_options(-g)
add_compile_options(-g -fexceptions)

# set(MLIR_DIR /home/lucas/llvm-project/build/lib/cmake/mlir)
# set(LLVM_DIR /home/lucas/llvm-project/build/lib/cmake/llvm)
@@ -20,6 +20,8 @@ message(STATUS "Using LLVMConfig.cmake in: ${LLVM_DIR}")
find_package(MLIR REQUIRED CONFIG)
find_package(LLVM REQUIRED CONFIG)

find_package(yaml-cpp REQUIRED)

list(APPEND CMAKE_MODULE_PATH "${MLIR_CMAKE_DIR}")

include_directories(${LLVM_INCLUDE_DIRS} ${MLIR_INCLUDE_DIRS})
34 changes: 33 additions & 1 deletion include/NeuraDialect/Architecture/Architecture.h
@@ -9,6 +9,7 @@
#include <set>
#include <unordered_map>
#include <vector>
#include <yaml-cpp/yaml.h>

namespace mlir {
namespace neura {
@@ -35,7 +36,37 @@ enum OperationKind {
IAdd = 0,
IMul = 1,
FAdd = 2,
FMul = 3
FMul = 3,
ISub = 4,
FSub = 5,
IDiv = 6,
FDiv = 7,
FAddFAdd = 8,
FMulFAdd = 9,
VFMul = 10,
ICmp = 11,
FCmp = 12,
Copilot AI (Jul 31, 2025):

The FCmp operation is defined in the enum but not handled in the getOperationKindFromMlirOp function in mapping_util.cpp, which could lead to incorrect operation mapping.
Not = 13,
Or = 14,
Sel = 15,
Cast = 16,
Phi = 17,
Load = 18,
LoadIndexed = 19,
Store = 20,
StoreIndexed = 21,
Br_ = 22,
CondBr_ = 23,
Contributor:

Why do we need the trailing `_`? Are Br and CondBr reserved in C++?

Collaborator (Author):

It seems to conflict with MLIR library built-ins if we do not use `_`. The compiler feedback is not very informative.

Contributor:

What about OpBr and OpCondBr? i.e., an Op prefix for all.

Collaborator (Author):

Sure~

Return = 24,
LoopController = 25,
GrantAlways = 26,
GrantOnce = 27,
GrantPredicate = 28,
GEP_ = 29,
Constant = 30,
DataMov = 31,
CtrlMov = 32,
Reserve = 33
Contributor:

We don't actually map Reserve; in other words, Reserve is just a placeholder to facilitate mapping, so no hardware is needed to support it. Similarly, DataMov and CtrlMov are data movements, which would be data paths traversing links/channels; I'm not sure whether we need them or not.

Collaborator (Author):

I'll keep DataMov/CtrlMov temporarily, then.
};

//===----------------------------------------------------------------------===//
@@ -324,6 +355,7 @@ struct PairHash {
class Architecture {
public:
Architecture(int width, int height);
Architecture(const YAML::Node& config);

Tile* getTile(int id);
Tile* getTile(int x, int y);
40 changes: 39 additions & 1 deletion lib/NeuraDialect/Architecture/Architecture.cpp
@@ -17,7 +17,7 @@ Tile::Tile(int id, int x, int y) {

// TODO: Add function units based on architecture specs.
// @Jackcuii, https://github.com/coredac/dataflow/issues/82.
addFunctionUnit(std::make_unique<FixedPointAdder>(0));
// addFunctionUnit(std::make_unique<FixedPointAdder>(0));
}

int Tile::getId() const { return id; }
@@ -275,6 +275,44 @@ Architecture::Architecture(int width, int height) {
}
}

Architecture::Architecture(const YAML::Node& config) {
// Extract width and height from config
int width = 4; // default
int height = 4; // default

if (config["architecture"] && config["architecture"]["width"] && config["architecture"]["height"]) {
width = config["architecture"]["width"].as<int>();
height = config["architecture"]["height"].as<int>();
}

// Call the constructor with width and height.
*this = Architecture(width, height);

// Add function units based on the architecture specs.
int num_tiles = width * height;
Comment on lines +278 to +292

Copilot AI (Jul 31, 2025):

Using the assignment operator on *this in a constructor is problematic and can lead to undefined behavior. Consider using constructor delegation or member initialization instead.

Suggested change:

    Architecture::Architecture(const YAML::Node& config)
        : Architecture(
              config["architecture"] && config["architecture"]["width"] && config["architecture"]["height"]
                  ? config["architecture"]["width"].as<int>()
                  : 4,
              config["architecture"] && config["architecture"]["height"]
                  ? config["architecture"]["height"].as<int>()
                  : 4) {
      // Add function units based on the architecture specs.
      int num_tiles = getNumTiles();
for (int i = 0; i < num_tiles; ++i) {
Tile *tile = getTile(i);
int fu_id = 0;
if (config["tile_overrides"][i]) {
// Override the default function units.
for (const auto& operation : config["tile_overrides"][i]["operations"]) {
if (operation.as<std::string>() == "add") {
tile->addFunctionUnit(std::make_unique<FixedPointAdder>(++fu_id));
Copilot AI (Jul 31, 2025):

The variable fu_id is pre-incremented before use, which means the first function unit will have ID 1 instead of 0. This inconsistency could cause issues if function unit IDs are expected to start from 0.

Suggested change:

    tile->addFunctionUnit(std::make_unique<FixedPointAdder>(fu_id++));
Contributor:

I think Copilot's suggestion makes sense?

// Add more function units here if more operations are supported.
}
}
} else if (config["tile_defaults"]) {
// Add default function units.
for (const auto& operation : config["tile_defaults"]["operations"]) {
if (operation.as<std::string>() == "add") {
tile->addFunctionUnit(std::make_unique<FixedPointAdder>(++fu_id));
Copilot AI (Jul 31, 2025):

Same issue as above: fu_id should be post-incremented, or the logic should be adjusted to start from 0.

Suggested change:

    tile->addFunctionUnit(std::make_unique<FixedPointAdder>(fu_id++));
Contributor:

Ditto.

}
}
}
}
}


Tile *Architecture::getTile(int id) {
auto it = id_to_tile.find(id);
assert(it != id_to_tile.end() && "Tile with given ID not found");
4 changes: 4 additions & 0 deletions lib/NeuraDialect/CMakeLists.txt
@@ -20,6 +20,10 @@ add_mlir_dialect_library(MLIRNeura
MLIRIR
MLIRSupport
MLIRInferTypeOpInterface
yaml-cpp
)

# Enable exception handling for yaml-cpp
target_compile_options(MLIRNeura PRIVATE -fexceptions)

add_subdirectory(Transforms)
44 changes: 34 additions & 10 deletions lib/NeuraDialect/Mapping/mapping_util.cpp
@@ -16,16 +16,40 @@ using namespace mlir::neura;
namespace {

inline OperationKind getOperationKindFromMlirOp(Operation *op) {
if (isa<neura::AddOp>(op))
return IAdd;
if (isa<neura::MulOp>(op))
return IMul;
if (isa<neura::FAddOp>(op))
return FAdd;
if (isa<neura::FMulOp>(op))
return FMul;
// TODO: Complete the list here.
// @Jackcuii, https://github.com/coredac/dataflow/issues/82.
if (isa<neura::AddOp>(op)) return IAdd;
if (isa<neura::MulOp>(op)) return IMul;
if (isa<neura::FAddOp>(op)) return FAdd;
if (isa<neura::FMulOp>(op)) return FMul;
if (isa<neura::SubOp>(op)) return ISub;
if (isa<neura::FSubOp>(op)) return FSub;
if (isa<neura::DivOp>(op)) return IDiv;
if (isa<neura::FDivOp>(op)) return FDiv;
if (isa<neura::FAddFAddOp>(op)) return FAddFAdd;
if (isa<neura::FMulFAddOp>(op)) return FMulFAdd;
if (isa<neura::VFMulOp>(op)) return VFMul;
if (isa<neura::ICmpOp>(op)) return ICmp;
if (isa<neura::NotOp>(op)) return Not;
if (isa<neura::OrOp>(op)) return Or;
if (isa<neura::SelOp>(op)) return Sel;
if (isa<neura::CastOp>(op)) return Cast;
if (isa<neura::LoadOp>(op)) return Load;
if (isa<neura::LoadIndexedOp>(op)) return LoadIndexed;
if (isa<neura::StoreOp>(op)) return Store;
if (isa<neura::StoreIndexedOp>(op)) return StoreIndexed;
if (isa<neura::Br>(op)) return Br_;
if (isa<neura::CondBr>(op)) return CondBr_;
if (isa<neura::ReturnOp>(op)) return Return;
if (isa<neura::LoopControllerOp>(op)) return LoopController;
if (isa<neura::GrantAlwaysOp>(op)) return GrantAlways;
if (isa<neura::GrantOnceOp>(op)) return GrantOnce;
if (isa<neura::GrantPredicateOp>(op)) return GrantPredicate;
if (isa<neura::GEP>(op)) return GEP_;
if (isa<neura::ConstantOp>(op)) return Constant;
if (isa<neura::PhiOp>(op)) return Phi;
if (isa<neura::DataMovOp>(op)) return DataMov;
if (isa<neura::CtrlMovOp>(op)) return CtrlMov;
if (isa<neura::ReserveOp>(op)) return Reserve;
// Default fallback
return IAdd;
}

44 changes: 43 additions & 1 deletion lib/NeuraDialect/Transforms/MapToAcceleratorPass.cpp
@@ -13,6 +13,9 @@
#include "mlir/Pass/Pass.h"
#include "mlir/Transforms/GreedyPatternRewriteDriver.h"
#include "llvm/Support/raw_ostream.h"
#include <cstdlib>
#include <fstream>
#include <yaml-cpp/yaml.h>

using namespace mlir;
using namespace mlir::neura;
@@ -46,6 +49,12 @@ struct MapToAcceleratorPass
"max_loc=5, max_depth=3)"),
llvm::cl::init("heuristic")};

Option<std::string> archSpecPath{
*this, "arch-spec",
llvm::cl::desc("Path to the architecture specification YAML file. "
"If not specified, will use default 4x4 architecture."),
llvm::cl::init("")};

void runOnOperation() override {
ModuleOp module = getOperation();

@@ -140,7 +149,40 @@
func->setAttr("RecMII", rec_mii_attr);

// AcceleratorConfig config{/*numTiles=*/8}; // Example
Architecture architecture(4, 4);
// Read architecture specification from command line option
YAML::Node config;
bool use_default_arch = false;

if (!archSpecPath.getValue().empty()) {
try {
std::ifstream file(archSpecPath.getValue());
if (file.is_open()) {
config = YAML::Load(file);
if (config["architecture"]) {
llvm::outs() << "\033[31m[MapToAcceleratorPass] Loaded architecture from "
<< archSpecPath.getValue() << "\033[0m\n";
} else {
llvm::errs() << "[MapToAcceleratorPass] Invalid YAML format in "
<< archSpecPath.getValue() << ", using default 4x4\n";
use_default_arch = true;
}
} else {
llvm::errs() << "[MapToAcceleratorPass] Could not open architecture file "
<< archSpecPath.getValue() << ", using default 4x4\n";
use_default_arch = true;
Copilot AI (Jul 31, 2025):

The variable use_default_arch is set to true when the file cannot be opened, but this assignment is redundant since use_default_arch is already initialized to false and should remain false to trigger the YAML-based architecture creation.

Suggested change: remove the `use_default_arch = true;` line.
}
} catch (const std::exception& e) {
llvm::errs() << "[MapToAcceleratorPass] Error parsing YAML file "
<< archSpecPath.getValue() << ": " << e.what() << ", using default 4x4\n";
use_default_arch = true;
}
} else {
use_default_arch = true;
llvm::errs() << "[MapToAcceleratorPass] No architecture specification provided, using default 4x4\n";
}
Contributor:

Put these into a function?


Architecture architecture = use_default_arch ? Architecture(4, 4) : Architecture(config);
Contributor:

    constexpr int kWidth = 4;
    constexpr int kHeight = 4;
    Architecture architecture = use_default_arch ? Architecture(kWidth, kHeight) : Architecture(config);

Collaborator (Author):

All the problems above are fixed, but I still cannot reproduce the environment problem (even in a locally deployed Docker), which is a bit strange.

Contributor:

You mean the yaml-related env problem cannot be fixed?

-- Configuring incomplete, errors occurred!
  By not providing "Findyaml-cpp.cmake" in CMAKE_MODULE_PATH this project has
  asked CMake to find a package configuration file provided by "yaml-cpp",
  but CMake did not find one.

  Could not find a package configuration file provided by "yaml-cpp" with any
  of the following names:

    yaml-cppConfig.cmake
    yaml-cpp-config.cmake

  Add the installation prefix of "yaml-cpp" to CMAKE_PREFIX_PATH or set
  "yaml-cpp_DIR" to a directory containing one of the above files.  If
  "yaml-cpp" provides a separate development package or SDK, be sure it has
  been installed.

Collaborator (Author):

Yep. The main problem is that even with a totally clean Docker env, the problem does not occur, and I am not sure what the real paths are in the blackbox workflow env.

Collaborator (Author):

I saw someone say that libyaml-cpp-dev (instead of libyaml-dev) may work. Let me try.

Collaborator (Author):

Cool. It seems to work! (Still something to be fixed :D)

Contributor:

Awesome~!


int res_mii = calculateResMii(func, architecture);
IntegerAttr res_mii_attr =
IntegerAttr::get(IntegerType::get(func.getContext(), 32), res_mii);
19 changes: 19 additions & 0 deletions test/lit.cfg
@@ -0,0 +1,19 @@
import os
import lit.formats

config.name = 'Neura Dialect Tests'
config.test_format = lit.formats.ShTest(True)
config.suffixes = ['.mlir']
config.test_source_root = os.path.dirname(__file__)
config.test_exec_root = os.path.dirname(__file__)
config.excludes = ['samples']

# Tool substitutions from CMake
config.substitutions.append(('mlir-neura-opt', '/home/jackcui/Arch/MLiR/dataflow/build/tools/mlir-neura-opt/mlir-neura-opt'))
config.substitutions.append(('neura-interpreter', '/home/jackcui/Arch/MLiR/dataflow/build/tools/neura-interpreter/neura-interpreter'))
config.substitutions.append(('neura-compiler', '/home/jackcui/Arch/MLiR/dataflow/build/tools/neura-compiler/neura-compiler'))
config.substitutions.append(('FileCheck', '/home/jackcui/Arch/MLiR/llvm-project/build/./bin/FileCheck'))
config.substitutions.append(('mlir-opt', '/home/jackcui/Arch/MLiR/llvm-project/build/./bin/mlir-opt'))
config.substitutions.append(('mlir-translate', '/home/jackcui/Arch/MLiR/llvm-project/build/./bin/mlir-translate'))
config.substitutions.append(('llc', '/home/jackcui/Arch/MLiR/llvm-project/build/./bin/llc'))
config.substitutions.append(('clang', '/home/jackcui/Arch/MLiR/llvm-project/build/./bin/clang'))
Copilot AI (Jul 31, 2025):

Hardcoded absolute paths make the configuration non-portable across different development environments. Consider using relative paths or environment variables to make this configuration more flexible.

Suggested change:

    config.substitutions.append(('mlir-neura-opt', os.getenv('MLIR_NEURA_OPT_PATH', '')))
    config.substitutions.append(('neura-interpreter', os.getenv('NEURA_INTERPRETER_PATH', '')))
    config.substitutions.append(('neura-compiler', os.getenv('NEURA_COMPILER_PATH', '')))
    config.substitutions.append(('FileCheck', os.getenv('FILECHECK_PATH', '')))
    config.substitutions.append(('mlir-opt', os.getenv('MLIR_OPT_PATH', '')))
    config.substitutions.append(('mlir-translate', os.getenv('MLIR_TRANSLATE_PATH', '')))
    config.substitutions.append(('llc', os.getenv('LLC_PATH', '')))
    config.substitutions.append(('clang', os.getenv('CLANG_PATH', '')))
Contributor:

We shouldn't check in this file. Similarly, please remove all the build files; avoid using `git add .`.