conflict with the original paper #26

@yhao-z

Description

I don't think this implementation is consistent with the original paper (RAKI).

In Network.py, the RAKI network is built by:

        net = nn.Sequential(
            nn.Conv2d(in_channels=2 * self.channels_in, out_channels=128, kernel_size=[3, 5], padding=[1, 2], padding_mode='replicate'),
            nn.ReLU(),
            nn.Conv2d(in_channels=128, out_channels=64, kernel_size=[1, 1], padding=[0, 0], padding_mode='replicate'),
            nn.ReLU(),
            nn.Conv2d(in_channels=64, out_channels=64, kernel_size=[1, 3], padding=[0, 1], padding_mode='replicate'),
            nn.ReLU(),
            nn.Conv2d(in_channels=64, out_channels=self.R * 2 * self.channels_in, kernel_size=[3, 3], padding=[1, 1], padding_mode='replicate'),
        ).to(self.device)

There is no conv dilation in your implementation, while the paper mentions that:

> In our implementations, all layers use kernel dilation of size R in the ky direction to only process the acquired k‐space lines.

Also, the paper shows that the network has only three convs, but your code contains four. The kernel sizes are not consistent with the paper either.
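For comparison, here is a minimal sketch (not your repo's code) of the three-layer network as I read the paper, with dilation R along ky. The kernel sizes (5×2, 1×1, 3×2) and hidden widths (32, 8) are my reading of the paper and may need checking; `channels_in` and `R` are placeholder values, and I assume ky is the first spatial dimension as in your code.

```python
import torch
import torch.nn as nn

# Hypothetical values: coil count and acceleration rate
channels_in, R = 16, 4

# Three conv layers only; ReLU after the first two, linear output.
# dilation=(R, 1) makes every kernel tap land on an acquired ky line.
net = nn.Sequential(
    nn.Conv2d(2 * channels_in, 32, kernel_size=(5, 2), dilation=(R, 1)),
    nn.ReLU(),
    nn.Conv2d(32, 8, kernel_size=(1, 1), dilation=(R, 1)),
    nn.ReLU(),
    # R-1 unacquired lines per coil, real/imag stacked in channels
    nn.Conv2d(8, (R - 1) * 2 * channels_in, kernel_size=(3, 2), dilation=(R, 1)),
)

x = torch.randn(1, 2 * channels_in, 64, 64)  # (batch, 2*coils, ky, kx)
print(net(x).shape)
```

Note the output channel count is (R−1)·2·coils here, since only the unacquired lines need to be estimated, whereas your code outputs R·2·coils.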

I'm not sure whether your implementation improves the reconstruction performance or not. Did you intentionally change the code to improve performance?
