
Design Discussions #2

Open
czgdp1807 opened this issue Feb 9, 2020 · 5 comments

@czgdp1807
Member

Description of the problem

This issue aims at specifying the workflow and discussing the design of the BNN framework.

Workflow

  1. Modules - The modules that should be a part of the framework (in this issue).
  2. APIs for each module finalised above (in separate issues).
  3. Class/Function design for each of the above APIs.
  4. Implementation in a PR.
  5. Testing.

If you have anything to suggest about the modules and their organization then please comment below.

Example of the problem

References/Other comments

@czgdp1807
Member Author

I have thought of the following modules (a rough sketch of how core, layers, and operations could fit together follows the list),

  1. core - Containing the data structures (Matrix, and others) for CPU, including parallel computing support on CPU using threads.
  2. cuda - Containing CUDA support, most probably for the data structures in the core module and for some operations in the operations module. We will start with single GPU support only.
  3. layers - Containing various layers in BNNs.
  4. operations - Containing operations like back-propagation, update, train, classify, test.
  5. wrappers - Containing wrappers for various languages. We will start with Python wrappers only. Boost.Python will be used here.
  6. tests - Containing test suites for the above modules. Boost.Test for CUDA tests and GoogleTest for the rest.
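
To make the split concrete, here is a minimal sketch of how the core, layers, and operations modules could relate. All names below are hypothetical placeholders, not code from the repository; cuda would mirror the core types on the GPU, and wrappers and tests would sit on top.

```cpp
#include <cstddef>
#include <vector>

// Illustrative placeholders only; none of this is actual repository code.
namespace bnn {

// core: a contiguous 2D data structure standing in for the real Matrix.
struct Matrix {
  std::size_t rows = 0, cols = 0;
  std::vector<double> data;  // row-major storage of size rows * cols
};

// layers: an abstract layer that consumes and produces core data structures.
class Layer {
 public:
  virtual ~Layer() = default;
  virtual Matrix forward(const Matrix& input) = 0;
};

// operations: drivers such as train/classify built on top of the layers module.
void train(std::vector<Layer*>& net, const Matrix& x, const Matrix& y);
Matrix classify(const std::vector<Layer*>& net, const Matrix& x);

}  // namespace bnn
```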

Note that CUDA tests will not be enabled on CI because they are too costly and Travis CI doesn't provide support for GPU-enabled tests. So, we may have to establish a scheme for obtaining the reports of these tests on various PRs before merging.

Let me know of any modifications or clarifications. Thanks.

@czgdp1807
Member Author

Let's proceed forward with the discussion here.
So, we will start with the core module. We will only be needing matrices (multidimensional ones, like 2D and 3D) as ConvNets make very general use of them. We can build a base class, Matrix, and then for multidimensional matrices we will just create a class MatrixND with an N-sized array of Matrix as its data member. The ongoing PR at codezonediitj/adaboost#5 has the code for 2D matrices.
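A very rough sketch of that layout, shown for N = 3 with hypothetical names (the real 2D Matrix is the one in codezonediitj/adaboost#5):

```cpp
#include <cstddef>
#include <vector>

// Illustrative only; names and storage choices are placeholders.
class Matrix {
 public:
  Matrix(std::size_t rows, std::size_t cols)
      : rows_(rows), cols_(cols), data_(rows * cols, 0.0) {}
  double& at(std::size_t i, std::size_t j) { return data_[i * cols_ + j]; }

 private:
  std::size_t rows_, cols_;
  std::vector<double> data_;
};

// The "MatrixND" idea for N = 3: an N-sized array of Matrix as the data member.
class Matrix3D {
 public:
  Matrix3D(std::size_t depth, std::size_t rows, std::size_t cols)
      : slices_(depth, Matrix(rows, cols)) {}
  Matrix& slice(std::size_t k) { return slices_[k]; }

 private:
  std::vector<Matrix> slices_;  // one 2D Matrix per position along the third axis
};
```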
I was just wondering if we should rename Matrix to Tensor? What do you say?
ping @pristineVedansh
P.S. We don't need Boost.Test: after compiling the .cu files, they can be linked with gtest code.
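
For reference, a minimal sketch of that linking scheme (the file names, the add_on_gpu wrapper, and the kernel are all made up for illustration): the kernel and a host wrapper with a plain C++ signature go through nvcc, and an ordinary GoogleTest case links against the resulting object file.

```cpp
// tensor_ops.cu -- compiled with nvcc; illustrative only.
#include <cstddef>

__global__ void add_kernel(const float* a, const float* b, float* out, std::size_t n) {
  std::size_t i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) out[i] = a[i] + b[i];
}

// Host wrapper with a plain C++ signature so test code never sees CUDA syntax.
void add_on_gpu(const float* a, const float* b, float* out, std::size_t n) {
  float *da, *db, *dout;
  cudaMalloc(&da, n * sizeof(float));
  cudaMalloc(&db, n * sizeof(float));
  cudaMalloc(&dout, n * sizeof(float));
  cudaMemcpy(da, a, n * sizeof(float), cudaMemcpyHostToDevice);
  cudaMemcpy(db, b, n * sizeof(float), cudaMemcpyHostToDevice);
  add_kernel<<<(n + 255) / 256, 256>>>(da, db, dout, n);
  cudaMemcpy(out, dout, n * sizeof(float), cudaMemcpyDeviceToHost);
  cudaFree(da); cudaFree(db); cudaFree(dout);
}

// test_tensor_ops.cpp -- plain GoogleTest, linked against tensor_ops.o.
#include <cstddef>
#include <gtest/gtest.h>

void add_on_gpu(const float* a, const float* b, float* out, std::size_t n);

TEST(CudaOps, AddsTwoVectors) {
  float a[3] = {1.f, 2.f, 3.f}, b[3] = {4.f, 5.f, 6.f}, out[3] = {};
  add_on_gpu(a, b, out, 3);
  EXPECT_FLOAT_EQ(out[2], 9.f);
}
```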

@vedanshpriyadarshi
Member

vedanshpriyadarshi commented Feb 16, 2020

Great idea! We should rename Matrix to Tensor.

@czgdp1807
Member Author

We will be required to use a 1D layout (row-major order) for arbitrary-dimensional Tensors.
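
As a small illustration of what that layout implies (flat_index is a hypothetical helper, not committed code): an index (i0, ..., i_{n-1}) into a tensor of shape (d0, ..., d_{n-1}) maps to a single offset into one contiguous buffer.

```cpp
#include <cstddef>
#include <vector>

// Row-major flattening: offset = ((i0 * d1 + i1) * d2 + i2) * ... + i_{n-1}.
// Hypothetical helper for illustration only.
std::size_t flat_index(const std::vector<std::size_t>& shape,
                       const std::vector<std::size_t>& index) {
  std::size_t offset = 0;
  for (std::size_t axis = 0; axis < shape.size(); ++axis)
    offset = offset * shape[axis] + index[axis];
  return offset;
}

// Example: shape (2, 3, 4), index (1, 2, 3) -> (1 * 3 + 2) * 4 + 3 = 23,
// the last element of a 24-element buffer.
```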

@czgdp1807
Member Author

We should also have an autodiff operator in the operations module; it was left out and is crucial. It should be specialised for BNNs. I will come up with the API design of Tensors in a separate issue.
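
To pin down what such an operator has to do, here is a toy reverse-mode sketch on scalars (illustrative only; the real one would act on Tensors and be specialised for the Bayesian layers): each operation records its parents and local derivatives on a tape, and one backward sweep accumulates the gradients.

```cpp
#include <cstddef>
#include <vector>

// Toy reverse-mode autodiff on scalars; illustrative only.
struct Tape {
  struct Node { std::size_t a, b; double da, db; };  // parent indices and local gradients
  std::vector<Node> nodes;
  std::vector<double> value;

  std::size_t leaf(double v) {
    value.push_back(v);
    nodes.push_back({0, 0, 0.0, 0.0});  // leaves propagate nothing further
    return value.size() - 1;
  }
  std::size_t add(std::size_t x, std::size_t y) {
    value.push_back(value[x] + value[y]);
    nodes.push_back({x, y, 1.0, 1.0});
    return value.size() - 1;
  }
  std::size_t mul(std::size_t x, std::size_t y) {
    value.push_back(value[x] * value[y]);
    nodes.push_back({x, y, value[y], value[x]});  // d(xy)/dx = y, d(xy)/dy = x
    return value.size() - 1;
  }
  // Backward sweep from the output node; returns d(output)/d(node) for every node.
  std::vector<double> grad(std::size_t out) const {
    std::vector<double> g(value.size(), 0.0);
    g[out] = 1.0;
    for (std::size_t i = value.size(); i-- > 0;) {
      g[nodes[i].a] += nodes[i].da * g[i];
      g[nodes[i].b] += nodes[i].db * g[i];
    }
    return g;
  }
};
```

For f(x, y) = x * y + x with x = 2 and y = 3, grad returns df/dx = 4 and df/dy = 2 at the two leaf indices.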
