Dim adaptive combi #105

Open · wants to merge 35 commits into base: main
Commits (35)
8573220  (Apr 24, 2018)  Initial commit of the StaticCombiScheme, a revised version of the Com…
27bca39  (Apr 29, 2018)  added comments to the StaticCombiScheme
644f985  (May 29, 2018)  first draft of the error/ potential gain calculation for the adaptive…
7408508  (May 29, 2018)  added missing semicolon
5b76593  (May 30, 2018)  Resolved merge conflict
01af6f8  (Jun 22, 2018)  implemented more functionallity needed for the dimension adaptive scheme
ac65125  (Jun 27, 2018)  fixed some bugs mainly concerning getting the best expansion in the d…
8ba983a  (Jun 27, 2018)  Merge branch 'dimAdaptiveCombi' of https://simsgs.informatik.uni-stut…
8c2d131  (Jun 27, 2018)  added adaptive example
d030caa  (Jul 19, 2018)  first seemingly functioning version of the adaptive combigrid algorithm
ba16977  (Jul 19, 2018)  updated the adaptive_combi_example
15502f1  (Jul 30, 2018)  fixed bugs in the adaptive combi scheme
e0f67e4  (Jul 30, 2018)  changed the basic adaptive example to the advection example from the …
31901ab  (Oct 8, 2018)   updated adaptive scheme and gene distributed example
53c236d  (Oct 31, 2018)  removed debug output
fcee414  (Dec 2, 2018)   added support for more points for calculating the error measure
4535fde  (Dec 2, 2018)   fixed the search for the backward neighbour in the error measure calc…
dfb45f9  (Dec 2, 2018)   Merge branch 'dimAdaptiveCombi' of https://simsgs.informatik.uni-stut…
a37e139  (Dec 3, 2018)   ProcessGroupWorker now compiles with gcc 4 again
e084e8f  (Dec 5, 2018)   fixed a bug in the expansion algorithm
fbbfb9f  (Jan 17, 2019)  fixed some bugs regarding the error measure calculation
4c77fc0  (Jan 17, 2019)  removed dead code and cleaned up the implementation
ae4399f  (Jan 27, 2019)  fixed the recursive level generation in the StaticCombiScheme and mor…
91aa61f  (Jan 27, 2019)  added regression test for the level generation
2333ca2  (Jan 27, 2019)  made the previously added regression tests comaptible with boost vers…
a87aa67  (Jan 28, 2019)  fixed two bugs in the expansion algorithm and more cleanup
4b60ef5  (Jan 30, 2019)  Expansions on the lmin boundary now correctly add all of their subgri…
c2b7c7a  (Jan 30, 2019)  fixed a bug concerning the determination of the error measure partner…
43d55f2  (Jan 30, 2019)  added comments and fixed a mpi bug, where the groups would reduce err…
9c8c88a  (Jan 30, 2019)  the containsAllBwdNeighboursInv function now correctly deals with dum…
3e1b72b  (Feb 13, 2019)  changed DistributedSparseGridUniform.hpp to make the merge easier
39fa36b  (Feb 13, 2019)  fixed a bug in the previous commit
34f9693  (Feb 13, 2019)  changed ProcessGroupWorker to make the merge with master easier
47ec33a  (Feb 14, 2019)  more changes to ProcessGroupWorker to make the merge with master easier
9b08da4  (Feb 16, 2019)  merged master
20 changes: 20 additions & 0 deletions distributedcombigrid/examples/adaptive_combi_example/Makefile
@@ -0,0 +1,20 @@
CC=mpic++
CFLAGS=-std=c++11 -g -fopenmp -Wno-deprecated-declarations -Wno-unused-local-typedefs -Wno-deprecated -Wno-uninitialized -Wall

SGPP_DIR=/home/simon/Uni/master/idp/combi

LD_SGPP=-L$(SGPP_DIR)/lib/sgpp
INC_SGPP=-I$(SGPP_DIR)/distributedcombigrid/src/

LDIR=$(LD_SGPP)
INC=$(INC_SGPP)

LIBS=-lsgppdistributedcombigrid -lboost_serialization

all: combi_example

combi_example: combi_example.cpp TaskExample.hpp
$(CC) $(CFLAGS) $(LDIR) $(INC) -o combi_example combi_example.cpp $(LIBS)

clean:
rm -f *.o out/* combi_example
245 changes: 245 additions & 0 deletions distributedcombigrid/examples/adaptive_combi_example/TaskExample.hpp
@@ -0,0 +1,245 @@
/*
* TaskExample.hpp
*
* Created on: Sep 25, 2015
* Author: heenemo
*/

#ifndef TASKEXAMPLE_HPP_
#define TASKEXAMPLE_HPP_

#include "sgpp/distributedcombigrid/fullgrid/DistributedFullGrid.hpp"
#include "sgpp/distributedcombigrid/task/Task.hpp"

namespace combigrid {

class TaskExample: public Task {

public:
/* if the constructor of the base task class is not sufficient we can provide
* our own implementation. here, we add dt, nsteps, and p as new parameters.
*/
TaskExample(DimType dim, LevelVector& l, std::vector<bool>& boundary,
real coeff, LoadModel* loadModel, real dt,
size_t nsteps, IndexVector p = IndexVector(0),FaultCriterion *faultCrit = (new StaticFaults({0,IndexVector(0),IndexVector(0)})) ) :
Task(dim, l, boundary, coeff, loadModel, faultCrit), dt_(dt), nsteps_(
nsteps), p_(p), initialized_(false), stepsTotal_(0), dfg_(NULL) {
}

void init(CommunicatorType lcomm, std::vector<IndexVector> decomposition = std::vector<IndexVector>()){
assert(!initialized_);
assert(dfg_ == NULL);

int lrank;
MPI_Comm_rank(lcomm, &lrank);

/* create distributed full grid. we try to find a balanced ratio between
* the number of grid points and the number of processes per dimension
* by this very simple algorithm. to keep things simple we require powers
* of two for the number of processes here. */
int np;
MPI_Comm_size(lcomm, &np);

// check if power of two
if (!((np > 0) && ((np & (~np + 1)) == np)))
assert(false && "number of processes not power of two");

DimType dim = this->getDim();
IndexVector p(dim, 1);
const LevelVector& l = this->getLevelVector();

if (p_.size() == 0) {
// compute domain decomposition
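// repeatedly double the process count in the dimension with the largest
// points-per-process ratio until all np processes are used,
// e.g. l = (3,3) and np = 4 yields p = (2,2)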
IndexType prod_p(1);

while (prod_p != static_cast<IndexType>(np)) {
DimType dimMaxRatio = 0;
real maxRatio = 0.0;

for (DimType k = 0; k < dim; ++k) {
real ratio = std::pow(2.0, l[k]) / p[k];

if (ratio > maxRatio) {
maxRatio = ratio;
dimMaxRatio = k;
}
}

p[dimMaxRatio] *= 2;
prod_p = 1;

for (DimType k = 0; k < dim; ++k)
prod_p *= p[k];
}
} else {
p = p_;
}

if (lrank == 0) {
std::cout << "init task " << this->getID() << " with l = "
<< this->getLevelVector() << " and p = " << p << std::endl;
}

// create local subgrid on each process
dfg_ = new DistributedFullGrid<CombiDataType>(dim, l, lcomm,
this->getBoundary(), p);

/* loop over local subgrid and set initial values */
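// the initial condition set below is a Gaussian bump 2 * exp(-100 * |x - 0.5|^2),
// centred in the unit domain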
std::vector<CombiDataType>& elements = dfg_->getElementVector();

phi_.resize(elements.size());
assert(phi_.size() == elements.size());

for (IndexType li = 0; li < elements.size(); ++li) {
std::vector<double> coords(this->getDim());
dfg_->getCoordsGlobal(li, coords);

double exponent = 0;
for (DimType d = 0; d < this->getDim(); ++d) {
exponent -= std::pow(coords.at(d) - 0.5, 2);
}
dfg_->getElementVector().at(li) = std::exp(exponent*100.0) * 2;
}

initialized_ = true;
}


/* this is where the application code kicks in and all the magic happens.
* do whatever you have to do, but make sure that your application uses
* only lcomm or a subset of it as communicator.
* important: don't forget to set the isFinished flag at the end of the computation.
*/
void run(CommunicatorType lcomm) {
std::cout << "run my task\n";
assert(initialized_);

int lrank;
MPI_Comm_rank(lcomm, &lrank);

/* pseudo timestepping to demonstrate the behaviour of your typical
* time-dependent simulation problem. */
std::vector<CombiDataType> u(this->getDim(), 0.0000001);

// gradient of phi
std::vector<CombiDataType> dphi(this->getDim());

std::vector<IndexType> l(this->getDim());
std::vector<double> h(this->getDim());

for (int i = 0; i < this->getDim(); i++){
l[i] = dfg_->length(i);
h[i] = 1.0 / (double)l[i];
}

for (size_t i = 0; i < nsteps_; ++i) {
phi_.swap(dfg_->getElementVector());

for (IndexType li = 0; li < dfg_->getElementVector().size(); ++li) {
IndexVector ai(this->getDim());
dfg_->getGlobalVectorIndex(li, ai);

//neighbour
std::vector<IndexVector> ni(this->getDim(), ai);
std::vector<IndexType> lni(this->getDim());

CombiDataType u_dot_dphi = 0;

for(int j = 0; j < this->getDim(); j++){
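// periodic wrap-around: step one grid point backwards in dimension j
// and store the linear index of that backward neighbour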
ni[j][j] = (l[j] + ni[j][j] - 1) % l[j];
lni[j] = dfg_->getGlobalLinearIndex(ni[j]);
}

for(int j = 0; j < this->getDim(); j++){
//calculate gradient of phi with backward differential quotient
dphi.at(j) = (phi_.at(li) - phi_.at(lni.at(j))) / h.at(j);

u_dot_dphi += u[j] * dphi[j];
}

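// explicit Euler update for the advection equation:
// phi_new = phi_old - dt * (u . grad phi)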
dfg_->getData()[li] = phi_[li] - u_dot_dphi * dt_;
}

MPI_Barrier(lcomm);
}

stepsTotal_ += nsteps_;

this->setFinished(true);
}

/* this function evaluates the combination solution on a given full grid.
* here, a full grid representation of your task's solution has to be created
* on the process of lcomm with rank r.
* typically this would require gathering your (in whatever way) distributed
* solution on one process and then converting it to the full grid representation.
* the DistributedFullGrid class offers a convenient function to do this.
*/
void getFullGrid(FullGrid<CombiDataType>& fg, RankType r,
CommunicatorType lcomm, int n = 0) {
assert(fg.getLevels() == dfg_->getLevels());

dfg_->gatherFullGrid(fg, r);
}

DistributedFullGrid<CombiDataType>& getDistributedFullGrid(int n = 0) {
return *dfg_;
}


void setZero(){

}

protected:
/* if there are local variables that have to be initialized at construction
* you have to do it here. the worker processes will create the task using
* this constructor before overwriting the variables that are set by the
* manager. here we need to set the initialized variable to make sure it is
* set to false. */
TaskExample() :
initialized_(false), stepsTotal_(1), dfg_(NULL) {
}

~TaskExample() {
if (dfg_ != NULL)
delete dfg_;
}

private:
friend class boost::serialization::access;

// new variables that are set by manager. need to be added to serialize
real dt_;
size_t nsteps_;
IndexVector p_;

// pure local variables that exist only on the worker processes
bool initialized_;
size_t stepsTotal_;
DistributedFullGrid<CombiDataType>* dfg_;
std::vector<CombiDataType> phi_;

/**
* The serialize function has to be extended by the new member variables.
* However, this concerns only member variables that need to be exchanged
* between manager and workers. We do not need to add "local" member variables
* that are only needed on either manager or worker processes.
* For serialization of the parent class members, the class must be
* registered with the BOOST_CLASS_EXPORT macro.
*/
template<class Archive>
void serialize(Archive& ar, const unsigned int version) {
// handles serialization of base class
ar& boost::serialization::base_object<Task>(*this);

// add our new variables
ar& dt_;
ar& nsteps_;
ar& p_;
}
};

} // namespace combigrid

#endif /* TASKEXAMPLE_HPP_ */
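The comment on serialize() above mentions registering the class with the BOOST_CLASS_EXPORT macro. The translation unit that does this (e.g. the example's combi_example.cpp built by the Makefile) is not part of this diff, so the following is only a minimal sketch of how such a registration typically looks with Boost.Serialization; file name and placement are assumptions, not taken from this PR:

// hypothetical registration in the translation unit that (de)serializes tasks,
// e.g. the example's combi_example.cpp; not part of this diff
#include <boost/serialization/export.hpp>
#include "TaskExample.hpp"

// lets Boost.Serialization restore a TaskExample through a Task* base pointer
BOOST_CLASS_EXPORT(combigrid::TaskExample)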