Known issues in parallel

A two-box simulation gives different results with one or two pids

We run a simplified version of the Bénard-von Kármán test case for zero time steps. Only the approximate projection is performed. The modified simulation file is:

2 1 GfsSimulation GfsBox GfsGEdge {} {
  Time { iend = 0 }
  Refine 6
  Solid (x*x + y*y - 0.0625*0.0625)
  AdvectionParams { scheme = none }

  OutputTime { istep = 1 } stderr
  OutputProjectionStats { istep = 1 } stderr
  OutputSimulation { start = 0.1 step = 0.1 } simulation.gfs {
      variables = U,V,P
  }
}

GfsBox { id=1 pid=0
  left = Boundary {
       BcDirichlet U 1
  }
}
GfsBox { id=2 pid=1 right = BoundaryOutflow }
1 2 right

We use the following version of Gerris:

% gerris2D -V
gerris: using 2D libgfs version 1.3.2 (120310-112425)
  compiled with flags:  -DBSD_SOURCE -D_DARWIN_C_SOURCE -D_DARWIN_C_SOURCE
  MPI:          yes
  pkg-config:   yes
  m4:           yes

First we run without MPI on Mac OS X 10.7.3, on a MacBook Pro with a four-core Intel i7, then with MPI. The MPI and compiler versions are:

% mpicc --version
Apple clang version 3.1 (tags/Apple/clang-318.0.54) (based on LLVM 3.1svn)
Target: x86_64-apple-darwin11.3.0
Thread model: posix
% mpirun --version
mpirun (Open MPI) 1.5.4

Report bugs to http://www.open-mpi.org/community/help/

Here is the result without MPI:

% gerris2D twobox-twopid.gfs
step:       0 t:      0.00000000 dt:  1.000000e-01 cpu:      0.12000000 real:      0.12236900
Approximate projection
    niter:   13
    residual.bias:   -1.000e-01 -1.984e-04
    residual.first:   5.020e-02  9.960e-05    1.6
    residual.second:  5.668e-01  1.330e-04    1.9
    residual.infty:   6.400e+00  6.251e-04      2

On the other hand, if we run the same simulation with MPI and two pids, this is the result:

% mpirun -np 2 gerris2D twobox-twopid.gfs
step:       0 t:      0.00000000 dt:  1.000000e-01 cpu:      0.04000000 real:      0.03555900
Approximate projection
    niter:    4
    residual.bias:   -1.000e-01 -7.446e-05
    residual.first:   5.020e-02  3.839e-05      6
    residual.second:  5.668e-01  4.914e-05     10
    residual.infty:   6.400e+00  2.713e-04     12

The two results are different: the pre-iteration projection statistics (first column) are the same, but the post-iteration statistics differ. However, since the number of boxes is the same in both runs, the MPI communication should exchange exactly the information that is passed between boxes in the non-MPI run. Thus something seems to be amiss in the way information is exchanged between boxes (a direct comparison of the converged fields is sketched further below).

Not necessarily so. The Poisson solver uses Jacobi relaxations for smoothing. This means that the order in which cells are traversed (i.e. "relaxed") matters. The order of traversal will not be the same when two neighboring boxes belong to the same processor and when they don't: this could explain the differences in convergence rates (although I would expect the difference to be smaller than in your example) --Popinet 22:42, 11 March 2012 (UTC)
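
To see how the traversal order can affect the convergence rate, here is a toy sketch (plain Python, not Gerris code) of a 1-D Poisson problem split into two "boxes". In the first variant the boxes are relaxed in sequence, so the second box uses the freshly relaxed interface value of the first, roughly what happens when neighbouring boxes are traversed by the same process; in the second variant each box only sees the other's values from the previous sweep, roughly the situation across a processor boundary. Both variants converge to the same solution, but generally after a different number of sweeps.

#!/usr/bin/env python3
# Toy illustration (not Gerris code): two-box Jacobi relaxation of the 1-D
# Poisson problem u'' = f with u = 0 at both ends. Only the moment at which
# the interface ("ghost") value is refreshed differs between the variants.
import numpy as np

N = 4                  # cells per box (small, so the effect is easy to see)
h = 1.0 / (2 * N)
f = 1.0                # constant right-hand side

def residual(u):
    # infinity norm of f - u'' over the assembled two-box field
    r = []
    for i in range(len(u)):
        left = u[i - 1] if i > 0 else 0.0
        right = u[i + 1] if i < len(u) - 1 else 0.0
        r.append(abs(f - (left - 2 * u[i] + right) / h**2))
    return max(r)

def jacobi_sweep(u, ghost_left, ghost_right):
    # one synchronous Jacobi sweep inside a box, ghost values held fixed
    v = u.copy()
    for i in range(len(u)):
        left = u[i - 1] if i > 0 else ghost_left
        right = u[i + 1] if i < len(u) - 1 else ghost_right
        v[i] = 0.5 * (left + right - h**2 * f)
    return v

def solve(sequential, tol=1e-3):
    a, b = np.zeros(N), np.zeros(N)    # the two boxes
    sweeps = 0
    while residual(np.concatenate([a, b])) > tol:
        a_new = jacobi_sweep(a, 0.0, b[0])
        # fresh interface value (same process) or previous-sweep value (two pids)
        ghost = a_new[-1] if sequential else a[-1]
        b = jacobi_sweep(b, ghost, 0.0)
        a = a_new
        sweeps += 1
    return sweeps, np.concatenate([a, b])

s1, u1 = solve(sequential=True)      # boxes traversed in order
s2, u2 = solve(sequential=False)     # ghost values lag one sweep
print("sweeps with boxes traversed in order:", s1)
print("sweeps with ghost values lagging one sweep:", s2)
print("max difference between the converged solutions:", np.max(np.abs(u1 - u2)))

The point is only that the per-sweep statistics depend on when neighbouring values are refreshed; neither ordering is wrong.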

We have also run the same test case on an Ubuntu system; the results are identical to those reported above.
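
Whether the converged fields of the serial and parallel runs actually agree, and not only the iteration statistics, could be checked by dumping the fields in text form and comparing them cell by cell. The sketch below assumes a text-format OutputSimulation (e.g. format = text with variables = U,V,P) whose non-comment lines list the cell-centre coordinates followed by the requested variables; the option name and the column index of P are assumptions to be checked against the Gerris syntax reference.

#!/usr/bin/env python3
# Hedged sketch: compare the pressure field of two text-format Gerris dumps.
# Usage (file names are placeholders): python3 compare.py serial.txt mpi.txt
# Assumes each non-comment line reads "x y z U V P"; adjust COL if the
# column layout differs. Cells are matched by their coordinates because
# the two runs may write them in a different order.
import sys

COL = 5    # assumed column index of P when variables = U,V,P

def read_field(fname, col=COL):
    field = {}
    with open(fname) as fp:
        for line in fp:
            if not line.strip() or line.startswith('#'):
                continue
            v = [float(x) for x in line.split()]
            field[tuple(round(x, 9) for x in v[:3])] = v[col]
    return field

serial = read_field(sys.argv[1])
parallel = read_field(sys.argv[2])
common = set(serial) & set(parallel)
diff = max(abs(serial[k] - parallel[k]) for k in common) if common else float('nan')
print("cells compared:", len(common), " max |P_serial - P_mpi| =", diff)

If the maximum difference is at the level of the solver tolerance, only the convergence histories differ, as suggested above.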

Using a larger number of boxes and pids (typically 24), we found cases where the non-MPI runs converge but the MPI runs do not, i.e. the residual is not reduced below the required tolerance of 0.001.

This is more interesting. Could you please post more details? --Popinet 22:42, 11 March 2012 (UTC)
I have done so on the mailing list on March 12, topic "Projections do not converge with 24 pids in 3D" -- Zaleski 09:37, 14 March 2012 (UTC)
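
For long parallel logs it can help to scan the OutputProjectionStats blocks automatically and flag projections whose final residual stays above the tolerance. The sketch below follows the log layout quoted above and assumes, as the wording above suggests, that the 0.001 tolerance applies to the absolute value of the final infinity-norm residual; this should be checked against the Gerris source.

#!/usr/bin/env python3
# Hedged sketch: flag non-converged projections in a Gerris log whose
# statistics blocks look like the ones quoted above.
# Usage (file names are placeholders):
#   mpirun -np 24 gerris3D simulation.gfs 2> log ; python3 scan_log.py log
import re
import sys

TOL = 1e-3     # assumption: absolute tolerance on the final residual
step = '?'
for line in open(sys.argv[1]):
    m = re.search(r'step:\s+(\d+)', line)
    if m:
        step = m.group(1)
    # "residual.infty:   <before>   <after>   <rate>"
    m = re.match(r'\s*residual\.infty:\s+(\S+)\s+(\S+)', line)
    if m and float(m.group(2)) > TOL:
        print(f'step {step}: projection not converged, '
              f'residual.infty went from {m.group(1)} to {m.group(2)}')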