
My mpirun on Ubuntu 20.04 doesn't work anymore.

I used it a few weeks ago and it worked fine. I just tried to use it a few minutes ago and it doesn't work.

I decompose the case first and then I use the

mpirun -np 4 interFoam -parallel

command.
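For context, "decompose the case" means running decomposePar, which reads system/decomposeParDict. A minimal sketch of that dictionary for 4 subdomains (the exact method and entries in my case may differ) looks like this:

FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    object      decomposeParDict;
}

numberOfSubdomains 4;

method          scotch;

decomposePar then splits the case into processor0 ... processor3 directories, and the mpirun command above runs interFoam on them.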

Then this message appears:

kai@Kai-Desktop:~/OpenFOAM/kai-7/run/tutorials_of/multiphase/interFoam/laminar/damBreak_stl_II/damBreak$ mpirun -np 4 interFoam -parallel

It looks like MPI_INIT failed for some reason; your parallel process is likely to abort. There are many reasons that a parallel process can fail during MPI_INIT; some of which are due to configuration or environment problems. This failure appears to be an internal failure; here's some additional information (which may only be relevant to an Open MPI developer):

ompi_mpi_init: ompi_rte_init failed --> Returned "(null)" (-43) instead of "Success" (0)

It looks like MPI_INIT failed for some reason; your parallel process is likely to abort. There are many reasons that a parallel process can fail during MPI_INIT; some of which are due to configuration or environment problems. This failure appears to be an internal failure; here's some additional information (which may only be relevant to an Open MPI developer):

ompi_mpi_init: ompi_rte_init failed --> Returned "(null)" (-43) instead of "Success" (0)

It looks like MPI_INIT failed for some reason; your parallel process is likely to abort. There are many reasons that a parallel process can fail during MPI_INIT; some of which are due to configuration or environment problems. This failure appears to be an internal failure; here's some additional information (which may only be relevant to an Open MPI developer):

ompi_mpi_init: ompi_rte_init failed --> Returned "(null)" (-43) instead of "Success" (0)

It looks like MPI_INIT failed for some reason; your parallel process is likely to abort. There are many reasons that a parallel process can fail during MPI_INIT; some of which are due to configuration or environment problems. This failure appears to be an internal failure; here's some additional information (which may only be relevant to an Open MPI developer):

ompi_mpi_init: ompi_rte_init failed --> Returned "(null)" (-43) instead of "Success" (0)

*** An error occurred in MPI_Init_thread
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
*** and potentially your MPI job)
[Kai-Desktop:3304] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
*** An error occurred in MPI_Init_thread
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
*** and potentially your MPI job)
[Kai-Desktop:3305] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
*** An error occurred in MPI_Init_thread
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
*** and potentially your MPI job)
[Kai-Desktop:3306] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
*** An error occurred in MPI_Init_thread
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
*** and potentially your MPI job)
[Kai-Desktop:3307] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!

Primary job terminated normally, but 1 process returned a non-zero exit code. Per user-direction, the job has been aborted.

mpirun detected that one or more processes exited with non-zero status, thus causing the job to be terminated. The first process to do so was:

Process name: [[48186,1],0]
Exit code: 1

kai@Kai-Desktop:~/OpenFOAM/kai-7/run/tutorials_of/multiphase/interFoam/laminar/damBreak_stl_II/damBreak$

The version of Open MPI is this one:

kai@Kai-Desktop:~/Dokumente$ mpirun --version
mpirun (Open MPI) 4.0.3
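Since the standalone hello-world test below works, one thing that may be worth checking (just a sketch, assuming interFoam is on the PATH) is which MPI library interFoam is actually linked against:

ldd $(which interFoam) | grep -i mpi

If this points to a different libmpi than the Open MPI 4.0.3 shown above, that mismatch could be related to the ompi_rte_init failure.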

So far, I have created a test file (*.c):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char** argv) {
    // Initialize the MPI environment
    MPI_Init(NULL, NULL);

    // Get the number of processes
    int world_size;
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    // Get the rank of the process
    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    // Get the name of the processor
    char processor_name[MPI_MAX_PROCESSOR_NAME];
    int name_len;
    MPI_Get_processor_name(processor_name, &name_len);

    // Print off a hello world message
    printf("Hello world from processor %s, rank %d out of %d processors\n",
           processor_name, world_rank, world_size);

    // Finalize the MPI environment.
    MPI_Finalize();

    return 0;
}
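For reference, the test program is compiled with the Open MPI wrapper compiler, roughly like this (assuming the source file is named hello_world.c, to match the ./hello_world binary below):

mpicc hello_world.c -o hello_world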

If I compile and execute it, this is the output:

kai@Kai-Desktop:~/Dokumente$ mpirun -np 4 ./hello_world -parallel
Hello world from processor Kai-Desktop, rank 0 out of 4 processors
Hello world from processor Kai-Desktop, rank 1 out of 4 processors
Hello world from processor Kai-Desktop, rank 2 out of 4 processors
Hello world from processor Kai-Desktop, rank 3 out of 4 processors

Does anyone know what to do to get rid of this error? If you need further information, please write and I will provide it. I am not aware of having changed anything on the system related to MPI. I did upgrade from 18.04 to 20.04, but I don't know whether that led to this error.

Best regards,
Kai


1 Answer


I updated my OpenFOAM v7 to OpenFOAM v8 and it works again.

I still don't know why, though...
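For anyone hitting the same thing: the upgrade itself was essentially installing the openfoam8 package and switching the sourced environment, roughly (a sketch, assuming the OpenFOAM Foundation apt repository from the v7 install is still configured):

sudo apt-get update
sudo apt-get install openfoam8

and then sourcing /opt/openfoam8/etc/bashrc in ~/.bashrc instead of the openfoam7 one.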

Kai