Dear Samuel,
It was indeed the pseudopotential that was causing the problem. I tried a PZ (LDA) pseudopotential and the mpirun errors are gone; however, the code stopped with an error:
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Error in routine wannierize (1):
inconsistent nscf and elph k-grids
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
stopping ...
I do not understand why. I have nk1=12, nk2=12, nk3=12 and nq1=6, nq2=6, nq3=6 (which give the 28 q points).
Physics Department
University of Rondonia, Brazil
Porto Velho, Rondonia
Re: MPIRUN errors
Hi,
From your input I see you have nk1=12, nk2=12, nk3=12. This requires you to run the nscf calculation beforehand with a 12x12x12 uniform mesh in the input. If this is not the case, you get an error.
Best
Carla
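If it helps, the explicit uniform 12x12x12 list for the nscf input can be generated with a short script like the sketch below. I am assuming the K_POINTS crystal format with equal weights here; the kmesh.pl utility shipped with Wannier90 can generate a similar list.

#!/bin/bash
# Sketch: print a uniform 12x12x12 k-point grid in crystal coordinates with
# equal weights, to be pasted into the nscf input as a K_POINTS crystal block.
nk1=12; nk2=12; nk3=12
ntot=$(( nk1 * nk2 * nk3 ))
echo "K_POINTS crystal"
echo "$ntot"
for (( i=0; i<nk1; i++ )); do
  for (( j=0; j<nk2; j++ )); do
    for (( k=0; k<nk3; k++ )); do
      awk -v i="$i" -v j="$j" -v k="$k" -v n1="$nk1" -v n2="$nk2" -v n3="$nk3" -v w="$ntot" \
          'BEGIN { printf "%12.8f %12.8f %12.8f %14.10f\n", i/n1, j/n2, k/n3, 1.0/w }'
    done
  done
done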
Re: MPIRUN errors
I don't think you can have a q-grid that is finer than your k-grid, since you would then get k+q points for which you do not have the electronic wavefunction, so you should always choose nk1 = i*nq1, where i is an integer (with your grids, nk1 = 12 and nq1 = 6 give i = 2, so that condition is satisfied).
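As a quick sanity check on a given pair of grids, using nothing more than the multiple-of rule above:

# Check that each k-grid dimension is an integer multiple of the q-grid dimension.
nk=(12 12 12); nq=(6 6 6)
for d in 0 1 2; do
  if (( nk[d] % nq[d] != 0 )); then
    echo "direction $((d+1)): nk=${nk[d]} is NOT a multiple of nq=${nq[d]}"
  else
    echo "direction $((d+1)): nk=${nk[d]} = $(( nk[d] / nq[d] )) x nq=${nq[d]} (ok)"
  fi
done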
Re: MPIRUN errors
Thanks Carla,
That mistake was obvious, sorry; I was just in a hurry. Everything is running smoothly again, but the code has been "frozen" at the following stage since yesterday:
Using uniform q-mesh: 20 20 20
Size of q point mesh for interpolation: 8000
Using uniform k-mesh: 20 20 20
Size of k point mesh for interpolation: 16000
Max number of k points per pool: 500
Fermi energy coarse grid = -1.184660 eV
===================================================================
Fermi energy is read from the input file: Ef = -1.184700 eV
===================================================================
ibndmin = 6 ebndmin = -0.183
ibndmax = 7 ebndmax = -0.089
Number of ep-matrix elements per pool : 9000 ~= 70.31 Kb (@ 8 bytes/ DP)
Is it because of the huge number of points?
The number of processors and pools is as follows:
Parallel version (MPI & OpenMP), running on 512 processor cores
Number of MPI processes: 32
Threads/MPI process: 16
K-points division: npool = 32
Physics Department
University of Rondonia, Brazil
Porto Velho, Rondonia
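For what it is worth, the per-pool figures in that output are consistent with simple bookkeeping, so the reported array is tiny; a quick check of the arithmetic, using only the numbers printed above:

# Reproduce the per-pool figures quoted in the output above.
nkf=16000; npool=32
nk_per_pool=$(( nkf / npool ))       # 500, matches "Max number of k points per pool"
nelem=9000                           # reported number of ep-matrix elements per pool
per_k=$(( nelem / nk_per_pool ))     # 18 elements per fine k-point in this run
awk -v n="$nelem" 'BEGIN { printf "%d elements x 8 bytes ~= %.2f Kb\n", n, n*8/1024 }'

So that particular array is not a memory problem; the long silence is most likely just the interpolation working through the 8000 x 16000 (q, k) combinations, as Samuel explains below.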
Re: MPIRUN errors
Dear eliephys78,
It's probably not frozen, just computing without showing any progress.
This has been improved in the new version (September): progress will be shown.
If the calculation runs with a smaller grid and you do not run into memory issues (you can usually check that by logging in to the node, if you have access), then it is just a matter of letting it compute.
Best,
Samuel
Prof. Samuel Poncé
Chercheur qualifié F.R.S.-FNRS / Professeur UCLouvain
Institute of Condensed Matter and Nanosciences
UCLouvain, Belgium
Web: https://www.samuelponce.com
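In case it is useful, a minimal way to do the check Samuel suggests, assuming you can ssh into the compute nodes (the node name below is a placeholder, and squeue assumes a SLURM scheduler; use your scheduler's equivalent otherwise):

squeue -u "$USER" -o "%N"   # list the node(s) your job is running on (SLURM)
ssh node0123                # placeholder: log in to one of those nodes
free -h                     # total / used / available memory on the node
top -u "$USER"              # per-process memory (RES column) of your running ranks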
Re: MPIRUN errors
Thank you all for your help and support
Physics Department
University of Rondonia, Brazil
Porto Velho, Rondonia
Re: MPIRUN errors
Waiting for input...
Waiting for input...
IMPORTANT: XC functional enforced from input :
Exchange-correlation= PZ
( 1 1 0 0 0 0 0)
Any further DFT definition will be discarded
Please, verify this is what you really want
G-vector sticks info
--------------------
sticks: dense smooth PW G-vecs: dense smooth PW
Sum 379 379 163 6657 6657 1907
Using Slab Decomposition
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Error in routine read_rhog (1):
error reading file ./MgB2.save/charge-density
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
stopping ...
This is the error that occurred when I tried to repeat the EPW example provided on GitHub. The EPW version installed on the batch system is EPW 5.4.1. The charge-density file is already in the directory, but it is not read when running the epw.x job. How do I fix this error? Any help is appreciated.
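A quick thing worth checking in this situation is what the restart directory actually contains, since the file name differs between QE versions (charge-density.dat, or charge-density.hdf5 for an HDF5 build); this is only a generic check, not a guaranteed fix:

# List the contents of the save directory that read_rhog is complaining about.
ls -l ./MgB2.save/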
Re: MPIRUN errors
Thanks, dear all! I figured out the trick myself; here is how, in case it is helpful to someone.
1. The error occurred when I used the run command
mpiexec -np 32 pw.x < epw.in > epw.out
2. The problem was solved when I used
mpiexec /home/kunsa24/q-e-qe-7.0/bin/epw.x -nk 16 -in epw.in > epw.out
Thanks. Someone may find it helpful.
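For anyone who hits the same read_rhog stop, here are the two invocations from that post again with comments; the install path is the poster's and will differ on your machine, and whether -nk 16 is the best pool count depends on your setup:

# 1. Failing run: the EPW input file was given to pw.x.
# mpiexec -np 32 pw.x < epw.in > epw.out

# 2. Working run: call the epw.x executable itself (full path if it is not in
#    your PATH) and set the number of k-point pools explicitly with -nk.
mpiexec /home/kunsa24/q-e-qe-7.0/bin/epw.x -nk 16 -in epw.in > epw.out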