Dear experts,
I have recently been calculating phonon linewidths using the following commands (X1 and X2 are node counts):
srun -N X1 pw.x < scf.in > scf.out
srun -N X2 pw.x -nk X2 < nscf.in > nscf.out
srun -N X2 epw.x -nk X2 < epw.in > epw.out
I am using the latest versions, QE 7.2 and EPW 5.7.
The problem is that when X1 changes, the obtained phonon linewidths also change (for some modes by up to several orders of magnitude).
I would expect the parallelization of the scf run not to matter, and the final phonon linewidths to be the same regardless of X1 or X2.
On the other hand, do we need to use the same number of cores for nscf and epw? Do we need to set the same npool (-nk) for nscf and epw?
I saw 'ibrun $PATHQE/bin/pw.x -nk 4 -in nscf.in > nscf.out; ibrun $PATHQE/bin/epw.x -nk 8 -in epw1.in > epw1.out' in Hands-On tutorial 1 (Monday) of the 2023 school: https://docs.epw-code.org/_downloads/c0 ... /M.H.1.pdf
However, the Docs/GaN-II page (https://docs.epw-code.org/doc/GaN-II.html) states 'For the nscf calculation the number of pool -npool has to be the same as the total number of core -np' and '(For the epw calculation) The number of cores and pool have to be the same as for the nscf.in run.'
So, does this setting depend on the package version? What is the correct way to set the parallelization parameters for scf, nscf, and epw?
Has anyone encountered a similar issue? Any reply would be appreciated!
Why does the number of cores used for the scf/nscf calculation affect EPW?
Re: Why does the number of cores used for the scf/nscf calculation affect EPW?
Hi,
The number of cores/nodes/tasks you use in the PWscf step should not affect your final linewidth result. For details on PW parallelization, please check "https://www.quantum-espresso.org/Doc/us ... rallelized."
For EPW, only k-point parallelization (-nk) is available, and the number of pools (and hence cores) must be smaller than the total number of k-points for both the epw1 (Wannierization step) and epw2 (interpolation step) calculations. It is not necessary to use the same -npool for nscf and -nk for epw1. However, for epw, the number of pools used for k-point parallelization must equal the number of MPI tasks.
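As an illustration only (not taken from your run; the node/task counts and file names below are placeholders), a minimal SLURM sketch consistent with these rules might look like:

#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks=8

# scf: the parallel setup here should not change the final linewidths
srun -n 8 pw.x -nk 2 -in scf.in > scf.out

# nscf: -nk here does not have to match the epw run
srun -n 8 pw.x -nk 8 -in nscf.in > nscf.out

# epw: only k-point (pool) parallelization is available;
# -nk must equal the number of MPI tasks (8 here) and must be
# smaller than the number of k-points in the coarse grid
srun -n 8 epw.x -nk 8 -in epw.in > epw.out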
For your particular problem, it would be good to share your input files so that we can reproduce your error (if it still exists).
Best,
Sabya.