EPW stop near the end of calculation without any CRASH
Posted: Thu May 09, 2019 5:29 pm
Dear all,
I have encountered an issue running the latest version of EPW (the one shipped with QE 6.4.1). When I use nk = 14 14 1, nq = 7 7 1 and interpolate to nkf = 30 30 1, nqf = 30 30 1, the calculation runs successfully. But when I use a denser nk = 18 18 1, nq = 9 9 1 and keep the interpolation grids unchanged (nkf = 30 30 1, nqf = 30 30 1), the code stops near the end of the calculation without producing any CRASH file, as shown below:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Writing Hamiltonian, Dynamical matrix and EP vertex in Wann rep to file
Reading Hamiltonian, Dynamical matrix and EP vertex in Wann rep from file
Reading interatomic force constants
IFC last 0.0079579
Imposed simple ASR
Finished reading ifcs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
I checked the source code; the relevant part is as follows:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
IF (lifc) CALL read_ifc
!
IF (etf_mem == 0) THEN
  IF (.NOT. ALLOCATED(epmatwp)) ALLOCATE ( epmatwp ( nbndsub, nbndsub, nrr_k, nmodes, nrr_g) )
  epmatwp = czero
  IF (mpime.eq.ionode_id) THEN
    ! SP: The call to epmatwp is now inside the loop
    ! This is important as otherwise the lrepmatw integer
    ! could become too large for integer(kind=4).
    ! Note that in Fortran the record length has to be a integer
    ! of kind 4.
    lrepmatw = 2 * nbndsub * nbndsub * nrr_k * nmodes
    filint = trim(prefix)//'.epmatwp'
    CALL diropn (iunepmatwp, 'epmatwp', lrepmatw, exst)
    DO irg = 1, nrr_g
      CALL davcio ( epmatwp(:,:,:,:,irg), lrepmatw, iunepmatwp, irg, -1 )
    ENDDO
    !
    CLOSE(iunepmatwp)
  ENDIF
  !
  CALL mp_bcast (epmatwp, ionode_id, inter_pool_comm)
  CALL mp_bcast (epmatwp, root_pool, intra_pool_comm)
  !
ENDIF
!
CALL mp_barrier(inter_pool_comm)
IF (mpime.eq.ionode_id) THEN
  CLOSE(epwdata)
  IF (vme) THEN
    CLOSE(iunvmedata)
  ELSE
    CLOSE(iundmedata)
  ENDIF
ENDIF
!
WRITE(stdout,'(/5x,"Finished reading Wann rep data from file"/)')
!
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
So it is clear that the read_ifc subroutine completed successfully, but the final write to standard output was never executed.
I use etf_mem = 0 in my epw.in input file, but I don't think this is a memory issue, because the memory on my supercomputer is large enough and the coarse grids are not that dense.
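One thing worth ruling out, given the source comment about lrepmatw possibly overflowing integer(kind=4), is whether the per-record length or the total epmatwp size crosses a 32-bit limit at the denser coarse grid. A minimal sketch of that check, assuming hypothetical placeholder dimensions (the real values of nbndsub, nrr_k, nmodes and nrr_g are printed in epw.out and should be substituted in):

```python
# Sketch: estimate the direct-access record length (lrepmatw) and the
# total epmatwp array size, to compare against 32-bit integer limits.
# The dimensions used in the example call below are HYPOTHETICAL
# placeholders, NOT taken from the actual calculation.

INT4_MAX = 2**31 - 1  # largest value an integer(kind=4) can hold

def check_epmatwp(nbndsub, nrr_k, nmodes, nrr_g):
    # Record length per q-space WS vector, as in the quoted source
    # (the factor 2 accounts for real and imaginary parts):
    lrepmatw = 2 * nbndsub * nbndsub * nrr_k * nmodes
    # Total size of the complex(8) array (16 bytes per element):
    n_elements = nbndsub * nbndsub * nrr_k * nmodes * nrr_g
    total_bytes = 16 * n_elements
    return {
        'lrepmatw': lrepmatw,
        'record_overflows_int4': lrepmatw > INT4_MAX,
        'total_GiB': total_bytes / 2**30,
        # mp_bcast of the full array can also hit the 2^31-1 element
        # count limit of many MPI implementations:
        'bcast_count_overflows': n_elements > INT4_MAX,
    }

# Example with made-up dimensions:
result = check_epmatwp(nbndsub=10, nrr_k=343, nmodes=9, nrr_g=343)
print(result)
```

If either overflow flag comes back true for the real dimensions, the silent stop after "Finished reading ifcs" would be consistent with the write/broadcast of epmatwp failing rather than a plain out-of-memory condition.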
I wonder whether there are other causes that could lead to this problem.
I would really appreciate your reply.