error in MPI_FILE_SET_VIEW

Post here questions related to issues encountered while running the EPW code

Moderator: stiwari

huebener

error in MPI_FILE_SET_VIEW

Post by huebener »

Hi,

I am running EPW v.4.0.0 with etf_mem = .false. and encounter the following CRASH message:

Code: Select all

     task #         9
     from ephwan2blochp : error #  1
     error in MPI_FILE_SET_VIEW


The error does not seem to occur on task 0, though. A smaller job with identical settings runs without problems. My input files for bulk WSe2 are given below.
The MPI installation is Intel MPI 5.1.1, running on a Cray cluster with Intel Xeon processors.

Any help would be much appreciated.

Cheers,
Hannes

My EPW input is:

&inputepw
prefix = 'wse'
amass(1) = 183.84
amass(2) = 78.96
outdir = './'

iverbosity = 0
elph = .true.
ep_coupling = .true.
epbwrite = .true.
! epbread = .true.

epwwrite = .true.
! epwread = .true.
! kmaps = .true.

nbndsub = 22
nbndskip = 0

wannierize = .true.
num_iter = 500
proj(1) = 'Se:p'
proj(2) = 'W:d'

etf_mem = .false.
elinterp = .true.
phinterp = .true.

tshuffle2 = .true.

elecselfen = .true.
phonselfen = .false.
a2f = .false.

parallel_k = .true.
parallel_q = .false.

eptemp = 300
degaussw = 0.1 ! eV

dvscf_dir = '../phonons_8_8_2/save'
band_plot = .true.
filukk = './wse.ukk'
! filqf = './meshes/path.dat'
filkf = './meshes/path.dat'
filelph = './filelph'
nkf1 = 100
nkf2 = 100
nkf3 = 1

nk1 = 8
nk2 = 8
nk3 = 2

nqf1 = 80
nqf2 = 80
nqf3 = 20

nq1 = 8
nq2 = 8
nq3 = 2
/
20 cartesian
0.0000000 0.0000000 0.0000000 0.0156250
0.0000000 0.0000000 -0.1264255 0.0156250
0.0000000 0.1443376 0.0000000 0.0937500
0.0000000 0.1443376 -0.1264255 0.0937500
0.0000000 0.2886751 0.0000000 0.0937500
0.0000000 0.2886751 -0.1264255 0.0937500
0.0000000 0.4330127 0.0000000 0.0937500
0.0000000 0.4330127 -0.1264255 0.0937500
0.0000000 -0.5773503 0.0000000 0.0468750
0.0000000 -0.5773503 -0.1264255 0.0468750
0.1250000 0.2165064 0.0000000 0.0937500
0.1250000 0.2165064 -0.1264255 0.0937500
0.1250000 0.3608439 0.0000000 0.1875000
0.1250000 0.3608439 -0.1264255 0.1875000
0.1250000 0.5051815 0.0000000 0.1875000
0.1250000 0.5051815 -0.1264255 0.1875000
0.2500000 0.4330127 0.0000000 0.0937500
0.2500000 0.4330127 -0.1264255 0.0937500
0.2500000 0.5773503 0.0000000 0.0937500
0.2500000 0.5773503 -0.1264255 0.0937500

The underlying scf calculation for bulk WSe2 is done with:

&CONTROL
title = ' wse ',
calculation = 'scf',
prefix = 'wse'
pseudo_dir = '/u/hhueb/QE_PSEUDOS/'
outdir = '.'
disk_io = 'low'
wf_collect= .true.
/

&SYSTEM
ecutwfc = 100.
ibrav = 4,
celldm(1)=6.202080695
celldm(3)=3.954898558
nat = 6,
ntyp = 2,
/

&ELECTRONS
diagonalization='david'
mixing_mode = 'plain'
mixing_beta = 0.7
conv_thr = 1.0d-10
/

K_POINTS automatic
8 8 2 0 0 0

ATOMIC_SPECIES
W 183.84 W.pbe-hgh.UPF
Se 78.96 Se.pbe-hgh.UPF

ATOMIC_POSITIONS (crystal)
W 1/3 2/3 1/4
W -1/3 -2/3 -1/4
Se 1/3 2/3 0.621
Se -1/3 -2/3 -0.621
Se 2/3 1/3 1.121
Se -2/3 -1/3 -1.121
sponce
Site Admin
Posts: 616
Joined: Wed Jan 13, 2016 7:25 pm
Affiliation: EPFL

Re: error in MPI_FILE_SET_VIEW

Post by sponce »

Hello Hannes,

I usually have to fight when it comes to impi.

The easiest solution (if it's possible on your cluster) would be to re-compile with openmpi.

In the meantime I'll take a look, as one of the bots on the test farm has impi 5.1.

Best,

Samuel
Prof. Samuel Poncé
F.R.S.-FNRS Research Associate / Professor at UCLouvain
Institute of Condensed Matter and Nanosciences
UCLouvain, Belgium
Web: https://www.samuelponce.com
huebener

Re: error in MPI_FILE_SET_VIEW

Post by huebener »

Hi Samuel,

I have now tried a build with openmpi and gfortran, and the error persists. Did you find any hint on the test farm as to what the problem might be?

Would it be a viable strategy to use etf_mem = .true. and increase the memory per core, or is this a hopeless scaling?

Cheers and thank you very much for your help,
Hannes
sponce
Site Admin
Posts: 616
Joined: Wed Jan 13, 2016 7:25 pm
Affiliation: EPFL

Re: error in MPI_FILE_SET_VIEW

Post by sponce »

Dear Hannes,

Are you using EPW from QE-5.4.0?
If so, it should definitely work with gfortran and openmpi.

Since that version, I optimized that routine a bit:

Code: Select all

  !---------------------------------------------------------------------------
  subroutine ephwan2blochp ( nmodes, xxq, irvec, ndegen, nrr_q, cuf, epmatf, nbnd, nrr_k )
  !---------------------------------------------------------------------------
  !
  ! even though this is for phonons, I use the same notations
  ! adopted for the electronic case (nmodes->nmodes etc)
  !
  USE kinds,         only : DP
  USE epwcom,        only : parallel_k, parallel_q, etf_mem
  USE io_epw,        only : iunepmatwp
  USE elph2,         only : epmatwp
  USE constants_epw, ONLY : twopi, ci, czero
  USE io_global,     ONLY : ionode
  USE io_files,      ONLY : prefix, tmp_dir
#ifdef __PARA
  USE mp_global,     ONLY : inter_pool_comm, intra_pool_comm, mp_sum
  USE mp_world,      ONLY : world_comm
  USE parallel_include
#endif
  implicit none
  !
  !  input variables
  !
  integer :: nmodes, nrr_q, irvec ( 3, nrr_q), ndegen (nrr_q), nbnd, nrr_k
  ! number of bands (possibly in the optimal subspace)
  ! number of WS points
  ! coordinates of WS points
  ! degeneracy of WS points
  ! n of bands
  ! n of electronic WS points
  complex(kind=DP), allocatable :: epmatw ( :,:,:,:)
  complex(kind=DP) :: cuf (nmodes, nmodes) 
  ! e-p matrix in Wannier representation
  ! rotation matrix U(k)
  real(kind=DP) :: xxq(3)
  ! kpoint for the interpolation (WARNING: this must be in crystal coord!)
  !
  !  output variables
  !
  complex(kind=DP) :: epmatf (nbnd, nbnd, nrr_k, nmodes)
  ! e-p matrix in Bloch representation, fine grid
  !
  ! work variables
  !
  character (len=256) :: filint
  character (len=256) :: string
  logical :: exst
  integer :: ibnd, jbnd, ir, ire, ir_start, ir_stop, imode,iunepmatwp2,ierr, i
  integer ::  ip , test !, my_id
  integer(kind=8) ::  lrepmatw,  lrepmatw2
  real(kind=DP) :: rdotk
  complex(kind=DP) :: eptmp( nbnd, nbnd, nrr_k, nmodes)
  complex(kind=DP) :: cfac(nrr_q)
  complex(kind=DP):: aux( nbnd*nbnd*nrr_k*nmodes )
  !
  CALL start_clock('ephW2Bp')
  !----------------------------------------------------------
  !  STEP 3: inverse Fourier transform of g to fine k mesh
  !----------------------------------------------------------
  !
  !  g~ (k') = sum_R 1/ndegen(R) e^{-ik'R} g (R)
  !
  !  g~(k') is epmatf (nmodes, nmodes, ik )
  !  every pool works with its own subset of k points on the fine grid
  !
  IF (parallel_k) THEN
     CALL para_bounds(ir_start, ir_stop, nrr_q)
  ELSEIF (parallel_q) THEN
     ir_start = 1
     ir_stop  = nrr_q
  ELSE
     CALL errore ('ephwan2blochp', 'Problem with parallel_k/q scheme', nrr_q)
  ENDIF
  !
#ifdef __PARA
  IF (.NOT. etf_mem) then
    ! Open the prefix.epmatwp1 file, in the directory given by "outdir", for parallel MPI-IO read
    !     
    filint = trim(tmp_dir)//trim(prefix)//'.epmatwp1'
    CALL MPI_FILE_OPEN(world_comm,filint,MPI_MODE_RDONLY,MPI_INFO_NULL,iunepmatwp2,ierr)
    IF( ierr /= 0 ) CALL errore( 'ephwan2blochp', 'error in MPI_FILE_OPEN',1 )
    IF( parallel_q ) CALL errore( 'ephwan2blochp', 'q-parallel+etf_mem=.false. is not supported',1 )
    !CALL MPI_COMM_RANK(world_comm,my_id,ierr)
  ENDIF
#endif
  !
  eptmp = czero
  cfac(:) = czero
  !
  DO ir = ir_start, ir_stop
     !   
     ! note xxq is assumed to be already in cryst coord
     !
     rdotk = twopi * dot_product ( xxq, dble(irvec(:, ir)) )
     cfac(ir) = exp( ci*rdotk ) / dble( ndegen(ir) )
  ENDDO
  !
  IF (etf_mem) then
    !DO ir = ir_start, ir_stop
    !  eptmp(:,:,:,:) = eptmp(:,:,:,:) +&
    !    cfac(ir)*epmatwp( :, :, :, :, ir)
    !ENDDO
    ! SP: This is faster by 20 %
    Call zgemv( 'n',  nbnd * nbnd * nrr_k * nmodes, ir_stop - ir_start + 1, ( 1.d0, 0.d0 ),&
             epmatwp(1,1,1,1,ir_start), nbnd * nbnd * nrr_k * nmodes, cfac(ir_start),1,( 0.d0, 0.d0),eptmp, 1 )   
    !
  ELSE
    !
    ALLOCATE(epmatw ( nbnd, nbnd, nrr_k, nmodes))
    !
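    ! Number of double-precision words per ir record:
    ! 2 (real + imaginary part) for each of the nbnd*nbnd*nrr_k*nmodes complex entries.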
    lrepmatw2   = 2 * nbnd * nbnd * nrr_k * nmodes
    !
    DO ir = ir_start, ir_stop
#ifdef __PARA
      ! DEBUG: print*,'Process ',my_id,' do ',ir,'/ ',ir_stop
      !
      !  Direct read of epmatwp for this ir
      lrepmatw   = 2 * nbnd * nbnd * nrr_k * nmodes * 8 * (ir-1)
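      ! Byte offset: 16 bytes (= 2 * 8) per complex(DP) entry, times the record size
      !              nbnd*nbnd*nrr_k*nmodes, times the number of preceding records (ir-1).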
      ! SP: MPI_FILE_SEEK is used to set the position at which we should start
      ! reading the file. The offset is given in bytes.
      ! Note : The read can be collective (every process must take part in every call)
      !        using MPI_FILE_SET_VIEW & MPI_FILE_READ_ALL, or independent (noncollective)
      !        using MPI_FILE_SEEK & MPI_FILE_READ.
      !        Here we want the independent version because not all processes have the same number of ir values.
      !
      CALL MPI_FILE_SEEK(iunepmatwp2,lrepmatw,MPI_SEEK_SET,ierr)
      IF( ierr /= 0 ) CALL errore( 'ephwan2blochp', 'error in MPI_FILE_SEEK',1 )
      !CALL MPI_FILE_READ(iunepmatwp2, aux, lrepmatw2, MPI_DOUBLE_PRECISION, MPI_STATUS_IGNORE,ierr)
      CALL MPI_FILE_READ(iunepmatwp2, epmatw, lrepmatw2, MPI_DOUBLE_PRECISION, MPI_STATUS_IGNORE,ierr)
      IF( ierr /= 0 ) CALL errore( 'ephwan2blochp', 'error in MPI_FILE_READ',1 )
      !

#else     
      call rwepmatw ( epmatw, nbnd, nrr_k, nmodes, ir, iunepmatwp, -1)
#endif
      !
      !eptmp = eptmp + cfac(ir)*epmatw
      CALL ZAXPY(nbnd * nbnd * nrr_k * nmodes, cfac(ir), epmatw, 1, eptmp, 1)
      !
    ENDDO
    DEALLOCATE(epmatw)
  ENDIF
  !
#ifdef __PARA
  IF (parallel_k) CALL mp_sum(eptmp, world_comm)
  IF (.NOT. etf_mem) then
    CALL MPI_FILE_CLOSE(iunepmatwp2,ierr)
    IF( ierr /= 0 ) CALL errore( 'ephwan2blochp', 'error in MPI_FILE_CLOSE',1 )
  ENDIF 
#endif
  !
  !----------------------------------------------------------
  !  STEP 4: un-rotate to Bloch space, fine grid
  !----------------------------------------------------------
  !
  ! epmatf(j) = sum_i eptmp(i) * uf(i,j)
  !
  Call zgemm( 'n', 'n', nbnd * nbnd * nrr_k, nmodes, nmodes, ( 1.d0, 0.d0 ),eptmp  , nbnd * nbnd * nrr_k, &
                                                                                    cuf, nmodes         , &
                                                             ( 0.d0, 0.d0 ),epmatf, nbnd * nbnd * nrr_k )

  !
  CALL stop_clock('ephW2Bp')
  !
  end subroutine ephwan2blochp



This should be faster, but it will probably not solve your problem.

You can go with etf_mem = .true., but depending on your system size it might be memory demanding.
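To get a rough feeling for the memory requirement: with etf_mem = .true., as far as I understand, every pool keeps the full epmatwp(nbnd, nbnd, nrr_k, nmodes, nrr_q) array in memory (the array read record by record in the routine above). A minimal sketch of the estimate, where the nrr_k and nrr_q values are made-up placeholders (take the real ones from your EPW output):

Code: Select all

  program estimate_epmatwp_memory
    ! Rough estimate of the memory needed per pool for
    ! epmatwp(nbnd, nbnd, nrr_k, nmodes, nrr_q) when etf_mem = .true.
    ! The nrr_k and nrr_q values below are illustrative placeholders;
    ! read the real ones from your EPW output.
    implicit none
    integer, parameter :: nbnd   = 22    ! nbndsub from the EPW input above
    integer, parameter :: nmodes = 18    ! 3 * nat (6 atoms in bulk WSe2)
    integer, parameter :: nrr_k  = 200   ! placeholder: number of electronic WS vectors
    integer, parameter :: nrr_q  = 150   ! placeholder: number of phonon WS vectors
    real :: gib
    ! 16 bytes per double-precision complex entry
    gib = 16.0 * real(nbnd)**2 * real(nrr_k) * real(nmodes) * real(nrr_q) / 1024.0**3
    print '(a,f8.2,a)', 'epmatwp needs about ', gib, ' GiB per pool'
  end program estimate_epmatwp_memory

With placeholder numbers of that order, epmatwp alone is already a few GiB per pool.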

Best,

Samuel
Prof. Samuel Poncé
F.R.S.-FNRS Research Associate / Professor at UCLouvain
Institute of Condensed Matter and Nanosciences
UCLouvain, Belgium
Web: https://www.samuelponce.com
huebener

Re: error in MPI_FILE_SET_VIEW

Post by huebener »

Hi Samuel,

Thank you for the optimised routine. Unsurprisingly, the problem persists even when I use gfortran and openmpi. Note that the problem occurs for large calculations, but smaller ones (where etf_mem = .false. is not needed) run fine.

Another question related to this: when using etf_mem = .false. the code is slower and in many cases does not finish in 24h. Since everything is written to file, I guess it should be possible to restart the loop over q-points (the one that prints "Progression iq (fine) = ..."), but the code doesn't do that automatically. Is there such an option?

Cheers and thank you very much for your help.

Best,
Hannes
sponce
Site Admin
Posts: 616
Joined: Wed Jan 13, 2016 7:25 pm
Affiliation: EPFL

Re: error in MPI_FILE_SET_VIEW

Post by sponce »

Hello Hannes,

If you can avoid etf_mem = .false. (i.e. if you have enough memory per node), it is indeed best to use etf_mem = .true.

There is currently no such restart point in the q-loop.

What you can do, however, is run calculations with smaller q-grids and average the results.
You can, for example, use random q-point generation with http://epw.org.uk/Documentation/Inputs#rand_q
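For reference, a minimal sketch of the corresponding &inputepw lines (rand_q and rand_nq are the input variables documented at the link above; the number of points is only an illustrative value):

Code: Select all

  rand_q  = .true.
  rand_nq = 50000   ! number of random fine q-points, used instead of a uniform nqf1/nqf2/nqf3 grid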

However, be careful: not all physical quantities can be averaged. If the quantity is a plain sum over q (i.e. the q-sum appears in the numerator), then it is fine to do two calculations with half the number of q-points each and take the average.
You can test this with http://epw.org.uk/Documentation/Inputs#filqf
For example, provide 100 q-points and do the calculation, then do two calculations with the first 50 and the last 50 q-points.
The average of the two calculations should give exactly the same result as the one with 100 q-points.
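As a schematic illustration of why the average works for a plain sum over q (assuming uniform weights; this is only bookkeeping, not EPW's exact expressions):

Code: Select all

  full run :  S   = (1/100) * sum_{q=1..100}  F(q)
  run A    :  S_A = (1/50)  * sum_{q=1..50}   F(q)
  run B    :  S_B = (1/50)  * sum_{q=51..100} F(q)
  average  :  (S_A + S_B)/2 = (1/100) * sum_{q=1..100} F(q) = S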

Hope that helps,

Best,

Samuel
Prof. Samuel Poncé
F.R.S.-FNRS Research Associate / Professor at UCLouvain
Institute of Condensed Matter and Nanosciences
UCLouvain, Belgium
Web: https://www.samuelponce.com
huebener

Re: error in MPI_FILE_SET_VIEW

Post by huebener »

Hi Samuel,

Coming back to the original runtime error:
I was able to make it go away (though not to solve it) by using impi (5.1.1) together with ifort (or rather mpiifort, which seems to be shipped with impi). The variant you suggested, openmpi+gfortran, didn't work for me, though I have only been able to test it on one cluster.

Thanks,
Hannes
sponce
Site Admin
Posts: 616
Joined: Wed Jan 13, 2016 7:25 pm
Affiliation: EPFL

Re: error in MPI_FILE_SET_VIEW

Post by sponce »

Thank you for letting us know Hannes.

I will be trying to free some time this week to look at this issue.

Best,

Samuel
Prof. Samuel Poncé
F.R.S.-FNRS Research Associate / Professor at UCLouvain
Institute of Condensed Matter and Nanosciences
UCLouvain, Belgium
Web: https://www.samuelponce.com