Error in routine read_ephmat (1): Error allocating g2

Post here questions related to issues encountered while running the EPW code

Moderator: stiwari

hou

Error in routine read_ephmat (1): Error allocating g2

Post by hou »

A runtime error occurs when I calculate the superconducting gap with EPW v5.4. The error appears when I use fine grids of nk = 32*32*32 and nq = 16*16*16, whereas nk = 32*32*32 with nq = 8*8*8 runs without problems. Since I need a denser interpolation mesh to resolve the superconducting gap, how can I fix this error?

epw.in
--
&inputepw
prefix = 'XX',
amass(1) = X,
amass(2) = X,
outdir = './'
dvscf_dir = './save'

ep_coupling = .true. ! If .true. run e-ph coupling calculation.
elph = .true. ! If .true. calculate e-ph coefficients.

epwwrite = .true.
epwread = .false.

max_memlt = 15 ! Maximum memory (in Gb) that can be allocated per pool

etf_mem = 1

fermi_plot = .true. ! If .true., write Fermi surface files (.cube format, viewable in VESTA)

wannierize = .true.
nbndsub = 104,
bands_skipped = 'exclude_bands = 111:140'
num_iter = 1000
dis_froz_min= 1
dis_froz_max= 12.5
proj(1) = 'B:s;p'
wdata(1) = 'num_bands = 110'
wdata(2) = 'dis_num_iter = 1000'
wdata(3) = 'dis_win_min = -8.8'
wdata(4) = 'dis_win_max = 25'
wdata(5) = 'bands_plot = .true.'
wdata(6) = 'begin kpoint_path'
wdata(7) = 'X 0.500 -0.500 0.500 G 0.000 0.000 0.000'
wdata(8) = 'G 0.000 0.000 0.000 R 0.000 0.500 0.000'
wdata(9) = 'R 0.000 0.500 0.000 W 0.250 0.250 0.250'
wdata(10) = 'W 0.250 0.250 0.250 S 0.500 0.000 0.000'
wdata(11) = 'S 0.500 0.000 0.000 G 0.000 0.000 0.000'
wdata(12) = 'G 0.000 0.000 0.000 T 0.000 0.000 0.500'
wdata(13) = 'T 0.000 0.000 0.500 W 0.250 0.250 0.250'
wdata(14) = 'end kpoint_path'
wdata(15) = 'bands_plot_format = gnuplot'


iverbosity = 2

ephwrite = .true. ! Writes .ephmat files used when eliashberg = .true.

fsthick = 0.5 ! eV
degaussw = 0.05 ! eV
degaussq = 0.05 ! meV

eliashberg = .true. ! If .true., solve the Eliashberg equations and/or calculate the Eliashberg spectral function

laniso = .true.
limag = .true.
lpade = .true.

npade = 40

conv_thr_iaxis = 1.0d-3

wscut = 0.5 ! eV; upper limit of the frequency integration/summation in the Eliashberg equations

nstemp = 1 ! Nr. of temps
temps = 40 ! K; provide a list of temperatures, OR nstemp with temps = tempsmin tempsmax for evenly spaced mode

nsiter = 800

muc = 0.10

nk1 = 4
nk2 = 4
nk3 = 4

nq1 = 4
nq2 = 4
nq3 = 4

mp_mesh_k = .true.
nkf1 = 32
nkf2 = 32
nkf3 = 32

nqf1 = 16
nqf2 = 16
nqf3 = 16
/


epw.out

Band disentanglement is used: nbndsub = 104
Use zone-centred Wigner-Seitz cells
Number of WS vectors for electrons 79
Number of WS vectors for phonons 79
Number of WS vectors for electron-phonon 79
Maximum number of cores for efficient parallelization 7110
Results may improve by using use_ws == .TRUE.

Velocity matrix elements calculated


Bloch2wane: 1 / 64
Bloch2wane: 2 / 64
...
Bloch2wane: 63 / 64
Bloch2wane: 64 / 64

Bloch2wanp: 1 / 2
Bloch2wanp: 2 / 2

Writing Hamiltonian, Dynamical matrix and EP vertex in Wann rep to file

===================================================================
Memory usage: VmHWM = 6472Mb
VmPeak = 7217Mb
===================================================================

Using uniform q-mesh: 16 16 16
Size of q point mesh for interpolation: 4096
Using uniform MP k-mesh: 32 32 32
Size of k point mesh for interpolation: 9010
Max number of k points per pool: 188

Fermi energy coarse grid = 6.273208 eV

Fermi energy is calculated from the fine k-mesh: Ef = 6.243891 eV

===================================================================

ibndmin = 43 ebndmin = 5.744 eV
ibndmax = 48 ebndmax = 6.743 eV


Number of ep-matrix elements per pool : 304560 ~= 2.32 Mb (@ 8 bytes/ DP)
Number selected, total 100 100
Number selected, total 200 200
...
Number selected, total 3900 3900
Number selected, total 4000 4000
We only need to compute 4096 q-points


Nr. of irreducible k-points on the uniform grid: 4505


Finish mapping k+sign*q onto the fine irreducible k-mesh and writing .ikmap file


Nr irreducible k-points within the Fermi shell = 4505 out of 4505

Progression iq (fine) = 100/ 4096
Progression iq (fine) = 200/ 4096
...
Progression iq (fine) = 3900/ 4096
Progression iq (fine) = 4000/ 4096
Fermi level (eV) = 0.624389061127495D+01
DOS(states/spin/eV/Unit Cell) = 0.383575213922023D+01
Electron smearing (eV) = 0.500000000000000D-01
Fermi window (eV) = 0.500000000000000D+00

Finish writing .ephmat files

===================================================================
Memory usage: VmHWM = 6472Mb
VmPeak = 7217Mb
===================================================================

Fermi surface calculation on fine mesh
Fermi level (eV) = 6.243891
6 bands within the Fermi window


===================================================================
Solve anisotropic Eliashberg equations
===================================================================


Finish reading freq file

Fermi level (eV) = 6.2438906113E+00
DOS(states/spin/eV/Unit Cell) = 3.8357521392E+00
Electron smearing (eV) = 5.0000000000E-02
Fermi window (eV) = 5.0000000000E-01
Nr irreducible k-points within the Fermi shell = 4505 out of 4505

6 bands within the Fermi window


Finish reading egnv file


Max nr of q-points = 4096


Finish reading ikmap files


Size of allocated memory per pool: ~= 9.4353 Gb

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Error in routine read_ephmat (1):
Error allocating g2
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

stopping ...

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Error in routine read_ephmat (1):
Error allocating g2
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

stopping ...

job.sh

#!/bin/bash
#SBATCH -J pwscf
#SBATCH -N 1
#SBATCH -n 48
#SBATCH -p max
#SBATCH --error=%J.err
#SBATCH --output=%J.out
#SBATCH --reservation=baij_18
module load intel/2017u5
mpirun /data/software/qe/qe6.8/bin/pw.x -npool 12 < scf.in > scf.out
mpirun /data/software/qe/qe6.8/bin/pw.x -npool 12 < nscf.in > nscf.out
mpirun /data/software/qe/qe6.8/bin/epw.x -npool 48 < epw.in > epw.out
gkafle1
Posts: 31
Joined: Wed Jun 17, 2020 8:55 pm
Affiliation: Binghamton University

Re: Error in routine read_ephmat (1): Error allocating g2

Post by gkafle1 »

Hi hou,

This is a memory problem: your calculation requires 9.4353 Gb per pool. Please request more memory from your computing facility, for example by submitting the job to a large-memory partition or by increasing the number of nodes.
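For instance (a minimal sketch, not a tested script: the partition name and the node/task split are assumptions you would adapt to your cluster), spreading the same 48 MPI ranks over more nodes gives each rank more memory headroom:

#!/bin/bash
#SBATCH -J epw
#SBATCH -N 4                  # spread the same 48 ranks over 4 nodes ...
#SBATCH --ntasks-per-node=12  # ... so each rank has roughly 4x the headroom
#SBATCH -p bigmem             # hypothetical large-memory partition; use your site's name
#SBATCH --mem=0               # ask SLURM for all the memory available on each node
module load intel/2017u5
mpirun /data/software/qe/qe6.8/bin/epw.x -npool 48 < epw.in > epw.out

Since each pool must hold its ~9.4 Gb share regardless, the point is to reduce the number of ranks competing for each node's memory, not to change -npool.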

Thanks!

Gyanu
hou

Re: Error in routine read_ephmat (1): Error allocating g2

Post by hou »

Dear Gyanu,

Thank you very much for your reply. I increased the number of nodes as you suggested, which resolved the error above. However, when I continued the run I encountered another error that also seems to be a memory problem, even though it only requires 4.249 Gb per pool. The nodes I am currently using should satisfy that requirement. What is the cause of this error, and how can I fix it?

job.sh

#!/bin/bash
#SBATCH -J pwscf
#SBATCH -N 6
#SBATCH -n 64
#SBATCH -p own
#SBATCH --error=%J.err
#SBATCH --output=%J.out
#SBATCH --reservation=baij_17
module load intel/2017u5
#mpirun /data/home/houjy/qe6.8/bin/pw.x -npool 12 < scf.in > scf.out
#mpirun /data/home/houjy/qe6.8/bin/pw.x -npool 12 < nscf.in > nscf.out
mpirun /data/home/houjy/qe6.8/bin/epw.x -npool 64 < epw.in > epw.out
scontrol show job $SLURM_JOBID

Each node in the "own" partition has about 734 Gb of memory.
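As a rough sanity check (a minimal sketch using only the numbers quoted in this thread; the even placement of 64 tasks across 6 nodes is an assumption about how SLURM distributes them):

# Back-of-envelope per-node memory use from the figures above.
tasks=64; nodes=6     # from job.sh
per_pool_gb=4.249     # the allocation that fails (per pool)
resident_gb=6.241     # VmPeak per task before that allocation (epw.out)
awk -v t="$tasks" -v n="$nodes" -v p="$per_pool_gb" -v r="$resident_gb" 'BEGIN {
  tpn = t / n         # ~10.7 tasks per node
  printf "~%.1f tasks/node -> ~%.0f Gb/node, well below 734 Gb\n", tpn, tpn * (p + r)
}'

This naive estimate suggests the nodes should be nowhere near full, which is why the failure is puzzling.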

epw.out


Finish writing .ephmat files

===================================================================
Memory usage: VmHWM = 5421Mb
VmPeak = 6241Mb
===================================================================

Fermi surface calculation on fine mesh
Fermi level (eV) = 6.245248
6 bands within the Fermi window


===================================================================
Solve anisotropic Eliashberg equations
===================================================================


Finish reading freq file

Fermi level (eV) = 6.2452482117E+00
DOS(states/spin/eV/Unit Cell) = 3.8229666774E+00
Electron smearing (eV) = 5.0000000000E-02
Fermi window (eV) = 5.0000000000E-01
Nr irreducible k-points within the Fermi shell = 4505 out of 4505

6 bands within the Fermi window


Finish reading egnv file


Max nr of q-points = 4096


Finish reading ikmap files


Start reading .ephmat files


Finish reading .ephmat files

Electron-phonon coupling strength = 0.4222770

Estimated Allen-Dynes Tc = 22.859198 K for muc = 0.10000

Estimated w_log in Allen-Dynes Tc = 30.449699 meV

Estimated BCS superconducting gap = 6.500243 meV


WARNING WARNING WARNING

The code may crash since tempsmax = 80.000 K is larger than Allen-Dynes Tc = 22.859 K

temp( 1) = 5.00000 K

Solve anisotropic Eliashberg equations on imaginary-axis

Total number of frequency points nsiw( 1) = 185
Cutoff frequency wscut = 0.5022


Size of allocated memory per pool: ~= 4.2490 Gb

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Error in routine kernel_aniso_iaxis (1):
Error allocating akeri
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

stopping ...

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Error in routine kernel_aniso_iaxis (1):
Error allocating akeri
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
gkafle1
Posts: 31
Joined: Wed Jun 17, 2020 8:55 pm
Affiliation: Binghamton University

Re: Error in routine read_ephmat (1): Error allocating g2

Post by gkafle1 »

Hi Hou,

Again, it looks like a memory issue. Can you please increase the number of nodes a little further?
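One way to check whether the nodes really ran out of memory (a sketch using standard SLURM/Linux tools; which accounting fields are populated depends on your site's configuration, and <jobid> is a placeholder):

# Inside the job script: rule out a shell-imposed virtual-memory cap.
ulimit -v          # "unlimited" means no per-process cap from the shell
# After the job fails: inspect the recorded memory high-water marks.
sacct -j <jobid> --format=JobID,MaxRSS,MaxVMSize,State,ExitCode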

Thanks!

Gyanu