Dear all,
My compound has 32 atoms per unit cell. In my calculation the coarse k- and q-meshes are 4x4x4.
64 MLWFs are used; I have tried to use fewer MLWFs, but failed. The parameter etf_mem is set to .false.
The .epb files are written to disk successfully, but a "segmentation fault" error always appears, regardless of how much memory is used. I am using EPW version 4.2.
For example,
(1) with 15 GB of memory per pool, the last lines in epw.out are
" The .epb files have been correctly written
band disentanglement is used: nbndsub = 64",
(2) with 64 GB of memory per pool, the last lines in epw.out are
" band disentanglement is used: nbndsub = 64
Writing Hamiltonian, Dynamical matrix and EP vertex in Wann rep to file".
Does this indicate that there is not enough memory for epw.x? Could you please give me some advice?
By the way, how can I estimate the memory used by epw.x?
Any help will be highly appreciated! Thanks in advance!
Miao Gao
Re: Segmentation fault for large unit cell
Dear Miao Gao,
From what you write, the X.epmatwp1 file has been produced. What is the size of that file?
Even if the file is large, this should not be an issue. You can restart the calculation by setting epwread = .true. together with etf_mem = .false.
In that case, each core reads only part of the file, depending on the number of cores you have.
If your X.epmatwp1 is 20 GB and you have 20 cores, then each core will read 1 GB of it. Note that each core still has to store other quantities, so in practice it will be closer to 1.5-2 GB per core.
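As a quick back-of-the-envelope sketch of that arithmetic (the 1.5-2x factor is only the rule of thumb above, not an exact EPW accounting, and the helper below is purely illustrative):

```python
# Rough per-core memory when restarting with epwread = .true. and etf_mem = .false.
# Each core reads its share of X.epmatwp1, plus roughly 1.5-2x for other arrays.

def per_core_memory_gb(epmatwp_size_gb, n_cores, overhead=1.75):
    """Sketch of the rule of thumb: (file size / cores) times an overhead factor."""
    return epmatwp_size_gb / n_cores * overhead

# Example from above: a 20 GB X.epmatwp1 file spread over 20 cores
print(per_core_memory_gb(20.0, 20))   # ~1.75 GB per core, i.e. between 1.5 and 2 GB
```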
So the solution is to use more nodes and/or nodes with more memory.
You can always log on to the nodes during runtime to see how much memory they are consuming.
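As for estimating the memory used by epw.x up front, the size of X.epmatwp1 gives a first idea of the scale. The sketch below assumes the Wannier-representation vertex is stored as a complex double-precision array of roughly nbndsub x nbndsub x nrr_k x nmodes x nrr_q elements, with nrr_k and nrr_q taken here to be of the order of the number of coarse k- and q-points (in practice they can be somewhat larger); treat it as an order-of-magnitude estimate, not an exact figure.

```python
# Rough size estimate for X.epmatwp1 (assumption: complex(8) array of shape
# nbndsub x nbndsub x nrr_k x nmodes x nrr_q; nrr_k and nrr_q approximated here
# by the number of coarse-mesh points).

def epmatwp_size_gb(nbndsub, nat, nk_coarse, nq_coarse):
    nmodes = 3 * nat            # phonon branches
    bytes_per_cplx = 16         # complex double precision
    n_bytes = nbndsub**2 * nk_coarse * nmodes * nq_coarse * bytes_per_cplx
    return n_bytes / 1024**3

# Numbers quoted in the question: 64 Wannier functions, 32 atoms, 4x4x4 meshes
print(epmatwp_size_gb(nbndsub=64, nat=32, nk_coarse=64, nq_coarse=64))  # ~24 GB
```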
Best,
Samuel
Prof. Samuel Poncé
F.R.S.-FNRS Research Associate / Professor at UCLouvain
Institute of Condensed Matter and Nanosciences
UCLouvain, Belgium
Web: https://www.samuelponce.com