Dear Dr. Samuel,
I have emailed you my scf/nscf and EPW files.
Thank you
Jaya
error in MPI_FILE_OPEN
Re: error in MPI_FILE_OPEN
I recently installed qe-6.0 on a Linux cluster using intel/16.0.1.150 and openmpi/1.8.1 (compiled with the same Intel version) and got exactly the error reported in this thread. I then recompiled qe-6.0 and openmpi/1.8.1 with intel/14.0.2.144, and the error is gone.
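Roughly, this is what I did (the module names and configure options are specific to my cluster, so take them only as an example):

# switch to the older Intel compiler (cluster-specific module names)
module purge
module load intel/14.0.2.144
# rebuild openmpi/1.8.1 with that compiler first, then rebuild QE from a clean tree
cd qe-6.0
make veryclean
./configure MPIF90=mpif90 CC=icc F77=ifort
make pw ph epw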
Cheers,
Vahid
Vahid Askarpour
Department of Physics and Atmospheric Science
Dalhousie University
Halifax, NS Canada
Re: error in MPI_FILE_OPEN
Dear Vahid,
Thank you for letting us know.
I recall seeing a compiler bug in intel16. I usually wait until a few revisions (16.0.2 or so) before switching to a new compiler.
Best,
Samuel
Prof. Samuel Poncé
Chercheur qualifié F.R.S.-FNRS / Professeur UCLouvain
Institute of Condensed Matter and Nanosciences
UCLouvain, Belgium
Web: https://www.samuelponce.com
Re: error in MPI_FILE_OPEN
Dear Samuel and Jaya,
The issue appears when the number of processors differs between the (scf, nscf) runs and the EPW calculation. I am curious to know the maximum number of processors and nodes one can use for high-end calculations. For example, my HPC system has 40 nodes, each with 48 cores and 68 GB of RAM. I plan to run with nk (nq) = 16*16 (16*16) and nkf1 (nqf1) = 64*64 (64*64). How should I set up the calculation to avoid the memory issue? I have already attempted it several times (e.g. 6, 12, and 18 nodes, with 256 cores) and got stuck with the same memory problem.
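For reference, the kind of job layout I have in mind is sketched below (the task counts and mpirun options are only an example, for OpenMPI 1.8):

# undersubscribe the nodes so that each MPI task has more memory:
# e.g. 6 nodes x 8 tasks per node = 48 tasks, i.e. about 68/8 ~ 8.5 GB per task
# (as far as I understand, EPW parallelizes over k-point pools only, so -npool = number of tasks)
mpirun -np 48 --map-by ppr:8:node epw.x -npool 48 < epw.in > epw.out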
with regards
S. Appalakondaiah
Re: error in MPI_FILE_OPEN
Dear Samuel,
I have a similar problem to Jaya's. I tried EPW v4.0 and v4.1.
I have outdir set to './'.
My calculation creates prefix.wout correctly, but it does not create the al.epmatwp1 file, and epw.out gives the error:
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Error in routine diropn (115):
error opening ./prefix.epmatwp1
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Please let me know how I can solve this problem.
Thank you in advance,
Artur
Re: error in MPI_FILE_OPEN
Dear Artur,
If you use EPW v4.1, it should work; it was working for Jaya with 4.1.
However, I notice that you might have an issue with your prefix.
The prefix in EPW should be the same as in the scf and nscf calculations, so I think you should have "al.wout" and not "prefix.wout".
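A quick way to check, assuming your input files are called scf.in, nscf.in and epw.in:

# the prefix must be identical in the &control namelist of scf/nscf and in the &inputepw namelist of EPW
grep -i "prefix" scf.in nscf.in epw.in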
Best,
Samuel
Prof. Samuel Poncé
Chercheur qualifié F.R.S.-FNRS / Professeur UCLouvain
Institute of Condensed Matter and Nanosciences
UCLouvain, Belgium
Web: https://www.samuelponce.com
Re: error in MPI_FILE_OPEN
Dear Samuel,
Now I have the following situation with the same issue.
When I run the following calculations for phonons:
scf run for 16x16x16 K_POINTS; ph run for nq1=6, nq2=6, nq3=6
and the following calculations for EPW:
scf run for 8x8x8 K_POINTS; nscf run for 216 K_POINTS crystal; epw run for nk1=nk2=nk3=6, nq1=nq2=nq3=6, nqf1=nqf2=nqf3=24, nkf1=nkf2=nkf3=24, 16 cartesian
Everything is OK, I am getting good results.
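For completeness, the grid-related part of my epw.in for this working run is written in my job script roughly like this (all other variables in the namelist are omitted here):

cat > epw.in << 'EOF'
&inputepw
  ! coarse grids: match the ph.x q-grid and the uniform nscf k-grid (6^3 = 216 points)
  nk1 = 6,  nk2 = 6,  nk3 = 6
  nq1 = 6,  nq2 = 6,  nq3 = 6
  ! fine interpolation grids
  nkf1 = 24, nkf2 = 24, nkf3 = 24
  nqf1 = 24, nqf2 = 24, nqf3 = 24
/
EOF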
But when I try to increase the accuracy in the following way for the phonons:
scf run for 16x16x16 K_POINTS; ph run for nq1=8, nq2=8, nq3=8
and for EPW calculations:
scf run for 8x8x8 K_POINTS; nscf run for 512 K_POINTS crystal; epw run for nk1=nk2=nk3=8, nq1=nq2=nq3=8, nqf1=nqf2=nqf3=64, nkf1=nkf2=nkf3=64, 29 cartesian
I get the same error as before:
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
task # 0
from davcio : error # 115
error while writing from file "./prefix.epmatwp1"
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Do you have any idea what I'm doing wrong?
Best Regards,
Artur
Re: error in MPI_FILE_OPEN
Dear Artur,
It's likely due to a memory issue (not enough RAM).
Are you using etf_mem = .false.?
What is the size of your dvscf files on the 8x8x8 q-grid?
What is the maximum memory per core available on your cluster?
How many bands do you have in your system?
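A quick way to gather this information (the paths and file names below are only a guess at your setup):

du -sh save/*dvscf*                          # size of the dvscf files collected for EPW
free -g                                      # RAM available on the node
grep "number of Kohn-Sham states" nscf.out   # number of bands in the system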
Best,
Samuel
Prof. Samuel Poncé
Chercheur qualifié F.R.S.-FNRS / Professeur UCLouvain
Institute of Condensed Matter and Nanosciences
UCLouvain, Belgium
Web: https://www.samuelponce.com
Re: error in MPI_FILE_OPEN
Dear Samuel,
Thank you for your response. However, I don't think it is a memory problem: more than half of the RAM remains unused during the EPW calculations. The cluster I use for these test calculations has 64 GB of RAM installed.
I am not using etf_mem = .false.
The size of each dvscf file is 3.6 MiB.
During my calculations I use -np 10 and -npool 10.
I have 8 bands in my system.
Do you have any suggestion?
Best Regards,
Artur
Re: error in MPI_FILE_OPEN
Dear Artur,
Could you try again (restarting from a fresh calculation in a new directory) with etf_mem = .false.?
The way the files are created and read through MPI is different from the etf_mem = .true. case, so this could help.
Also make sure to use version 4.1 of EPW.
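Something along these lines (directory and file names are just an example):

# add etf_mem = .false. to the &inputepw namelist, then re-run in a clean directory
mkdir epw_etf_mem_false
cd epw_etf_mem_false
# copy your inputs here, redo the scf/nscf steps, then:
mpirun -np 10 epw.x -npool 10 < epw.in > epw.out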
Best,
Samuel
Prof. Samuel Poncé
Chercheur qualifié F.R.S.-FNRS / Professeur UCLouvain
Institute of Condensed Matter and Nanosciences
UCLouvain, Belgium
Web: https://www.samuelponce.com