You currently have "max_memlt = 18.0d0" in your input, which sets the maximum amount of memory (in GB) that can be allocated per pool. Is this value really the maximum memory available per core on the cluster you are using? The code crashed because the memory requirement is approximately 172 GB per pool, which exceeds that limit. I recommend verifying the usable memory on your system and adjusting "max_memlt" accordingly. Additionally, you may consider reducing some parameters, such as the fine k/q meshes, fsthick, and wscut, to lower the memory usage.
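To see why those particular parameters help, here is a rough sketch of how the anisotropic kernel memory scales. This is an illustrative scaling estimate only, not EPW's exact formula (that lives in its `mem_size_eliashberg` routine); the function name and all numbers below are hypothetical.

```python
# Illustrative scaling of the pairing-kernel memory in an anisotropic
# Eliashberg calculation: (k-points in the Fermi window x bands)^2 times
# the number of Matsubara frequencies, stored in double precision (8 bytes).
# Narrowing fsthick shrinks nk_fs/nbnd_fs; lowering wscut shrinks nfreq.

def kernel_memory_gb(nk_fs: int, nbnd_fs: int, nfreq: int, npool: int = 1) -> float:
    """Very rough per-pool memory estimate (GB) for the pairing kernel."""
    n_states = nk_fs * nbnd_fs             # states inside the Fermi window
    bytes_total = n_states**2 * nfreq * 8  # double-precision kernel entries
    return bytes_total / npool / 1024**3

# Halving the number of k-points in the window cuts the estimate ~4x,
# because the kernel is quadratic in the number of states:
full = kernel_memory_gb(nk_fs=20000, nbnd_fs=4, nfreq=200, npool=8)
half = kernel_memory_gb(nk_fs=10000, nbnd_fs=4, nfreq=200, npool=8)
print(round(full / half, 1))  # -> 4.0
```

The quadratic dependence on the fine mesh is why coarsening nkf/nqf is usually the most effective lever, while fsthick and wscut give a further linear-to-quadratic reduction.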
Yes, the maximum memory available per core on my system is 18 GB; I can't use more than that. I will rerun the calculations with the reduced parameters you suggested. However, is there any other parameter that can reduce the memory usage, such as `etf_mem`?
First, you can try removing the max_memlt tag from the input. The kernel vertex will then be computed on the fly instead of being read from file every time. Give it a try and see if that solves your issue.
Second, etf_mem = 1 is the default, which already uses less memory, and it only affects the interpolation, not the solution of the Eliashberg equations. So it will not help in your case.
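For concreteness, a minimal sketch of the memory-related portion of an EPW input file. The tag names are real EPW input variables, but every value below is a placeholder for illustration, not a recommendation for your system:

```fortran
&inputepw
  ! ... other tags unchanged ...
  ! Omit max_memlt entirely so the kernel vertex is handled on the fly,
  ! or set it to the true per-core limit of your machine:
  ! max_memlt = 18.0d0
  fsthick = 0.2             ! narrower Fermi window -> fewer states in the kernel
  wscut   = 0.5             ! lower frequency cutoff -> fewer Matsubara frequencies
  nkf1 = 20, nkf2 = 20, nkf3 = 20   ! coarser fine k-mesh (memory is quadratic in it)
  nqf1 = 20, nqf2 = 20, nqf3 = 20   ! coarser fine q-mesh
 /
```

Check the converged values for your material before trusting any of these; coarsening the meshes trades memory against accuracy.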
Regards,
Shashi
Last edited by Shashi on Thu Feb 27, 2025 10:41 pm, edited 1 time in total.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Error in routine mem_size_eliashberg (1):
Size of required memory exceeds max_memlt
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
stopping ...
I reduced the fine k-point grid (`nkf`) to lower the memory required per pool, but the calculation still needs 35 GB per pool. As I mentioned earlier, my system has a maximum of 18 GB of memory per core. Is there anything else you would suggest?