This has happened to me several times already: even with quite dense interpolation grids I obtain gap distributions that are very spiky (an example where it looks quite disturbing: https://ibb.co/SPWLwfk).
I was wondering whether this is in fact not an interpolation problem, so I looked for an answer in the corresponding routine, gap_distribution_FS. I think it all boils down to this double loop in io_eliashberg.f90:
Code:
delta_max = 1.1d0 * maxval(Agap(:,:,itemp))
nbin = NINT(delta_max / eps5) + 1
dbin = delta_max / dble(nbin)
IF ( .not. ALLOCATED(delta_k_bin) ) ALLOCATE( delta_k_bin(nbin) )
delta_k_bin(:) = zero
!
DO ik = 1, nkfs
DO ibnd = 1, nbndfs
IF ( abs( ekfs(ibnd,ik) - ef0 ) .lt. fsthick ) THEN
ibin = nint( Agap(ibnd,ik,itemp) / dbin ) + 1
weight = w0g(ibnd,ik)
delta_k_bin(ibin) = delta_k_bin(ibin) + weight
ENDIF
ENDDO
ENDDO
!
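If I read the loop correctly, each state inside the fsthick window is assigned with its full weight w0g to a single nearest bin, and the bin width is on the order of eps5, so on a finite k-grid isolated spikes seem unavoidable. Here is a small Python sketch of that logic as I understand it, plus a Gaussian-smeared variant to illustrate what a broadening (in the spirit of increasing degaussw) would do to the histogram. The names agap, w0g, and the 1.1 and eps5 factors mirror the Fortran above; the smeared version is purely my own illustration, not what EPW implements:

```python
import numpy as np

def gap_histogram(agap, w0g, eps5=1.0e-5):
    """Nearest-bin histogram, mirroring the Fortran double loop above."""
    delta_max = 1.1 * agap.max()
    nbin = int(round(delta_max / eps5)) + 1
    dbin = delta_max / nbin
    hist = np.zeros(nbin)
    for gap, w in zip(agap.ravel(), w0g.ravel()):
        ibin = int(round(gap / dbin))  # Fortran's "+ 1" is just 1-based indexing
        hist[ibin] += w                # full weight into one bin -> spiky result
    return hist, dbin

def gap_histogram_smeared(agap, w0g, sigma, eps5=1.0e-5):
    """Same binning grid, but each state's weight is spread over the bins
    as a normalized Gaussian of width sigma (an illustration of smearing,
    not EPW's actual implementation)."""
    delta_max = 1.1 * agap.max()
    nbin = int(round(delta_max / eps5)) + 1
    dbin = delta_max / nbin
    centers = np.arange(nbin) * dbin
    hist = np.zeros(nbin)
    for gap, w in zip(agap.ravel(), w0g.ravel()):
        g = np.exp(-0.5 * ((centers - gap) / sigma) ** 2)
        g /= g.sum()                   # conserve the total weight per state
        hist += w * g
    return hist, dbin
```

With two gap values, the first routine puts all the weight into exactly two bins, while the smeared variant distributes the same total weight over many neighboring bins, which is why I suspect the spikes are a discreteness effect of the k-grid rather than a physical feature.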
In that case, am I right to assume that an alternative to a denser interpolation grid would be simply increasing degaussw?
Or would you recommend something else to try?
Is it actually bad to have a spiky distribution? In most cases where I encounter it, a better interpolation smooths it out a bit, but the character of the gap distribution itself hardly changes. The whole issue just gives a slight impression that the results are not very well converged. Could you please give your opinion?
Thank you very much!
Mikhail