CUBE and periodic conditions


Post by vozny »

Hi,
I'm trying to use your program to analyze SIESTA densities.
I've converted them to CUBE format and checked in XCrysDen that the conversion is correct.

However, I still have a question about the grid at the boundaries. The CUBE format itself has nothing to do with periodic cells, and you usually get grid points exactly on the boundaries that you specify (at least that is what I get; the XCrysDen internal format behaves similarly).

For example, for a box from 0 to 3 and 4 grid points, the grid coordinates would be 0, 1, 2, 3.

Now, when I have a periodic cell, the grid points on one side (left or right) of the box are not needed, since they are reproduced by the next periodic cell.
How do you deal with this when you read CUBE files?
Should I construct my CUBE file so that the last row (column) of the grid is simply removed?

Like this: for a box from 0 to 3, take just 3 grid points with coordinates 0, 1, 2.
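
To make the two conventions concrete, here is a tiny Fortran sketch (just an illustration, matching the numbers above):

program grid_conventions
  implicit none
  real, parameter :: dx = 1.0   ! grid spacing for a box from 0 to 3
  integer :: i
  ! grid with the boundary point included: 0 1 2 3
  print *, 'with endpoint:   ', (real(i)*dx, i = 0, 3)
  ! grid with the last boundary point removed: 0 1 2
  print *, 'without endpoint:', (real(i)*dx, i = 0, 2)
end program grid_conventions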

---------------------------------------------

Another question: what happens when I have to take not the whole cell but only part of it (due to memory limitations in your program; I get a segmentation fault)? Obviously, the charges of some atoms would then be reproduced only partially.
Can the program still deal correctly with the rest of the atoms?

Post by graeme »

These are very good points, and this is something that probably needs work in our program.

We use periodic boundary conditions for all file types, including CUBE files. This does not matter if the charge density goes to zero near the cell boundaries, and it was more convenient to keep one convention in the code (we are primarily interested in analyzing periodic plane wave calculations).

Originally, we set up our CUBE reader using the Gaussian definition, in which the first and last grid points are at the box edges. Thus, for a given box size and grid spacing, the CUBE format had one more grid point than a VASP CHGCAR file does.

Currently, however, the code reads CUBE files in the same way as VASP CHGCAR files: the spacing between grid points times the number of grid points equals the cell length. I think we made this change because someone was using a CUBE file generated by a converter from another DFT program. So yes, it makes sense for you to remove the duplicated last entries in each dimension.
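
For example, a converter could drop the duplicated planes with something like the following sketch (the array names and sizes are illustrative, not taken from cube_mod.f90):

program strip_endpoints
  implicit none
  ! Gaussian-style grid sizes (endpoints included in every dimension)
  integer, parameter :: nx = 4, ny = 4, nz = 4
  real :: rho_in(nx, ny, nz)           ! density as produced by the converter
  real :: rho_out(nx-1, ny-1, nz-1)    ! density to write for our reader
  rho_in = 1.0                         ! stand-in for real data
  ! The last plane in each dimension duplicates the first plane of the
  ! next periodic image, so simply drop it.
  rho_out = rho_in(1:nx-1, 1:ny-1, 1:nz-1)
  print *, 'periodic grid:', shape(rho_out)
end program strip_endpoints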

If anyone knows more about this, we would like to hear whether there are any standard conventions for CUBE-style files generated by periodic codes. Even better, our CUBE reader, cube_mod.f90, is fairly simple, so if anyone wants to modify it or add a reader for a different type of charge density file, we would happily include those changes or additions.

Post by vozny »

Then I'm quite surprised that the program managed to calculate something for only part of the periodic slab.

I mean, I cut the cell down to reduce memory usage, so that the upper side of the slab ends with zero density, while at the bottom it is cut right through the atoms.

If the program assumes the cell to be periodic,
how did it deal with the discontinuity at this boundary?

Post by graeme »

As long as the density goes to zero at the discontinuity, the analysis might be well behaved in a subregion of the grid. The algorithm follows grid points uphill in the charge density, so any trajectory originating in the bottom part of the cell will not cross to the top part, because the density is zero between them. I didn't think of this before, but one could analyze a subregion by adding a buffer of zero density at the edge of the charge density grid.
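
As a rough sketch of that idea (array names and sizes are illustrative; a few planes of zero density are appended along the cut direction):

program zero_buffer
  implicit none
  integer, parameter :: nx = 48, ny = 48, nz = 60   ! subregion grid
  integer, parameter :: nbuf = 4                    ! zero-density planes
  real :: rho(nx, ny, nz), rho_pad(nx, ny, nz+nbuf)
  rho = 1.0                   ! stand-in for the subregion density
  rho_pad = 0.0               ! buffer planes stay at zero density
  rho_pad(:, :, 1:nz) = rho   ! ascent paths cannot cross the zero planes
  print *, 'padded grid:', shape(rho_pad)
end program zero_buffer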

We could also think about ways to reduce memory usage. I imagine you can calculate larger systems with SIESTA than with VASP, so memory will be a bigger concern for you. One quick modification you could try is to reduce the precision of the q2 kind parameter in kind_mod.f90. If you make it the same as q1 (single precision), the analysis should not be significantly affected, and the memory usage should drop substantially. There might also be a compiler option to treat floating-point variables as single rather than double precision.
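
As a sketch, the change in kind_mod.f90 would look something like this (the actual file contents may differ slightly):

module kind_mod
  implicit none
  ! q1 stays single precision
  integer, parameter :: q1 = selected_real_kind(6, 30)
  ! original double precision definition:
  !   integer, parameter :: q2 = selected_real_kind(15, 305)
  ! reduced to single precision to roughly halve the memory of q2 arrays:
  integer, parameter :: q2 = selected_real_kind(6, 30)
end module kind_mod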