Dear LAMMPS users and Jeremy,

I have a small question regarding data filtering in ATC.

When I turn on the filter option for an ATC Hardy calculation,

I get a segmentation fault in:

ATC_TransferHardy::pre_init_integrate()

...

time_filter_pre (dt);

...

timeFilters_(index)->apply_pre_step1...

...

TimeFilterCrankNicolson::apply_pre_step1

TimeFilterExponential::update_filter

The matrix "unfilteredQuantityOld" is not initialized (it has zero size), so I get a segmentation fault when the matrices are summed in update_filter.

filteredQuantity and unfilteredQuantity are of the proper size.

I'm aware that the documentation says it is "only to be used with specific transfers: thermal, two_temperature", but I would say there is just an initialization of unfilteredQuantityOld missing,

is that the case?
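To illustrate the failure mode I suspect, here is a generic standalone sketch (hypothetical names, not the actual ATC classes): a Crank-Nicolson-style update needs the previous unfiltered value, and if that stored state is never sized by an initialize() call, the element-wise sum runs off the end of a zero-size array.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical sketch of the suspected bug, not the real ATC classes:
// the filter keeps the previous unfiltered sample in unfilteredOld_.
// If initialize() is never called, unfilteredOld_ has zero size and an
// element-wise sum in update() would read past the end (a segfault).
class ExponentialFilterSketch {
 public:
  void initialize(std::size_t n) {
    unfilteredOld_.assign(n, 0.0);  // size the stored state once, up front
  }
  // Returns false instead of reading out of bounds when uninitialized.
  bool update(std::vector<double>& filtered,
              const std::vector<double>& unfiltered, double a) {
    if (unfilteredOld_.size() != unfiltered.size()) return false;
    for (std::size_t i = 0; i < unfiltered.size(); ++i)
      filtered[i] = ((1.0 - a) * filtered[i]
                     + a * (unfiltered[i] + unfilteredOld_[i])) / (1.0 + a);
    unfilteredOld_ = unfiltered;   // remember sample for the next step
    return true;
  }
 private:
  std::vector<double> unfilteredOld_;  // analogous to unfilteredQuantityOld
};
```

In ATC terms, the point is just that unfilteredQuantityOld must be sized before the first filter update runs.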

Could you also please give a reference for the filtering algorithm used, so that it is easier to understand what is going on?

Thank you in advance,

Kind regards,

Denis.

Dear Jeremy,

After digging further into the ATC code, this seems to be a solution:

in function

void ATC_TransferHardy::init_filter()

add the following line

timeFilters_(index)->initialize(filteredHardyData_[name]);

right before the end of the loop.

Could you please confirm that?

P.S. I'm sorry for the typo in the subject line,

it should be *questions.

Regards,

Denis

Hi Denis, so the documentation is correct: we do not support the currently implemented filtering operations for Hardy. To see why, you can read the reference by Wagner et al. 2008 (section 5) mentioned in the on-line docs. The filters you are trying to use are implemented by relating them to ODEs in time, which are then discretized. In temperature/thermal coupling the data is updated frequently, so these quantities can be integrated accurately. With Hardy this is not the case, because in its current implementation it is only evaluated at post-processing steps, which is too coarse a time scale for a meaningful numerical integration.

Unfortunately, the code is somewhat confusing on this point because it was released while we were implementing a top-hat time filter for Hardy. This means functions like init_filter are in place, but without the supporting infrastructure. I hope this helps answer your question.

best,

Jeremy

Hi Jeremy,

Thank you for your prompt reply.

It's clear that Hardy is just post-processing, but nevertheless I would say filtering (via a time kernel or simple averaging) should still be done in order to get reasonable results.

I guess the question is how frequent the Hardy calculations are: sampling every 10^1 steps, with output every 10^3 steps out of 10^6 in total (during heating, for example), seems reasonable to me for applying a filter.

Maybe simple averaging (like ave/time) is a better idea, but exponential averaging should also work given reasonable sampling for the integration of the ODE (Eq. 35 in Wagner et al.), or am I missing something? What would you say from your experience?

By top-hat, do you mean the "TimeFilterStep" class, i.e. averaging in real space?

Sincerely yours,

Denis.

Hi Denis, the key issue with the exponential filter *as implemented in our code* is the accuracy of the time integration. I have only used it in the coupling modes (thermal, ttm) with the MD time step (typically less than 1 fs). If you try it with Hardy and have success, please let us know, but I would compare what you get in that mode with a direct application of the exponential filter to a trial case to verify its accuracy. Finally, the "TimeFilterStep" class implements a top-hat filter as an alternative.

Jeremy

Hi Jeremy,

I will see what I have and will let you know whether the exponential filter (applied via ODE solving) gives reasonable results compared to a direct post-processing application.

By the way, I guess you usually sample every MD time step?

What is your filter time scale (in comparison with, say, a time step of 0.001 ps for metals)?

Regards,

Denis.

Hi Denis:


For dynamical coupling, we currently use concurrent time-stepping between the MD and FE, although others have looked at methods to integrate the FE using larger timesteps. The time filtering scale is a matter of choice, although given the current finite element constitutive models we use, the only thermodynamically correct choice is infinity. Since this is difficult to implement, we tend to use ~1 ps, which seems to work well for our thermal problems.

Jeremy

Hi Jeremy,

Thank you for your detailed answer.

Best regards,

Denis.