When uploading a large number of files at once, the worker uses a lot of RAM which it does not free after processing and publishing are finished:
After restarting the worker:
docker-compose stop worker
docker-compose start worker
the RAM usage is:
Is it intended that the worker has to be restarted from time to time? Is this the expected behavior? We had 42,000 files to publish at once, and this exceeded the 16 GB of RAM and killed the worker. Uploading and publishing in smaller batches works if we restart the worker between batches.
In theory, each entry is processed individually, and the memory used while processing an entry should be freed afterwards. Processing 42k small entries should therefore not require much memory, nor should that memory be retained.
Of course, that is the theory. In practice there is probably a memory leak somewhere, and we should definitely investigate it. But given the complexity of the whole machinery, such leaks are hard to avoid and to test for. Celery anticipates this problem and provides a feature that usually helps.
You can add the option --max-tasks-per-child with an integer value to the celery command in the docker-compose.yaml. This makes Celery automatically replace a worker process after it has executed the given number of tasks, which releases any leaked memory. A value of 100 or 1000 is a reasonable starting point.
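As a rough sketch, the option could be added to the worker's command in docker-compose.yaml like this; the service name, image, and app module below are placeholders, not taken from the actual project:

```yaml
# Hypothetical docker-compose.yaml fragment.
# "worker", "my-app-image", and "-A app" are placeholders for your setup.
services:
  worker:
    image: my-app-image
    command: >
      celery -A app worker
      --loglevel=info
      --max-tasks-per-child=1000
```

With --max-tasks-per-child=1000, each worker process is recycled after 1000 tasks, so memory accumulated across tasks is returned to the OS without restarting the whole container.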
OK, thanks, that is a good workaround.