Is it safe for data integrity to use a “non ECC mini pc” that runs docker containers from the volumes of a “NAS with ECC ram”?
Or does the mini pc also require ECC ram for the data integrity?
Sorry if it is a noob question.
The answers in this thread are surprisingly complex, and while they contain true technical facts, their conclusions are generally wrong about what it takes to maintain file integrity. The simple answer is that ECC RAM in a networked file server only protects against memory corruption inside the filesystem. Memory corruption can also occur in application code, and that alone is enough to corrupt a file even if the file server faithfully records the broken byte stream produced by the app.
ECC in the file server prevents complete filesystem loss due to corruption of key FS metadata structures, and it protects against individual file loss due to bit flips in the file server. It does NOT protect against the app container corrupting the stream of bytes written to an individual file: that stream is opaque to the filesystem, but it is nonetheless structured data that can be corrupted by the app. If you want ECC-level integrity, you need ECC at every point in the pipeline that writes data.
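To make that concrete, here's a toy Python sketch (not tied to any real NAS or app; the data and the flipped bit are made up) showing how a single bit flip in the app's write buffer yields a corrupt file even though the storage layer records exactly the bytes it was handed:

```python
# Toy illustration: a bit flip in non-ECC application RAM corrupts the
# byte stream BEFORE it reaches the (ECC-protected) file server.
import hashlib

original = bytearray(b"important application data")
good_digest = hashlib.sha256(bytes(original)).hexdigest()

# Simulate a cosmic-ray bit flip in the app's buffer before write().
flipped = bytearray(original)
flipped[3] ^= 0x04  # flip one bit in byte 3

# The "filesystem" faithfully stores whatever bytes it receives.
stored = bytes(flipped)

# Storage added no corruption of its own...
assert stored == bytes(flipped)
# ...yet the file on disk no longer matches the intended content.
assert hashlib.sha256(stored).hexdigest() != good_digest
```

The point of the sketch: the file server's ECC never had a chance to help, because the damage happened in the client's memory before the data was sent.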
That said, I’ve never run an ECC box in my homelab, have never knowingly experienced corruption due to bit flips, and have never knowingly had a file corruption that mattered despite storing and using many terabytes of data. If I care enough about integrity to care about ECC, I probably also care enough to run multiple pipelines on independent hardware and cross-check their results. It’s not something I would lose sleep over.
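If you do go the cross-checking route, the idea is simple enough to sketch in a few lines of Python. This is a hypothetical illustration, assuming both independent runs are deterministic and should produce byte-identical output:

```python
# Hypothetical sketch: compare checksums of the same job run on two
# independent (non-ECC) boxes. A silent bit flip on either box would
# almost certainly make the digests diverge.
import hashlib

def sha256(data: bytes) -> str:
    """Hex digest of a pipeline's output bytes."""
    return hashlib.sha256(data).hexdigest()

def cross_check(output_a: bytes, output_b: bytes) -> bool:
    """True if both independent runs produced byte-identical output."""
    return sha256(output_a) == sha256(output_b)

print(cross_check(b"result bytes", b"result bytes"))  # True
print(cross_check(b"result bytes", b"result bytEs"))  # False -> rerun
```

A mismatch doesn't tell you which run was corrupted, only that at least one was, so the practical response is to rerun and take the majority result.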