Is it safe, for data integrity, to use a non-ECC mini PC that runs Docker containers off the volumes of a NAS with ECC RAM?

Or does the mini PC also need ECC RAM to preserve data integrity?

Sorry if it is a noob question.

  • PriorProject@lemmy.world · 1 year ago

    The answers in this thread are surprisingly complex, and though they contain true technical facts, their conclusions are generally wrong about what it takes to maintain file integrity. The simple answer is that ECC RAM in a networked file server can only protect against memory corruption in the filesystem; memory corruption can also occur in application code, and that’s enough to corrupt a file even if the file server faithfully records the broken bytestream produced by the app.

    • If you run a Postgres container and the non-ECC DB process bitflips a key or value, the ECC networked filesystem will faithfully record that corrupted key or value. If the DB bitflips a critical metadata structure in the DB file format, the DB file will be corrupted even though the ECC networked filesystem recorded those corrupt bits faithfully and even though the filesystem metadata is intact.
    • If you run a video-transcoding container and it experiences bitflips, that can result in visual glitches or in invalid media-container metadata… again, even if the networked filesystem records those corrupt bits faithfully and the filesystem metadata is fully intact.

    ECC in the file server prevents complete filesystem loss due to corruption of key FS metadata structures, and it protects against individual file loss due to bitflips in the file server. It does NOT protect against the app container corrupting the stream of bytes written to an individual file, which is opaque to the filesystem but is nonetheless structured data that the app can corrupt. If you want ECC levels of integrity, you need to run ECC at every point in the pipeline that writes data.
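
    To make that concrete, here is a minimal sketch of application-level checksumming (my own illustration; the paths and function names are hypothetical). A sidecar digest lets any reader detect corruption end to end, no matter where in the pipeline a bit flipped after the hash was computed:

    ```python
    import hashlib

    def write_with_digest(path: str, payload: bytes) -> None:
        # Write the payload, then a sidecar file holding its SHA-256 digest.
        with open(path, "wb") as f:
            f.write(payload)
        with open(path + ".sha256", "w") as f:
            f.write(hashlib.sha256(payload).hexdigest())

    def read_and_verify(path: str) -> bytes:
        # Read the payload back and fail loudly on a digest mismatch.
        with open(path, "rb") as f:
            payload = f.read()
        with open(path + ".sha256") as f:
            expected = f.read().strip()
        if hashlib.sha256(payload).hexdigest() != expected:
            raise IOError(f"checksum mismatch for {path}")
        return payload
    ```

    The caveat above still applies: if a bit flips in RAM before the digest is computed, the corrupted bytes get a valid checksum. Closing that last gap is what ECC is for.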

    That said, I’ve never run an ECC box in my homelab, have never knowingly experienced corruption due to bitflips, and have never knowingly had a file corruption that mattered, despite storing and using many terabytes of data. If I cared enough about integrity to want ECC, I’d probably also care enough to run multiple pipelines on independent hardware and cross-check their results. It’s not something I would lose sleep over.
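
    For what it’s worth, a cross-check like that can be as simple as comparing digests of outputs produced independently on two machines. A rough sketch, with hypothetical NAS mount points:

    ```python
    import hashlib

    def sha256_of(path: str) -> str:
        # Stream the file in 1 MiB chunks so large files fit in memory.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    # The same job, run on two independent machines, writing to the NAS:
    a = sha256_of("/mnt/nas/pipeline-a/output.bin")
    b = sha256_of("/mnt/nas/pipeline-b/output.bin")
    print("outputs match" if a == b else "MISMATCH: investigate or rerun")
    ```

    This only works for deterministic jobs; something like a transcode that isn’t bit-reproducible would need a fuzzier comparison.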

  • yiliu@informis.land · 1 year ago

    First, why is every post on this forum -1? Somebody must be holding a grudge.

    Second: it doesn’t matter. ECC just prevents bit flips in RAM; once data leaves a system, it’s irrelevant whether that system had ECC or not.

    I’ve been running servers of various kinds for decades. There is a difference between running on hardware with ECC and without, but it’s not a big deal. Unless you’re running, like, banking software or something else where accuracy or uptime is critical, I wouldn’t sweat it. You may just have to reboot because of a kernel panic once or twice a year.

  • poVoq@slrpnk.net · 1 year ago

    While ECC memory is nice to have, potential data-integrity issues can be mitigated by using a filesystem with sufficient redundancy and checksum-based error correction, such as ZFS or btrfs.

    Just run a regular scrub so errors are auto-corrected from the extra copies.
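
    For example, a scrub can be kicked off from a script. A sketch in Python, assuming a hypothetical pool named "tank" and that the ZFS utilities are installed with sufficient privileges:

    ```python
    import subprocess

    POOL = "tank"  # hypothetical pool name

    # Start a scrub; ZFS verifies every block against its checksum
    # and repairs from redundant copies where possible.
    subprocess.run(["zpool", "scrub", POOL], check=True)

    # Report scan progress and any errors that were repaired.
    status = subprocess.run(
        ["zpool", "status", POOL],
        check=True, capture_output=True, text=True,
    )
    print(status.stdout)
    ```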

    • SmallAlmond@lemmy.dbzer0.com (OP) · 1 year ago

      That would be the plan: the NAS with ECC would run ZFS with weekly scrubs (4 to 6 drives).

      Edit: I’m now running ECC on devices that hold critical data or databases.