I know memory is fairly cheap, but there are, for example, millions of new videos uploaded to YouTube every day, each probably a few hundred MB to a few GB. It all has to take an enormous amount of space. Not to mention backups.

  • okuhiko@lemmy.world
    link
    fedilink
    arrow-up
    107
    ·
    1 year ago

    Google just has a lot of storage space. They have dozens of data centers, each of which is an entire building dedicated to nothing but housing servers, and they’re constantly adding more servers to existing data centers and building new data centers to fit even more servers once the ones they have are full.

    IIRC, estimates tend to put Google’s current storage capacity somewhere around 10-15 exabytes. Each exabyte is a million terabytes. Each terabyte is a thousand gigabytes. That’s 10-15 billion gigabytes. And they can add storage faster than storage is used up, because they turn massive profits that they can use to pay employees to do nothing but add servers to their data centers.
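
    Spelling out the unit conversion, just to make those numbers concrete:

    ```python
    exabytes = 15
    terabytes = exabytes * 1_000_000  # 1 EB = 1,000,000 TB
    gigabytes = terabytes * 1_000     # 1 TB = 1,000 GB
    print(f"{exabytes} EB = {terabytes:,} TB = {gigabytes:,} GB")
    # 15 EB = 15,000,000 TB = 15,000,000,000 GB
    ```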

    Google is just a massive force in terms of storage. They probably have more storage than any other organization on the planet. And so, they can share a lot of it for free, because they’re still always turning a profit.

    • green_light_stop@kbin.social
      link
      fedilink
      arrow-up
      36
      arrow-down
      1
      ·
      edit-2
      1 year ago

      There are also techniques where data centers do offline storage by writing out to a high-volume storage medium (I heard Blu-ray as an example, especially because it’s cheap) and storing it in racks. All automated, of course. This lets them store huge quantities of infrequently accessed data (which is most of it) in a more efficient way. Not everything has to be online and ready to go, as long as it’s capable of being made available on demand.
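
      A minimal sketch of what that kind of tiering decision might look like, assuming a made-up 90-day cutoff (nothing here reflects any real data center’s policy):

      ```python
      from datetime import datetime, timedelta

      OFFLINE_THRESHOLD = timedelta(days=90)  # hypothetical cutoff

      def storage_tier(last_accessed: datetime) -> str:
          """Send long-untouched data to offline media; keep the rest on disk."""
          if datetime.now() - last_accessed > OFFLINE_THRESHOLD:
              return "offline"  # robot-fetched rack media (tape, optical)
          return "online"       # always-available disk / SSD

      print(storage_tier(datetime(2020, 1, 1)))  # -> offline
      ```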

    • reliv3@reddthat.com
      link
      fedilink
      arrow-up
      14
      arrow-down
      2
      ·
      1 year ago

      Let’s be honest, it isn’t “free”. The user is giving their own data to Google in order to use their services; and data is a commodity.

      • Zoot@reddthat.com
        link
        fedilink
        arrow-up
        4
        ·
        1 year ago

        Kinda starting to seem like “data” is becoming less and less valuable, or am I wrong?

        • Still@programming.dev
          link
          fedilink
          arrow-up
          6
          ·
          1 year ago

          Well, there’s more and more of it, so the value per byte is decreasing; everything tracks you, and there’s only so much info you can get.

    • jrs100000@lemmy.world
      link
      fedilink
      arrow-up
      8
      ·
      1 year ago

      And that’s just Google. Amazon and Microsoft also have massive data capacity that runs large chunks of the internet. And then you get into the small and medium-sized hosting companies, which can be pretty significant on their own.

    • obviouspornalt@lemmynsfw.com
      link
      fedilink
      arrow-up
      3
      ·
      1 year ago

      15 exabytes sounds low. Rough math: one 20 TB hard drive per physical machine, with 50,000 physical machines, is one exabyte of raw storage. I bet 50,000 physical machines is a small datacenter for Google.
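
      That rough math in Python:

      ```python
      drive_tb = 20      # one 20 TB drive per machine
      machines = 50_000
      raw_eb = drive_tb * machines / 1_000_000
      print(raw_eb, "EB raw")  # 1.0 EB of raw storage
      ```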

    • WhoRoger@lemmy.world
      link
      fedilink
      arrow-up
      1
      ·
      1 year ago

      It’s still wild to imagine. That’s millions of hard drives, multiplied a few times over for redundancy across regions and for failures. Then the backups.

      Remember when Google started by networking old office computers?

  • assembly@lemmy.world
    link
    fedilink
    arrow-up
    26
    arrow-down
    4
    ·
    1 year ago

    It’s the same story with AWS as well. They use vast amounts of storage and leverage different tiers of storage to get the service they want. It’s funny, but they have insane numbers of SD cards (the cheapest storage available at that size) and use those for some storage, just replicating things everywhere for durability. Imagine how small 256GB SD cards are, and then you have hardware to plug in 200 of them, practically stacked on top of each other. The storage doesn’t need to be the best, it just needs to be managed appropriately and actively to ensure that data is always replicated as devices fail. And that’s just the cooler-tier stuff. It gets complex as the data warms.
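
    A toy sketch of the kind of repair loop that keeps replica counts up as devices die; the replication factor of 3 and all names here are illustrative, not AWS’s actual design:

    ```python
    import random

    TARGET_REPLICAS = 3  # illustrative replication factor

    def repair(replicas_by_object, healthy_devices):
        """Re-replicate any object whose live replica count fell below target."""
        for key, replicas in replicas_by_object.items():
            live = [d for d in replicas if d in healthy_devices]
            candidates = [d for d in healthy_devices if d not in live]
            while len(live) < TARGET_REPLICAS and candidates:
                dest = random.choice(candidates)  # stand-in for a real placement policy
                candidates.remove(dest)
                live.append(dest)  # a real system would copy the bytes here
            replicas_by_object[key] = live

    objs = {"clip-001": ["d1", "dead-device"]}
    repair(objs, {"d1", "d2", "d3", "d4"})
    print(objs)  # clip-001 now has 3 live replicas
    ```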

      • asteroidrainfall@kbin.social
        link
        fedilink
        arrow-up
        7
        ·
        1 year ago

        Yeah, this seems false. SD cards are unreliable, hard to keep track of, and don’t actually store that much data for the price. I do think they use tapes, though, to store long-term, low-traffic data.

        • assembly@lemmy.world
          link
          fedilink
          arrow-up
          2
          ·
          1 year ago

          Used to work there, but it’s been a few years, so maybe things have changed; that was how we originally got super cheap and durable S3.

        • zalack@kbin.social
          link
          fedilink
          arrow-up
          1
          ·
          edit-2
          1 year ago

          We use LTO tapes in Hollywood to back up raw footage; it wouldn’t surprise me if AWS uses tapes for Glacier.

          I got a tour of Iron Mountain once (where we sent tapes for long term archival). They had a giant room with racks and racks of LTOs, and a robot on rails that would make copies of each tape at regular intervals to keep the data from corrupting. It looked kinda like the archive room in Rogue One. Wouldn’t surprise me if Iron Mountain was an inspiration for the design. Super interesting.

          • LostXOR@kbin.social
            link
            fedilink
            arrow-up
            1
            ·
            1 year ago

            That sounds really cool! I wonder what sort of redundancy they have in case a tape gets damaged or corrupted.

    • vyvanse@kbin.social
      link
      fedilink
      arrow-up
      5
      ·
      1 year ago

      Ha, I had no idea data centers use SD cards! It makes sense in hindsight, but it’s still funny to think about

  • Generator@lemmy.pt
    link
    fedilink
    arrow-up
    21
    ·
    1 year ago

    Not only that, but for each video on YouTube there are different versions for each resolution. So if you upload a 1080p video, it gets converted to 1080p AVC/VP9, 720p AVC/VP9, 480p… and the same goes for the audio.

    If you run youtube-dl -F <youtube url> you will see different formats.
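
    As a rough illustration of how that rendition ladder multiplies storage (the per-rendition sizes below are invented, but order-of-magnitude plausible):

    ```python
    # Hypothetical per-rendition sizes for one 10-minute video, in MB
    renditions = {"1080p": 300, "720p": 150, "480p": 80, "360p": 50, "audio": 15}
    total = sum(renditions.values())
    print(f"{total} MB stored vs {renditions['1080p']} MB uploaded "
          f"({total / renditions['1080p']:.1f}x)")
    # 595 MB stored vs 300 MB uploaded (2.0x)
    ```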

    • Falmarri@lemmy.world
      link
      fedilink
      arrow-up
      7
      ·
      1 year ago

      Does YouTube actually store copies of each one? Or does it store one master copy and downsample as required in real time? Probably stores them, since storage is cheaper than CPU time.

      • Generator@lemmy.pt
        link
        fedilink
        arrow-up
        9
        ·
        1 year ago

        If it converted every video in real time, it would require a lot of CPU per server; it’s cheaper to store multiple copies. Also, the average video isn’t more than about 300 MB, less if it’s lower quality.

        Anyone with Plex or Jellyfin knows that it’s better to have the same movie in both qualities (1080p, 720p) than to transcode, to avoid CPU usage.

        It’s possible to do fast transcoding with GPUs, but with as many users as YouTube has, that would require a lot of power at high energy prices; storing is cheaper.
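
        A crude way to frame that trade-off: storing a rendition costs a fixed amount per month, while on-the-fly transcoding costs per view, so past a handful of views storage wins. The prices below are placeholders, not real figures:

        ```python
        STORAGE_PER_GB_MONTH = 0.02  # placeholder $/GB-month
        TRANSCODE_PER_VIEW = 0.01    # placeholder $ of CPU/GPU time per on-the-fly view

        def cheaper_to_store(size_gb, views_per_month):
            return size_gb * STORAGE_PER_GB_MONTH < views_per_month * TRANSCODE_PER_VIEW

        print(cheaper_to_store(0.3, 100))  # True: 100 views/month dwarfs 300 MB of storage
        ```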

        • patsharpesmullet@vlemmy.net
          link
          fedilink
          arrow-up
          1
          arrow-down
          6
          ·
          1 year ago

          It’s transcoded on the fly; this is a fairly simple Lambda function in AWS, or whatever the GCP equivalent is. You can’t upsample potato spec; the reason it looks like shit is bandwidth and the service determining a lower speed than is available.

            • patsharpesmullet@vlemmy.net
              link
              fedilink
              arrow-up
              1
              ·
              1 year ago

              That response is almost 10 years old and completely outdated. I’ve designed and maintained a national media service and can confirm that on-the-fly transcoding is both cheaper and easier. It does make sense to store different formats of videos that are popular at the minute, but in the medium to long term, streams are transcoded.

              • mangomission@lemm.ee
                link
                fedilink
                arrow-up
                1
                ·
                1 year ago

                Sure it’s old but the stats I posted in a lower comment show that at YouTube’s scale, it makes sense to store.

              • mangomission@lemm.ee
                link
                fedilink
                arrow-up
                2
                ·
                1 year ago

                Do you have a source? My instinct is the opposite. Compute scales with users but storage scales with videos

                • NewNewAccount@lemmy.world
                  link
                  fedilink
                  arrow-up
                  1
                  ·
                  1 year ago

                  No source but I imagine the amount of videos must be outpacing the amount of users. Users come and go but every uploaded video stays forever.

                • SHITPOSTING_ACCOUNT@feddit.de
                  link
                  fedilink
                  arrow-up
                  1
                  ·
                  1 year ago

                  Consider two cases:

                  • the most recent MrBeast video receiving millions of views from all kinds of devices (some of which require specific formats)
                  • a random video of a cat uploaded 5 years ago, total view count: 3

                  Design a system that optimizes for total cost.
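
                  One plausible sketch of such a system (the view threshold is arbitrary):

                  ```python
                  def serve(video, fmt):
                      """Pre-transcoded formats stream from storage; rare ones are made on demand."""
                      if fmt in video["cached_formats"]:
                          return f"stream {fmt} from storage"        # MrBeast case: pay storage once
                      if video["monthly_views"] > 1000:              # arbitrary popularity cutoff
                          video["cached_formats"].add(fmt)           # transcode once, keep the result
                          return f"transcode {fmt}, cache, stream"
                      return f"transcode {fmt} on the fly, discard"  # the 3-view cat video

                  cat = {"monthly_views": 1, "cached_formats": set()}
                  print(serve(cat, "480p"))  # transcode 480p on the fly, discard
                  ```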

      • merc@sh.itjust.works
        link
        fedilink
        arrow-up
        2
        ·
        1 year ago

        It probably depends on how popular the video is anticipated to be.

        I remember hearing that something like 80% of uploads to YouTube are never watched. 80% of the remaining 20% are watched only a handful of times. It’s only a tiny fraction that are popular, and the most popular are watched millions of times.

        I’d guess that they don’t transcode the 80% that nobody ever watches. They definitely transcode and cache the popular 4%, but who knows what they do with the 16% in the middle that are watched a few times, but not more than 10x.

      • WhoRoger@lemmy.world
        link
        fedilink
        arrow-up
        1
        ·
        1 year ago

        In real time would mean more cpu usage every time someone plays it. If converted in advance, they only need to do it once with the most effective codecs.

    • AnonymousLlama@kbin.social
      link
      fedilink
      arrow-up
      1
      ·
      1 year ago

      I’m keen to know how large these source files are for YouTube compared to the 720/1080 quality ones we see on the front-end. I remember them using really impressive compression, but the bitrate was super low to keep the size small.

      If they’re reducing a 10-minute 1080p file from 400MB down to 40MB, then that’s a good gain.
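
      In bitrate terms, that reduction works out like this:

      ```python
      minutes = 10
      for size_mb in (400, 40):
          mbps = size_mb * 8 / (minutes * 60)
          print(f"{size_mb} MB over {minutes} min is about {mbps:.1f} Mbps")
      # 400 MB -> ~5.3 Mbps; 40 MB -> ~0.5 Mbps, which is very low for 1080p
      ```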

  • DontTreadOnBigfoot@lemmy.world
    link
    fedilink
    arrow-up
    21
    ·
    1 year ago

    Absolutely huge data centers.

    A full third of my town’s real estate is currently covered by a sprawling Google data center. Just enormous.

          • green_dragon@lemmy.world
            link
            fedilink
            arrow-up
            4
            ·
            1 year ago

            I can’t fathom the amount stored there, on top of the amount of data traffic occurring. Those fibers are on fire, coming into the centers in 3-foot tubes! For some reason they don’t appear on Google image search. ;)

    • blivet@kbin.social
      link
      fedilink
      arrow-up
      1
      ·
      edit-2
      1 year ago

      Yeah, 10 or 15 years ago I read an article about how Google brings up new storage modules when they need to expand, and their modules are essentially shipping containers full of hard drives.

  • igetzerobread@lemm.ee
    link
    fedilink
    arrow-up
    14
    ·
    1 year ago

    Enormous servers all around the world, and over the years storage hardware keeps getting physically smaller in proportion to how much you can store on it.

  • Jmr@lemmy.world
    link
    fedilink
    arrow-up
    15
    arrow-down
    1
    ·
    1 year ago

    YouTube isn’t even profitable yet. Google pours billions into storage and compute, as do Amazon, Microsoft, and all the others. They have so much space we probably can’t even comprehend it.

  • tentphone@lemmy.fmhy.ml
    link
    fedilink
    arrow-up
    10
    ·
    edit-2
    1 year ago

    Twitter probably doesn’t take up that much space (comparatively), because it’s mostly text with some images.

    YouTube is another matter. There’s an enormous amount of content uploaded to YouTube, as much as 30,000 hours of video uploaded per hour. That’s around 1PB per hour assuming most videos are uploaded in 1080p.
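
    Sanity-checking that figure; note it implies uploads averaging a much higher bitrate than typical 1080p streaming, i.e. closer to camera originals:

    ```python
    hours_uploaded_per_hour = 30_000
    gb_per_hour_of_video = 1_000_000 / hours_uploaded_per_hour  # 1 PB spread over uploads
    avg_mbps = gb_per_hour_of_video * 8_000 / 3_600
    print(f"{gb_per_hour_of_video:.0f} GB per video-hour, about {avg_mbps:.0f} Mbps")
    # ~33 GB per video-hour, about 74 Mbps
    ```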

    I wasn’t able to find an official source for what YouTube’s total data storage is, but this estimate puts it at 10 EB or 10,000,000,000 GB of video.

    On Amazon AWS that would cost $3 billion per month to store. The actual cost to Google is probably much lower because of economies of scale and because the infrastructure is run by and optimized for them, but it’s still a colossal figure. They offset the cost with ads, data collection, and premium subscriptions, but I would imagine running YouTube is still a net loss for Google.
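
    For a back-of-envelope, the raw storage bill scales linearly with whatever $/GB-month rate you assume; the rates below are rough AWS list prices at the time of writing, and real-world replication and overhead would push the totals up:

    ```python
    def monthly_storage_cost_usd(exabytes, usd_per_gb_month):
        return exabytes * 1_000_000_000 * usd_per_gb_month

    for rate in (0.023, 0.004):  # roughly S3 Standard vs an archive tier (assumed rates)
        print(f"${monthly_storage_cost_usd(10, rate) / 1e6:,.0f}M/month at ${rate}/GB-month")
    # ~$230M/month hot, ~$40M/month cold, before any replication overhead
    ```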

    • sab@kbin.social
      link
      fedilink
      arrow-up
      10
      ·
      edit-2
      1 year ago

      I’m generally the first to criticize Google, but when it comes to pushing ads on YouTube I’m having a hard time really condemning them for it. I struggle to wrap my head around how this service can exist at all.

      Also, second only to direct transactions, ads are how I’d most prefer Google to make money.

      • 🇺🇦 Max UL@lemmy.pro
        link
        fedilink
        arrow-up
        6
        ·
        edit-2
        1 year ago

        Agreed, I pay for YouTube premium and in the world of corporate crap and fees and stuff I’m ok with that value trade off relatively. Hell, I would have paid for Reddit, too, if they weren’t assholes.

        Edit: I mistyped Google premium instead of YouTube premium… same place though of course

        • NightOwl@lemmy.one
          link
          fedilink
          arrow-up
          1
          ·
          1 year ago

          The issue is that in this era of business, you could drop 100k on a car and they’ll still data-mine information on you and record you. So you really only paid to be less annoyed; the tracking remains a core part of the system.

          Now, some services like Proton Mail do make privacy part of their business, but that is becoming rarer. Everyone is the product by default, no matter how much money they pay for a service these days.

      • NightOwl@lemmy.one
        link
        fedilink
        arrow-up
        2
        ·
        1 year ago

        They’ll make it through data collection too even if you pay for premium. You are still the product even if you pay in this era.

    • seeCseas@lemmy.world
      link
      fedilink
      arrow-up
      4
      ·
      1 year ago

      I would imagine running YouTube is still a net loss for Google.

      I doubt it; YouTube generates about $30 billion in revenue per year!

      • moon_matter@kbin.social
        link
        fedilink
        arrow-up
        4
        ·
        1 year ago

        It gets even crazier when you realize they are sort of obligated to keep every video forever. So it will just keep growing indefinitely since they have no way to trim it down. We may eventually reach a point where the majority of the content that they host is older than most living people and the uploader has since passed on.

        • WhoRoger@lemmy.world
          link
          fedilink
          arrow-up
          5
          ·
          1 year ago

          They won’t; eventually they’ll pull an Imgur and start deleting stuff that hasn’t been accessed in a while.

          I mean, didn’t they just announce they’ll start deleting inactive accounts?

          But even if not, storage always becomes cheaper with time, so it’s just a matter of copying old data to a newer medium. Eventually that will become an issue, but for now, capacity and storage density keep growing.

          • moon_matter@kbin.social
            link
            fedilink
            arrow-up
            3
            ·
            edit-2
            1 year ago

            I mean didn’t they just announce they’ll start deleting inactive accounts?

            They stated they would delete the accounts but that the videos would remain. But obviously the policy could change. My point was more that a ton of people would be watching content that was uploaded by and for people who are no longer alive. Which makes me feel uncomfortable in a way I can’t quite describe. Like a modern version of seeing a ghost.

    • davidgro@lemmy.world
      link
      fedilink
      arrow-up
      4
      ·
      1 year ago

      If it’s really 1 PB per hour, and mostly 1080p or higher (which seems likely, unlike the assumptions in that Quora answer) then they would fill about 9 EB every year! Obviously the rate would be lower in the past, but that 30k link was a number as of a year ago anyway.

    • skulblaka@kbin.social
      link
      fedilink
      arrow-up
      2
      ·
      1 year ago

      If I remember correctly YouTube has been run at a loss basically since its inception, but it’s such a popular platform (and such an efficient vehicle for advertisement) that they keep it running. Google makes up the difference elsewhere. It’s like Costco’s loss leader hot dogs, it literally costs them money to sell it to you, but it gets you inside the store where you’re likely to buy other stuff. YouTube costs Google money to maintain, but it gets people creating Google accounts and watching ads, and recently over the past few years it also gets people buying YT Premium.

      Besides, so much propaganda of all sorts is channeled through YouTube that if Google ever seriously considered shutting it down I expect they’d have a boardroom full of shareholders immediately putting their foot down about it. YouTube is no longer about the cost, it’s about the platform accessibility and the existing userbase.

  • UntouchedWagons@lemmy.ca
    link
    fedilink
    arrow-up
    7
    ·
    1 year ago

    For Twitter it’s not that complicated, because tweets are quite short and text compresses very well. The pictures and videos people upload are of course another story; I’m not sure what Twitter uses as a backend for any of that, though.

    • Carlos Solís@communities.azkware.net
      link
      fedilink
      arrow-up
      4
      arrow-down
      1
      ·
      1 year ago

      From what I gather, Twitter uses Google Cloud as its equivalent of a Simple Storage Service (S3), which is one of the main reasons why dine-and-dashing Alphabet Inc. on the bill for server usage was so severe for their backend.

  • Eavolution@kbin.social
    link
    fedilink
    arrow-up
    6
    ·
    1 year ago

    I was recently doing a tour of CERN in Geneva, and they actually still store data on tape because of its cost and reliability over hard drives!

    • perviouslyiner@lemm.ee
      link
      fedilink
      arrow-up
      1
      ·
      1 year ago

      Apparently tape is nowhere near the magnetic density limits that hard drives are hitting, so tape is just going to continue to improve in the future.

  • UnanimousStargazer@feddit.nl
    link
    fedilink
    arrow-up
    6
    ·
    1 year ago

    I might be mistaken, but didn’t Twitter run out of server space because they decided to not continue their contract with Google?

  • BudgieMania@kbin.social
    link
    fedilink
    arrow-up
    5
    ·
    edit-2
    1 year ago

    Three additional things you have to keep in mind:

    1 - Enterprise storage is much, much denser (as in, capacity per physical space occupied) than you would expect.
    2 - These systems have capacity recovery features (primarily compression and deduplication) that save a lot more storage than you would expect.
    3 - The elements in the infrastructure are periodically refreshed by migrating them to newer infrastructure (think of how you could migrate two old 500GB disks to a single modern 1TB disk to save the physical space of a disk).

    As an example of point 1, this is what IBM advertises in the public whitepaper for their Storage Scale systems (https://www.ibm.com/downloads/cas/R0Q1DV1X):

    “IBM Storage Scale System is based on proven IBM Storage 2U24 hardware that can be expanded with additional storage enclosures up to 15.4PB of total capacity in a single node and 633YB in a single cluster. You can start with 48TB using half-populated flash nodes or create a fully-populated NVMe flash solution with 24, 2.5” drives in capacities of 3.84TB, 7.68TB, 15.36TB or 30TB. Using the largest capacity 30TB NVMe drives, up to 720TB total flash capacity, in a 2U form factor, along with associated low weight and low power consumption. Adding storage enclosures is easy as up to 8 enclosures (each 4u with 102 drives) can accommodate up to 816 drives of 10TB, 14TB or 18TB or 14.6PB of total raw HDD capacity.”
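
    The raw-HDD figure in that quote checks out:

    ```python
    enclosures = 8
    drives_per_enclosure = 102
    largest_hdd_tb = 18
    raw_pb = enclosures * drives_per_enclosure * largest_hdd_tb / 1000
    print(raw_pb, "PB raw")  # 14.688 PB, matching the ~14.6 PB quoted
    ```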

    In short, you end up packing a stupid amount of storage in relatively moderate spaces. Combined with the other two points, it helps keep things somewhat under control. Kinda.

  • BoofStroke@lemm.ee
    link
    fedilink
    arrow-up
    2
    arrow-down
    1
    ·
    edit-2
    1 year ago

    Scalable shared storage. Scalable database, web, and cache servers. Using AWS terminology, it’s elastic. The multi-zone storage my own company uses, for example, is practically infinite and scales automatically.