• pop@lemmy.ml · 9 months ago (edited)

    There’s no reason for amazonaws.com to be on a search engine at all, and keeping it off is as simple as placing a robots.txt with a deny-all declaration. Then no user would have to worry about shit like this.
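    For reference, the deny-all form being described is just two lines, served from the domain root:

        User-agent: *
        Disallow: /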

    • Moonrise2473@feddit.it · 9 months ago

      Who said that?

      Many other customers actually do want that: maybe they’re hosting images for their website on S3, or other public files that are meant to be easily found.

      If the file isn’t meant to be public, then it’s the fault of the webmaster who placed it in a public bucket or linked it somewhere on a public page.

      Also: hosting files on Amazon S3 is super expensive compared to normal hosting; only public files that get lots of downloads should use it. A document labeled “internal use only” should reside on a normal server, where you don’t need the speed or availability of AWS, and where you can put some kind of web application firewall that blocks access from outside the company/government (sketched after this comment).

      For comparison, it’s like taking a $5 toll road for just a quarter of a mile at 2 am. There’s no traffic and you’re not in a hurry, so you can go local and save the $5.
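      A minimal sketch of that kind of internal-only restriction, assuming (hypothetically) that nginx fronts the internal file server and the company network uses the 10.0.0.0/8 range:

          location /internal-docs/ {
              # Hypothetical internal range: only requests originating
              # inside the company network are served; everything else
              # is rejected with 403 Forbidden.
              allow 10.0.0.0/8;
              deny all;
          }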

    • AmbiguousProps · 9 months ago

      robots.txt doesn’t have to be followed. It doesn’t block crawling.
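      A quick way to see the distinction, sketched with Python’s standard library (the URLs are placeholders): a well-behaved crawler consults robots.txt before fetching, but nothing stops a scraper from skipping that check entirely.

          import urllib.robotparser
          import urllib.request

          URL = "https://example.com/private/report.pdf"  # placeholder

          # A polite crawler asks robots.txt for permission first...
          rp = urllib.robotparser.RobotFileParser()
          rp.set_url("https://example.com/robots.txt")
          rp.read()
          print("Allowed by robots.txt:", rp.can_fetch("MyBot", URL))

          # ...but robots.txt is purely advisory: a scraper can ignore
          # the answer above and request the file directly. Nothing
          # server-side is enforced by robots.txt itself.
          data = urllib.request.urlopen(URL).read()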