One of Google Search’s oldest and best-known features, the cache link, is being retired. Accessed via the “Cached” button, it showed a snapshot of a web page as of the last time Google indexed it. According to Google, however, it’s no longer needed.

“It was meant for helping people access pages when way back, you often couldn’t depend on a page loading,” Google’s Danny Sullivan wrote. “These days, things have greatly improved. So, it was decided to retire it.”

  • Raiderkev@lemmy.world · 9 months ago

    Without getting into too much detail, a cached site saved my ass in a court case. Fuck you Google.

    • lud@lemm.ee · 9 months ago

      It sucks because it’s sometimes (though not very often) useful, but it’s not like they’re under any obligation to support it, or making any money from it.

        • megaman@discuss.tchncs.de · 9 months ago

          At least some of these tools change their “user agent” to whatever Google’s crawler uses.

          When you browse in, say, Firefox, one of the headers Firefox sends to the website says “I am using Firefox.” That can affect how the site displays for you, let the admin know they need Firefox compatibility, or be used to fingerprint you…

          You can just lie in that header, though. Some privacy tools change it to Chrome, since that’s the most common.

          Or, you say “i am the google web crawler”, which they let past the paywall so it can be added to google.
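A minimal sketch of that idea using only Python’s standard library: build a request whose User-Agent header carries Googlebot’s documented UA string (the URL here is just a placeholder, and plenty of sites check more than the header, as the reply below notes).

```python
import urllib.request

# The User-Agent string Google's crawler actually sends.
GOOGLEBOT_UA = ("Mozilla/5.0 (compatible; Googlebot/2.1; "
                "+http://www.google.com/bot.html)")

def build_spoofed_request(url: str) -> urllib.request.Request:
    """Return a request whose User-Agent header claims to be Googlebot."""
    return urllib.request.Request(url, headers={"User-Agent": GOOGLEBOT_UA})

# To actually fetch (network required):
# with urllib.request.urlopen(build_spoofed_request("https://example.com")) as r:
#     html = r.read()
```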

          • sfgifz@lemmy.world · edited · 9 months ago

            > Or, you say “i am the google web crawler”, which they let past the paywall so it can be added to google.

            If I’m not wrong, Google publishes the IP address ranges its crawlers use, so not every site will let you through just because your UA claims to be Googlebot
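That matches the verification Google recommends to site owners: do a reverse DNS lookup on the visitor’s IP, check the hostname is under googlebot.com or google.com, then forward-resolve it to confirm. A rough sketch (the lookup itself needs network access, and the example hostname below is illustrative):

```python
import socket

def is_google_hostname(hostname: str) -> bool:
    # Verified Google crawler hosts resolve under these domains.
    return hostname.endswith(".googlebot.com") or hostname.endswith(".google.com")

def verify_googlebot(ip: str) -> bool:
    """Reverse-DNS check: does this IP really belong to Google's crawler?"""
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)   # reverse lookup
        if not is_google_hostname(hostname):
            return False
        return socket.gethostbyname(hostname) == ip  # forward-confirm
    except OSError:
        return False
```

A spoofed User-Agent fails this check, which is why the header alone isn’t enough on sites that bother to verify.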

        • lud@lemm.ee · 9 months ago

          I dunno, but I suspect that they aren’t using Google’s cache if that’s the case.

          My guess is that the site runs its own scraper that presents itself as a search engine, and because websites want to be visible to search engines they let it see everything. This is just my guess, so it might very well be completely wrong.

      • icedterminal@lemmy.world · 9 months ago

        Depends. Not every site, or all of its pages, gets crawled by the Internet Archive. Many pages are only available because someone submitted them to be archived, whereas Google Search would typically cache a page once it was indexed.