It feels like the new update is slower than before. Anyone else noticed this?

Could be temporary, or something that will be fixed in an update.

But I even get timeouts loading pages, and that never ever happened before.

  • @tal
    link
    2
    edit-2
    6 months ago

    I can narrow down the time a bit, since I was listening to some audio and transcribing/summarizing it in a comment series, and things broke right in the middle.

    My last comment to make it out to another instance looks to be this one, starting with “Bergman: Michael, I want to turn to you”:

    https://lemmy.today/comment/4228981

    Visible on the remote instance as this:

    https://kbin.social/m/Ukraine_UA/t/716867/Aid-to-Ukraine-and-the-Future-of-the-War-with#entry-comment-4244549

    The first to fail to propagate – to that same instance – was the response to that, starting with “Bergman: Michael, I think there’s another”

    It’s also not just the kbin.social instance; other instances like sh.itjust.works are affected too, which rules out this being a kbin.social-specific problem.

    EDIT: Both the sh.itjust.works instance federation list and the lemmy.today instance federation list show the other instance as being federated:

    https://lemmy.today/instances

    https://sh.itjust.works/instances
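    (If anyone wants to script that check instead of eyeballing the lists, something like this should work as a rough sketch; I’m going from memory on the /api/v3/federated_instances endpoint and its response shape, so the field names may be slightly off:)

    ```python
    # Rough sketch: ask each instance which instances it is federated with,
    # then check that each one lists the other. Endpoint and response shape
    # are from memory and may differ between Lemmy versions.
    import requests

    def lists_instance(instance: str, other: str) -> bool:
        resp = requests.get(
            f"https://{instance}/api/v3/federated_instances", timeout=30
        )
        resp.raise_for_status()
        linked = resp.json()["federated_instances"]["linked"]
        # Older versions return plain domain strings, newer ones objects.
        domains = {e if isinstance(e, str) else e.get("domain") for e in linked}
        return other in domains

    for a, b in [("lemmy.today", "sh.itjust.works"),
                 ("sh.itjust.works", "lemmy.today")]:
        print(f"{a} lists {b}: {lists_instance(a, b)}")
    ```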

    EDIT2: I don’t think that there’s much else beyond what I’ve written above that I can do from a troubleshooting standpoint as a non-admin user – but if there is, feel free to let me know, @mrmanager.

    • MacN'Cheezus
      link
      English
      3
      6 months ago

      Yeah, for me it is Lemmy.world and Lemmy.ml. The latter is also running on 0.19.1, and the former is still on 0.18.5, so the remote server software or version does not seem to be a factor here.

      • @tal
        link
        1
        edit-2
        6 months ago

        Oh, that’s a good point. We can see server versions elsewhere.

        Here’s a comment from @nutonic@lemmy.ml, one of the lemmy devs, to a remote community on lemmy.world:

        https://lemmy.ml/comment/6801426

        https://lemmy.world/comment/6173591

        That comment appears to have propagated.

        While I cannot say for certain that at that point lemmy.ml had already updated to 0.19.1, it was after lemmy.today had, so it seems plausible.

        So whatever the problem is, I would lean towards guessing that it does not affect all 0.19.1 instances.

        EDIT: Other users are talking about potential federation problems in this thread, where votes appear not to be making it out:

        https://lemmy.ml/post/9624005?scrollToComments=true

        But as someone there points out, the first two users there posted comments from 0.19.1 instances (lemm.ee and sopuli.xyz), and their comments did make it out.

        • MacN'Cheezus
          link
          English
          3
          6 months ago

          Well, as I just said here, all my comments and posts from the last 24 hours JUST started showing up on those other instances, so perhaps the problem is fixed now.

          Maybe there was some sort of database backlog or something; let’s see what /u/mrmanager says.

          • @mrmanager
            link
            3
            edit-2
            6 months ago

            I think the Lemmy software stopped federating for some reason, and after I did a restart of Lemmy, it federated everything in the queue right away. But it’s worrying that this can happen, and I assume it’s a bug still hiding somewhere in the software.

            • MacN'Cheezus
              link
              English
              1
              6 months ago

              Thanks for looking into this. Let’s hope the devs will get this sorted out eventually.

          • @tal
            link
            2
            6 months ago

            Oh, thanks for the update!

    • @mrmanager
      link
      2
      6 months ago

      I can see the comment on kbin now, after a restart of the Lemmy software.

      It really seems like outgoing federation is buggy in this version. :/

      • @tal
        link
        1
        edit-2
        6 months ago

        Thanks!

        A warning, though…after your restart, I just commented in that other lemmy.ml thread discussing 0.19 federation problems and linked to this thread – given that you identified an important data point – and that comment doesn’t appear to have propagated. That would be a comment added to the queue after your restart, with no further restart after it was made. Now, I only made the comment 5 minutes ago, so maybe I’m just being excessively impatient, but…

        The local view of the thread:

        https://lemmy.today/comment/4245923

        The remote view of the thread:

        https://lemmy.ml/post/9624005?scrollToComments=true

        My comment text:

        We were just discussing some potentially-0.19.1-related federation problem that lemmy.today users were experiencing after the update; that’s how I ran across this thread.

        https://lemmy.today/post/4382768

        The admin there, @mrmanager@lemmy.today, restarted the instance again some hours later to attempt to resolve the problem, and it looked like federation started working at that point.

        That might be worth consideration if any other instances are seeing problems with posts/comments/votes not propagating.
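        Separately from the quoted text above: rather than refreshing the remote thread by hand, something like this could poll for the comment to arrive. A rough sketch; I’m assuming the /api/v3/comment/list endpoint and response fields from memory, so they may be off:

        ```python
        # Rough sketch: poll the remote instance until the local comment's
        # ActivityPub id shows up in the remote copy of the thread.
        import time
        import requests

        LOCAL_COMMENT_AP_ID = "https://lemmy.today/comment/4245923"
        REMOTE_INSTANCE = "https://lemmy.ml"
        REMOTE_POST_ID = 9624005  # taken from https://lemmy.ml/post/9624005

        def comment_has_arrived() -> bool:
            resp = requests.get(
                f"{REMOTE_INSTANCE}/api/v3/comment/list",
                params={"post_id": REMOTE_POST_ID, "sort": "New", "limit": 50},
                timeout=30,
            )
            resp.raise_for_status()
            return any(
                view["comment"]["ap_id"] == LOCAL_COMMENT_AP_ID
                for view in resp.json()["comments"]
            )

        while not comment_has_arrived():
            print("Not federated yet; checking again in five minutes.")
            time.sleep(300)
        print("Comment is now visible on the remote instance.")
        ```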

        • @mrmanager
          link
          2
          6 months ago

          Indeed, it hasn’t federated. So the restart of Lemmy polls the queue, and then it stops working again. :/

        • @mrmanager
          link
          2
          6 months ago

          I did another restart and your comment shows up in the thread.

          So it seems to be some bug that makes it stop federating after it has polled the queue once.

          • @tal
            link
            2
            edit-2
            6 months ago

            Hmmm.

            A couple thoughts:

            • As I commented above, this doesn’t appear to be impacting every 0.19.1 instance, so there may be something specific to lemmy.today (or to it and some other instances) that is triggering it.

            • If you decide that you want to move back to 0.18.x, I have no idea whether Lemmy’s PostgreSQL database supports rolling back while continuing to use the current data, i.e. whether there were any schema changes in the move to 0.19.x or whatever.

            • Something that also just occurred to me – I don’t know what kind of backup system, if any, you have rigged up, but backup systems for servers running databases normally need to be database-aware so that they can get an atomic snapshot. If you have something that just backs up files nightly, it may not have valid, atomic snapshots of the PostgreSQL databases. If you do attempt a rollback, you might want to bring all of the services down and back up the PostgreSQL database only while they are down (rough sketch of what I mean after this list). That way, if the rollback fails, it’s at least possible to get back to a valid copy of the current 0.19.1 state as it is at this moment.

              If all that’s old hat and you’ve spent a bunch of time thinking about it, apologies. I just didn’t want a failed rollback to wind up causing a huge mess and wiping out lemmy.today’s data.
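            For what it’s worth, here’s roughly what I mean as a minimal sketch. It assumes a docker-compose deployment with services named “lemmy” and “postgres” and a database/user both called “lemmy”, which may not match your setup at all:

            ```python
            #!/usr/bin/env python3
            # Rough sketch: take a pg_dump of the Lemmy database while the
            # application itself is stopped, so the dump matches the exact
            # state you would be rolling back from. Service, database, and
            # user names ("lemmy", "postgres") are assumptions; adjust them
            # to your deployment.
            import subprocess
            from datetime import datetime

            def run(cmd):
                print("+", " ".join(cmd))
                subprocess.run(cmd, check=True)

            dump_file = f"lemmy-{datetime.now():%Y%m%d-%H%M%S}.sql"

            # Stop the application first so nothing writes to the database.
            run(["docker", "compose", "stop", "lemmy"])

            try:
                # pg_dump writes a consistent logical snapshot of the database.
                with open(dump_file, "w") as out:
                    subprocess.run(
                        ["docker", "compose", "exec", "-T", "postgres",
                         "pg_dump", "-U", "lemmy", "lemmy"],
                        stdout=out,
                        check=True,
                    )
                print("wrote", dump_file)
            finally:
                # Bring the application back up even if the dump failed.
                run(["docker", "compose", "start", "lemmy"])
            ```

            Restoring would then be a matter of feeding that file back through psql onto a fresh database, but I’d test that on a copy before relying on it.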

            • @mrmanager
              link
              2
              6 months ago

              I would like to roll back, but the database schema changes would mean having to restore a backup. And that could potentially end up causing more issues, just like you were thinking.

              I guess it’s best to wait for a fix, and I will also see if I can troubleshoot this myself a bit. I’m guessing it’s a database issue, since I can see very long-running UPDATE statements on every restart, and they may not be able to complete for some reason.
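              Something like this can list them from the standard pg_stat_activity view; a rough sketch, with placeholder connection details:

              ```python
              # Rough sketch: show statements that have been running for a while.
              # The connection string is a placeholder; point it at your database.
              import psycopg2

              conn = psycopg2.connect("dbname=lemmy user=lemmy host=localhost")
              with conn, conn.cursor() as cur:
                  cur.execute(
                      """
                      SELECT pid,
                             now() - query_start AS runtime,
                             state,
                             left(query, 120) AS query_text
                      FROM pg_stat_activity
                      WHERE state <> 'idle'
                        AND now() - query_start > interval '1 minute'
                      ORDER BY runtime DESC
                      """
                  )
                  for pid, runtime, state, query in cur.fetchall():
                      print(pid, runtime, state, query)
              ```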

              • @tal
                link
                1
                edit-2
                6 months ago

                Possibly relevant:

                https://github.com/LemmyNet/lemmy/issues/4288

                This was the bug for the original 0.19.0 federation problems, and admins are reporting problems with 0.19.1 there as well.

                The lemmy devs reopened the bug four hours ago, so I’m guessing that they’re looking at it. Not sure if you want to submit any diagnostic data there or whatnot.

                • @mrmanager
                  link
                  2
                  6 months ago

                  Thank you, very good to know.

                  My idea was to try to see what specific query is failing in the database and go from there, so I’m currently enabling logging of failed Postgres queries. Hopefully I’ll see something in those logs…
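                  For reference, these are roughly the settings involved; a sketch that flips them via ALTER SYSTEM (the threshold is just an example, and editing postgresql.conf by hand works equally well):

                  ```python
                  # Rough sketch: make Postgres log failing and slow statements.
                  # ALTER SYSTEM needs superuser rights and cannot run inside a
                  # transaction block, hence autocommit. The connection string
                  # is a placeholder.
                  import psycopg2

                  conn = psycopg2.connect("dbname=lemmy user=postgres host=localhost")
                  conn.autocommit = True
                  with conn.cursor() as cur:
                      # Log the text of any statement that raises an error
                      # (this is the default level, but it may have been raised).
                      cur.execute("ALTER SYSTEM SET log_min_error_statement = 'error'")
                      # Also log any statement that runs longer than 60 seconds.
                      cur.execute("ALTER SYSTEM SET log_min_duration_statement = 60000")
                      # Tell the server to re-read its configuration.
                      cur.execute("SELECT pg_reload_conf()")
                  ```

                  After that, failed statements and anything running longer than a minute should show up in the Postgres log.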