• creamlike504@jlai.lu · 2 months ago

    I was with you until the last step. How did it all get sorted, instead of having two “peaks”?

      • davidgro@lemmy.world · 2 months ago

        Then what is the purpose of the middle steps?
        Second to last also has this problem.

        The image has big “draw the rest of the owl” energy.

        • homura1650@lemm.ee · 2 months ago (edited)

          I think the image assumes that the viewer is familiar with merge sort, which is something you will learn in basically every undergraduate CS program, then never use.

          To answer your first question, it helps to have something to compare it against. I think the most obvious way of sorting a list would be “selection sort”, where you look through the unsorted list, find the smallest element, put that in the sorted list, then repeat for the second smallest element. If the list has N elements, this requires you to loop through it N times. Since every pass involves looking at up to N elements, you end up taking N * N time to sort the list.
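
          A minimal Python sketch of that idea (the name selection_sort and the code itself are my own illustration; nothing like it appears in the image):

              def selection_sort(items):
                  # Outer loop runs N times; each pass scans up to N remaining
                  # elements, which is where the N * N cost comes from.
                  remaining = list(items)
                  result = []
                  while remaining:
                      smallest = 0
                      for i in range(1, len(remaining)):  # the inner search loop
                          if remaining[i] < remaining[smallest]:
                              smallest = i
                      result.append(remaining.pop(smallest))
                  return result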

          With merge sort, the critical observation is that if you have two sublists that are each already sorted, you know the smallest element is at the start of one of the two input lists, so you can skip the inner loop where you would otherwise search for it. This means that each layer in merge sort takes only about N operations. However, each layer halves the number of lists, so you only need about log_2(N) layers, and the entire sort can be done in around N * log(N) time.
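
          A sketch of that merge step, again in illustrative Python of my own (merge and merge_sort are assumed names, not from the image):

              def merge(left, right):
                  # Both inputs are already sorted, so the smallest unplaced
                  # element is always at the front of one of them: compare the
                  # two heads instead of searching a whole list.
                  merged = []
                  i = j = 0
                  while i < len(left) and j < len(right):
                      if left[i] <= right[j]:
                          merged.append(left[i])
                          i += 1
                      else:
                          merged.append(right[j])
                          j += 1
                  merged.extend(left[i:])   # at most one of these two
                  merged.extend(right[j:])  # still has anything left
                  return merged

              def merge_sort(items):
                  # Halve, sort each half recursively, then merge: about
                  # log_2(N) layers, each doing about N total work.
                  if len(items) <= 1:
                      return list(items)
                  mid = len(items) // 2
                  return merge(merge_sort(items[:mid]), merge_sort(items[mid:]))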

          Since N * log(N) grows more slowly than N^2, this makes merge sort theoretically better.

          • skibidi@lemmy.world · 2 months ago

            Note: N^2 and NlogN scaling describe runtime growth as N approaches infinity.

            For finite N, it is entirely possible for algorithms with worse scaling behavior to complete faster.
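
            A toy illustration in Python (the constant factor 50 is invented for the example, not measured from any real implementation):

                import math

                def cost_nlogn(n):
                    # hypothetical NlogN algorithm with a large constant factor
                    return 50 * n * math.log2(n)

                def cost_n_squared(n):
                    # hypothetical N^2 algorithm with a small constant factor
                    return n * n

                for n in (10, 100, 1000, 10000):
                    cheaper = "N^2" if cost_n_squared(n) < cost_nlogn(n) else "NlogN"
                    print(f"N={n}: {cheaper} is cheaper")

            The N^2 model wins for the small values of N and loses for the large ones, which is one reason practical sorts (Timsort, for example) switch to an N^2 method like insertion sort for small inputs.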

  • Karl@programming.dev · 2 months ago

    I take pride in understanding a joke on programminghumor for the first time. (I just started studying computer science.)

    • SatouKazuma@programming.dev · 2 months ago

      Oof, you picked the wrong field. Honestly, programmer culture is elitist and toxic as fuck, speaking as someone who made that mistake before.

      • MonkeMischief · 2 months ago

        Sorry, it sounds like you’ve had some bad experiences, but let’s try not to fall into that trap.

        Any field with any kind of specialization in our “competitive market culture” seems to go this way. People hone their skills, learn more, and yet they need to constantly find reasons to justify their existence and skillset in a world trying to replace them at every turn, so things get toxic and everybody’s got a “one true way” because everyone sees each other as competition. It’s an adversarial system top to bottom.

        For honest working people, no field is “the right field” right now, because we’re sandwiched between techbro grifters and idiot despots who are both committed to ruining everybody’s job prospects.

        Everybody’s on edge, everybody’s pissed, everyone has to show off “I know what I’m doing more than these people” so they can keep the lights on. I do wish that culture would stop sabotaging itself with toxicity, but that’s another matter.

        We shouldn’t discourage people from pursuing something that fascinates them, unless maybe they indicated “I only took compsci because I heard the money was good and the people are cool”… which, in the right environment, isn’t even untrue, necessarily.