• ikidd@lemmy.world · 7 months ago

      Oh, people recommend VirtualBox all the time and it’s awful.

      • ProtonBadger@lemmy.ca · 7 months ago

        Ah well, I’ve used VirtualBox, VMware and KVM, and I found them all useful for my purposes. VMware is very slick and has an edge in easy graphics acceleration for Windows guests, but since they’re now owned by Broadcom that might become a problem.

        I’m happy with VirtualBox on my desktop and KVM on a few servers. I don’t really care to take sides.

  • Frellwit@lemmy.world · 7 months ago

    Is there an equivalent or something similar to “Use host I/O cache” that VirtualBox has? Last time I tried virt-manager, installing the VM was incredibly slow because of the terrible write speed to my HDD. VirtualBox fixes that issue with the host I/O cache setting.

    • d3Xt3r@lemmy.nzM · 7 months ago

      Usually setting the cache mode to “none” gives the best performance, assuming you’re using the virtio interface instead of SATA/SCSI. This is a common mistake newbies make when installing Windows, because virt-manager defaults to the latter, which gives poor performance. The same goes for the network, by the way: you’d want the virtio network interface instead of the emulated NIC. So before you install a Windows guest, make sure you change both of those interfaces.
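
      A minimal sketch of what those two settings look like in the libvirt domain XML (editable with `virsh edit`; the disk path and image name here are just examples, not defaults):

      ```xml
      <disk type='file' device='disk'>
        <!-- cache='none' bypasses the host page cache; pairs well with virtio -->
        <driver name='qemu' type='qcow2' cache='none'/>
        <source file='/var/lib/libvirt/images/win11.qcow2'/>
        <target dev='vda' bus='virtio'/>
      </disk>
      <interface type='network'>
        <source network='default'/>
        <!-- paravirtualized NIC instead of an emulated e1000/rtl8139 -->
        <model type='virtio'/>
      </interface>
      ```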

      After changing the hardware interfaces, what you’d need to do (with Windows guests) is supply the [virtio drivers](https://github.com/virtio-win/virtio-win-pkg-scripts/blob/master/README.md), which you provide to the Windows setup (via the virtio driver ISO) when prompted.

      But if you’ve already installed Windows, you’ll need to install all the virtio drivers first and then update the interfaces after you’ve powered off the VM.
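
      As a rough sketch with virsh (the domain name `win11` and the ISO path are assumptions, not defaults):

      ```shell
      # Attach the virtio driver ISO so the drivers can be installed inside the guest
      virsh attach-disk win11 /var/lib/libvirt/images/virtio-win.iso sdb \
        --type cdrom --mode readonly
      # Once the drivers are installed, power off and switch the disk/NIC buses to virtio
      virsh shutdown win11
      virsh edit win11
      ```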

      And in case you were wondering, this isn’t an issue with Linux guests, since virt-manager defaults to virtio hardware, and drivers aren’t an issue either.

  • sorter_plainview · 7 months ago

    What is the difference between Virtual Machine Manager and Proxmos?

    • boredsquirrel@slrpnk.net · 7 months ago

      Proxmox is an entire distro just for running virtual machines, with a web UI. Virt-manager is a program you install on a normal machine.

        • thedeadwalking4242@lemmy.world · 7 months ago

          A bare metal OS is an OS running outside of a hypervisor. Virt-manager is a front end for a class 1 hypervisor (KVM) that allows you to host guest operating systems (run VMs).

          • sorter_plainview · 7 months ago

            Hey, sorry for the confusion. What I meant is: Proxmos is considered a bare-metal hypervisor and virt-manager is a hypervisor inside an OS, right?

            • thedeadwalking4242@lemmy.world · 7 months ago

              Technically no, both use KVM virtualization, which is included in the Linux kernel, so both are “bare metal hypervisors”, otherwise known as class 1 hypervisors. Distinctions can be confusing 😂

              • sorter_plainview · 7 months ago

                Oh dear… I really thought I understood what bare metal means… But it looks like this is beyond my tech comprehension.

                • boredsquirrel@slrpnk.net · 7 months ago

                  Bare metal is “kernel running on hardware” I think. KVM is a kernel feature, so the virtualization is done in kernel space (?) and on the hardware.

            • Kazumara@discuss.tchncs.de · 7 months ago

              They both use KVM in the end, so they are both Type 1 hypervisors.

              Loading the KVM kernel module turns your kernel into the bare metal hypervisor.
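
              A quick way to see this on a host, as a sketch (Intel shown; AMD uses `kvm_amd`):

              ```shell
              # The kvm and kvm_intel/kvm_amd modules should be listed
              lsmod | grep kvm
              # Load the module manually if it isn't
              sudo modprobe kvm_intel
              # /dev/kvm is the interface QEMU uses for hardware virtualization
              ls -l /dev/kvm
              ```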

            • Possibly linux@lemmy.zipOP · 7 months ago

              *Proxmox

              Virt-manager is an application that connects to libvirtd on the back end. Think of it as a web browser or file manager for VMs.
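
              For example, the same client can point at a remote libvirtd over SSH, much like a browser pointed at a different server (the hostname here is hypothetical):

              ```shell
              # Local system connection (the usual default)
              virt-manager --connect qemu:///system
              # Manage VMs on a remote host over SSH
              virt-manager --connect qemu+ssh://user@vmhost.example/system
              ```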

              Proxmox VE is an entire OS built for virtualization on dedicated servers. It also has support for clusters and live VM migration between hosts. It is, in essence, a server OS designed to run in a data center (or homelab) of some kind. It is sort of equivalent to vSphere, but they charge you per CPU socket for enterprise support and stability.

              • sorter_plainview · 7 months ago

                Well, this thread clearly established that I neither have technical knowledge nor pay attention to spelling…

                Jokes aside, this is a good explanation. I have seen admins using vSphere and it kind of makes sense. I’m just starting to scratch the surface of homelabbing, and I started out with a Raspberry Pi. My dream is a full-fledged, self-sustaining homelab.

                • Possibly linux@lemmy.zipOP · 7 months ago

                  If you ever want to build a Proxmox cluster, go for 3–5 identical machines. I have 3 totally different machines and it creates headaches.
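
                  Forming the cluster itself is the easy part, as a sketch (cluster name and IP are assumptions):

                  ```shell
                  # On the first node
                  pvecm create homelab-cluster
                  # On each additional node, pointing at the first node's IP
                  pvecm add 192.168.1.10
                  # Check quorum and membership
                  pvecm status
                  ```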

      • PlexSheep@infosec.pub · 7 months ago

        It’s really just Debian with more packages preinstalled, but yeah, the idea is that you have an OS whose primary purpose is virtualizing other machines.

          • PlexSheep@infosec.pub · 7 months ago

            Oh right, they ship a modified kernel; I didn’t think of that. I also didn’t know about the ZFS thing; my homelab uses btrfs.

            • Kazumara@discuss.tchncs.de · 7 months ago

              I’m also using btrfs, but I originally wanted ZFS before seeing that it was only available through FUSE on my distro.

              That’s why I even noticed ZFS was one of the features of Proxmox :)