• thedeadwalking4242@lemmy.world · 7 months ago

      A bare-metal OS is an OS running outside of a hypervisor. Virt-manager is a class 1 hypervisor that allows you to host guest operating systems (run VMs).

      • sorter_plainview@lemmy.today · 7 months ago

        Hey, sorry for the confusion. What I meant is: Proxmos is considered a bare-metal hypervisor and Virt-manager is a hypervisor inside an OS, right?

        • thedeadwalking4242@lemmy.world · 7 months ago

          Technically no, both use KVM virtualization, which is included in the Linux kernel, so both are “bare metal hypervisors”, otherwise known as class 1 hypervisors. Distinctions can be confusing 😂
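          A quick way to verify the kernel-level claim above on any Linux host: check whether the CPU exposes hardware virtualization extensions and whether KVM is already active (a sketch; whether you see vmx or svm, and kvm_intel vs kvm_amd, depends on your CPU):

```shell
# Count CPU flags for hardware virtualization (vmx = Intel VT-x, svm = AMD-V).
# A non-zero count means the hardware can support KVM.
grep -E -c 'vmx|svm' /proc/cpuinfo || true

# If /dev/kvm exists, the KVM kernel module is loaded and the kernel
# itself is acting as the (type 1) hypervisor.
[ -e /dev/kvm ] && echo "KVM is active" || echo "KVM module not loaded"
```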

            • boredsquirrel@slrpnk.net · 7 months ago

              Bare metal is “kernel running on hardware” I think. KVM is a kernel feature, so the virtualization is done in kernel space (?) and on the hardware.

                • boredsquirrel@slrpnk.net · 7 months ago

                  TL;DR: use what is in the kernel, without strange out-of-tree kernel modules like the ones VirtualBox needs, and use KVM, i.e. on Fedora: virt-manager, qemu, qemu-kvm.
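                  Concretely, that Fedora setup might look like this (a sketch; package and service names assumed from current Fedora releases, adjust for yours):

```shell
# Install the in-kernel KVM stack plus the virt-manager frontend.
sudo dnf install -y virt-manager qemu-kvm libvirt

# libvirtd is the daemon virt-manager talks to.
sudo systemctl enable --now libvirtd

# Optional sanity check: reports whether /dev/kvm and friends are usable.
virt-host-validate qemu
```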

        • Possibly linux@lemmy.zipOP · 7 months ago

          *Proxmox

          Virt-manager is an application that connects to libvirtd on the back end. Think of it as a web browser or file manager for VMs.

          Proxmox VE is an entire OS built for virtualization on dedicated servers. It also supports clusters and live VM migration between hosts. In essence, it is a server OS designed to run in a data center (or homelab) of some kind. It is sort of equivalent to vSphere, but they charge you per CPU socket for enterprise support and stability.
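          To illustrate the “file manager for VMs” point: virt-manager and the virsh CLI are both just libvirt clients talking to the same libvirtd daemon (connection URI here assumed to be the default system one):

```shell
# List all VMs known to the local libvirtd, running or shut off.
# virt-manager shows this same inventory in its GUI.
virsh -c qemu:///system list --all
```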

          • sorter_plainview@lemmy.today · 7 months ago

            Well, this thread clearly established that I neither have technical knowledge nor pay attention to spelling…

            Jokes aside, this is a good explanation. I have seen admins using vSphere and it kind of makes sense. I’m just starting to scratch the surface of homelabbing, and have started out with a Raspberry Pi. My dream is a full-fledged, self-sustaining homelab.

            • Possibly linux@lemmy.zipOP · 7 months ago

              If you ever want to build a Proxmox cluster, go for 3–5 identical machines. I have 3 totally different machines and it creates headaches.

              • DrWeevilJammer@lemmy.ml · 7 months ago

                What kind of headaches are you having? I’ve been running two completely different machines in a cluster with a Pi as a QDevice to keep quorum, and it’s been incredibly stable for years.
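                For reference, the QDevice setup described here is roughly the following (a sketch using Proxmox’s pvecm tooling; the Pi’s IP address is hypothetical):

```shell
# On the Raspberry Pi (the external vote arbiter), install the qnetd daemon.
sudo apt install corosync-qnetd

# On one Proxmox node, point the cluster at the Pi. The QDevice contributes
# a tie-breaking vote so a two-node cluster keeps quorum if one node dies.
sudo pvecm qdevice setup 192.168.1.50

# Verify: the status output should list the qdevice as an extra vote.
sudo pvecm status
```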

                • Possibly linux@lemmy.zipOP · 7 months ago · edited

                  One device decided to be finicky and the biggest storage array is all on one system.

                  It really sucks that you can’t do HA with Btrfs. It is more reliable than ZFS due to licensing.

        • Kazumara@discuss.tchncs.de · 7 months ago

          They both use KVM in the end, so they are both Type 1 hypervisors.

          Loading the KVM kernel module turns your kernel into a bare-metal hypervisor.
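          If the module is not loaded yet, loading it by hand makes the point concrete (kvm_intel vs kvm_amd depends on your CPU vendor):

```shell
# Load the hardware-specific KVM module (use kvm_amd on AMD CPUs).
sudo modprobe kvm_intel

# The kernel is now acting as a type 1 hypervisor; /dev/kvm is the
# device node that userspace tools like QEMU use to reach it.
ls -l /dev/kvm
```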