Proxmox VE is a complete open-source server virtualization management solution. It is based on KVM and container-based virtualization, and manages virtual machines, storage, virtualized networks, and HA clustering.
Releases (newest first; see http://pve.proxmox.com/wiki/Roadmap):
Proxmox VE 4.0, 4.0 beta2, 4.0 beta1, 3.4, 3.3, 3.2, 3.1, 3.0, 2.3, 2.2, 2.1, 2.0, 1.9, 1.8, 1.7, 1.6 (updated ISO installer; 2.6.32 kernel with OpenVZ, including KVM 0.12.5), 1.6 (ISO installer; 2.6.32 kernel with OpenVZ, including KVM 0.12.5), 1.5 (new 2.6.24 and 2.6.32 kernels, including KVM 0.12.4 and gPXE), 1.5, 1.4, 1.4 beta2, 1.4 beta1, 1.3, 1.2, 1.1, 1.0 (first stable release), 0.9beta2, 0.9.

Milestones:
15.04.2008: first public release
19.02.2015: ZFS support added
22.06.2015: OpenVZ support removed, LXC added (moving to a 3.x+ kernel)
11.12.2015: current release, based on Debian Jessie 8.2.0 with Linux kernel 4.2.6
Feature comparison: Proxmox VE | VMware vSphere | Hyper-V | Citrix XenServer

Guest operating system support: Windows and Linux (KVM); other operating systems are known to work and are community supported | Windows, Linux, UNIX | modern Windows OS, Linux support is limited | most Windows OS, support is limited
Open source: yes | no | no | yes
Linux Containers (LXC, known as OS virtualization): yes | no | no | no
Single view for management (centralized control): yes | yes, but requires a dedicated management server (or VM) | yes, but requires a dedicated management server (or VM) | yes
Simple subscription structure: yes, one subscription pricing, all features enabled | no | no | no
High availability: yes | yes | requires Microsoft Failover Clustering, limited guest OS support | yes
Live VM snapshots (backup of a running VM): yes | yes | limited | yes
Bare-metal hypervisor: yes | yes | yes | yes
Virtual machine live migration: yes | yes | yes | yes
Max. RAM and CPU per host: 160 CPU / 2 TB RAM | 160 CPU / 2 TB RAM | 64 CPU / 1 TB RAM | ?
The pve-cluster service is the heart of any Proxmox VE installation. It provides the Proxmox Cluster file system (pmxcfs), a database-driven file system for storing configuration files, replicated in real time to all nodes using corosync.

pvedaemon is the REST API server. All API calls that require root privileges are handled by this server.
pveproxy is the REST API proxy server, listening on port 8006 (used from PVE 3.0 onwards). This service runs as user 'www-data' and forwards requests to other nodes (or to pvedaemon) if required. API calls that do not require root privileges are answered directly by this server.
pvestatd is the PVE status daemon. It queries the status of all resources (VMs, containers and storage) and sends the results to all cluster members.
pve-manager is just a startup script (not a daemon), used to start/stop all VMs and containers.
pve-firewall manages the Proxmox VE firewall (iptables), which works cluster-wide.
pvefw-logger logs the firewall events.
pve-ha-crm is the Proxmox VE High Availability Cluster Resource Manager. It manages the cluster: if an HA resource is configured, exactly one instance is active, and that node is the cluster master.
pve-ha-lrm is the Proxmox VE High Availability Local Resource Manager; every node runs an active LRM if HA is enabled.
https://pve.proxmox.com/wiki/Service_daemons
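A quick way to see these daemons on a live node is to ask the service manager; a minimal sketch, assuming a PVE 4.x node (where the services run under systemd) and the default port:

root@pve1:~# systemctl status pve-cluster pvedaemon pveproxy pvestatd
root@pve1:~# ss -tlnp | grep 8006   # pveproxy should be listening here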
deb http://ftp.debian.org/debian jessie main contrib

# PVE pve-no-subscription repository provided by proxmox.com
deb http://download.proxmox.com/debian jessie pve-no-subscription

# security updates
deb http://security.debian.org/ jessie/updates main contrib
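With these entries in place, the enterprise repository (enabled by default, but unusable without a subscription) should be disabled and the package lists refreshed. A sketch, assuming the default file location:

root@pve1:~# sed -i 's/^deb/# deb/' /etc/apt/sources.list.d/pve-enterprise.list
root@pve1:~# apt-get update && apt-get dist-upgrade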
And create a backup of it just in case ;)

cp pvemanagerlib.js pvemanagerlib.js.bkup

Now open it up and search for "data.status" (use Ctrl+W to search):

nano pvemanagerlib.js

Change

if (data.status !== 'Active') {

to

if (false) {
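The same edit can also be scripted instead of done by hand in nano; a sketch, assuming you are in the directory containing pvemanagerlib.js (its location varies between PVE versions):

root@pve1:~# sed -i.bkup "s/data.status !== 'Active'/false/" pvemanagerlib.js

The -i.bkup flag writes the backup and applies the change in one step; afterwards, force-reload the browser so the modified file is fetched. Note the change is undone whenever the pve-manager package is upgraded.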
Corosync Cluster Engine Authentication key generator.
Gathering 1024 bits for key from /dev/urandom.
Writing corosync key to /etc/corosync/authkey.

root@pve1:~# pvecm status
Quorum information
------------------
Date:             Sat Jan 2 21:11:49 2016
Quorum provider:  corosync_votequorum
Nodes:            1
Node ID:          0x00000001
Ring ID:          4
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   1
Highest expected: 1
Total votes:      1
Quorum:           1
Flags:            Quorate

Membership information
----------------------
    Nodeid      Votes Name
0x00000001          1 10.0.2.15 (local)
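For reference, the key-generator output above is what appears when the cluster is first created on pve1; a sketch, where the cluster name 'mycluster' is an assumption:

root@pve1:~# pvecm create mycluster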
root@pve2:~# pvecm add pve1
The authenticity of host 'pve1 (10.0.2.15)' can't be established.
ECDSA key fingerprint is c5:94:9b:f0:09:db:f9:85:f4:b3:34:73:48:c4:5e:d7.
Are you sure you want to continue connecting (yes/no)? yes
root@pve1's password:
copy corosync auth key
stopping pve-cluster service
backup old database
waiting for quorum...OK
generating node certificates
merge known_hosts file
restart services
successfully added node 'pve2' to cluster.

root@pve1:~# pvecm status
Quorum information
------------------
Date:             Sat Jan 2 21:13:29 2016
Quorum provider:  corosync_votequorum
Nodes:            2
Node ID:          0x00000001
Ring ID:          8
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   2
Highest expected: 2
Total votes:      2
Quorum:           2
Flags:            Quorate

Membership information
----------------------
    Nodeid      Votes Name
0x00000001          1 10.0.2.15 (local)
0x00000002          1 10.0.2.16
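Besides pvecm status, membership can also be checked from any member with the nodes subcommand; a minimal sketch:

root@pve2:~# pvecm nodes   # lists node IDs, votes and names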
Give the rpool/data dataset a mountpoint of /data (the same is needed on pve2):

root@pve1:~# zfs set mountpoint=/data rpool/data

Then create the GlusterFS volume from the two bricks and check it:

root@pve1:~# gluster volume create data transport tcp pve1:/data pve2:/data
root@pve1:/# gluster volume info

Volume Name: data
Type: Distribute
Volume ID: 63c54a01-32cc-4772-b6c9-1f0afa5656f8
Status: Created
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: pve1:/data
Brick2: pve2:/data
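Note that the Status above is still "Created": a Gluster volume has to be started before clients can mount it. A sketch, using the volume name from above:

root@pve1:~# gluster volume start data
root@pve1:~# gluster volume status data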