ZL Compute Cluster Update
After much eBaying, prodding and plotting, I managed to build the HP ZL switch compute environment I'd periodically thought about since I read the product listings for the VMware module back in 2011.
The current setup is built in a 5412zl chassis with the following modules:
| Slot | Module                          | Slot | Module                          |
|------|---------------------------------|------|---------------------------------|
| A    | HP J9154A Services zl Module    | B    | HP J9154A Services zl Module    |
| C    |                                 | D    | HP J9536A 20p GT PoE+/2p SFP+   |
| E    | HP J9857A Adv Svs v2 zl Module  | F    | HP J9536A 20p GT PoE+/2p SFP+   |
| G    |                                 | H    | HP J9536A 20p GT PoE+/2p SFP+   |
| I    |                                 | J    | Module L Extension Slot         |
| K    |                                 | L    | HP J9543A ONE Ext Svs zl Module |
All the nodes have had the main SATA drive upgraded to a 1.92 TB SSD, with Debian + Proxmox installed on the first ~110 GB and the remainder allocated as Ceph OSDs. Each node has its two 10GbE ports in an active LACP LAG due to the automatic rate limiting. A "dream" cluster-in-a-box, if an oddball one.
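For the curious, the bond side of that is plain Debian ifupdown, roughly like the sketch below on each node. The interface names, bridge name and addressing are placeholders rather than the real config, and the matching LACP setup on the switch side isn't shown.

```
# /etc/network/interfaces (sketch) -- 802.3ad bond over the module's two
# 10GbE ports, bridged for Proxmox guests. Interface names are assumptions.
auto eno1
iface eno1 inet manual

auto eno2
iface eno2 inet manual

auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-mode 802.3ad               # active LACP
    bond-miimon 100
    bond-xmit-hash-policy layer3+4  # spread flows across both links

auto vmbr0
iface vmbr0 inet static
    address 192.0.2.11/24           # placeholder address
    gateway 192.0.2.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
```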
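The drive layout is equally unglamorous. Something along these lines carves the space left over after the ~110 GB OS install into a partition and hands it to Ceph as a BlueStore OSD. The device and partition numbers are placeholders, and ceph-volume is just one way to do it (pveceph osd create is the more Proxmox-native route, but as far as I recall it wants a whole unused disk).

```
# Sketch: give the remainder of the SSD to Ceph as an OSD.
# /dev/sda and partition 4 are placeholders for whatever the node actually has.
sgdisk --largest-new=4 --typecode=4:8300 /dev/sda   # use all remaining space
partprobe /dev/sda

# Create a BlueStore OSD on that partition (assumes the Ceph cluster and this
# node's monitor/manager are already set up via pveceph).
ceph-volume lvm create --data /dev/sda4

# Confirm it joined the cluster.
ceph osd tree
```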
While it may end up shorter-lived than I hoped, I am glad that I finally got to not only build my crazy idea but also use it every day for almost a year so far. I call that a success in itself.