Kubernetes Cluster

Lester Vecsey

Background

The basis of this project is a CephFS (Ceph filesystem) and a reasonably high-speed network.
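As a hedged sketch (the filesystem name, monitor address, and mount point here are illustrative, not the actual ones used in this build), creating a CephFS on a recent Ceph release and mounting it with the kernel client can look like:

```shell
# Create a CephFS volume; Ceph creates the data and metadata pools automatically.
ceph fs volume create clusterfs

# Confirm the filesystem and its MDS daemon are up.
ceph fs status clusterfs

# Mount on a client node; monitor address and credentials are placeholders.
mount -t ceph 10.10.1.1:6789:/ /mnt/clusterfs -o name=admin
```

The `ceph fs volume create` subcommand avoids having to create the data and metadata pools by hand before running `ceph fs new`.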

The cluster itself consists of a master server with a dual-port 100 GbE network card, and two worker nodes that are also capable of 100 GbE network connectivity.

A 10 GbE network card serves as an uplink to the rest of the network.

Cluster specs

          Cores  Memory (Gi)
Master       24           64
Worker #1    24           64
Worker #2    24           64
Totals       72          192
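Once the nodes are joined, the totals above can be cross-checked against what the kubelets actually report (node names will vary per install):

```shell
# Show allocatable CPU and memory per node, as reported by each kubelet.
kubectl get nodes -o custom-columns=NAME:.metadata.name,CPU:.status.allocatable.cpu,MEMORY:.status.allocatable.memory
```

Note that allocatable figures come in slightly under the raw hardware totals, since the kubelet reserves some CPU and memory for system daemons.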

Master computer

Item                             Model                              Notes or Comments
Chassis                          Fractal Design Define 7 Mini       NewEggBusiness - 15.98" x 8.07" x 15.71"
Motherboard                      Asus TUF GAMING B760M-PLUS II      NewEggBusiness
CPU                              Intel Core i9-14900KF              NewEggBusiness
Memory                           Corsair DDR5 6600                  NewEggBusiness
M.2 Storage                      Team Group MP44 M.2 2280 1TB PCIe  NewEggBusiness
Boot Storage #1                  Crucial MX500 (500 GB)             NewEggBusiness
Boot Storage #2                  Crucial MX500 (500 GB)             NewEggBusiness
10 GbE NIC                       TP-Link TX401                      NewEggBusiness
PSU                              Corsair RM650                      Amazon
CPU Cooler                       Noctua NH-U12A chromax.black       Amazon
CAT6 cable (10 ft, red)                                             Amazon - used for uplink from cluster
100 GbE NIC                      Intel E810-CQDA2                   ServerOrbit
Fiber optic cables (Quantity 2)                                     FS

An uninterruptible power supply by CyberPower, model CP1500PFCLCD, was also added to the list. It will power all three servers.

A 5-tier shelf from Temu was also required, to hold everything. A Wi-Fi printer was optional.

Worker computers

The worker builds are similar to the list above; however, the only storage drive is a single M.2, which can be 2 TB.

There are no dedicated boot drives, because the M.2 drive carries a boot partition instead.
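A hedged sketch of one such layout on a 2 TB M.2 drive (the device name /dev/nvme0n1 and partition sizes are assumptions, not the actual scheme used in this build):

```shell
# GPT layout: EFI system partition, root partition for boot/OS,
# and the remainder left unformatted for Ceph OSD use.
sgdisk --zap-all /dev/nvme0n1
sgdisk -n 1:0:+512M -t 1:ef00 -c 1:"EFI"  /dev/nvme0n1
sgdisk -n 2:0:+100G -t 2:8300 -c 2:"root" /dev/nvme0n1
sgdisk -n 3:0:0     -t 3:8300 -c 3:"ceph" /dev/nvme0n1
```

Leaving the largest partition raw lets a Ceph OSD claim it later without repartitioning the boot drive.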

Each worker has a single-port Intel 100 GbE network card.

There is no TP-Link TX401 in the workers, since their only uplink is the 100 GbE link to the master.

For the chassis I went with more of a cube shape, the Fractal Design Node 804.

Misc. items

Network wiring

Fiber optic interconnects run directly between the master server and each worker node, so no intermediary switch is needed for this minimal cluster configuration.
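With direct links, each master-to-worker connection can be its own small point-to-point subnet (a /30 or /31). A hedged netplan sketch for the master's two 100 GbE ports, where the interface names and addresses are assumptions rather than the actual values used:

```yaml
# /etc/netplan/60-cluster.yaml on the master (illustrative addresses)
network:
  version: 2
  ethernets:
    enp1s0f0:                    # direct link to worker #1
      addresses: [10.10.1.1/30]
      mtu: 9000
    enp1s0f1:                    # direct link to worker #2
      addresses: [10.10.2.1/30]
      mtu: 9000
```

Each worker would carry the matching address on its own 100 GbE port (e.g. 10.10.1.2/30 on worker #1), and a jumbo-frame MTU is a common, though optional, choice for storage traffic.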

Software configuration

Active pods

Future directions