
[TOC]

Note: entries are in reverse chronological order.

## 2023-4

2 × MikroTik CRS326-24S+2Q+RM: https://mikrotik.com/product/crs326_24s_2q_rm

## 2021-11

4 x Micron® 5210 ION Enterprise SATA QLC SSD

## 2020-10

trax1, trax2 and trax4 are out, and with them the previous volumes vmstore and vmstore2; trax5, trax6, trax7 and trax8 enter the cluster.

```mermaid
graph TD
fxoln-sw1-- 10G ---sw1_stack
fxoln-sw2-- 10G ---sw2_stack
sw1_stack-- 1G ---trax3
sw2_stack-- 1G ---trax3
sw1_stack-- 10G ---trax5
sw2_stack-- 10G ---trax5
sw1_stack-- 10G ---trax6
sw2_stack-- 10G ---trax6
sw1_stack-- 10G ---trax7
sw2_stack-- 10G ---trax7
sw1_stack-- 10G ---trax8
sw2_stack-- 10G ---trax8
```

### vmNVMe

vmNVMe is a replica-3 Gluster volume; replica 3 means that every file has 3 copies (read more).

```mermaid
graph BT

trax5---d51[(disk 5-1 - 1.5TB)]
trax5---d52[(disk 5-2 - 1.5TB)]
d51-->zfs5[(trax5's ZFS vdev stripe)]
d52-->zfs5
zfs5-->brick51[(gluster brick 5-1)]
zfs5-->brick52[(gluster brick 5-2)]
zfs5-->brick53[(gluster brick 5-3)]

trax6---d61[(disk 6-1 - 1.5TB)]
trax6---d62[(disk 6-2 - 1.5TB)]
d61-->zfs6[(trax6's ZFS vdev stripe)]
d62-->zfs6
zfs6-->brick61[(gluster brick 6-1)]
zfs6-->brick62[(gluster brick 6-2)]
zfs6-->brick63[(gluster brick 6-3)]

trax7---d71[(disk 7-1 - 1.5TB)]
trax7---d72[(disk 7-2 - 1.5TB)]
d71-->zfs7[(trax7's ZFS vdev stripe)]
d72-->zfs7
zfs7-->brick71[(gluster brick 7-1)]
zfs7-->brick72[(gluster brick 7-2)]
zfs7-->brick73[(gluster brick 7-3)]

trax8---d81[(disk 8-1 - 1.5TB)]
trax8---d82[(disk 8-2 - 1.5TB)]
d81-->zfs8[(trax8's ZFS vdev stripe)]
d82-->zfs8
zfs8-->brick81[(gluster brick 8-1)]
zfs8-->brick82[(gluster brick 8-2)]
zfs8-->brick83[(gluster brick 8-3)]

brick51-->brickg1[(gluster group #1)]
brick61-->brickg1
brick71-->brickg1

%% thanks https://github.com/mermaid-js/mermaid/issues/487#issuecomment-401302073
style brick51 fill:#FFFF00
style brick61 fill:#FFFF00
style brick71 fill:#FFFF00
style brickg1 fill:#FFFF00

brick81-->brickg2[(gluster group #2)]
brick52-->brickg2
brick62-->brickg2

style brick81 fill:#00FFFF
style brick52 fill:#00FFFF
style brick62 fill:#00FFFF
style brickg2 fill:#00FFFF

brick72-->brickg3[(gluster group #3)]
brick82-->brickg3
brick53-->brickg3

style brick72 fill:#FF00FF
style brick82 fill:#FF00FF
style brick53 fill:#FF00FF
style brickg3 fill:#FF00FF

brick63-->brickg4[(gluster group #4)]
brick73-->brickg4
brick83-->brickg4

style brick63 fill:#00FF00
style brick73 fill:#00FF00
style brick83 fill:#00FF00
style brickg4 fill:#00FF00

brickg1-->d[(vmNVMe - 3.8 TB)]
brickg2-->d
brickg3-->d
brickg4-->d
```
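
Rough capacity check: each node stripes 2 × 1.5 TB of NVMe into about 3 TB, four nodes give ~12 TB raw, and dividing by the 3 copies leaves ~4 TB, which matches the ~3.8 TB usable shown above. Below is a minimal sketch of how such a layout could be built; the pool name, brick paths and device names (`nvmepool`, `/bricks/vmNVMe/brickN`, `/dev/nvme*`) are assumptions, not the recorded configuration. In a distributed-replicated Gluster volume each consecutive triplet of bricks forms one replica set, so the brick order below reproduces the four coloured groups with every copy on a different node.

```sh
# Per node (trax5..trax8): stripe the two NVMe disks into one pool,
# then carve three datasets to serve as bricks (paths are assumptions).
zpool create nvmepool /dev/nvme0n1 /dev/nvme1n1
zfs create -o mountpoint=/bricks/vmNVMe/brick1 nvmepool/brick1
zfs create -o mountpoint=/bricks/vmNVMe/brick2 nvmepool/brick2
zfs create -o mountpoint=/bricks/vmNVMe/brick3 nvmepool/brick3

# On one node: create the replica-3 volume; each consecutive triplet of
# bricks is one replica set, matching groups #1..#4 in the diagram.
gluster volume create vmNVMe replica 3 \
  trax5:/bricks/vmNVMe/brick1 trax6:/bricks/vmNVMe/brick1 trax7:/bricks/vmNVMe/brick1 \
  trax8:/bricks/vmNVMe/brick1 trax5:/bricks/vmNVMe/brick2 trax6:/bricks/vmNVMe/brick2 \
  trax7:/bricks/vmNVMe/brick2 trax8:/bricks/vmNVMe/brick2 trax5:/bricks/vmNVMe/brick3 \
  trax6:/bricks/vmNVMe/brick3 trax7:/bricks/vmNVMe/brick3 trax8:/bricks/vmNVMe/brick3
gluster volume start vmNVMe
```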

### refactor NFS trax3 storage

This storage is intended to provide extra capacity over NFS, for VM disks and/or backups.

```mermaid
graph TD

trax3---d31[(disk 3-1 - 10.9TiB)]
trax3---d32[(disk 3-2 - 10.9TiB)]
d31-->zfs3[(trax3's zfs mirror - 10.9 TiB)]
d32-->zfs3
```
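
A minimal sketch of the equivalent ZFS setup, assuming hypothetical device, pool and dataset names; unlike the striped pools on trax5-trax8, the two disks are mirrored here, so usable capacity stays at 10.9 TiB but one disk can fail without data loss.

```sh
# Mirror the two 10.9 TiB disks (device names are assumptions).
zpool create trax3pool mirror /dev/sda /dev/sdb

# Dataset exported over NFS for VM disks and/or backups;
# the client subnet is an assumption, adjust as needed.
zfs create trax3pool/nfs
zfs set sharenfs='rw=@10.0.0.0/24' trax3pool/nfs
```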

## 2020-4

```mermaid
graph TD
fxoln-sw1-- 10G ---sw1_stack
fxoln-sw2-- 10G ---sw2_stack
sw1_stack-- 1G ---trax1
sw2_stack-- 1G ---trax1
sw1_stack-- 1G ---trax2
sw2_stack-- 1G ---trax2
sw1_stack-- 1G ---TRAX3
sw2_stack-- 1G ---TRAX3
sw1_stack-- 10G ---trax4
sw2_stack-- 10G ---trax4
```
- Added new PVE node trax4:
  - HP ProLiant DL360p Gen8 V2
  - Processors: 2 × Xeon E5-2650v2
  - RAM: 8 × 8 GB 1333 MHz DDR3 RDIMM (64 GB)
  - 2 × 240 GB SATA SSD (non-enterprise)
  - 2 × 8 TB 7200 rpm SAS Hitachi enterprise HDD
  - 4 × 1 Gbit Intel i350
  - 2 × 10 Gbit SFP+ QLogic
  - 70 cm deep

trax4 is going to be the new routing machine for eXO.

## 2020-2

```mermaid
graph TD
fxoln-sw1-- 10G ---sw1_stack
fxoln-sw2-- 10G ---sw2_stack
sw1_stack-- 1G ---trax1
sw2_stack-- 1G ---trax1
sw1_stack-- 1G ---trax2
sw2_stack-- 1G ---trax2
sw1_stack-- 1G ---TRAX3
sw2_stack-- 1G ---TRAX3
```
- We finally dropped the Mikrotik RB1100AHx2 router hardware.
- The upstream switches fxoln-sw1 and fxoln-sw2 are now Huawei core switches, connected with new transceivers.

## 2019-8

```mermaid
graph TD
fxoln-sw1-- 10G ---sw1_stack
fxoln-sw2-- 10G ---sw2_stack
sw1_stack-- 1G ---trax1
sw2_stack-- 1G ---trax1
sw1_stack-- 1G ---trax2
sw2_stack-- 1G ---trax2
sw1_stack-- 1G ---TRAX3
sw2_stack-- 1G ---TRAX3
sw1_stack-- 1G ----mikrotik-hw
sw2_stack-- 1G ----mikrotik-hw
```
- The D-Link DGS-3420-28TC switches were replaced by two rented, brand-new Netgear M4300-8X8F switches.
- The trax1 and trax2 Travla chassis were replaced with independent Supermicro chassis; since this is a new stable situation, all the components are listed again below.
- Took photos of the trax1, trax2 and trax3 hardware.

Common components for trax1, trax2 and trax3:

Common components for trax1 and trax2:

Components specific to trax3:

## 2019-4

```mermaid
graph TD
fxoln-sw1-- 1G ---sw1s["sw1_stack*"]
fxoln-sw1-- 1G ---sw2s["sw2_stack*"]
fxoln-sw2-- 1G ---sw1s
fxoln-sw2-- 1G ---sw2s
sw1s-- 1G ---mikrotik-hw
sw2s-- 1G ---mikrotik-hw
sw1s-- 1G ---trax1-trax2
sw1s-- 1G ---TRAX3
sw2s-- 1G ---trax1-trax2
sw2s-- 1G ---TRAX3
```

Those two D-Link DGS-3120-48TC switches were not working particularly well. They were replaced by other switches that FXOLN lent us temporarily: 2 × D-Link DGS-3420-28TC.

A temporary Mikrotik device was also introduced to help with QinQ management.
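
For context: QinQ (802.1ad) simply stacks a second, outer VLAN tag on top of the usual 802.1Q customer tag. The Mikrotik configuration itself was not recorded here; as an illustration of the idea only, this is how the same stacked tagging looks with Linux iproute2, with hypothetical interface names and VLAN IDs.

```sh
# Outer (service) tag, 802.1ad: the provider-side VLAN.
ip link add link eth0 name eth0.100 type vlan proto 802.1ad id 100
# Inner (customer) tag, 802.1q, stacked on top of the outer one.
ip link add link eth0.100 name eth0.100.10 type vlan proto 802.1q id 10
ip link set eth0.100 up
ip link set eth0.100.10 up
```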

## 2018-12

```mermaid
graph TD
fxoln-sw1-- 1G ---sw1_stack
fxoln-sw1-- 1G ---sw2_stack
fxoln-sw2-- 1G ---sw1_stack
fxoln-sw2-- 1G ---sw2_stack
sw1_stack-- 1G ---mikrotik-hw
sw2_stack-- 1G ---mikrotik-hw
sw1_stack-- 1G ---trax1-trax2
sw1_stack-- 1G ---TRAX3
sw2_stack-- 1G ---trax1-trax2
sw2_stack-- 1G ---TRAX3
```

Same hardware, plus two D-Link DGS-3120-48TC switches that FXOLN has lent us.

We moved the equipment to a new location, rack2 (small): positions 9-10 full (trax3 and trax1&2 respectively) and position 11 half (mikrotik), connected to ports 23 and 24 of the EdgeSwitch. trax3 is the only one connected to the STS (Static Transfer Switch). The rack measures 47.5 cm wide and 70 cm deep.

## 2018-2

```mermaid
graph TD
fxoln-sw1-- 1G ---mikrotik-hw
fxoln-sw2-- 1G ---mikrotik-hw
fxoln-sw1-- 1G ---trax1-trax2
fxoln-sw1-- 1G ---TRAX3
fxoln-sw2-- 1G ---trax1-trax2
fxoln-sw2-- 1G ---TRAX3
```

Replaced the trax3 server with new equipment.

### trax1 and trax2

trax1 and trax2 (Travla Asus Xeon, which is where the name comes from, although it no longer fits) are expected to host 8 good, redundant VMs this way, and up to 16. A good VM is considered to have 16 GB of disk capacity and 8 GB of RAM.

- Shared chassis: Travla C147 (manual), Mini-ITX for two motherboards, 2 × 250 W, 0.5 U
  - Misc. disk options:
    - 2.5'' HD (x2)
    - 3.5'' HD (x1)

Each node (2):

### trax3

trax3 differs from trax1 and trax2 in processor, disks and chassis:

### router

Mikrotik RB1100AHx2

## rest of 2017

```mermaid
graph TD
fxoln-sw1-- 1G ---mikrotik-hw
fxoln-sw2-- 1G ---mikrotik-hw
fxoln-sw1-- 1G ---TRAX1-TRAX2
fxoln-sw1-- 1G ---trax3-atom-server
fxoln-sw2-- 1G ---TRAX1-TRAX2
fxoln-sw2-- 1G ---trax3-atom-server
```

### trax1 and trax2

trax1 and trax2 (Travla Asus Xeon) are expected to host 8 good, redundant VMs this way, and up to 16. A good VM is considered to have 16 GB of disk capacity and 8 GB of RAM.

- Shared chassis: Travla C147 (check manuals directory), Mini-ITX for two motherboards, 2 × 250 W, 0.5 U
  - Misc. disk options:
    - 2.5'' HD (x2)
    - 3.5'' HD (x1)

Each node (2):

### trax3

### router

Mikrotik RB1100AHx2

## As of 2017-2-21

This is probably the hardware that had been in place since eXO started in 2010-2011.

```mermaid
graph TD
fxoln-sw1-- 1G ---mikrotik-hw
fxoln-sw2-- 1G ---mikrotik-hw
mikrotik-hw-- 1G ---atom-server

### router

Mikrotik RB1100AHx2

### server