MD Platform Hardware Development Progress

We have made great strides in hardware development over the past 6 months. The newest version may look very similar to our older versions, but it packs in a lot of small quality improvements, and compared to the oldest ones the improvements are in fact quite large.

Over the course of the past year, the following improvements have been made:

  • Firebreathers: new fan module packs
  • Relay board packaging
  • Power Monitoring
  • Ghosts In The Machine (SBC Control)
  • Fan control
  • Integrated Switches
  • Identified CPU fan types which will fail. It is not even a question of if; we know they will fail, and by default they are replaced with new ones.

The latest development is what we call "Ghosts". These are tiny, inexpensive single-board computers that are being installed and retrofitted to each and every board.

A lot of the nitty-gritty details are being shared in our Discord, so if you are curious, do join. There's a stream from the lab as the madman from Finland prototypes and builds these units: what it takes to develop hardware, the blood and the sweat (and the copious amounts of swear words).

Potential Maintenance Window

A maintenance window for a large number of MDs is being scheduled; we want to upgrade the oldest systems to the latest design. This is because the oldest systems don't even have remote reboot, let alone fan packs or integrated switches. We did what most successful businesses do: release an MVP first, then iterate a lot, making the platform better.
But with hardware, that requires physically implementing the changes.

Therefore, we are planning to upgrade the oldest systems to the new hardware platform and perform pre-emptive maintenance at the same time. This will necessitate some downtime.

We have not yet scheduled this, and it is still on the planning board whether we do it as one major project as a group effort, or do the maintenance one system at a time. Both approaches have merits. One large project gets everything done in one fell swoop, brings everything to the latest design with the least amount of total downtime and human effort, and would also allow us to upgrade the networking seamlessly. Going one by one would take more staff effort, be slower and could take years, but downtime would be limited to when normal maintenance would occur anyway (3+ units on the platform failed); however, it could take 10 years before the last platform gets maintained.

If we decide to go with the one big project, we will inform you upfront as well as we can. It would cause a downtime of ~4-5 hours for each cluster of 8.

One large project is the most likely path we will take. In the meantime, some of the oldest systems are being withdrawn from sales inventory as a pre-emptive move to mitigate customer-experienced downtime.

We are also planning to utilize the Ghosts for more than just a "USB to Ethernet" bridge in the future, but hardware development is slow going. We have found a means to potentially offset the SBC cost 100%, so the remaining question is only how to leverage the processing power and GPIO availability. Think programmatic fan PWM control, or collecting power usage stats (even node by node is possible!).
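
To make the GPIO idea a bit more concrete, here is a minimal, purely hypothetical sketch of temperature-based fan PWM control on an SBC. It assumes a Raspberry Pi-class board running Python with the RPi.GPIO library, a fan PWM input wired to GPIO pin 18, and the usual Linux sysfs thermal zone; none of these names reflect the actual Ghost hardware or software, they only illustrate the general idea.

    # Hypothetical sketch: temperature-driven fan PWM control on an SBC.
    # Assumes a Raspberry Pi-class board with the RPi.GPIO library; the
    # pin number and temperature thresholds are placeholders.
    import time
    import RPi.GPIO as GPIO

    FAN_PIN = 18        # hypothetical GPIO pin wired to the fan's PWM input
    PWM_FREQ_HZ = 100   # software PWM; a real 4-pin fan expects ~25 kHz from hardware PWM

    def read_cpu_temp_c():
        # Read the CPU temperature from the common Linux sysfs thermal zone
        # (reported in millidegrees Celsius).
        with open("/sys/class/thermal/thermal_zone0/temp") as f:
            return int(f.read().strip()) / 1000.0

    def duty_for_temp(temp_c):
        # Simple linear ramp: 30% duty below 40 C, 100% at 70 C and above.
        if temp_c <= 40.0:
            return 30.0
        if temp_c >= 70.0:
            return 100.0
        return 30.0 + (temp_c - 40.0) / 30.0 * 70.0

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(FAN_PIN, GPIO.OUT)
    fan = GPIO.PWM(FAN_PIN, PWM_FREQ_HZ)
    fan.start(30.0)                     # start at a quiet baseline speed

    try:
        while True:
            fan.ChangeDutyCycle(duty_for_temp(read_cpu_temp_c()))
            time.sleep(5)               # re-evaluate every few seconds
    except KeyboardInterrupt:
        fan.stop()
        GPIO.cleanup()

A production setup would likely drive the fan from a hardware PWM channel instead of software PWM, and the same GPIO/I2C access could be used to poll a power-monitoring chip for the per-node usage stats mentioned above; the loop here only shows the general shape of programmatic fan control.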



Monday, April 28, 2025
