Pulsed Media Announcements https://pulsedmedia.com/clients/announcements Latest announcements from Pulsed Media <![CDATA[Been missing our emails? We found and fixed why.]]> We discovered that a significant portion of our outgoing emails (invoices, ticket replies, etc.) were silently being dropped for an extended period. Our internal email kept working fine, which masked the problem from our side.

This has been fixed. All outgoing e-mail is delivering normally again.

We know some of you have missed important emails, such as invoices. We are sorry, this is our failure.

If you've been missing emails from us, open a ticket and we'll audit your account to check for any missed invoices, replies, or notifications. Full postmortem coming soon.

]]>
Wed, 18 Mar 2026 12:41:00 +0000 https://pulsedmedia.com/clients/announcements/657/been-missing-our-emails-we-found-and-fixed-why
<![CDATA[Väinämöinen is in semi-autonomous mode instead of full autonomous]]> Internal testing found a couple of structural issues which need some heavy-duty engineering before the autonomous system can be re-enabled.

It is a multi-fold challenge, and a recurrent one with the backend system we have been using.

To maintain quality at the highest of standards, we have to put it in semi-autonomous mode for a while, with human-in-the-loop reviews while we re-architect some core parts.

Full internal usage is still active, but we are processing tickets interactively for now.

]]>
Sun, 15 Mar 2026 05:20:00 +0000 https://pulsedmedia.com/clients/announcements/656/v-in-m-inen-is-in-semi-autonomous-mode-instead-of-full-autonomous
<![CDATA[Väinämöinen Support Response Time Variability, daily challenges]]> Support response median is 57 minutes, down from 1h 33 minutes a day ago.

We wrote just yesterday about the Memory System, and it is already better than it was then: a new deterministic, cheap fuzzy-search layer AND vastly improved performance on the first cheap layer. This shows up long term.

Every day a new challenge arises. Today it was GET TICKET API timeouts; the API timeouts had been set to 30 seconds without much research. We had added forced searching of similar tickets, i.e. the tool just returns similar past tickets without being asked. The search tooling, fetching, etc. caused the get-ticket request time to balloon.

Yesterday, it was noted that something had flipped back to sequential single-ticket processing. That ticket happened to fail, the backoff curve was flaky, and other tickets did not get processed.

The day before yesterday, it was trying to launch parallel jobs for multiple tickets, but they all took a flock() on a single global lock for a single ticket.

A week ago there were some hallucinated circuit breakers in there stopping all ticketing.

Originally the agent coded everything in bash; it was thousands of lines of code for a simple launch command like "handle ticket #483743". Codex rewrote and simplified it in PHP. But these complexities keep creeping in.

Please bear with us while we solve these. Yes, this is chaotic, but it is fast-paced development. Structure arises slowly but surely, and things adapt. This is evolution! (Say it in The 300, Spartan-style voice!)

Drop a ticket, and see for yourself how efficient it is. Maybe ask it to deliver some ice cream for your seedbox?
Make the funniest ticket, add a human request at the end linking this announcement, and we'll give you a little bit of credit if it makes the meatspace agents (humans) chuckle; we might even use it for marketing material. Join us on Discord for the live heartbeat and chuckles too.

]]>
Fri, 13 Mar 2026 08:46:00 +0000 https://pulsedmedia.com/clients/announcements/655/v-in-m-inen-support-response-time-variability-daily-challenges
<![CDATA[Please, Please Make Support Tickets! -- No Wait!! I Am Serious]]> We are starving for support tickets right now. Hungry, insatiable. Please send them! We are starving!

 

No, I am not joking. I ... kind of wish ... well, I do not even wish I was.

 

So with Väinämöinen, we are not getting much data right now to tune the internal systems, build the knowledge, etc., and there is a distinct underestimation of Väinämöinen. It is true I am personally wrangling more wisdom into it each day, but there are moments when the signal gets buried in the noise of construction -- the construction of Väinämöinen.

Over the past few days we built a new (fairly expensive) layer of memory into Väinämöinen: Hunches. These are the equivalent of gut instinct in humans --
flashbacks, quick memories, etc.

And most of the hit rate is on internal devs, instead of sysadmin or customer support -- the two most important axes to develop.

 

So please, put Väinämöinen through its paces. Try to ask it things, unfathomable things, even to manage things for you. Just send a support ticket in. We are lusting for the internal processing data from live action, which cannot be recreated from the immense number of past support tickets handled -- it has to be the live reasoning.

Ask even unreasonable things, like aligning the position of the moon favorably for you! I don't know, perhaps the winning lottery numbers for next week? Now I know: ask it to set up your home NAS or streaming box against one of our seedboxes or storage boxes!

The types of tickets you make determine the specific tasks we get better at.


Thanks!




PS. You can shoot a message on the Discord about how it did, or maybe leave a review on your favorite forum, Twitter, Reddit, or even Trustpilot.

]]>
Wed, 04 Mar 2026 14:16:00 +0000 https://pulsedmedia.com/clients/announcements/654/please-please-make-support-tickets-no-wait-i-am-serious
<![CDATA[Meet Väinämöinen: Your Autonomous Support Agent]]> Meet Väinämöinen, the legendary Finnish wizard who inspired Tolkien's Gandalf—now autonomously managing Pulsed Media's 400 servers, two datacenters, and customer support tickets in 35 minutes, 24/7. Discover how this groundbreaking AI transformed our service quality and ticket economics overnight.

For you this means faster and more thorough answers, and faster resolution of more complex issues, without burnout or holidays affecting quality. Väinämöinen checks your full history every time and decides what to do. No canned responses; he seems to even refuse to use the old predefined replies. He knows no one likes those, and he goes a mile beyond. Things we could never have done manually before are now simply solved.

Read the full article at Väinämöinen: Autonomous AI Sysadmin Transformed Support Costs with 91% Autonomy | Pulsed Media

 

]]>
Sat, 28 Feb 2026 11:27:00 +0000 https://pulsedmedia.com/clients/announcements/653/meet-v-in-m-inen-your-autonomous-support-agent
<![CDATA[Powering Forward: Massive Development Sprint + Status Update]]> Pulsed Media: Building Mode Engaged

We're aggressively upgrading Pulsed Media infrastructure and backend systems to deliver better reliability, deeper automation, and expanded capacity. Here's what's happening and what it means for you.

The TL;DR

Support responses are temporarily slower due to intensive platform upgrades; MD series fulfillment is manual for now as well. We are actively catching up. Your patience pays off in major infrastructure gains.

Important: Critical issues (outages, service interruptions) still receive immediate priority.

 

Why? We're Building Serious Infrastructure

Agentic Development Pipeline

We've deployed a near-closed-loop AI-assisted development pipeline on PMSS. Agents identify issues, create detailed reports, implement fixes, and verify results—all under human oversight. This isn't about pushing updates faster; it's about systematic quality improvement and catching problems before they affect you.

Real example: An AI agent completed a full production server dist-upgrade across 2 major Debian versions—including verification and fixing errors along the way—in 58 minutes. During that process, it identified a template bug but lacked GitHub access to report it. Human reviewed the logs, granted access, and Issue #137 was created. A separate agent then fixed the bug and verified the fix.

The PMSS repository now tracks 80+ issues identified through systematic codebase analysis. This represents technical debt being actively eliminated, not neglected problems.

Hardware: mPlate Platform Maturing

Custom power + sensor boards: Single PCB integrating power delivery, sensors, and management into one board. Eliminates 2+ hours of manual wiring per unit and removes wiring-related failure points entirely. This is in the specification/design phase, being finalized now.

Custom rail kits: First batch of 44 produced and in testing. Designed for 65-90cm rack depth flexibility. Nothing on the market matched our requirements at sensible prices, so we built our own.

mPlate V3: Just arrived from fabrication. Simplified, production-ready. Hardware iteration takes multiple revisions to get right - this one's dialed in.

Inventory Ready To Deploy

Petabytes of HDDs and hundreds of nodes already in inventory. Target: majority online by Q4/2026, with reduced lead times and better availability once deployment ramps up. We're also evaluating a major expansion order for late-year delivery.

Datacenter 2: Build Complete

Second datacenter structure is finished. Remaining work: cooling hookup, automation, and configuration. Ready to receive inventory. Once we begin filling it, expect 3-4 racks deployed rapidly—significantly expanding our total capacity.

 

What This Means For You

  • Short-term: Temporarily slower support response times. Your patience directly enables these infrastructure improvements.
  • Medium-term: Faster MD series fulfillment, increased stock availability, more predictable delivery timelines.
  • Long-term: More reliable infrastructure, systematically improved software quality, and substantially expanded capacity.

Pulsed Media's Position

We're investing heavily because we're in this for the long haul. 16 years in business, financially stable, and building infrastructure that positions us for the next decade. This sprint is one step in a larger strategy to remain the go-to provider for data sovereignty and storage hosting.

We're building the data sovereign future. Thanks for your patience.

]]>
Tue, 27 Jan 2026 07:22:00 +0000 https://pulsedmedia.com/clients/announcements/652/powering-forward-massive-development-sprint-status-update
<![CDATA[PMSS 2026-01-21 Released – Major Stability & Security Update Now Available!]]> PMSS 2026-01-21 Released – Major Stability & Security Update!

We're proud to announce the release of PMSS 2026-01-21, our largest update ever. After occasional maintenance throughout 2024 and early 2025, intensive development began in September 2025 — resulting in 622 commits touching 490+ files. This release brings substantial improvements to stability, security, documentation, and operational reliability.

Key Benefits for You and Your Users:

  • Seamless Debian Upgrades: Reliable and fully automated support for Debian 11 (Bullseye) and Debian 12 (Bookworm) upgrades from Debian 10.
  • Improved Docker Stability: Enhanced rootless Docker environment ensures better container uptime and fewer operational headaches.
  • Safer and More Reliable Updates: Redesigned updater mechanism with built-in protections, timeouts, and structured logging to prevent disruptions.
  • Enhanced Security Transparency: Two previously identified security vulnerabilities have been completely addressed, documented, and resolved.
  • Comprehensive Documentation: Extensive new documentation covering installation, updates, maintenance, recovery procedures, and architectural decisions.
  • Smooth Operational Experience: Improved overall reliability with preflight checks, bounded command execution, and fail-soft update philosophy.
  • Per-User Resource Limits: Cgroup guardrails with sensible memory floors/caps prevent runaway processes from affecting neighbors.
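As an illustration of how per-user caps like this can be expressed on a systemd host, here is a slice drop-in sketch; the path and values are hypothetical examples, not PMSS's actual configuration:

```ini
# Hypothetical example: /etc/systemd/system/user-.slice.d/50-memory.conf
# Applies to every user slice; values are illustrative, not PMSS defaults.
[Slice]
MemoryLow=256M
MemoryMax=2G
```

MemoryLow acts as the "sensible memory floor" (protected from reclaim under pressure), while MemoryMax is the hard cap that stops a runaway process from starving its neighbors.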

What's New in This Update:

Multi-Distro Support

  • Official support and smooth transition path for Debian 11 and Debian 12 users.
  • Continued legacy support for Debian 10 with security updates.
  • Native WireGuard kernel module support on Debian 12.

Installer & Updater Overhaul

  • Completely reworked install.sh — TTY detection means prompts work correctly even when piped via wget | bash.
  • Preflight checks catch problems early: disk space, privileges, distro compatibility.
  • Fail-soft updates: bounded execution prevents hangs, individual failures don't abort the whole run.
  • Automation-ready with --dry-run, --non-interactive, --scripts-only flags.
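A minimal sketch of the TTY-detection idea above (an assumed pattern, not the actual install.sh code): when the script arrives via `wget | bash`, stdin is the pipe, so prompts must fall back to `/dev/tty` to stay interactive.

```shell
#!/bin/sh
# Sketch only: decide where interactive prompts should read answers from.
prompt_source() {
    if [ -t 0 ]; then
        echo "stdin"       # run directly: stdin is a terminal
    elif [ -r /dev/tty ]; then
        echo "/dev/tty"    # piped (wget | bash): read answers from the tty
    else
        echo "none"        # no terminal at all: rely on --non-interactive defaults
    fi
}
prompt_source
```

The third branch is what makes flags like `--non-interactive` necessary: with no controlling terminal at all, the installer has nowhere left to ask.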

Enhanced Rootless Docker

  • Improved integration and management via systemd.
  • Automatic recovery via watchdog supervision reduces downtime for containerized applications.
  • fuse-overlayfs defaults on legacy kernels for broader compatibility.

Comprehensive Documentation

  • New guides: installation, updates, maintenance procedures, recovery steps.
  • 7 Architecture Decision Records (ADRs) documenting engineering decisions.
  • Full incident reports with root cause analysis and corrective actions.
  • WireGuard VPN setup and peer management documentation.

Proactive Security Measures

How to Upgrade Your Server

Release vs Testing: The release channel is the stable version, tested and recommended for production. The git/main channel tracks bleeding-edge development — useful for testing new features but may contain regressions.

Managed Seedbox Customers:

We progressively roll out updates to ensure reliability. If you'd like your server updated immediately, simply open a support ticket and request:

"Update PMSS to release 2026-01-21"

Self-Hosted & Dedicated Users:

Fresh Install:

wget -qO- https://github.com/MagnaCapax/PMSS/raw/main/install.sh | bash

Update to Stable Release:

/scripts/update.php release

Update to Testing/Bleeding Edge:

/scripts/update.php git/main

Dist-Upgrade (Debian 10→11 or 11→12):

/scripts/update.php --dist-upgrade=11 --tty  # or =12

View Complete Release Notes | Review All 622 Commits

We remain committed to providing the most reliable and secure seedbox experience. Thank you for trusting Pulsed Media.

The Pulsed Media Team

]]>
Thu, 22 Jan 2026 02:13:00 +0000 https://pulsedmedia.com/clients/announcements/651/pmss-2026-01-21-released-major-stability-security-update-now-available
<![CDATA[PMSS: Rare 14 Year Old Bug Found and Resolved on Single Node. Catastrophic rm -rf /home]]> A Bug 14 Years In The Making

This bug had existed since late 2011 at the very least, and when it happened, its impact on a single node was a catastrophic rm -rf /home. The good news: it is already patched, already in distribution, and it required many things to line up to actually happen. Only one server was impacted.

IMPACT: Single seedbox server; only users on that server were impacted. Complete deletion of /home user directories.

The bug is now fully fixed in PMSS, and the exact failure path that allowed this to happen has been removed.

The story goes a bit like this

Users opened tickets about 502 Bad Gateway errors. Typical -- lighttpd just crashed? But no, this was persistent, and a few more tickets came in from the same node.
Logging in, /home showed almost no data. Directories gone. And it actually was mounted; this was not a "for some reason not mounted" issue either. The data drives were intact too.

Here began a frantic hunt taking most of the day, presuming the worst possible: severe security issues. The very first instinct -- security breach? Node rooted? Absolutely zero indication of any kind of external compromise was found in any of the logs (auth, ssh, etc.). Nothing. Not a beep. Just crickets. Crickets in this case being the typical botnets trying to brute-force entries; that's it. Nothing unfamiliar in login IPs or users, no suspicious commands, no user deletions.

Baffled how this could happen, grepping logs, etc., we found there had been a system update in progress around the time this happened, and a weird flood of errors in the logs, but nothing obviously related. Some script was failing because a require_once() for a library failed. Okay, noted, but that still didn't explain /home.

Wait, Wait! Why is there a .quota file in /home?

That file is for user accounts, per-user, and should never exist directly under /home. File is empty but exists, should not even exist. Makes no sense.

cron/updateQuotas.php takes the user list and iterates it. It has rm -rf /home/{$thisUser}/.quota; quota -u {$thisUser}.... Okay, nothing to worry about. Completely harmless, as it has always been.

It takes data from our listUsers.php script, so all fine, right? NO; listUsers.php had started throwing PHP fatals with stack traces. Isolating the strings and formulating the resulting commands, there it was: rm -rf /home/  thrown in ..... /.quota --- that's it! Malformed input.

Taking the exact error line from the logs, we dropped it into $thisUser and reconstructed the shell command. The result included the smoking gun:

rm -rf /home/ thrown in /scripts/lib/user/userFilesystem.php on line 95/.quota

In a shell, that turns into rm -rf /home/ ... That's the moment it became exactly clear what had happened.

We had 100% trust in our own scripts to return valid data and handle all the sanitization, BUT this script had recently changed and something caused it to error on this server: an error about a missing file. That error string would result in rm -rf /home/ and also remove the file named in the error. Our updateQuotas did not validate input from a trusted source -- why would it? Well, because it might throw an error one day in the future; and technically that is not internal code (a library call we could use) but external (a script executed through the shell): it returns a string which we minimally parse, not a structured data array (like an internal library call would).
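The failure mode can be reproduced safely in a few lines of shell. This is a hypothetical reconstruction for illustration -- the username values and the validation at the end are made up, not the actual PMSS code:

```shell
#!/bin/sh
# updateQuotas.php built a command like: rm -rf /home/{$thisUser}/.quota
# trusting $thisUser to be a plain username. Simulate with echo (safe).

thisUser="alice"                 # normal case: harmless for 14 years
echo rm -rf /home/$thisUser/.quota

# Failure case: listUsers.php emitted a PHP fatal-error line instead of a
# username. Unquoted expansion word-splits on the spaces, so the very
# first argument handed to rm becomes the bare "/home/":
thisUser=" thrown in /scripts/lib/user/userFilesystem.php on line 95"
echo rm -rf /home/$thisUser/.quota

# One possible hardening: whitelist-validate before interpolating.
case "$thisUser" in
    *[!A-Za-z0-9_-]*) echo "refusing malformed username" >&2 ;;
    *)                echo rm -rf "/home/$thisUser/.quota" ;;
esac
```

The second echo prints the same shape of command as the smoking gun above: `/home/` alone as the first argument, with the rest of the error message trailing as extra arguments.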

The full incident report is in our repo, if you want to read it:  PMSS/docs/incidents/2025-12-08-home-wipe-updateQuotas-listUsers.md

Hardened all the inputs

We hardened all the inputs, and listUsers.php itself,
with multiple mitigations, including hardening the update process.

Probabilities

The probability of this happening was very slim, as evidenced by 14 years of existence before it happened the first time. The conditions required to trigger it were insanely specific, and the failure path has now been removed. We do not expect this to recur.

Patch Implemented

We have patched this, and hardened similar issues in many places. Many of the oldest servers have already received the update.
We might harden our stance on rolling updates because of this, or just update the affected pieces with a bespoke hotfix on servers with older versions. A dozen or two nodes have already been updated.

 

What do you need to do?

Nothing. If you were affected, we will be contacting you shortly.
If you were not affected, your service is working normally with all your data in place.

]]>
Thu, 11 Dec 2025 14:45:00 +0000 https://pulsedmedia.com/clients/announcements/650/pmss-rare-14-year-old-bug-found-and-resolved-on-single-node-catastrophic-rm-rf-home
<![CDATA[PMSS Regression in resource limits in latest versions. Is your memory set to 500MB? [EDIT 1: Solved]]]> There is a regression where resource limits do not get fully applied, the memory limits being the most obvious case.
This means your memory limit is defaulted to 500MB.

You can see this on the welcome page or info tab.

AFFECTED UNITS: Those on newest PMSS stack, with system Wireguard etc.

We are working on the fix, and will begin rollout immediately when it is done.

 

---

The other regression is that Docker shows as disabled on the info page on all systems, even when it works. Overlayfs had some issues on Debian 11 systems as well.

We are fixing all of these one by one, but it takes time, and rolling the patches out takes time as well.
This is exactly why we do rolling releases: to catch edge cases.

 

----

EDIT 1: The issue has been resolved, but rolling it out will take time. Just open a ticket if you have this issue and we'll put the latest updates on your server as soon as we can.

]]>
Wed, 03 Dec 2025 06:46:00 +0000 https://pulsedmedia.com/clients/announcements/649/pmss-regression-in-resource-limits-in-latest-versions-is-your-memory-set-to-500mb-edit-1-solved
<![CDATA[Seedbox & Storage Box Software Updates; A LOT Has Changed In Short Time!]]> Pulsed Media Software Stack (PMSS) Updates 11/2025

A LOT has changed in a very short timeframe with PMSS.

We prioritize stability, robustness, reliability above everything, as always. 

  • WireGuard included by default
  • SysAdmin + End User (Tenant) Docs!  -- Not just the wiki anymore
  • New Media Install Stack!
  • Info / Stats Page Updated! New Stats and Info. Better looking.
  • Installer can be run non-interactively now
  • Debian 11 support is considered now stable and mature
  • Debian 12 support is highly experimental, but just might work
  • SystemTest: Check dependencies and System Status
  • Storage Benchmark / Storage Health Snapshot for sysadmins
  • Update Process Overhaul: More Robust, Faster, Better Observability and Logging
  • Update Dist-Upgrade Implemented
  • Package Install Overhaul
  • Cgroups V2; Tested, but for Docker Rootless we are still using Cgroups v1
  • Automatic and agentic CI testing and bug fixing.

Full changelog available as usual at Commits · MagnaCapax/PMSS

You can request your server to be updated via a support ticket. Otherwise normal rolling release over time.

 

PMSS On your Own Dedicated Server

You can always maintain and run PMSS yourself on your own dedicated server anywhere. For your own private use, or even to offer seedbox services to anyone. Install a minimal Debian 11 base system first.

 

To install:

wget -q https://github.com/MagnaCapax/PMSS/raw/main/install.sh; bash install.sh

Currently we recommend updating with:

wget -qO /scripts/update.php https://raw.githubusercontent.com/MagnaCapax/PMSS/main/scripts/update.php;  chmod u+x /scripts/update.php; /scripts/update.php git/main;

 

 

Current Codebase Status

Snapshot

Scripts PHP:           13883
Scripts Bash:           1755
Scripts other:          4542
Tests:                  5141
Top-level Bash:          506
Root docs:               332
Docs ADR:                101
Docs other:             1551
Automation:              176
Config etc:             4364
Root config:             759

Accounted total:       33110
Tracked total:         33110


Advisory complexity (Bash only)

Bash files analyzed: 29
Aggregate complexity: 521
scripts/testing/quick-php73.sh           code=8   complex=5   density=63/100loc
scripts/testing/check-tools.sh           code=10  complex=6   density=60/100loc
scripts/testing/short-open-tag-lint.sh   code=23  complex=9   density=39/100loc
scripts/util/setRtorrentUploadRate.sh    code=19  complex=7   density=37/100loc
scripts/cli/fix-exec-bits.sh             code=14  complex=5   density=36/100loc
scripts/cli/ci-logs.sh                   code=56  complex=20  density=36/100loc
scripts/testing/phpstan.sh               code=23  complex=8   density=35/100loc
scripts/testing/docblock-lint.sh         code=80  complex=27  density=34/100loc

Advisory complexity (PHP heuristic)

PHP files analyzed: 230
Aggregate complexity: 3570
scripts/util/storageBenchmark.php        code=127 complex=162 density=128/100loc
scripts/lib/update/apps/filebot.php      code=11  complex=7   density=64/100loc
scripts/cron/cgroupRootCheck.php         code=22  complex=14  density=64/100loc
scripts/util/setupLetsEncrypt.php        code=30  complex=15  density=50/100loc
scripts/lib/update/apps/openvpn.php      code=88  complex=43  density=49/100loc
scripts/lib/networkInfo.php              code=31  complex=15  density=48/100loc
scripts/terminateUser.php                code=80  complex=35  density=44/100loc
scripts/lib/user/integrations.php        code=9   complex=4   density=44/100loc
scripts/lib/user/helpers.php             code=9   complex=4   density=44/100loc
scripts/lib/update/services/security.php code=32  complex=14  density=44/100loc

 

Docker Status: Fixed

Having issues with Docker Rootless not running? It was because of systemd changes. We found the solution, but it has to be applied manually on a server-by-server basis, changing kernel parameters and rebooting.

So let us know if you have issues, we will implement the changes.
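For context, the changelog above notes rootless Docker still runs on cgroups v1; one common way to force that on a systemd distribution is a kernel boot parameter, which matches the "kernel parameters and a reboot" described. This is an illustrative example only -- the actual change applied per server may differ:

```shell
# Illustrative /etc/default/grub fragment: boot the host back onto the
# legacy cgroup v1 hierarchy. Run update-grub and reboot afterwards.
GRUB_CMDLINE_LINUX_DEFAULT="quiet systemd.unified_cgroup_hierarchy=0"
```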

]]>
Sat, 22 Nov 2025 08:26:00 +0000 https://pulsedmedia.com/clients/announcements/648/seedbox-storage-box-software-updates-a-lot-has-changed-in-short-time
<![CDATA[Media Package Update! -- Jellyfin, Sonarr, Radarr, Prowlarr]]> We have updated the old arr_installation mentioned at prior in our announcements, "All-in-One *ARR + Jellyfin Script".

You can find the new one at: PMSS/etc/skel/install-media-stack.sh at main · MagnaCapax/PMSS
And the commit message and details at install-media-stack.sh to replace old arr_install.txt · MagnaCapax/PMSS@c313175

We also provide globally installed Sonarr and Radarr by default, found at /opt on all servers; simply type arrinfo in the CLI.

 

To install, if you have an old account and no update on your server yet:

wget -q https://raw.githubusercontent.com/MagnaCapax/PMSS/refs/heads/main/etc/skel/install-media-stack.sh; bash install-media-stack.sh

If the file already exists, simply execute it in the CLI.

WARNING: You are still responsible for maintaining and operating this yourself; it is a courtesy and a starting point, not a managed solution.

That being said, if you hit any issues please open an issue at GitHub and we'll do our best to check it out.

 

]]>
Tue, 11 Nov 2025 10:10:00 +0000 https://pulsedmedia.com/clients/announcements/647/media-package-update-jellyfin-sonarr-radarr-prowlarr
<![CDATA[Quota full but not using all space? Lots of small files -> INODE Quota Reached.]]> We’ve seen a rise in users hitting their file quota (inode limit) before reaching their full disk quota. This is expected behavior for workloads with millions of small files—common with Plex metadata, rclone VFS cache, *arr misconfigurations, AI datasets, comic/ROM/ebook libraries, etc.

Your plan includes two quotas:

 - Disk space (GiB/TiB)
 - File count (inodes = number of files)

Example: with a ~2.5 TiB plan and ~1.6M file limit, you'd need to average ~1.6 MiB per file to fully utilize the space. If your average file is 0.5 MiB, you'll hit the file cap at ~⅓ of capacity.
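The arithmetic can be sanity-checked from the shell (plan figures taken from the example above):

```shell
# Break-even average file size for a 2.5 TiB / 1.6M-file plan, and the
# fraction of disk used when the inode cap hits at 0.5 MiB average files.
awk 'BEGIN {
    avg_mib = 2.5 * 1024 * 1024 / 1600000          # TiB -> MiB, per file
    printf "break-even average: %.2f MiB/file\n", avg_mib
    printf "at 0.5 MiB avg, cap hits at %.0f%% of disk\n", 0.5 / avg_mib * 100
}'
```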

 

What You Can Do

 - Plex/Jellyfin: disable preview thumbnails, intro detection, lyric/metadata agents
 - rclone: increase --vfs-cache-chunk-size to 64–128 MiB, clear old cache
 - *arr stack: use hardlinks, not copies
 - Tiny file libraries:  pack into squashfs/.tar to reduce file count drastically
 - Need more inodes? For now you need a disk quota upgrade (either buy Extra or upgrade your plan to a bigger one). We are evaluating higher inode caps on SSD/NVMe-backed plans, since they handle metadata workloads better. No ETA yet.
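As a concrete illustration of the packing tip above (paths here are made up for the demo), many small files collapse into a single archive inode:

```shell
#!/bin/sh
# Demo: a directory plus three tiny files (4 inodes) becomes one .tar (1 inode).
tmp=$(mktemp -d)
mkdir "$tmp/library"
for i in 1 2 3; do
    echo "page $i" > "$tmp/library/page$i.txt"
done

tar -cf "$tmp/library.tar" -C "$tmp" library   # pack the whole directory
rm -rf "$tmp/library"                          # reclaim the small-file inodes

tar -tf "$tmp/library.tar"                     # contents still listable
rm -rf "$tmp"
```

squashfs works the same way but additionally compresses and can be mounted read-only in place, so the contents stay browsable without unpacking.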

Shell Commands (SSH/CLI) to check your usage:

Run this to check your average file size:

find ~ -xdev -type f -printf '%s\n' | awk '{c++;s+=$1} END{printf "Files: %d | Avg: %.2f KiB\n",c,s/1024/c}'

To check quota:

quota -s

To find inode usage by directory:

du --inodes -h --max-depth=3 ~ | sort -hr | head -40

To find directories with very high file counts:

find ~ -xdev -type f -printf '%h\n' 2>/dev/null | sort | uniq -c | sort -n | tail -20

 

Why Limits Exist

This file limit exists to protect system performance and stability. Millions of tiny files cause:
 - Metadata IOPS spikes: constant reads/writes to file tables
 - RAM bloat: Linux caches millions of inodes and dentries
 - Disk churn: especially painful on RAID5; small metadata writes = read-modify-write cycles w/ heavy write amplification

If we removed inode limits, a single user could degrade performance for everyone. We keep the platform lean and fast by enforcing sane, fair limits.

 

]]>
Wed, 03 Sep 2025 11:54:00 +0000 https://pulsedmedia.com/clients/announcements/646/quota-full-but-not-using-all-space-lots-of-small-files-inode-quota-reached
<![CDATA[MD Platform Hardware Development Progress and Potential Maintenance Window]]> MD Platform Hardware Development Progress

We have made great strides in hardware development over the past 6 months. The newest version might look very similar to our older versions, but it packs in a lot of tiny quality improvements -- and over the oldest ones, quite large improvements in fact.

Over the course of the past year, the following improvements have been made:

  • Firebreathers; New fan module packs
  • Relay board packaging
  • Power Monitoring
  • Ghosts In The Machine (SBC Control)
  • Fan control
  • Integrated Switches
  • Identified CPU fan types which will fail. Not even a question of if; we know they will fail, and they are replaced with new ones by default.

and the latest development is what we call "Ghosts". These are tiny single-board computers, inexpensive, to be installed and retrofitted to each and every board.

A lot of the nitty-gritty details are being shared in our Discord, so if you are curious do join. There's a stream from the lab as the madman from Finland prototypes and builds these units: what it takes to develop hardware, the blood and sweat (and the copious amounts of swear words).

Potential Maintenance Window

An upcoming maintenance window for a large number of MDs is being scheduled; we want to upgrade the oldest systems to the latest design. This is because the oldest systems don't even have remote reboot, let alone fan packs or integrated switches. We did what most successful businesses do: release the MVP first, then iterate a lot, making the platform better.
But with hardware, that requires physically implementing the changes.

Therefore, we are planning to upgrade the oldest systems to new hardware platform and perform pre-emptive maintenance at the same time. It will necessitate some downtime.

We have not yet scheduled this, and it is on the planning board whether we do it as one major group-effort project, or do the maintenance one by one. Both methods have merits. One large project gets everything done in one fell swoop, onto the latest design, with the least total downtime and human effort. One by one would take more staff effort, be slower, and could take years, but downtime would be limited to when normal maintenance would occur anyway (3+ units on the platform failed) -- though it could take 10 years before the last platform is maintained. One large project would also allow us to upgrade the networking seamlessly.

We will inform you upfront as well as we can if we decide to go with the one big project. It would cause ~4-5 hours of downtime for each cluster of 8.

One large project is the most likely path we will take with this. In the meantime, some of the oldest systems are being withdrawn from sales inventory as a pre-emptive move to mitigate customer-experienced downtime.

We are also planning to utilize the Ghosts beyond just a "USB to Ethernet" bridge in the future, but hardware development is slow progress. We found a means to potentially offset the SBC cost 100%, and then the question is only how to leverage the processing power and GPIO availability. Think programmatic fan PWM control, or collecting power usage stats (even node by node is possible!).

]]>
Mon, 28 Apr 2025 13:23:00 +0000 https://pulsedmedia.com/clients/announcements/645/md-platform-hardware-development-progress-and-potential-maintenance-window
<![CDATA[Helsinki DC Electrical Maintenance: Over but work remains]]> Several legacy systems gave up the ghost during this maintenance -- surprisingly many things.

Some servers just need a manual filesystem check.

We are working on it all, but this will take some time.

 

]]>
Thu, 10 Apr 2025 09:27:00 +0000 https://pulsedmedia.com/clients/announcements/644/helsinki-dc-electrical-maintenance-over-but-work-remains
<![CDATA[Helsinki DC Electrical Maintenance 4.9.2025 23:00 - 03:00]]> There will be electrical maintenance in the Helsinki DC 4.9.2025 23:00 to 03:00.

This is periodically required transformer maintenance.

This will cause servers to be rebooted, potentially twice, and a subset of servers might remain down for the duration.

 

]]>
Fri, 04 Apr 2025 07:31:00 +0000 https://pulsedmedia.com/clients/announcements/643/helsinki-dc-electrical-maintenance-4-9-2025-23-00-03-00