<![CDATA[Pulsed Media]]> https://pulsedmedia.com/clients/index.php/announcements <![CDATA[Temporary DNS issue fixed.]]> https://pulsedmedia.com/clients/index.php/announcements/639 https://pulsedmedia.com/clients/index.php/announcements/639 Mon, 07 Oct 2024 12:21:00 +0000 There was a temporary DNS issue where our master nameserver was not reachable by the global cluster.

This has been fixed now, and services are resolving again.

However, some users might still see caching issues, and you might need to force a refresh for quicker resolution. Some ISPs also ignore DNS zone TTL settings / the RFC and cache records longer than they are supposed to.
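For illustration, the TTL rule those misbehaving caches ignore can be sketched as follows (a minimal sketch; the record value and TTL below are hypothetical, not our actual zone data):

```python
import time

class CachedRecord:
    """A resolver cache entry that honors the zone's TTL, per RFC 1035."""
    def __init__(self, value, ttl_seconds, now=None):
        self.value = value
        self.expires_at = (now if now is not None else time.time()) + ttl_seconds

    def is_fresh(self, now=None):
        # A compliant resolver must re-query once the TTL elapses;
        # misbehaving caches keep serving the stale value past this point.
        return (now if now is not None else time.time()) < self.expires_at

# Hypothetical record published with a 300-second TTL:
rec = CachedRecord("203.0.113.10", ttl_seconds=300, now=1000.0)
print(rec.is_fresh(now=1200.0))  # True: still within TTL
print(rec.is_fresh(now=1400.0))  # False: must be re-resolved
```

A cache that returns the old value after `expires_at` is exactly the behavior described above, and forcing a refresh simply bypasses that stale entry.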

 

Sorry for the inconvenience; steps have been planned to prevent this from happening in the future.

]]>
<![CDATA[Electrical Maintenance Finally Happening! 28.8 23:30 through 29.8 06:00 [COMPLETED]]]> https://pulsedmedia.com/clients/index.php/announcements/638 https://pulsedmedia.com/clients/index.php/announcements/638 Tue, 27 Aug 2024 11:13:00 +0000 Electrical Maintenance; 28.8 23:30 to 29.8 06:00 Helsinki Time

The electrical maintenance is finally happening tomorrow.
There will be at least 2 full outages, and partial services might be down for a prolonged period of time.

See old announcement: https://pulsedmedia.com/clients/index.php/announcements/621/-Breaking-News-Electrical-Maintenance-Postponed-Again...-.html

We are sorry for the inconvenience this causes.

UPDATE 03:24: Maintenance has been completed and we are checking that the last few nodes start normally.

 

]]>
<![CDATA[MD Platform Development Update 04/2024: Dual Drive, Active cooling Package, Power Consumption]]> https://pulsedmedia.com/clients/index.php/announcements/637 https://pulsedmedia.com/clients/index.php/announcements/637 Mon, 15 Apr 2024 10:50:00 +0000 MD Platform Development Update 04/2024

It's been a couple of months since our last MD development update. We've made some progress during this time!

Active Cooling Package / Module

We made the first active cooling module for a cluster. This has not been needed before, but in anticipation of higher power consumption we wanted to prototype one already.
We immediately found ways to make it better, but it already performs fully as expected. No long-term data yet; we'll need a little more time for that.

Space restrictions make it tough to build these mechanically strong, but we've achieved that. Some fine tuning and assembly-tool building remain to be done. In the new datacenter these will be completely obsolete, but in traditional datacenters they are a good extra safety factor.

Dynamic Pricing

We've made enhancements here. Pricing pressure is now applied by individual components and capacities as well.
We've fine-tuned it for more stability while still allowing faster swings in periods of high demand. It also collects and displays some extra pricing data for admin observation; this data allows us to better choose what kinds of nodes to build.

Dual NVMe Models

Dual NVMe models were introduced. These are important for those looking for a little added redundancy or performance.

Power Meter Data

This has been enlightening and has shown some potential future development paths to push efficiency even further; there's low-hanging fruit of a few % additional efficiency to be gained here. Every % at these scales becomes important: just 1% better efficiency can mean an additional 100 servers.

We estimate there could be as much as 8% additional efficiency to be gained by working on more efficient power delivery. Experiments are being planned, but development will take time -- the planned development path involves rack-scale changes. Ironically, this is because our system is already so efficient, which leads to inefficiency in power delivery. This wouldn't have been easy to notice without the power meters.
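The "1% = 100 servers" intuition is simple to check with a back-of-the-envelope calculation (the fleet size and per-node draw below are hypothetical round numbers chosen for illustration, not our actual figures):

```python
# Hypothetical fleet: 10,000 nodes averaging 100 W each.
nodes = 10_000
watts_per_node = 100
total_watts = nodes * watts_per_node       # 1,000,000 W = 1 MW

# A 1% efficiency gain frees 1% of that power budget...
freed_watts = total_watts * 0.01           # 10,000 W

# ...which powers this many additional nodes within the same envelope:
extra_nodes = freed_watts / watts_per_node
print(extra_nodes)  # 100.0
```

The same arithmetic scales linearly: an 8% gain on this hypothetical fleet would free room for roughly 800 more nodes.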

We also need more data before pursuing that development path.

Better Networking, Density and Cost improvements via Networking

We started exploring one of the development paths for networking, and found we can both increase density and lower TCO significantly, directly and indirectly. We've done some mockups and are checking out vendors. We expect the first units to enter production during summer.

This path also opens the door for 2.5Gbps units.

Testing units for 6x2.5" in hand; looks like a good match

We have some testing units for 6x2.5" setups in hand, and a quick mockup suggests integration will be rather trivial. We might have some units in production by the end of Q3/24.

Automation, control panel and distros

Internally we have a level of automation already working, but it needs a lot of polish and only works for single-drive setups so far. It's mostly just software, plus hours upon hours of testing and qualification. The good news is that all it seems to require is hard work, and we can start rolling out the capabilities.

It also completely changes the process of bringing nodes into production, which causes its own issues and delays.

If you want to beta test and have a single-drive model, contact support. There will be caveats, and you will need to have prepaid your server for at least 6 months due to the manual work entailed. For now, on single-drive nodes we can change the distro for you (completely unoptimized!); later (a few weeks from now) we can start providing the self-service control panel as well. We'll only accept a few beta testers, and feedback is expected.

The standard rollout will start by moving one MD model at a time to the automation system.

Tooling

We again found tooling updates and enhancements, making things faster and easier to build. Manufacturing is difficult, and we expect to keep finding efficiencies the more of these we build. On a small scale you'd never find these efficiencies, nor would they probably be worth the effort.

Every tooling update and enhancement has its own ROI, in terms of money and time spent.

 

]]>
<![CDATA[VM Seedbox Performance and Reliability: Comprehensive Stability and Speed Enhancements Achieved]]> https://pulsedmedia.com/clients/index.php/announcements/636 https://pulsedmedia.com/clients/index.php/announcements/636 Thu, 07 Mar 2024 09:27:00 +0000 Pulsed Media VM Seedbox Performance And Stability Update

Revolutionized VM Seedbox Performance: Comprehensive Stability and Speed Enhancements Achieved

We are excited to announce a major breakthrough in our new VM seedbox services. After extensive testing, we've successfully eliminated all instances of crashes. This achievement marks a significant improvement in the stability and performance of our VM-based seedboxes. Initially targeting the most problematic nodes, we've implemented systematic changes across all hosts, enhancing stability under high I/O loads. Our automated script streamlines this process, ensuring all participating guests benefit without the need for manual intervention.

Detailed Issue Analysis and Effective Solutions

The root of the stability issues traced back to an old problem with pre-emptive kernels interrupting I/O processes, exacerbated by default Proxmox kernel settings not optimized for heavy I/O loads. This problem became apparent only under extreme conditions, involving high I/O demands across multiple VMs, compounded by shared SSD caching. With targeted I/O setting adjustments, we've eradicated these stability issues.

A few I/O-related setting changes later, these issues are gone.

Performance Positively Impacted

Subsequent performance analysis revealed opportunities for further enhancements. Preliminary tests on select nodes have shown promising results, with some instances experiencing up to 3x I/O performance improvements. While these gains vary, they represent a significant step forward in our ongoing quest for excellence in Seedbox services.

Buried in the statistics, some other related host I/O settings have also been changed, which has shown some performance improvement. When these were originally implemented in a hurry during the worst of the energy price crisis, schedulers and read-aheads were heavily stacked on top of each other; each scheduler adds latency. These have now been fixed at the host level, and the fixes are slowly rolling out to the guest level too.
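The scheduler stacking can be checked per block device by reading the kernel's scheduler listing. The snippet below parses that listing (a sketch only; on guests whose host already schedules I/O, `none` is the commonly recommended elevator, but verify against your own setup rather than taking this as our exact configuration):

```python
def active_scheduler(listing: str) -> str:
    """Parse the contents of /sys/block/<dev>/queue/scheduler.

    The kernel marks the active elevator with brackets,
    e.g. "[mq-deadline] kyber none" -> "mq-deadline".
    """
    for token in listing.split():
        if token.startswith("[") and token.endswith("]"):
            return token[1:-1]
    raise ValueError("no active scheduler marked")

# On a real system you would read the sysfs file, e.g.:
#   with open("/sys/block/vda/queue/scheduler") as f:
#       listing = f.read()
# Inside a VM whose host already runs an elevator, stacking a second
# one adds latency on every request; "none" avoids the double queueing.
print(active_scheduler("[mq-deadline] kyber none"))  # mq-deadline
```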

Understanding the Initial Oversight

Our initial reliance on synthetic testing, while thorough, failed to replicate the unpredictable nature of real-world usage. Despite positive aggregate performance data, the nuanced complexities of simultaneous VM operations and diverse request patterns eluded our tests. Synthetic testing can only do so much. Real-world seedbox usage is chaotic, very chaotic, and that natural chaos was missing: fluctuating queue depths, varying request sizes, all guests heavily active at once, and so on.

Further, upon inspecting the total statistics, everything looked not just fine but great. Total absolute performance increased by a statistically significant margin, so much in fact that it was obvious to the human eye from plain bandwidth utilization graphs. So the aggregate data showed a performance improvement.

The instability was impossible to test for. It's like a loose electrical connection in your car: constantly annoying and causing you issues, but you don't know what it is, and the moment you take it to a garage, the issue goes away. It was impossible to reproduce, and there was absolutely no data whatsoever. All we had to go on for diagnosis was instinct and gut feeling. That's why it takes months and months to hunt something like this down.

We're now more equipped than ever to deliver an unparalleled VM seedbox experience, characterized by unwavering stability and enhanced performance. This milestone is a testament to our commitment to continuous improvement and customer satisfaction.

 

 

]]>
<![CDATA[Support Email under attack, at least 660 000 pending emails. Read if you tried to open/reply to ticket in the past couple days (RESOLVED)]]> https://pulsedmedia.com/clients/index.php/announcements/635 https://pulsedmedia.com/clients/index.php/announcements/635 Wed, 21 Feb 2024 08:59:00 +0000 These are bounces and mostly originate from 2 domains. It's a bounce attack via email servers which do not check SPF or DKIM at all, and send bounces regardless of whether the origin is acceptable.
All of them are "Undelivered Mail Returned to Sender" or similar, from a select few domains.
We have no customers in those domains, and therefore have never sent email to them.

This took multiple days to notice because no errors were generated and some tickets still got imported. It was only noticed after a longer period when no tickets at all were being imported and an error was generated because the import had not finished fully in a while.

We are still investigating and working on this. It's rather slow, because loading an inbox of that size is... well, slow.

If you opened a support ticket via e-mail in the past few days, please log in to the client portal and ensure the ticket exists, or that you got a reply confirming the ticket was opened.

 

UPDATE: RESOLVED

If you tried to open a ticket OR reply to a ticket by emailing support, please ensure the ticket was actually opened. No email client could function with an inbox that size anymore, so sadly it had to be 100% cleared.
Some messages still got imported, some did not. It was impossible to tell whether all legitimate replies / ticket openings were imported.

 

]]>
<![CDATA[Accidental emailing to _everyone_, even past customers. We Are Sorry!]]> https://pulsedmedia.com/clients/index.php/announcements/634 https://pulsedmedia.com/clients/index.php/announcements/634 Sat, 17 Feb 2024 17:52:00 +0000 We were sending an email to users of a particular server earlier today; that server needs some human attention.

This email was meant to be sent only to that specific user group.

We use a 3rd party billing system, probably the most common one among hosting companies. We recently updated from a very old version to the newest version. It turns out there's a plethora of new bugs and usability gotchas in old features; this time it was sending an e-mail meant for a couple dozen users to everyone.


So the system defaulted to sending the email to everyone and anybody, even though it was meant for a specific product, active services only, and a specific domain.
You have to go looking to see the email sending progress and the recipient count. In the future we will recheck the recipient numbers on each mailing. This either slows down everyday work significantly or means fewer emails sent to users of specific servers as a group. We will see.

 

We are really sorry for having pestered so many users with something that does not concern them in the least.

]]>
<![CDATA[New MD Series models; Including dual NVMe and i5-6500t/4TB NVMe model]]> https://pulsedmedia.com/clients/index.php/announcements/633 https://pulsedmedia.com/clients/index.php/announcements/633 Sat, 17 Feb 2024 17:43:00 +0000 New MD series models are online, including 3 different models with Dual NVMe options.

2xNVMe models default to RAID1 but can be changed to RAID0.
All of them have an i5-8500T CPU, 32 to 64GB of RAM, and 2x1TB or 2x2TB NVMe drives for now.

We are now at 24 different models.

Check the new models out at https://pulsedmedia.com/minidedi-dedicated-servers-finland.php

 

]]>
<![CDATA[MD Platform Development Update 01/2024]]> https://pulsedmedia.com/clients/index.php/announcements/632 https://pulsedmedia.com/clients/index.php/announcements/632 Wed, 31 Jan 2024 03:45:00 +0000 MD Platform Development Update 01/2024

We've again made some progress! Below is a photo of the latest prototype reaching a production rack.
This is hardware-rich development: iterate, iterate, and iterate in production, each time making something a little bit better. There's an unbelievable number of little nuances in a system like this.

Latest MD Prototype in production rack

This just entered production last week, yet isn't even our latest model. That one is still on the "healing" bench, waiting for some parts; we accidentally stocked the wrong parts.

Female doesn't fit into another Female fitting

 

New Challenges Await

One of the models refuses to network boot without a display present. Easy enough to fix; it just took a while to get the "dummy plugs".

Some models with Realtek NICs have seemingly random network performance issues with some targets; upload performance is on average limited to about 60% of what it should be. Easy enough: we just need add-on NICs when the onboard one is not usable. It seems a bit random which unit has which NIC. Mounts still need to be designed and made before moving into production testing.

BIOS woes, once again. BIOS configs for these can be very nuanced. Issues will be fixed as they arise; there is no sense in taking hundreds of nodes out of the rack each time we find something that should be better or fixed, unless that issue arises for a particular node.

Approximately 30% of the MD models produced to date are currently down due to various nuanced little issues, though some have actual hardware problems: namely, we received a really bad batch of RAM and NVMe drives, all brand new but with sky-high failure rates. Most, however, are config errors and similar small human errors. The plan is to fix these during February.

We planned to use PC-ABS from PM for electronics enclosures and similar parts, but unless you make a solid part, this material is pretty much unprintable; it cannot be bridged at all, even within a 65°C chamber. Either you get bridging with no layer adhesion, or layer adhesion with no bridging. The sweet spot is way too narrow to make parts which are not mostly solid. We are probably being overly cautious with material choices, however. Since these parts do not need high continuous loads or high stiffness, we will be trying another material.

Burn-in testing: we need to automate this and make a process out of it. Some 7-8" displays are now on order to build many tiny burn-in + BIOS config stations, something to work on after February. Does anyone know of a mini keyboard which simply has arrow keys + Esc + F5 through F12, and works universally? Let us know!

Progress Made Over The Past Couple Months

1) New units will have new power cabling, which is faster and therefore cheaper to produce AND more flexible, with no specific model tie-in anymore. This power cabling takes a couple of minutes to assemble per unit, whereas the old one averaged a very laborious 20 minutes or so per node. The old power cabling cost about 3.50€ per node; the new one costs about 5€ per node BUT saves about 15 minutes of labor while providing a much neater and nicer end product. This is a huge win!
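That tradeoff is easy to sanity-check. The parts costs and time saving below come from the text; the labor rate is a hypothetical placeholder, not our actual cost:

```python
old_parts, new_parts = 3.50, 5.00   # € per node, from the announcement
minutes_saved = 15                   # per node, from the announcement
labor_rate = 30.0                    # €/hour -- hypothetical placeholder

extra_parts_cost = new_parts - old_parts          # 1.50 €
labor_saved = labor_rate * minutes_saved / 60     # 7.50 €
net_saving_per_node = labor_saved - extra_parts_cost
print(f"{net_saving_per_node:.2f} € saved per node")  # 6.00 € saved per node
```

Even at a modest labor rate the labor saving dwarfs the extra parts cost, which is why the new cabling wins despite being more expensive per node in materials.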

2) The power meter + RJ45 panel is also better mounted now; the earlier fastening was a little too weak (photo above)

3) Dual NVMe Drive Models will shortly be available

4) BIOS updates (fixes for some non-bootable units)

5) Automation is starting to come along; some further tests have been made and the first unit has been installed through the new system. We are working on an API for power management.

6) Some parts removed from the platform which were deemed unnecessary and potentially even detrimental to final product.

7) TOOLING! We can't stress tooling enough. The new tooling is pretty much operational and in use now. It turns some annoying parts of the work outright satisfying.

8) New rack mounts are in stress testing right now. Just a proof of concept, but we want to first see how well they hold weight before committing more resources to the design.

9) MD product page: the little chart next to the price shows whether the price is going up or down, and if you hover over it you can see the last 30-day and 7-day average sale prices too.

10) Dynamic pricing is working excellently, EXACTLY as we hoped and planned.

11) The 19th model was just added, with at least 3 new models inbound during February if there are no surprises.

Tooling in action examples

When the marketplace doesn't have what you need, you make it. This time, an HVAC ducting adapter which is eccentric and conical; this exact size was not immediately available, and conical eccentric adapters sometimes go for as much as 500€ a piece (seriously!) when they should be 50€. This one cost 20€ in materials, 30 minutes in design, and less than 1 day in print time each. The immediate need was for 2, but more will follow, as fans tend to die eventually. Especially the 315mm ducted fans: they have very weak bearings for their size, no matter the manufacturer; a couple of years per fan. Our first 315mm ones cost nearly 1000€ a pop, but now we have adopted older tech and they are more like 250€ a piece. They last about 2 years each. The bearings can be replaced if you have a hydraulic press and spend the time to arduously open the stamped-shut casing. That job is better left to someone who specializes in HVAC fans though, so we've just been accumulating failed fans over the years.

Tooling operational

It is now quick and easy to make all kinds of custom tooling to make production faster. Here we see a very weird-looking network cable organizer. Assembling an MD set takes multiple cable lengths, so slots of varying width are required.

Example of new tooling making new custom tooling

Jigs and such have saved a tremendous amount of work too.

We've got an 800x800x1000mm printer en route to make HVAC intake/exhaust grilles too. While those are very basic in their construction, the large sizes tend to be exorbitantly priced. We calculated a minimum of 8000€ just for a few grilles if bought from the market. The CAD design takes less than 1 hour, materials for all of them will probably be around the 150-200€ mark, and printing time is in the 1-week range. Another ~7000-7500€ saved in costs with just a little bit of effort.

Material woes? PLA is UV-safe with just a shallow coat of paint, which stops UV rays. Even unpainted, only the surface layer is affected; UV cannot penetrate the whole material. The temperature range is more than sufficient: the intake never reaches 65°C, and even on the exhaust side of our 8m-high DC, taking air from the ceiling, we probably won't see 65°C exceeded, even though we are hoping the heat stacks up nicely. What if 65°C is exceeded? The PLA anneals, after which it can take 150°C, though the shape will change slightly as the material crystallizes. Surface annealing is both easy and preferable in post-production; it takes mere minutes and is much simpler than most expect: just take a blowtorch to the darned thing! ;D The extreme temperature (~1300°C) not only eliminates the "whiskers" but heats the very surface beyond 65°C, crystallizing the polymer structure and thereby surface-annealing the part. Once the surface cools down it can withstand 150°C, and once temperatures exceed 65°C the annealed surface helps the internal structure keep its shape, allowing more thorough annealing without loss of dimensional accuracy / material shrinkage; at least not as easily. Just don't do this at home! It's very easy not just to damage the part but, in inexperienced hands, to burn your house down. So please don't do what we do; be safe. Use a heat gun or more traditional methods. We work in an industrial workshop with multiple fire extinguishers nearby. Due to the extreme temperatures it also takes an experienced touch not to overheat sections; it's a skill acquired through a multitude of failures.

Now, how about PLA being biodegradable; doesn't it degrade as an outside vent? No. Fast PLA biodegradation requires high-temperature industrial composting, namely exceeding 65°C and preferably anaerobic conditions. Otherwise it takes a really, really long time to degrade. It does degrade, but does it matter if it takes 150 years? Not really. Also, since these are in open air and not wet, the typical biodegradation processes don't occur.

Dynamic Pricing In Action

Dynamic pricing pressures

The pricing pressure is shown above. The first line is the global adjuster across all MD units in their totality; each line below is an individual model.

Some models in high demand go up; some models in low demand go down.

This lets You decide what the server is worth to You. We are really poor at judging what our offers are worth to you: we don't know all the use cases, how things solve your problem and what that is worth to you, how the landscape of offers has changed, how you value our unique take on servers or our network, what our production capability for new units is, etc. There are simply too many variables at play to do anything other than guesstimate the ballpark. So let an algorithm decide.

The factors currently considered are the total number of units sold/unsold, when the last unit was sold, the average 30-day offer price, the average price sold at, etc.
We will probably add more variables as time goes on, but for now this works really well. We need to fine-tune the algo now and then; for example, we had been out of 4TB models for a long while, and once we got even 1 back we decided to manually nudge the price higher, but the algo lowered it back down way too fast via its "draw to the mean" component. We had to lower that component's weight to allow more flexibility in pricing, and still this resulted in too low a price compared to demand :)
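A minimal sketch of this kind of mean-reverting adjustment is below. It is illustrative only: the factor names, weights, and update rule are our own stand-ins, not the production algorithm.

```python
def adjust_price(current, mean_30d, demand_pressure, reversion_weight=0.1):
    """One pricing tick.

    demand_pressure > 0 pushes the price up (units selling fast),
    demand_pressure < 0 pushes it down. The reversion term draws the
    price back toward the 30-day mean -- with too high a weight it
    cancels manual nudges, as described above.
    """
    reversion = reversion_weight * (mean_30d - current)
    return current + demand_pressure + reversion

# A manual nudge from 40€ to 48€ gets pulled back toward the 40€ mean
# in a single tick when the reversion weight is too strong:
price = adjust_price(48.0, mean_30d=40.0, demand_pressure=0.0, reversion_weight=0.5)
print(price)  # 44.0
```

Lowering `reversion_weight` is exactly the "lower its power level" fix mentioned above: the nudge then decays slowly instead of being erased immediately.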

There was also a bug which made a new model's price 0€, because there was no price history :)

 

Why Power Meters on every set?

Curiosity! First and foremost. But in reality? This is a valuable technician tool: you can see at a quick glance if something is obviously wrong, and during final setup the power consumption shows whether all nodes booted up right. These meters show a lot of data, including temperature, total kWh consumed over the meter's lifetime, power factor, voltage, etc.

Further, we get manual verification of a set's average power draw over time. This helps us more precisely calculate the operating costs (OpEx) of each model, which allows us to set lower starting prices eventually! Once we reach an equilibrium of producing more servers than we sell, having precise OpEx data is crucial for setting the lowest possible starting price for a new model.

A lot of operators just go "350W PSU == 350W consumption" or "180W TDP CPU + 4x3.5" == ~220W consumption" -- neither is even remotely correct.

For example, we just noticed a Ryzen 3900X with 6x3.5" drives and 4 DIMM modules consuming only ~110W in production despite the 105W TDP CPU -- meanwhile a 65W TDP Xeon with 4x3.5" consumes 180W!! Both are 1RU.
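Those measured draws translate directly into OpEx. A sketch of the arithmetic (the electricity price below is a hypothetical placeholder, not our actual rate; the wattages are the two examples from the text):

```python
def annual_energy_cost(watts, eur_per_kwh):
    """Annual electricity cost for a constant draw, ignoring PUE overhead."""
    hours_per_year = 24 * 365
    kwh = watts * hours_per_year / 1000
    return kwh * eur_per_kwh

price = 0.20  # €/kWh -- hypothetical placeholder

ryzen = annual_energy_cost(110, price)  # ~192.7 €/year
xeon = annual_energy_cost(180, price)   # ~315.4 €/year
print(f"difference: {xeon - ryzen:.2f} €/year per node")
```

A 70W gap between two 1RU machines is over 100€ per node per year at this assumed rate, which is why measured draw, not TDP guesswork, has to feed the OpEx model.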

We would prefer to collect this data automatically into a database, but while the power meters have both an MCU and a serial interface on the MCU, this serial interface is not exposed. Maybe we will spend the time to see if the serial interface outputs data, but we don't expect the manufacturer to disclose any information, nor to be willing to make custom firmware. We asked for a customized version where this port is more easily exposed (other than sub-1mm diameter pads for pogo pins on the PCB) and the MOQ was 10 000 units.

Besides, someone will eventually make that version regardless; it's way too obvious a value-add for so little cost! Only a little design time plus an extra port / pins on the board.

Also, we got these power meters for a really fair price in quantity. Adding one adds a little assembly time, takes a little power (a tiny fraction of 1W; the screen is BRIGHT), and costs a little money and space on the platform, but we think the tradeoff is more than worth it. All it takes is saving technician time on troubleshooting a few times.

Plus it looks cool! ;)

]]>
<![CDATA[Inflation and ever increasing costs; On everything. Year 2024 edition (Is this Groundhog Day?)]]> https://pulsedmedia.com/clients/index.php/announcements/631 https://pulsedmedia.com/clients/index.php/announcements/631 Tue, 23 Jan 2024 08:19:00 +0000 Today marks yet another day of receiving notice of an immediate price increase, this time averaging 4% on an already (for what it is, the value it brings) extremely expensive service. The total cost is "peanuts", a "rounding error"; but this service has very strong vendor lock-in, and the price is small enough that they can do this every single year: it costs much more to move to a different service provider than many years of the increased cost. At this point the price has more than 10x'd over the past decade. At this rate, it will be 100x the original cost by 2035. (No, this is not the piece of software everyone is thinking of right now, despite it being known for this, but something local and Finland-specific.)

This is constant and everlasting. This month alone, several more or less significant costs have been increased.
Some wrap it as "Good news! We only raised the price by 4.5%, since that's the official inflation figure!" A 4.5% annual price increase compounds to roughly a 55.3% increase over 10 years. That is not small.
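The compounding is easy to verify:

```python
# A 4.5% increase applied every year for a decade multiplies the
# price by 1.045 ten times over -- it does not simply add up to 45%.
increase = (1.045 ** 10 - 1) * 100
print(f"{increase:.1f}% total increase over 10 years")  # 55.3% total increase over 10 years
```

The gap between the naive 45% and the compounded ~55.3% is the part these "only inflation" framings quietly rely on.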

Electrical prices continue to be high as well; the 2023 annual average electrical price was somewhere around 300% of what it was previously.
YES, there were a lot of days with even negative spot pricing. But this does not come all the way down to us: we do not own our DC building nor the transformers therein; we merely lease the space, and we are at the mercy of the buying skill of the real estate corporation (a bank) which owns the actual building. They seem to be abysmally bad at this. When prices started skyrocketing, it turned out they had no upfront plans and no protections in place, and were 100% on spot pricing -- then they made arrangements to hedge 40% when the prices peaked at their highest, for an unspecified duration.
Not only that, this corporation is also bent on "Saving Planet Earth": only rent increases come on paper, for example "because of CO2!" (I would argue regular print paper might be net-negative CO2. Is CO2 actually an issue? Another topic for another date!) They seem to spend more effort on recycling programs for the building than on actually maintaining it.

So what they do when prices are the highest? Move to "100% Green Energy!" of course!

The net result is that even our lowest electrical-price month was still ~+100% over what it was previously; a typical month is more like +200%, with much higher peaks. No, this is NOT a joke.
This despite the total annual average pool spot (energy exchange) pricing not being that insane last year.

Despite this, we are still in a better situation than many others. Some locked in 100% of their price for years to come at 4 to 8x the normal rate. Imagine the people and businesses who locked in 50-60 cnt/kWh PLUS grid fees and taxes! There are large office buildings in that situation right now; no one wants to rent or buy from them, and companies are moving out because management was silly enough to lock in the highest price in decades.

At this point the only costs that have not increased are transit/bandwidth and select hardware components.
Even weirder: HDD pricing has stagnated at roughly 2020 levels. New, bigger models get released, but the per-TB cost remains the same year after year. It's as if the price were fixed, remaining within a few percent year over year!

 

What Does This Mean To You As Customer?

Not much at this time; we have no plans for immediate price increases. New service order prices have been increased, and much to our surprise that has not affected new service sales much.

However, we need to set much larger revenue growth goals to retain the same profit margins, AKA a healthy business. It does not help that seedboxes have been a dying niche for a decade now. To address this, we are expecting a pivotal moment this year with the MD platform; projections show that, if successful, this year marks the point where we pivot to being mainly a dedicated server provider instead of a seedbox provider.

Our current revenue is approximately 23.5% dedicated servers, of which MD beta testing and development units already account for 40%. Projections show we will increase the quantity of MD nodes 5x by the end of the year at this pace. There's been a small hiatus on new nodes while we've worked on tooling, testing, and fixing issues. All of that background work enables us to work much more efficiently in the future, paying dividends for decades to come. We are now at the point where bringing up a new MD node takes significantly less than 1 human-hour of labor, and we are starting to analyze the operational effort required (repairs, replacements, fixes) as the platform nears feature completeness.

The pivotal moment is when 50.01% or more of our revenue comes from dedicated servers. We expect to reach this point by the end of this year. It might take until the very last day of the year, but we expect to reach it. We shall see how it goes.
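A rough sketch of how the shares interact, using the figures from the text. This is illustrative only: it assumes MD node count maps proportionally to MD revenue and that all non-MD revenue stays flat, neither of which the announcement claims.

```python
# Shares of current revenue, from the text:
dedicated_share = 0.235       # dedicated servers, of total revenue
md_within_dedicated = 0.40    # MD units' share of dedicated revenue

md = dedicated_share * md_within_dedicated   # 0.094 of total revenue
other_dedi = dedicated_share - md            # 0.141
rest = 1.0 - dedicated_share                 # 0.765 (seedboxes etc.)

# 5x the MD fleet, everything else held flat (simplifying assumption):
new_md = md * 5
new_total = new_md + other_dedi + rest
dedi_share_after = (new_md + other_dedi) / new_total
print(f"{dedi_share_after:.1%}")  # 44.4%
```

Under this simplification, 5x MD alone lands around 44%, so reaching the 50.01% threshold also depends on MD revenue per node, non-MD dedicated growth, and the seedbox side of the ledger; the point of the sketch is how sensitive the pivot is to those assumptions.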

]]>
<![CDATA[Paypal Integration Update; Credit/Debit Card, Express Checkout added, making new subscriptions is back]]> https://pulsedmedia.com/clients/index.php/announcements/630 https://pulsedmedia.com/clients/index.php/announcements/630 Sat, 13 Jan 2024 19:42:00 +0000 Paypal integration has been updated!

Subscriptions are back! You can now make new subscriptions for pre-existing services as well.

This update also adds a direct credit/debit checkout option. You can pay by card without signing up for a Paypal account.

This should streamline your overall payment experience when using Paypal.

]]>
<![CDATA[Major Main/Billing Site Maintenance; Most likely today. Potential downtime several hours [COMPLETED]]]> https://pulsedmedia.com/clients/index.php/announcements/629 https://pulsedmedia.com/clients/index.php/announcements/629 Sat, 13 Jan 2024 14:25:00 +0000 This is a major maintenance
We are working to update all the backend systems.

The new infrastructure is now up, all code revisions have been tested, and we are now testing the migration procedures.

In case this is not completed today, this will be completed early in the upcoming week.

 
How will I know when it's going on?
The current site's dynamic portions will be put into "maintenance mode" as the migration starts, meaning no ticketing, no billing access, no data updates, etc.
Static portions will be served normally.

DNS needs to be updated, and some providers don't follow the rules & standards; for those unfortunate people the update might take 48-72 hours. This is mostly a legacy issue, a remnant from the 90s, and we are not sure whether any ISPs are still running DNS servers that old.
It's also possible for browsers to cache DNS requests, so if this gets prolonged, try a different browser or restart your browser. CTRL+F5 (Force Refresh) or deleting the cache sometimes helps too; some browsers insist on caching and showing stale pages (despite being told not to!).
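If you are curious what your resolver is still holding onto, a quick sketch (the domain is just an example, and `dig` comes from the dnsutils/bind-utils package):

```shell
# Show the answer your resolver currently returns; the second column is the
# remaining TTL in seconds -- once it hits 0, the resolver re-queries upstream.
if command -v dig >/dev/null 2>&1; then
  OUT=$(dig +noall +answer pulsedmedia.com A)
  echo "${OUT:-resolver returned no cached answer}"
else
  OUT="dig not installed (Debian: apt install dnsutils)"
  echo "$OUT"
fi
```

A shrinking TTL on repeated runs means your resolver is honouring the zone settings; a TTL that never shrinks suggests an over-caching resolver in between.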

Once done, you will notice some billing portal updates.


Why?
There are a lot of bugs in the current billing system, and the base systems in the backend are now really old. January 2015 old. So a server update every 9 years is in order indeed.
This will enable us to push out new features in rapid succession and, for example, curb the helpdesk spam (quite literal and bad) which has been slowing support down for a while now.

This will also allow seamless transitions to newer hardware, or to more performance should we need it.

These updates are quite necessary for continuous improvement of our services.


How long does this take?

For those of you with modern DNS nameservers: we expect the downtime to be around 60 minutes. This is the time it takes for all the backend processing, database migration, updates, etc.
Our database is fairly large so it does take quite a bit of time to process the migration.

There's a lot to recheck, especially since many of the settings are stored in the same database we are migrating and updating; we cannot change them beforehand, as the changes would be lost in the migration.

Future updates should be much smoother: we partially rebuilt the infrastructure to allow swifter, easier updates without downtime for years to come.


Feedback is requested

We would like to hear if you had any issues, or have any feedback on the overall process or the updates.
If you notice any glitches or bugs, there will be rewards for quickly letting us know so we can fix those.

So anything at all, feel free to send us email, open a ticket, tell us in discord, DM on Twitter/X etc.
Ticketing will be back online quickly after migration, and emails sent to helpdesk/ticket system should be imported once we are back online.

]]>
<![CDATA[E-Mail deliverability fix / update; Silent dropping by Gmail, Microsoft]]> https://pulsedmedia.com/clients/index.php/announcements/628 https://pulsedmedia.com/clients/index.php/announcements/628 Sat, 13 Jan 2024 13:56:00 +0000
Again, one of our outgoing mail servers has been blocked. It was one of four, causing on average 25% of e-mail to be dropped.

Further: while our email scores 10/10 in independent tests, we believe it has always been penalized, as we have also been on some corporate firewall blocklists ever since a university thesis paper featured us roughly a decade ago.

Regardless; Deliverability has been improved now.

Do remember that you can always check your full e-mail history in the billing portal, under your account.
We would like to hear if you still have issues.]]>
<![CDATA[Upcoming main site maintenance]]> https://pulsedmedia.com/clients/index.php/announcements/627 https://pulsedmedia.com/clients/index.php/announcements/627 Thu, 11 Jan 2024 17:04:00 +0000
That will necessitate some limited downtime, estimated to be 45 minutes or less.

We have no set schedule other than that this will happen during this month.
We will make a new announcement once we know the exact time.]]>
<![CDATA[Network Maintenance at Helsinki DC]]> https://pulsedmedia.com/clients/index.php/announcements/626 https://pulsedmedia.com/clients/index.php/announcements/626 Sat, 30 Dec 2023 16:16:00 +0000
Hopefully it is isolated to only a small number of units.

]]>
<![CDATA[Seedbox Quick patches: Root FS Inodes Full/Exim4 snuck back in ++ Certbot not renewing SSL Certs]]> https://pulsedmedia.com/clients/index.php/announcements/625 https://pulsedmedia.com/clients/index.php/announcements/625 Sat, 30 Dec 2023 13:40:00 +0000
We will later implement package pinning and checks in mainline PMSS, to be run on each local server by itself.

Issue #1:
Certbot randomly fails to renew certs. The cron job is a bit flaky, so the obvious permanent fix is to replace it.
For now, our backend simply runs the renewal separately.
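For the curious, a manual renewal pass looks roughly like this (a sketch only: the cron schedule in the comment is an assumption, not our actual setup, and certbot skips certificates that aren't yet near expiry, so running it often is harmless):

```shell
# Guarded so it is safe to paste anywhere.
if command -v certbot >/dev/null 2>&1; then
  certbot renew --quiet || echo "renewal attempt failed; check /var/log/letsencrypt"
  RESULT=attempted
else
  RESULT="certbot not installed on this machine"
fi
echo "$RESULT"

# Example /etc/cron.d entry for a permanent scheduled fix (schedule is illustrative):
#   17 3,15 * * * root certbot renew --quiet
```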
 
Issue #2:
Exim4 keeps getting snuck back into the system alongside some other maintenance-related packages (mdadm, smartmontools).
The package is not in our install set, yet exim4 ends up there.

To make matters worse, these installs are unconfigured and cannot actually send email, while some processes still generate mail addressed to a local user on the system.
This causes a loop of error-message emails.
Eventually, after many months or years, this will fill up all the inodes on the root filesystem: not the storage capacity, just the inodes, of which at least one is required per file.

A quick patch was implemented via backend automation to clear these messages out and stop exim4.
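To see whether a box is anywhere near this failure mode, inode usage and the exim4 queue length can be checked directly (paths are the Debian defaults):

```shell
# Inode usage on the root filesystem; IUse% at 100% means "No space left on
# device" errors even when plenty of bytes remain free.
df -i /

# Count queued exim4 message files if the spool exists.
QUEUE=/var/spool/exim4/input
if [ -d "$QUEUE" ]; then
  COUNT=$(find "$QUEUE" -type f | wc -l)
else
  COUNT=0
fi
echo "queued exim4 message files: $COUNT"
```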


]]>
<![CDATA[Website Issue Fixed: SSL Cert (https)]]> https://pulsedmedia.com/clients/index.php/announcements/624 https://pulsedmedia.com/clients/index.php/announcements/624 Wed, 20 Dec 2023 08:59:00 +0000 In fact, you have to generate all-new certificates annually and manually configure the new certificates, etc.

No automation present at all.

So a human error happened and the cert was allowed to expire.

We are going to take steps to ensure this doesn't happen again.

Sorry for the inconvenience.]]>
<![CDATA[Next gen HW MD Platform progress, new DC progress]]> https://pulsedmedia.com/clients/index.php/announcements/623 https://pulsedmedia.com/clients/index.php/announcements/623 Sun, 17 Dec 2023 10:19:00 +0000
MD Platform HW development progress


The next-gen platform, with a tidier, more precise and streamlined build process, is almost finished.

We just upgraded our tooling heavily; effort for the past month has gone solely into tooling.

The big enhancements we are currently working on:
 - Fire-retardant, UL94 V-0 certified electronics enclosures and brackets for various pieces
 - New wiring loom which is more flexible, faster and therefore lower cost to produce. Current estimates for the time saved per set is ~1hr (10-15% of total time)
 - Constant power metering per platform, allowing an at-a-glance check of unit power consumption and lifetime energy usage.

We are starting to bring these features into production; there will probably still be some units in a middle state without all of these features installed, and we have inventory to clear out as well.
Some features are waiting on additional parts deliveries before full production, and are at the proof-of-concept stage right now.

Smaller-scale, more granular power metering will help us better estimate real-world power demands in production. Estimates and bench testing can only do so much, and measuring full racks (or individual phases) still doesn't have the resolution required. We are currently seeing as much as 100% variance in power consumption between platforms, i.e. one consumes roughly double compared to another with similar spec. Way more data needs to be collected. Oh, and those power meters also look cool. ;)

Once we have production set up for all these features, we will most likely spend some effort upgrading the Gen0 and Gen1 platforms to this new platform. Those two were really proof-of-concept level; they waste a lot of space and are not as power efficient.




New Datacenter Progress Update

Work has continued, the design is close to finished, and it's now all a matter of execution.
This has proven a more expensive and difficult project than anticipated, but we have also gained a lot of invaluable experience for future projects.

This has been designed for maximum airflow and density from start to finish. It is an 8-meter-high space, with plenty of airflow potential.
Some of the waste heat energy will be used to heat the rest of the building.

The current floor plan allows up to 9408 MD nodes to be housed there.
The design is mostly replicable for other spaces of roughly similar dimensions.
Phase 1 is targeting up to 2000 MD nodes, after which comes Phase 2, which calls for a transformer upgrade and other electrical infrastructure installations.

]]>
<![CDATA[Important Notice: Support Response Delays Due to Black Friday Rush]]> https://pulsedmedia.com/clients/index.php/announcements/622 https://pulsedmedia.com/clients/index.php/announcements/622 Fri, 24 Nov 2023 19:42:00 +0000
We want to assure you that our team is working tirelessly to address every query. Your concerns are important to us, and we're committed to providing the quality support you expect. We kindly ask for your patience during this time, as responses may be delayed.

Rest assured, all tickets will be addressed as promptly as possible. We appreciate your understanding and are grateful for your continued support.

Warm regards,
The Pulsed Media Support Team]]>
<![CDATA[⚡️ Breaking News: Electrical Maintenance Postponed Again... 🔄]]> https://pulsedmedia.com/clients/index.php/announcements/621 https://pulsedmedia.com/clients/index.php/announcements/621 Wed, 22 Nov 2023 11:43:00 +0000 In an unexpected twist of fate, just as we were gearing up for the "Great Electrical Maintenance Showdown" at our Helsinki Datacenter, the universe decided to throw us another curveball. So, brace yourselves: the much-anticipated electrical maintenance has been postponed yet again, this time to an undetermined future date (again 🔄).



The Saga Continues
Just when we thought our maintenance story was coming to an epic conclusion, it turns out the plot is thicker than Finnish rye bread. It seems the forces that be have decided we need a bit more suspense in our lives. And who are we to argue with destiny?
In the latest episode of 'Maintenance Chronicles,' it's not the missing gear this time but a water pumping station snag that's keeping us on our toes. Who needs TV dramas when you have real-life data center maintenance sagas? The contractor had not realized the station needs to be powered on at all and had not prepared for that; otherwise the building might flood during the maintenance period.

Our Apologies
We understand this rollercoaster of maintenance dates has been a bit like trying to catch a greased-up server in a server lab. It's been tough for us too, not knowing when the big switch-off will happen. Please accept our sincerest apologies for the constant edge-of-the-seat experience.

Black Friday Silver Lining
But hey, every cloud has a silver lining, right? Our team, ever-ready for data center marathons, will now pivot to crafting mind-blowing Black Friday deals. Think of it as our 'plan B' turned into 'plan Awesome'. With this postponement, we can finally shift our focus to the exciting world of Black Friday deals! Yes, we've been holding off on those juicy specials, fearing that our team might be too busy dealing with post-maintenance gremlins. But now, it's all systems go!

What's Next?
While we don't have a new date for the maintenance yet, we promise to keep you updated with all the details as soon as we have them. Until then, let's enjoy the uninterrupted uptime and get ready for some amazing Black Friday treats.

Our servers, currently enjoying their uninterrupted dreams, are blissfully unaware of the maintenance drama. Let's keep their slumber peaceful a little longer, shall we?




As we navigate these maintenance twists and turns, we're more committed than ever to ensuring your digital experience is as smooth as freshly debugged code. We're all in this together, and we're committed to keeping your data flowing smoothly, come rain, shine, or maintenance delays. Thanks for being part of our journey and for your understanding as we tackle this unpredictable adventure.

Got questions or need a digital shoulder to lean on? Our support team is charged up and ready to jump into action faster than a server reboot.

Energetically Yours,
 Aleksi
 Chief Energy Officer, Pulsed Media Team

]]>
<![CDATA[Electrical ⚡ Maintenance Rescheduled: 22nd of November. Yes, JUST before Black Friday]]> https://pulsedmedia.com/clients/index.php/announcements/620 https://pulsedmedia.com/clients/index.php/announcements/620 Sat, 28 Oct 2023 17:26:00 +0000 Let's hope we can finally put this behind us and they can complete the building electrical renovations.

Our staff will be on-site burning not just the midnight oil but it's going to be an all-nighter.

Some previous announcements:
https://pulsedmedia.com/clients/announcements.php?id=618
https://pulsedmedia.com/clients/announcements.php?id=617]]>
<![CDATA[MD Series Development Update. 50%+ Remote Reboots now possible]]> https://pulsedmedia.com/clients/index.php/announcements/619 https://pulsedmedia.com/clients/index.php/announcements/619 Tue, 10 Oct 2023 15:33:00 +0000 50%+ of nodes can now be remote rebooted freely.
Documentation and software work on that is ongoing, but the hardware is now fully in place and scalable.

To fully document this, most of the nodes need to be hard rebooted once; approximately 12.5% of the total. This will happen gradually over time, and we should only need to do it once.

Older platforms will be upgraded by the end of Q1/2024 to the platform with hardware remote-reboot capabilities; they are missing that, and they require a full physical platform upgrade to get it. Same servers, just the infrastructure hardware around them needs to be replaced.

---

In other news, we received a batch of nodes capable of supporting 2x NVMe drives. We will try to get some of these online by end of November. We received almost 100 new nodes last week and work is starting to get them online.

We are also currently researching NVMe drive cooling options, and have some new parts on order which should allow us to install cooling for all NVMe drives at a fraction of the previous price and effort. With prior versions it took quite a bit of time to install the cooler, and the coolers were rather expensive; the next version's cooler installs in almost negligible time, at almost negligible cost. Please let us know if you are monitoring your NVMe drive temperatures, or if you have seen temperature warnings. Same goes for CPU temps.
That being said, we have yet to see a single case of throttling or temperatures beyond warning level. There is one NVMe drive model with a warning temperature threshold over 10°C lower than the others, and we've had one report of that warning threshold being reached. We are going to upgrade those first.
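If you'd like to watch the temperatures yourself, either smartmontools or nvme-cli will report them. A sketch (the device path is an example; field names in the output vary by drive):

```shell
DEV=/dev/nvme0
if command -v smartctl >/dev/null 2>&1 && [ -e "$DEV" ]; then
  # SMART health info includes current temperature and the drive's own
  # warning/critical thresholds.
  smartctl -A "$DEV" | grep -i temp
  CHECKED=yes
elif command -v nvme >/dev/null 2>&1 && [ -e "$DEV" ]; then
  nvme smart-log "$DEV" | grep -i temp
  CHECKED=yes
else
  echo "smartctl/nvme not available or $DEV absent on this machine"
  CHECKED=no
fi
```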

Most amazingly, we are expecting to more than double the total number of MD series nodes by end of the year. Yes, more than double.

Finally, on the hardware side of things, we are almost done with version 2 platforms, now at version "2+" (with some manual changes), and shortly all new nodes should be on platform version 3, which simplifies manufacturing quite a bit; the estimated time savings per node are significant. We are currently doing manual fabrication work (drilling, tapping, etc.) to upgrade version 2 to "version 2+", but for the next batch to be ordered, the fabrication shop's robots will do these steps, along with other upgrades in thermal management.

---

Software has been roadmapped quite thoroughly, end to end. Progress has started, but we think visible changes for end users will start appearing around December. Our primary focus is still on the backend, making sure we can efficiently manage these with the least possible time consumption. Less human effort involved means a lower price for Your Server.

There is a rescue mode already if you need it, and you can install any distro via rescue mode.


]]>
<![CDATA[Update on Electrical Maintenance at Helsinki DC - Cancelled & Rescheduled Due To Equipment Supplier 📣]]> https://pulsedmedia.com/clients/index.php/announcements/618 https://pulsedmedia.com/clients/index.php/announcements/618 Tue, 03 Oct 2023 14:42:00 +0000 Hope you're all doing great! 🌟 We've got some important news to share about the electrical maintenance that was scheduled for our Helsinki Data Center.

🚫 Breaking News: Maintenance Cancelled 🚫

Okay, so here's the tea ☕: The maintenance is officially cancelled. Why, you ask? Well, the equipment supplier kinda dropped the ball and didn't deliver some of the crucial gear that was needed. Bummer, right?

What's Next? 🤔

We're still figuring out when we can get this back on track. But don't worry, we'll keep you posted with all the details as soon as we have them.

We know this is super frustrating, and we're really sorry for any inconvenience this might have caused you. Trust us, we're not thrilled about it either. 😤

Thanks for being awesome and for your understanding. You rock! 🤘]]>
<![CDATA[Helsinki DC: Power maintenance THIS Wednesday [CANCELLED/RESCHEDULED]]]> https://pulsedmedia.com/clients/index.php/announcements/617 https://pulsedmedia.com/clients/index.php/announcements/617 Mon, 02 Oct 2023 12:26:00 +0000
The generator is on-site, but we do have to reboot almost all servers regardless, and some servers might not be brought back online during the maintenance.

Please refrain from opening tickets until you see announcement of maintenance being over.


UPDATE: Cancelled/Rescheduled, read more at https://pulsedmedia.com/clients/announcements.php?id=618

]]>
<![CDATA[Helsinki DC outage, investigation underway. [RESOLVED]]]> https://pulsedmedia.com/clients/index.php/announcements/616 https://pulsedmedia.com/clients/index.php/announcements/616 Sat, 23 Sept 2023 09:44:00 +0000 https://pulsedmedia.com/clients/serverstatus.php
As well as here.

Intervention is underway and staff going on-site.

UPDATE 1: Electrical grid issues confirmed, but everything should've recovered automatically, if it even went down. Staff moving on-site.

RESOLUTION: We really appreciate your patience and understanding as we worked diligently to address the recent outage at our Helsinki Datacenter. We can confirm that the root cause was identified as a deviation in one of our power lines connected to the edge router. While this was an unusual and unforeseen issue, our team responded very promptly, ensuring a swift resolution. Please rest assured that all nodes are currently under close monitoring, and any remaining irregularities are being attended to with the highest priority.

We take this occurrence seriously and are committed to implementing additional measures to prevent such instances in the future. At Pulsed Media, we continuously strive to uphold the highest standards of service and reliability. We value the trust you place in us and are dedicated to ensuring the stability and integrity of our services.

For real-time updates and further details, please refer to our network status page: https://pulsedmedia.com/clients/serverstatus.php

We apologize for any inconvenience caused and thank you for your continued support and understanding. Our team is available for any further queries or concerns you may have.

]]>
<![CDATA[Update: Electrical Maintenance Delayed to 3rd-4th October Night]]> https://pulsedmedia.com/clients/index.php/announcements/615 https://pulsedmedia.com/clients/index.php/announcements/615 Sun, 03 Sept 2023 11:25:00 +0000
Just like before, some or all nodes may be down during this time. We'll still have a backup generator on-site, but depending on the load, we might not be able to run all servers off it. Servers operated from the backup generator will need two reboots as well.

We know changes can be a "shock," but we're doing our best to keep things smooth for you. If you have any questions, our support team is charged up and ready to help!

Stay connected,
The Pulsed Media Team

]]>
<![CDATA[Deluge fixed and update is being rolled out]]> https://pulsedmedia.com/clients/index.php/announcements/614 https://pulsedmedia.com/clients/index.php/announcements/614 Tue, 22 Aug 2023 22:44:00 +0000 https://pulsedmedia.com/clients/announcements.php?id=610

The update is being rolled out to servers on standard rolling updates manner.]]>
<![CDATA[⚡ Reminder: Electrical Maintenance on 5th September 2023]]> https://pulsedmedia.com/clients/index.php/announcements/613 https://pulsedmedia.com/clients/index.php/announcements/613 Mon, 14 Aug 2023 11:38:00 +0000
Date: Tuesday, 5th September 2023
Time: 23:00 to 03:00
Impact: Some or all nodes may be down during this time. We'll have a backup generator on-site, but depending on the load, we might not be able to run all servers off it. Servers operated from the backup generator will need two reboots as well.

We're doing our best to coordinate with the building manager to minimize any disruptions. But hey, sometimes you just have to "switch off" for a bit, right?

For more details, please check the original announcement.

If you have any questions, our support team is here, ready to "charge" to your aid!

Stay connected,
The Pulsed Media Team]]>
<![CDATA[Accessing 404: A simple regression, getting fixed shortly!]]> https://pulsedmedia.com/clients/index.php/announcements/612 https://pulsedmedia.com/clients/index.php/announcements/612 Fri, 11 Aug 2023 12:58:00 +0000
It was a silly regression in an update, which has now been fixed.]]>
<![CDATA[Helsinki DC power outage [UPDATE 3 AND CONCLUSION]]]> https://pulsedmedia.com/clients/index.php/announcements/611 https://pulsedmedia.com/clients/index.php/announcements/611 Thu, 03 Aug 2023 13:40:00 +0000
We are checking that all systems are operational right now.

UPDATE: Most things started automatically, some backend stuff needed to be checked. Things are updating right now, but by the looks of it almost everything recovered automatically as expected.

UPDATE 2: Few nodes need manual attention one by one, we are going through these right now.

UPDATE 3: We are investigating why the last 5 nodes did not reboot normally. Everything else is online. Only 1 drive failure so far, and that was on a backend system. Everything else worked fine, apart from peculiar firmware issues which required hard power cycling, though.

UPDATE 4 - CONCLUSION:

There was a major electric grid outage in Helsinki today -- the whole grid went down. Consequently, datacenters, servers etc. here and there went down as well.

This caused outages for our Helsinki DC as well today. The majority of services auto-recovered as expected. A few did not.
Hardware failures were zero; the one backend drive had already failed a while ago.

This recovery was more arduous than usual, however; typically it has been less work. Even some brand-new servers have a nasty firmware bug where they don't fully recover from a power outage or voltage fluctuations.
Another segment was software issues: one VM host server had misconfigured arrays (it had never had prior downtime; a config error during initial setup, compounded by a second error of never testing recovery), and some manual filesystem checks had to be done.

There might be some dedicated server customers who still need help, but monitoring those machines is beyond our purview, and we have no means to know until a ticket is opened. Many dedicated server customers needlessly block ICMP ECHO, meaning we cannot monitor them at all. Please open a ticket if yours is still down.
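If your server is one of those dropping pings, a sketch of the relevant firewall change (assumptions: an iptables-based firewall with a default-drop INPUT chain; adapt for nftables/ufw as needed):

```shell
# Accept inbound echo-request so the host answers ping again; the reply
# direction is normally covered by a conntrack ESTABLISHED rule.
RULE="-p icmp --icmp-type echo-request -j ACCEPT"
if command -v iptables >/dev/null 2>&1; then
  # -C checks whether the rule already exists; -A appends it if not.
  iptables -C INPUT $RULE 2>/dev/null || iptables -A INPUT $RULE 2>/dev/null \
    || echo "could not modify firewall (needs root)"
else
  echo "iptables not present on this machine"
fi
STATUS=done
```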

We will shortly be responding to tickets still pending, now that the actual outages have been resolved.

Sorry for the inconvenience.
As a reminder, there is also scheduled power & electrical maintenance this september: https://pulsedmedia.com/clients/announcements.php?id=599

Best Regards,
 Pulsed Media Staff]]>
<![CDATA[Deluge not working in some systems.]]> https://pulsedmedia.com/clients/index.php/announcements/610 https://pulsedmedia.com/clients/index.php/announcements/610 Thu, 27 Jul 2023 08:27:00 +0000 Deluge not working right now for you?

It could be because Python dependencies changed once again. We have a potential fix for this, but it requires some time to test before we start the rollout.

With any Python application this is the reality: they keep breaking at random times, sometimes even daily. That's why updates have to be avoided at times.
There is no real packaging solution either that is convenient to deploy and contains 100% of the dependencies, 100% of the time. There are some (virtualenv and derivatives), but they too have their own set of issues.
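For completeness, the virtualenv-style approach looks roughly like this (a sketch: the path and pinned version number are purely illustrative, not our deployment):

```shell
# Create an isolated environment so system-level Python updates cannot pull
# the app's dependencies out from under it.
VENV=$(mktemp -d)/deluge-venv
if python3 -m venv "$VENV" 2>/dev/null; then
  "$VENV/bin/python" --version
  VENV_OK=yes
else
  echo "python3 venv module not available here"
  VENV_OK=no
fi

# Pinning exact versions is what prevents random breakage, e.g.:
#   "$VENV/bin/pip" install 'deluge==2.1.1'   # version is an example only
```

The catch, as noted above, is that each venv carries its own copy of the dependency tree and still breaks when a system Python major upgrade lands.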

Containers would be a bulletproof solution, but unfortunately they could cost some performance, and for a seedbox that is not an acceptable trade-off for some.


The ETA for the fix is currently unknown.

The number of affected servers seems to be limited as well.

]]>
<![CDATA[Long standing stability issue resolved - New PMSS Release]]> https://pulsedmedia.com/clients/index.php/announcements/609 https://pulsedmedia.com/clients/index.php/announcements/609 Sat, 22 Jul 2023 12:11:00 +0000
We made a new release today with that.

Along with that is a fix for Docker not always running properly (an environment variable needed to be added to .bashrc). Thanks to the contributor who validated the resolution; he was given some service credit for his feedback and testing.


With all of these recent software changes, we expect that all servers will shortly be enjoying the stability & reliability we are best known for.]]>
<![CDATA[PM Seedbox Software New Release!]]> https://pulsedmedia.com/clients/index.php/announcements/608 https://pulsedmedia.com/clients/index.php/announcements/608 Fri, 21 Jul 2023 14:22:00 +0000 https://github.com/MagnaCapax/PMSS/releases/tag/2023-07-21

This patch mainly addresses some newly come up stability and reliability issues caused by Deluge, qBittorrent and Docker.
It also brings some convenience items (a new MOTD) and a huge bump in PHP performance, especially noticeable in ruTorrent loading.

This has already been pushed to approximately 15% of nodes.
Rest will be on rolling release as usual, or on per user request.
If your node still has the old version (you can tell, for example, from the SSH MOTD: the new one is noticeable) and you want the newest version, just open a ticket.

We do rolling release for quality assurance.]]>
<![CDATA[Power Up Your Performance with Our New MD4 and MD5 Dedicated Servers - It's Electrifying!]]> https://pulsedmedia.com/clients/index.php/announcements/607 https://pulsedmedia.com/clients/index.php/announcements/607 Tue, 18 Jul 2023 21:35:00 +0000
Our MD4 dedi, priced at an incredibly affordable 29.99€ per month, comes with an Intel i5-7500T 4c/4t 2.70/3.30GHz processor, 16GB DDR4 RAM, and 500GB NVMe storage. It's like a power-packed lunchbox, but for your data!

If you're looking for something with a bit more oomph, our MD5 dedi, priced at 39.99€ per month, offers the same powerful Intel i5-7500T 4c/4t 2.70/3.30GHz processor, but with 32GB DDR4 RAM and a whopping 4,000GB NVMe storage. It's like we've packed an entire buffet in that lunchbox!

Both series come with 1Gbps Unmetered + IPv4 Address and are located in Helsinki, Finland. Delivery is within 2 business days, so you won't have to wait long to start enjoying these powerhouses.

And here's the kicker: Intel Quick Sync makes these dedicated servers an EXCELLENT choice as a Transcoding Plex MediaServer, or Jellyfin Streaming Server with hardware transcoding. For transcoding, these truly do punch above their weight class big time!
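As an illustration (not an official recipe), a Quick Sync hardware transcode with ffmpeg looks roughly like the commented invocation below; filenames and the quality setting are placeholders, and the ffmpeg build must include QSV support:

```shell
if command -v ffmpeg >/dev/null 2>&1; then
  # List any QSV-capable encoders present in this particular build.
  ffmpeg -hide_banner -encoders 2>/dev/null | grep -i qsv || echo "no QSV encoders in this build"
else
  echo "ffmpeg not installed on this machine"
fi

# Typical hardware-accelerated invocation (sketch):
#   ffmpeg -hwaccel qsv -i input.mkv -c:v h264_qsv -global_quality 23 output.mp4
HAVE_FFMPEG=$(command -v ffmpeg || echo none)
```

Plex and Jellyfin drive the same Quick Sync hardware through their own transcoder settings, so no manual ffmpeg invocation is needed for those.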

So, are you ready to power up your performance with our new MD4 and MD5 dedis? We promise, it's going to be shockingly good!

Sign up for Your New MD Dedi at https://pulsedmedia.com/minidedi-dedicated-servers-finland.php

Best,
 The Pulsed Media Team

P.S. Remember, great power comes with great electricity savings! 😉]]>
<![CDATA[Billing outgoing e-mails issue being investigated [FIXED]]]> https://pulsedmedia.com/clients/index.php/announcements/606 https://pulsedmedia.com/clients/index.php/announcements/606 Tue, 11 Jul 2023 12:14:00 +0000
This is being investigated, but will take several hours for resolution.

Sorry for the inconvenience.]]>
<![CDATA[Billing site short downtime today sorted out quickly]]> https://pulsedmedia.com/clients/index.php/announcements/605 https://pulsedmedia.com/clients/index.php/announcements/605 Sun, 09 Jul 2023 09:39:00 +0000
This was due to a 3rd-party administrative issue: easily solved, and nothing technical on our end.
Plans are already in place, and execution has started, to mitigate the chances of this happening in the future.]]>
<![CDATA[🌞 Embracing the Finnish Summer: A Little Snow, A Lot of Care, and Your Pulsed Media Experience 🌞]]> https://pulsedmedia.com/clients/index.php/announcements/604 https://pulsedmedia.com/clients/index.php/announcements/604 Fri, 30 Jun 2023 15:22:00 +0000
As the Finnish summer rolls in, we're reminded of the unique charm it brings - a fleeting season with just a sprinkle of snow, a testament to the beautiful diversity of our homeland. It's a time for rejuvenation, reflection, and yes, a well-deserved vacation.

We want to let you know that while we're soaking up the summer sun (and dodging the occasional snowflake), our commitment to you remains as steadfast as ever. We understand that you might notice a slight delay in our response times, but rest assured, we're still here, keeping a close eye on everything.

Our systems are continually monitored, and any widespread issues will be addressed with the same priority as usual. We're not just lounging by the lakeside (though we might be doing a bit of that too), we're also working hard behind the scenes to ensure your Pulsed Media experience remains top-notch.

Sometimes, we might be so engrossed in improving things for you that our responses take a tad longer than usual. But remember, every moment we spend now is to ensure your future with Pulsed Media is even brighter.

We appreciate your understanding and patience during this period. Your loyalty means the world to us, and we're committed to making your journey with Pulsed Media a rewarding one.

So here's to the Finnish summer - short, a little snowy, but filled with dedication and care from your Pulsed Media team.

Thank you for being a part of our family. Enjoy your summer too!

Best,
The Pulsed Media Team]]>
<![CDATA[Honesty Pays Off at Pulsed Media]]> https://pulsedmedia.com/clients/index.php/announcements/603 https://pulsedmedia.com/clients/index.php/announcements/603 Thu, 08 Jun 2023 07:00:00 +0000 Honesty Pays Off at Pulsed Media

We want to share a recent incident that highlights the importance of our community in helping us provide the best service possible.

One of our users, S.Y., recently discovered that he still had access to a previous 10G seedbox after his service period had ended. Instead of taking advantage of the situation, S.Y. promptly reported the issue to us.

This act of honesty not only helped us identify a potential issue in our system, but also reinforced the trust and integrity that forms the backbone of our community at Pulsed Media.

As a token of our appreciation, we rewarded S.Y. with roughly one month of service credit. We believe in acknowledging and rewarding honesty, and this incident is a perfect example of that.

So, here's a big shout out to S.Y. and all our users who help us improve our services every day. Remember, your honesty and feedback are invaluable to us. Let's continue to make Pulsed Media a great place to be!

Stay tuned for more updates and improvements. As always, if you have any questions or feedback, feel free to reach out to us.

Happy seeding!

]]>
<![CDATA[Progress Update on Stability Issues with New Software Paradigm]]> https://pulsedmedia.com/clients/index.php/announcements/602 https://pulsedmedia.com/clients/index.php/announcements/602 Thu, 08 Jun 2023 05:34:00 +0000
The good news is that our debugging efforts are finally making progress! We're trying a new approach to get to the root cause of the problem. We could apply a quick fix by auto-rebooting the nodes affected by the issue, but that would only be a temporary solution. The root cause would remain, and we'd still experience a few minutes of downtime randomly.

We want to assure you that we're committed to finding a permanent solution. We're not ready to just apply duct tape yet! We're focusing on fixing the root cause of this issue, and we're making progress in our testing.

When you're on the cutting edge, there are always a few hiccups. But rest assured, we're learning a lot from this experience and it's showing us the path forward. We're committed to resolving this issue before we move on to the next stage.

Thank you for your patience and understanding as we work through these challenges.

Stay tuned for updates.]]>
<![CDATA[All Clear! Brief Billing Interruption Resolved and Future Improvements]]> https://pulsedmedia.com/clients/index.php/announcements/601 https://pulsedmedia.com/clients/index.php/announcements/601 Tue, 06 Jun 2023 08:50:00 +0000
Firstly, the brief hiccup we experienced with our billing server today has been resolved. We're back up and running! We're really sorry for the downtime - we know how important reliable service is to you.

At Pulsed Media, we're all about ensuring a seamless experience for you. But just like a well-rehearsed band can miss a beat, we had a slight off-key moment today. Our billing server, which is managed by a third-party provider, needed a short breather. We chose this setup to make sure that if one part of our system hits a bump, the rest keeps cruising along.

This provider has been our sidekick for the longest time, almost since our first day in the business. But, just like that friend who still forgets your birthday after 13 years (we all have one, don't we?), they sometimes miss sending out renewal notifications for all servers. It's a bit of a random event, and we totally get how it can be a bit of a surprise when the first notice you get is a suspension alert - or even your own monitoring system giving you the heads up.

And when it comes to punctuality, they're like a clock! If a server is just a few hours overdue, it gets a time-out. This has always been their way. We need to be more careful that this does not happen again.

We've been thinking about moving our billing server to a new home, partly because of this. We want to make sure that your experience with us is always top-notch, and we know these unexpected suspensions can be a bit of a downer. This is especially true for some of our customers who, for various reasons, have had to rely on this third-party service.

The good news is that a replacement provider has already been chosen, and we will change billing server provider at the same time we make some other enhancements to your experience interacting with billing & support. The new 3rd party is another large European provider with a good track record, and all data will be transmitted securely, with neither 3rd party involved in the data transfer. Some aspects of this change will also increase security, not just reliability.

Disruption when we transition to the new billing server is expected to be a maximum of a few hours, with luck only ~30 minutes. The billing server upgrade has no strict ETA, but it needs to be completed by the end of this year. This upgrade is absolutely necessary for automating our dedicated server offers better, especially the new MD series of dedicateds at MiniDedi Dedicated Servers. These servers are targeting maximum efficiency, and Intel Quick Sync makes these dedicated servers an excellent choice as a Transcoding Server. For transcoding, these truly do punch above their weight class big time!

We're committed to making Pulsed Media your go-to choice for all your needs, and we appreciate your understanding as we work on improving our services. Your support means the world to us, and we promise to keep striving to make your experience with us better and better. Thanks for being part of our journey - we promise it's worth it!]]>
<![CDATA[MD Series Dedis Development Milestone Reached: Platform component selection COMPLETE. Incredible Efficiency]]> https://pulsedmedia.com/clients/index.php/announcements/600 https://pulsedmedia.com/clients/index.php/announcements/600 Thu, 01 Jun 2023 09:18:00 +0000 We have reached a milestone in the development program of the MD series, the "MiniDedicated" hardware platform.

The "node plate" side hardware component selection is now complete and has moved into the final qualification phase. We expect only minor adjustments and fine-tuning to the platform in an iterative fashion as we scale up production.

On this most power-efficient, highest-density platform, we are now ready to start software development to bring automation to you. The main software components have already been chosen; development is still required on interfacing/interoperability, documentation/structure, and qualification.

This will be the leading-efficiency dedicated server platform by far; we do not expect many to come close to these levels of efficiency from start to finish: from acquiring the nodes all the way to when Your Bits and Bytes leave the datacenter. Efficiency is a key factor at each step of the way.

"Plate" Key Features in a Nutshell:
  • 1RU Form factor
  • Node scalability: Up to 16 nodes per 1RU, 672 nodes per 42U Rack
  • Power scalability: Up to 1000W per 1RU, 42kW per Rack. PSU Options: 350W, 500W, 700W (Redundant), 1000W (Redundant)
  • Industrial Mean Well PSUs with High Power Efficiency of up to 94%
  • Built-in managed network switching, 10G Uplink per 8x1Gbit nodes (Upgradeable once new managed small form factor switches come available)
  • Power management per individual node, 0.5W standby power consumption, max 3.84W
  • Designed for "consumer grade" hardware without traditional BMC/IPMI features.
  • Supports efficient cooling for standard mini-ITX platforms, height can be increased conveniently for high power draw CPUs with big coolers (No definite limit on height)
  • Current version supports standard 100x100 VESA mount and Mini-ITX mounting; future versions will incorporate, on an as-needed basis, 4x4"/NUC mounts, SBC mounts, etc.
  • Designed for rack and datacenter level super efficient cooling, bringing cooling costs significantly down. BIG Fans, BIG Efficiency.
Rack level cooling still needs some work, but all main components are in stock or in freight; we expect zero hiccups there since it is "just temperature controlled fans". The current design iteration consists of 24-56x 120mm industrial fans per rack, but later iterations will move to larger-than-120mm fans depending on availability, characteristics, etc.

Our current typical setup will utilize 8x35W nodes for a maximum power draw of 280W on nodes, 20W on the managed switch, and 4W on power management, for a total on-plate maximum power draw of 304W. We typically expect the ~200W mark depending on load, based on real-world, in-production testing so far. Latest-generation "35W" setups have been seen to consume up to 90W in reality, with a typical maximum in the 60W ballpark.

The Top Of The Rack (TOR) switch selected consumes only 400W while giving 48x10G ports for the nodes themselves and 2x100G uplinks, giving each plate of 8 nodes an average dedicated bandwidth of 4.17Gbps, or roughly 500Mbps of dedicated bandwidth per node after error correction! That is also industry leading; typical contention ratios in the industry are many times higher than the 2:1 we have chosen to pursue.

This makes typical current-generation full-rack power consumption, rack-level cooling included, ONLY ~9100W for 336 NODES! That is 27.08W/node.
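As a quick sanity check, the headline numbers above can be reproduced with simple shell arithmetic (all input values are taken directly from this announcement):

```shell
# 2x100G uplinks shared across 48 TOR ports, 8 nodes per plate:
per_plate=$(awk 'BEGIN { printf "%.2f", 200 / 48 }')            # Gbps per plate
per_node=$(awk 'BEGIN { printf "%.0f", 200 / 48 / 8 * 1000 }')  # Mbps per node
# 8x35W nodes + 20W switch + 4W power management per plate:
per_plate_w=$(( 8 * 35 + 20 + 4 ))
# Full rack, rack-level cooling included:
per_node_w=$(awk 'BEGIN { printf "%.2f", 9100 / 336 }')
echo "${per_plate} Gbps/plate, ~${per_node} Mbps/node, ${per_plate_w}W/plate max, ${per_node_w}W/node"
```

This prints 4.17 Gbps/plate, ~521 Mbps/node, 304W/plate max, 27.08W/node, matching the figures quoted above.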

The ETA for full-scale production is still unknown; aspirationally, we would get all of this done by the end of the year. It depends largely on when the new datacenter build is finished.]]>
<![CDATA[Datacenter 1 electrical maintenance this fall, 5th of September 2023 23:00-03:00]]> https://pulsedmedia.com/clients/index.php/announcements/599 https://pulsedmedia.com/clients/index.php/announcements/599 Tue, 30 May 2023 15:46:00 +0000 This coming September, on Tuesday 5.9.2023 from 23:00 to 03:00, there will be electrical maintenance occurring which may require servers to be taken offline to complete.

Some or all nodes will be down during this time. We will have a backup generator on-site, but depending on the load we might not be able to run all of the servers off it.
Servers operated from the backup generator will require two reboots as well.

We will know better once we have done more coordination with the building manager.

]]>
<![CDATA[Network maintenance scheduled for Wednesday around 10:00-14:00 Helsinki Time]]> https://pulsedmedia.com/clients/index.php/announcements/598 https://pulsedmedia.com/clients/index.php/announcements/598 Sat, 13 May 2023 09:09:00 +0000 Connectivity will likely not drop, but routes will flip-flop and there will most likely be some congestion for a few-minute period.

Let's hope there aren't any issues and this is actually accomplished in mere minutes.

One fiber has to be migrated to another route due to another DC doing renovations & maintenance, rebuilding their DC.

]]>
<![CDATA[Seedbox Software UPDATES! Massive Improvements, Rolling Out Now! Feedback Requested]]> https://pulsedmedia.com/clients/index.php/announcements/597 https://pulsedmedia.com/clients/index.php/announcements/597 Fri, 12 May 2023 09:22:00 +0000
This brings a lot of bug fixes, feature additions, etc., depending on which version the server you are on was running. Some servers already had some of these updates, but not all.
So every server updated today gets something new or better. A few dozen servers were updated.

We will resume the normal rolling release after a week or so, as we gather feedback on potential regressions.

Some highlights are:
  • OpenVPN Support is back
  • Docker Rootless
  • WireGuard (via Docker Rootless) + tons of others. Most LinuxServer.io containers should work.
  • Performance enhancements
  • Stability Enhancements
You can view the repo at https://github.com/MagnaCapax/PMSS

Let us know if you find regressions, either via a ticket or by making an issue on GitHub.

We are asking people to create GitHub issues for bugs and other things that need to be worked on.
So if you have a pet peeve or an enhancement request, head over to GitHub and make an issue out of it.

Alternatively, you can make a ticket as well.

Let us know what you think of this in our Discord or by a ticket.]]>
<![CDATA[Important Information Regarding Docker Containers and linuxserver.io Images]]> https://pulsedmedia.com/clients/index.php/announcements/596 https://pulsedmedia.com/clients/index.php/announcements/596 Sat, 08 Apr 2023 12:37:00 +0000 While most linuxserver.io containers should work seamlessly with our service, please note that we have not validated them. Therefore, if you encounter any issues, we encourage you to join our Discord channel and seek help from our fellow users.

Here are some tips to help you avoid issues related to Docker containers and linuxserver.io images:

  1. If you need to perform tasks inside the container with fake root privileges and UID mappings, you can use the rootlesskit bash command.
  2. For linuxserver.io images, set the PUID and PGID environment variables to 0. This should give you the expected UID on the host. For example: PUID=0 and PGID=0.

With PUID + PGID set, you can prevent Docker containers and linuxserver.io images from changing the ownership of your directories so that you can no longer access them from outside the container. As always, please exercise caution when making custom configurations and seek assistance if required.
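As an illustrative sketch of tip 2, the relevant part of a linuxserver.io service in a docker-compose.yaml could look like the fragment below. The qbittorrent image and config path are placeholders for illustration, not something we have validated:

```yaml
services:
  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent:latest
    environment:
      # Under rootless docker, in-container UID/GID 0 maps back to your
      # own user on the host, so files stay accessible outside the container.
      - PUID=0
      - PGID=0
    volumes:
      - ./qbittorrent-config:/config
```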

]]>
<![CDATA[Misplaced IPs and Trigger-Happy Censorship: Our Wacky Adventure]]> https://pulsedmedia.com/clients/index.php/announcements/595 https://pulsedmedia.com/clients/index.php/announcements/595 Sun, 02 Apr 2023 11:47:00 +0000 Gather 'round for a wild tale of misplaced IPs, woke warriors, and the Importance of Free Speech.

Once upon a time in the land of Pulsed Media, some clever soul somewhere decided to assign one of our /24 IP ranges to Moscow, Russia. Little did they know that this seemingly innocent act would set off a chain reaction of kumbaya-singing, virtue-signaling censorship!

You see, ever since the war between Russia and Ukraine started in February 2022 people have demonized Russia, all the people of Russia. Every single one of them. Now some overzealous users of Cloudflare thought they'd single-handedly save the world by blocking everything from Russia. They're like internet superheroes, right? Wrong. Instead, they're unintentionally making the world a worse place by limiting access to information and playing right into the hands of those who want to control it.

Well, we've fixed the GeoIP issue, but Cloudflare's still lagging behind with their updates. Their support team can be harder to reach than the remote corners of Siberia.

To make sure this doesn't happen again, we've taken steps to secure our GeoIP data tighter than the Kremlin's secret stash of vodka. But honestly, we were stunned at how easy it was to fake this information in the first place.

As defenders of freedom of speech, we believe that access to information is the foundation of democracy. And if we want to fight the good fight, we should be getting more info to the people of Russia, not less. So here's a friendly reminder: censorship only hurts the good guys, and it's easy enough for the bad guys to bypass.

It's crucial to ensure everyone's access to information, especially in times of conflict, because an informed citizenry can challenge and resist the narratives that fuel senseless wars and loss of life. By providing alternative perspectives and promoting open discussions, we can create a space for empathy, understanding, and diplomacy to flourish. This, in turn, can lead to the de-escalation of tensions and help bring an end to tragic conflicts more quickly. The power of free speech and uncensored information is not just a democratic ideal but also a practical tool for fostering peace and harmony in the global community.

In conclusion, dear customers, stay vigilant, stay informed, and remember to always question the status quo. After all, the internet is a wild place, and you never know when you might accidentally become a Russian agent!

Stay awesome and uncensored.
- Your Pulsed Media Team

]]>
<![CDATA[Storage Seedbox offers updated]]> https://pulsedmedia.com/clients/index.php/announcements/594 https://pulsedmedia.com/clients/index.php/announcements/594 Tue, 28 Mar 2023 16:00:00 +0000
Currently 10Gbps RAID0 offers available from 4TB to 16TB. See options at https://pulsedmedia.com/storage-seedbox.php]]>
<![CDATA[Try Pulsed Media 10Gbps Seedboxes for Free with Our 14-Day MoneyBack Guarantee and Explore the World of qBittorrent and Deluge Seedbox Servers]]> https://pulsedmedia.com/clients/index.php/announcements/593 https://pulsedmedia.com/clients/index.php/announcements/593 Wed, 22 Mar 2023 16:25:00 +0000 Experience the power of Pulsed Media seedboxes for free with our incredible 14-day money-back guarantee! You can test our seedbox trial without any risk and discover the amazing features we offer, such as Deluge, qBittorrent and Docker. Our seedboxes are not only perfect for downloading and uploading content at lightning-fast speeds, but also provide the flexibility to run, say, a Valheim server, Nextcloud or any other application you desire. Your torrent downloads have never been this fast before, and neither has your torrent privacy.

Thanks to Docker integration, our high-end seedboxes enable you to run almost anything, including a remote desktop, with ease. Whether you are a gamer looking to set up a Valheim server or a developer needing proxy servers for SEO, Pulsed Media seedboxes have got you covered.

Don't miss out on this fantastic opportunity to try our free seedbox and experience the benefits of a seedbox trial without any commitment. Sign up today, and unlock the limitless possibilities that Pulsed Media seedboxes can offer. Explore the world of torrent servers, dediseedboxes, and more – all at no cost for 14 days!

Check the V10G Seedbox series out: https://pulsedmedia.com/value10g-seedbox.php
Or for higher performance SSD 1Gbps Seedboxes: https://pulsedmedia.com/m1000-ssd-seedbox.php

Standard features include things such as:

  • Deluge
  • qBittorrent
  • Rclone
  • Docker
  • rtorrent / rutorrent
  • autodl
  • wireguard
  • Jellyfin
  • *ARR, ie. Sonarr, Prowlarr etc.
Some of these require manual installation; you can always ask in our Discord for help from your fellow torrent seedbox users! :)]]>
<![CDATA[M1000 SSD Extras are now available]]> https://pulsedmedia.com/clients/index.php/announcements/592 https://pulsedmedia.com/clients/index.php/announcements/592 Mon, 13 Feb 2023 10:13:00 +0000
Some of the higher-end choices might, in essence, get you a dedicated server.
Contact support if that is the case.]]>
<![CDATA[Pulsed Media Seedbox Software Now On Github AND YES, You May Contribute!]]> https://pulsedmedia.com/clients/index.php/announcements/591 https://pulsedmedia.com/clients/index.php/announcements/591 Sat, 04 Feb 2023 15:08:00 +0000 https://github.com/MagnaCapax/PMSS

It's the first github version and we are working out the processes still.

Most important is that now you can easily raise issues, create pull requests etc.
Best contributions will get service credit rewards, and top contributors may opt for wiretransfer / paypal payments.

This software is now 13 years old and carries a lot of legacy. It started as a quick'n'dirty solution that has grown far beyond that, and it has mainly been developed without much software architecture design.
This really shows, and times have changed; a lot of refactoring remains to be done.]]>
<![CDATA[Automate Your Media Management with Pulsed Media's All-in-One *ARR + Jellyfin Script]]> https://pulsedmedia.com/clients/index.php/announcements/590 https://pulsedmedia.com/clients/index.php/announcements/590 Thu, 26 Jan 2023 15:29:00 +0000
This installation script was created by /u/Polawo and has been updated by one of our staff members, Egor, to make it even more user-friendly. With this script, you can automate your media management and keep your seedbox running smoothly.

To install, simply use the following command:

curl https://pulsedmedia.com/remote/pkg/arr_installation.txt | bash


This will run the script and install all the necessary *ARR scripts on your seedbox.

This combined with our support for Docker rootless makes Pulsed Media seedboxes a true powerhouse of flexibility with little effort. If you're interested in learning more about installing Docker rootless and Wireguard VPN, check out https://pulsedmedia.com/clients/announcements.php?id=587.

We are happy to report that these features are enabled for ALL of our seedboxes, no matter the price level. However, we highly recommend that you experience the full potential of these features by upgrading to one of our higher-end services, such as Dragon-R or M1000 SSD.



We'd like to give a huge thanks to /u/Polawo for creating this script and to Egor for updating it. We hope this makes managing your media on your seedbox even more seamless and enjoyable.

As always, if you have any questions or issues with the script, please reach out to our support team for assistance.]]>
<![CDATA[Investigating Mysterious Freezes on Some of Our New Servers]]> https://pulsedmedia.com/clients/index.php/announcements/589 https://pulsedmedia.com/clients/index.php/announcements/589 Tue, 24 Jan 2023 07:21:00 +0000 There have been some recent issues on some of our servers. As many of you may know, we recently moved to a new software and hardware paradigm due to the ongoing energy crisis. This move has brought many benefits in terms of performance and energy efficiency, but it has also brought some new issues that we are currently working to resolve.

One of these issues is the occasional and unexpected halting of some of our servers. These are complete freezes of the system, with no hardware issues or errors reported. A reset is typically able to fix the problem, but we are working to find the root cause of the issue.

In some cases, the servers may show high CPU usage, but not always. A common characteristic of the issue is that no input of any kind is accepted on the console, and network and disk I/O throughput goes to zero. Sometimes, I/O metrics show high numbers just prior to the halt, but not always. There is no discernible pattern in terms of usage levels of the affected servers, with some experiencing high loads and others having low loads.

We are currently monitoring the issue and looking for information from others who may have experienced similar issues with similar configurations. We understand that this is a frustrating issue for our clients, and we want to assure you that we are working diligently to resolve it as quickly as possible.

We want to stress that it is much better to solve this issue by finding the root cause and fixing it, instead of relying on a watchdog to reboot the server. Each time a reboot is required there is a small chance of data corruption and other issues.

We apologize for any inconvenience this may have caused and we will keep you updated on the progress of our investigations. If you have any further questions or concerns, please don't hesitate to reach out to our support team.

Thank you for your continued support and patience.

]]>
<![CDATA[MINI Server Beta Testing, Phase 2: i7 7700t, 32GB Ram, 2TB NVMe! 27.99€/First Month -- OR -- i5 7500t, 16GB Ram, 2TB NVMe 24.49€/First Month]]> https://pulsedmedia.com/clients/index.php/announcements/588 https://pulsedmedia.com/clients/index.php/announcements/588 Thu, 19 Jan 2023 17:32:00 +0000 Awesome News! Feedback from the previous beta has been splendid and exceeded our expectations.
We are expanding this BETA with 16 more units.

8x of 2 different new models available!

Model 1:
CPU: i5 7500t
RAM: 16GB DDR4 Dual Channel
STORAGE: 2TB NVMe Gen3
NETWORK: 1Gbps Shared Unmetered
Price: 24.49€/First Month, Then 34.99€/Month
Delivery: 2-3 Weeks
Order Here: https://pulsedmedia.com/clients/cart.php?a=add&pid=295&promocode=miniserverBeta2301


Model 2:
CPU: i7 7700t
RAM: 32GB DDR4 Dual Channel
STORAGE: 2TB NVMe Gen3
NETWORK: 1Gbps Shared Unmetered
PRICE: 27.99€/First Month, Then 39.99€/Month
Delivery: 2-3 Weeks
Order Here: https://pulsedmedia.com/clients/cart.php?a=add&pid=296&promocode=miniserverBeta2301


This is beta? What does that mean?
There might be short outages and edge cases with issues, as the full format has not been finalized yet. We also expect good feedback on the units. We will do our best to communicate about potential maintenance, which may be more frequent for these beta units than standard production would be.

]]>
<![CDATA[Docker Rootless & Wireguard Support Added!]]> https://pulsedmedia.com/clients/index.php/announcements/587 https://pulsedmedia.com/clients/index.php/announcements/587 Fri, 13 Jan 2023 17:46:00 +0000 Docker Rootless ++ Wireguard!

Still very much beta and only partially automated so far.
Docker rootless should now get installed automatically for every user, along with docker compose.
It is slowly being rolled out to all users automatically.

To check if you have it, run in a shell:
docker run hello-world

If it is not installed, you can easily install it yourself as well:
curl -fsSL https://get.docker.com/rootless | sh
echo 'export PATH=~/bin:$PATH' >> ~/.bashrc
source ~/.bashrc
systemctl --user enable docker

Installing docker-compose:
Check newest version available:
curl -s -I https://github.com/docker/compose/releases/latest | awk -F '/' '/^location/ {print substr($NF, 1, length($NF)-1)}'
Sample Output:
$ curl -s -I https://github.com/docker/compose/releases/latest | awk -F '/' '/^location/ {print substr($NF, 1, length($NF)-1)}'
v2.14.2

Download newest version of docker-compose into your ~/bin directory:
wget https://github.com/docker/compose/releases/download/<VERSION>/docker-compose-linux-x86_64 -O ~/bin/docker-compose
Make the file executable: chmod +x ~/bin/docker-compose
Check that docker-compose works:
johndoe@pmss:~$ docker-compose version
Docker Compose version v2.14.2
Good to go!

Limitations of rootless docker: https://docs.docker.com/engine/security/rootless/#known-limitations


Now to fun part, Wireguard!

Wireguard docker container installation and configuration
source and image: https://github.com/linuxserver/docker-wireguard

The wireguard container is set up with docker-compose, which uses a docker-compose.yaml configuration file.
Here’s a docker-compose.yaml template:
---
version: "2.1"
services:
  wireguard:
    image: lscr.io/linuxserver/wireguard:latest
    container_name: wireguard
    cap_add:
      - NET_ADMIN
      - SYS_MODULE
    environment:
      - PUID=#YOUR PUID
      - PGID=#YOUR PGID
      - TZ=Europe/Helsinki
      - SERVERURL=#YOUR HOSTNAME
      - SERVERPORT=51820 #PORT NUMBER
      - PEERS=3
      - PEERDNS=auto
      - INTERNAL_SUBNET=10.13.13.0
      - ALLOWEDIPS=0.0.0.0/0
      - LOG_CONFS=true
    volumes:
      - #Path to the config folder:/config
      - /lib/modules:/lib/modules
    ports:
      - 51820:51820/udp #Port number
    sysctls:
      - net.ipv4.conf.all.src_valid_mark=1
    restart: unless-stopped

There are some things one needs to change:
Replace the YOUR PUID and YOUR PGID fields with your user's UID and GID. You can find them using the id command:
john@server:~/$ id
uid=1000(john) gid=1000(john) groups=1000(john)

YOUR HOSTNAME is replaced with the hostname of your server. The default port number is 51820, but that might conflict if several users set up a wireguard server. So, if container installation fails, try another port.

The PEERS variable sets the number of clients your wireguard will support. For each peer, the server will generate a pair of keys to encrypt the connection.

Also, you need to set up a folder for all the wireguard's configs and specify the path to it. For example, one might create a new folder at ~/.config/docker-wireguard and use that as the config folder for the server.

Don't forget to specify the port number again and we are good to go!

Launching the container
Place the docker-compose.yaml in a separate folder and run docker-compose up -d in the same folder.

The installation process will begin. Docker should download and install everything automatically.

You should see something like this:
john@server:~/wireguard$ docker-compose up -d
[+] Running 8/8
 ⠿ wireguard Pulled              45.2s
 ⠿ 8a6b84e63e3d Pull complete     4.2s
 ⠿ 665a26860e09 Pull complete     5.5s
 ⠿ e5afe0e25c04 Pull complete     6.7s
 ⠿ b0dc43af3c2f Pull complete     8.8s
 ⠿ 90fe4b5ce983 Pull complete    10.5s
 ⠿ 69a0a7952709 Pull complete    31.3s
 ⠿ 61c31956b36d Pull complete    41.1s
[+] Running 1/1
 ⠿ Container wireguard Started   19.2s

Now the server is up and running!

Configuring clients
Mobile apps
Setting up a wireguard client on the phone is quite easy. You can just scan a QR code with your app and the tunnel should be set up. You can get the QR code using the command docker container exec wireguard /app/show-peer X, where X is replaced with a peer's number. (Numbering starts from 1)

Windows
You can get the config text you need to paste into the wireguard client using the following command: docker container exec wireguard cat config/peerX/peerX.conf, where X is replaced with a peer's number. (Numbering starts from 1)

]]>
<![CDATA[Increased Add Funds Limits Now Available - Automate Your Invoice Payments!]]> https://pulsedmedia.com/clients/index.php/announcements/586 https://pulsedmedia.com/clients/index.php/announcements/586 Tue, 10 Jan 2023 08:37:00 +0000 We are excited to announce that we have increased the maximum amount of funds that can be added to your account. Due to inflation and the changing nature of our services, which now focus more than before on high-end dedicated servers with 10G networks, we felt it was necessary to update our limits.

You can now add up to 1000€ with a single invoice and the maximum balance for credits is now 5000€. This will allow you to easily automate your invoice payments upfront and ensure that your services remain uninterrupted.

We encourage you to take advantage of this feature by visiting our client area at https://pulsedmedia.com/clients/clientarea.php?action=addfunds and adding funds to your account today.

As always, please don't hesitate to reach out to us if you have any questions or concerns. We appreciate your continued support and look forward to serving your hosting needs in the future.

]]>
<![CDATA[Don't forget to claim your Black Friday / Cyber Week Give Away prize!]]> https://pulsedmedia.com/clients/index.php/announcements/585 https://pulsedmedia.com/clients/index.php/announcements/585 Mon, 09 Jan 2023 06:43:00 +0000 We hope you had a wonderful Black Friday and Cyber Week shopping experience with us!

As a reminder, if you ordered during the promotional period and are from Europe, you are eligible to claim your Give Away prize. This includes T-Shirts, HDDs, and other IT hardware, as well as an AMD Radeon RX GPU.

Don't forget to visit https://blog.pulsedmedia.com/2022/12/happy-holidays-2022-and-a-look-into-present-and-future/ for full information on how to claim your prize.

Thank you for your business and we hope you enjoy your Give Away prize!

]]>
<![CDATA[Happy Holidays 2022 And A Look Into Present And Future]]> https://pulsedmedia.com/clients/index.php/announcements/584 https://pulsedmedia.com/clients/index.php/announcements/584 Sat, 24 Dec 2022 21:22:00 +0000
Continue reading at https://blog.pulsedmedia.com/2022/12/happy-holidays-2022-and-a-look-into-present-and-future/

]]>
<![CDATA[End Of Sale Notification: Value1000 (V1000) Series]]> https://pulsedmedia.com/clients/index.php/announcements/583 https://pulsedmedia.com/clients/index.php/announcements/583 Fri, 16 Dec 2022 18:28:00 +0000 <![CDATA[rTorrent Seg Faults recently, issue fixed and propagating]]> https://pulsedmedia.com/clients/index.php/announcements/582 https://pulsedmedia.com/clients/index.php/announcements/582 Fri, 16 Dec 2022 10:25:00 +0000
Turns out there were issues with its handling of libcurl. We just updated libcurl to a newer version, and that has solved the issue for those with constant crashes.
This update is now being propagated automatically to all servers in a slow rolling fashion. It will take roughly a week or so before it is rolled out to all servers.

]]>
<![CDATA[Black Friday Raffle: +1 Month to service for ALL of the Raffles]]> https://pulsedmedia.com/clients/index.php/announcements/581 https://pulsedmedia.com/clients/index.php/announcements/581 Thu, 15 Dec 2022 12:33:00 +0000
Therefore we are adding the second month completely FREE for every raffle still active!

This has already been applied. Open invoices will be shortly cancelled.

Happy Holidays!]]>
<![CDATA[Black Friday Raffle: 2nd month discount not applied automatically for some]]> https://pulsedmedia.com/clients/index.php/announcements/580 https://pulsedmedia.com/clients/index.php/announcements/580 Wed, 14 Dec 2022 09:34:00 +0000 There's been a software glitch and the 2nd month discount is not applied for some people.
Sometimes the promotion code is not applied to the service permanently, but should be since the initial order was discounted.
This results in the 2nd month being regular price for those affected.

If this is the case for you, please open a ticket and it will be taken care of and the discount applied manually by hand.

]]>
<![CDATA[MINI Server Beta Testing: Core i5-7500T, 16GB RAM, 256GB Kingston NVMe - 19.99€/First Month]]> https://pulsedmedia.com/clients/index.php/announcements/579 https://pulsedmedia.com/clients/index.php/announcements/579 Sat, 10 Dec 2022 21:21:00 +0000
These are still fully beta test units, so a special condition applies: if we deem these not to function as expected, we may cancel the service after the first month. Furthermore, there might be outages if we need to make physical changes.
If all goes well, you get to keep this server though.
We also expect some feedback.

So far testing has gone nicely.

Server Specs:
Core i5-7500T
16GB RAM
256GB Kingston NVMe
1Gbps Unmetered

Price: 19.99€ first month, then 29.99€
Delivery: 2 business days.  (Nodes are already online and OS has been installed)

Only 4 of the beta test units remain available; the rest are already allocated.

Order Here: https://pulsedmedia.com/clients/cart.php?a=add&pid=291&promocode=miniserverBeta2212


UPDATE: Extended to 8 public beta test units.]]>
<![CDATA[Real power consumption is now ~54.4% lower than the July average]]> https://pulsedmedia.com/clients/index.php/announcements/578 https://pulsedmedia.com/clients/index.php/announcements/578 Sat, 10 Dec 2022 12:51:00 +0000 As the older nodes can finally be shut down, the real difference in consumption is materializing.
Part of the reason is that less cooling is required during winter, but most of this is down to the server upgrades.

So we have already managed a whopping 54.4% reduction in power consumption, but our work is far from finished.

Electricity prices are sky-high right now. We fully expect to reach record costs in December, January and February consecutively; March might see some leveling off.
At current rates, electricity is expected to be roughly 30% of our net revenue for December; January and February could reach as high as 70%, and March would probably be around 50%.
Without all these efforts to lower power consumption, December alone could have seen electricity costs reach approximately 65-70% of net revenue, with January and February expected at 140-150% and March around 110% of net revenue.


Current figures, parenthesis from last announcement and from the first publication of these metrics:
8.67W/user (-1.28W, -4.7W)
2.26W/TiB Storage (-0.21W, -1.13W)
3.75W/TiB Allocated storage (+0.96W, +0.34W)

Power per TiB of allocated storage has increased dramatically due to all the currently unused capacity. As migrations progress, this number will come down.
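For clarity, these ratios are simply total measured wall power divided by the user and storage counts. A minimal sketch of the arithmetic (all input figures below are hypothetical, chosen only to land near the values quoted above):

```python
# Sketch of the W/user and W/TiB metrics; inputs are illustrative, not real fleet data.
total_watts = 52_000      # measured wall power of the seedbox fleet, W (hypothetical)
users = 6_000             # active users (hypothetical)
storage_tib = 23_000      # raw storage in service, TiB (hypothetical)
allocated_tib = 13_867    # storage actually allocated to plans, TiB (hypothetical)

w_per_user = total_watts / users               # W per user
w_per_tib = total_watts / storage_tib          # W per TiB of raw storage
w_per_tib_alloc = total_watts / allocated_tib  # W per TiB actually allocated

print(f"{w_per_user:.2f} W/user")
print(f"{w_per_tib:.2f} W/TiB storage")
print(f"{w_per_tib_alloc:.2f} W/TiB allocated")
```

Note that unused capacity shows up only in the allocated metric, which is why it can rise while the other two fall.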

Only new server metrics currently, parenthesis is comparison to all servers:
6.85W Per User (-1.82W)
1.95W Per TiB of Storage (-0.31W)

Older servers to be migrated metrics, parenthesis comparison to new servers:
11.42W Per User (+4.57W)
3.12W Per TiB of Storage (+1.17W)

So clearly there is still a lot of optimization work to be done.

As soon as users are migrated to newer hardware, we will start thinking about what our future service lineup will look like.

]]>
<![CDATA[Short network issue on one of the TOR switches, affecting single rack]]> https://pulsedmedia.com/clients/index.php/announcements/577 https://pulsedmedia.com/clients/index.php/announcements/577 Thu, 08 Dec 2022 05:43:00 +0000 One of the TOR switches suffered a short downtime, affecting a single rack.

In essence this was human error for not double-checking, but it is also a software issue when a single interface going down on a LAG group brings down the whole LAG group.
This failure mode will be removed with a router upgrade, which is in the plans.

If you are a sysadmin or network admin, you will probably find this somewhat funny: a device that is supposed to be as failure-proof as possible having this kind of glitch.]]>
<![CDATA[Network maintenance scheduled for today]]> https://pulsedmedia.com/clients/index.php/announcements/576 https://pulsedmedia.com/clients/index.php/announcements/576 Mon, 05 Dec 2022 14:19:00 +0000 We have to do some network maintenance, which might include rebooting our edge router.

This work can cause intermittent issues, or even a whole-network outage for a few minutes at a time. We will try to keep you updated as the work continues, but the actual work takes priority over updates.

We estimate the biggest piece of work to take less than 30 minutes, and it might cause intermittency across the whole network.
The rest of the work is isolated to small areas of the network and might last until tomorrow morning.

UPDATE: We have identified 2 issues, one of which is highly actionable and a second which requires more investigation. Follow-ups on the network status page: https://pulsedmedia.com/clients/serverstatus.php

]]>
<![CDATA[Current power usage metrics on per user and per TiB basis]]> https://pulsedmedia.com/clients/index.php/announcements/575 https://pulsedmedia.com/clients/index.php/announcements/575 Sun, 27 Nov 2022 14:20:00 +0000 Progress has been slower than before, as we have reserved capacity for the Black Friday / Cyber Week sales.

Still, progress has been made.
We expect to continue migrations soon, and that will make a huge difference on the W/User metric.

Current figures, parenthesis from last announcement and from the first publication of these metrics:
9.95W/user (+0.10W, -3.42W)
2.47W/TiB (-0.55W, -0.92W)
2.79W/TiB allocated (-0.23W, -0.62W)

]]>
<![CDATA[Some new servers with self-signed SSL Cert -> Let's Encrypt Limits met]]> https://pulsedmedia.com/clients/index.php/announcements/574 https://pulsedmedia.com/clients/index.php/announcements/574 Sat, 26 Nov 2022 17:24:00 +0000
We are looking at alternatives for the future so we do not hit these limits again, but those changes will happen in Q1/2023 at the earliest.]]>
<![CDATA[M10G Series Upgraded To ALL AMD EPyC or Ryzen!]]> https://pulsedmedia.com/clients/index.php/announcements/573 https://pulsedmedia.com/clients/index.php/announcements/573 Sun, 20 Nov 2022 18:09:00 +0000 Most will go to the latest standard EPyC with hybrid SSD-cached storage as well.

Remaining users on non-EPyC or Ryzen servers will be upgraded and migrated to these upgraded servers within months.

Performance has increased very significantly for these.]]>
<![CDATA[New migration script which attempts to migrate absolutely everything]]> https://pulsedmedia.com/clients/index.php/announcements/572 https://pulsedmedia.com/clients/index.php/announcements/572 Tue, 15 Nov 2022 08:15:00 +0000
So we created a new migration script which copies everything except items on an exclusion list.
This is still beta, and might break users badly -- but it gets almost everything now.

If you were recently migrated and want us to rerun the migration for you with the new script, please contact support.]]>
<![CDATA[Some 10Gbps and 20Gbps dedicated servers available]]> https://pulsedmedia.com/clients/index.php/announcements/571 https://pulsedmedia.com/clients/index.php/announcements/571 Thu, 10 Nov 2022 20:36:00 +0000 10G UNMETERED DEDIS
* Xeon X5660 6c/12t, 144GB ECC RAM, 4x16TB: 270€ a month (1 available)
* Xeon X5660 6c/12t, 72GB ECC RAM, 4x14TB: 250€ a month (many available)
* 2*Xeon L5630 8c/16t, 48GB ECC RAM, 6x8TB: 230€ a month  (2 available)
* 2*Xeon L5630 8c/16t, 96GB ECC RAM, 6x8TB: 245€ a month (1 available)
* 2*Xeon L5520 8c/16t, 144GB ECC RAM, 4x10TB: 245€ a month (1 available)
* Xeon L5640 8c/16t, 48GB ECC RAM, 4x8TB: 220€ a month  (many available)
* Xeon L5640 8c/16t, 96GB ECC RAM, 6x8TB: 270€ a month  (many available)
* 2*Xeon L5630 8c/16t, 96GB ECC RAM, 6x10TB: 255€ a month (1 available)

Available immediately, and until an estimated 17/11/2022

EPYC 20Gbps Server #1:
AMD EPyC 7551 32core/64thread, 128GB DDR4 ECC, 2x SX8200 256GB NVMe (Boot), 12x 16TB 7200rpm HDD
10G Unmetered: 439€/Month
20G Unmetered: 689€/Month
Available immediately

EPYC 20Gbps Server #2:
AMD EPyC 7001 32core/64thread, 128GB DDR4 ECC, 2x 970 Pro 512GB NVMe (Boot), 12x 16TB 7200rpm HDD
10G Unmetered: 449€/Month
20G Unmetered: 699€/Month
Available from the end of November.


Setup within 1 week of paid order.

Contact sales to order one of these.

]]>
<![CDATA[Re-organizing for lower power consumption AND more performance is progressing faster and faster]]> https://pulsedmedia.com/clients/index.php/announcements/570 https://pulsedmedia.com/clients/index.php/announcements/570 Thu, 10 Nov 2022 13:01:00 +0000 Immense progress in just one week. This really shows the "S-curve" in action: a rather slow start on changes, but it's getting faster and faster.

Today we are calculating - parenthesis for 1 week change and from first announcement:
9.85W/user (-1.24W, -3.52W)
3.02W/TiB (+0.03W, -0.37W)
3.02W/TiB allocated (-0.39W)

We are very near 100% provisioned now -- no more under-provisioned servers! :) "The fat has been trimmed." Curbing abusive users has been a very significant step toward this: yes, those trying to allocate 10x+ of server RAM, or making 500+ connections PER torrent or PER rclone transfer. Server optimizations have also worked brilliantly. The end result is that all servers can finally be fully provisioned again, AND we even saw a network bandwidth utilization record despite a lot of migrations being underway.

There is a small lag before these gains show up in the real world, as servers under migration have already been removed from the calculation but are yet to be shut down; this lag is typically 1-2 weeks.

Some servers are ready to be deployed right now, and these will lower these figures significantly.

]]>
<![CDATA[Dragon-R: Almost sold out, more servers planned to come online within 2 weeks]]> https://pulsedmedia.com/clients/index.php/announcements/569 https://pulsedmedia.com/clients/index.php/announcements/569 Wed, 09 Nov 2022 13:10:00 +0000
More servers are planned to be installed within 2 weeks.
These servers are already racked, and are only waiting on a shipment of accessories.]]>
<![CDATA[Good progress on power consumption per user]]> https://pulsedmedia.com/clients/index.php/announcements/568 https://pulsedmedia.com/clients/index.php/announcements/568 Thu, 03 Nov 2022 19:46:00 +0000 While upgrading servers, we have already managed to reduce power consumption by leaps and bounds.

We are currently seeing 11.09W per user and 2.99W per TiB of storage. Per allocated unit of storage, we are measuring 3.41W/TiB.
Measurements are from in-production seedbox servers; the savings are realized as old servers are shut down, typically 1-1.5 weeks after migration to new hardware.

These are quite big differences from our previous announcement about this just a few weeks back, as the plans come to fruition.
Most interestingly, users are actually getting a faster and better experience as the hardware is upgraded; this is due to the shift in hardware & software paradigms. We now see a roadmap to further decrease power consumption while increasing service quality. It is a lot of work and needs a lot of expensive new hardware, but the roadmap is clearing up.

]]>
<![CDATA[Dragon-R expected to be back in stock by end of this week (UPDATE)]]> https://pulsedmedia.com/clients/index.php/announcements/567 https://pulsedmedia.com/clients/index.php/announcements/567 Wed, 26 Oct 2022 19:51:00 +0000
New servers will be in production by end of this week.

See the offers at: https://pulsedmedia.com/dragon-r-20gbps-rtorrent-seedboxes.php

Update: Delayed by several days.]]>
<![CDATA[Our users are 1337]]> https://pulsedmedia.com/clients/index.php/announcements/566 https://pulsedmedia.com/clients/index.php/announcements/566 Fri, 21 Oct 2022 09:03:00 +0000
There are also a lot of legacy servers, which push these wattages a bit higher.

Progress has started to change this.]]>
<![CDATA[Enhancing service performance AND performance stability; A stop to RAM and CPU abuse]]> https://pulsedmedia.com/clients/index.php/announcements/565 https://pulsedmedia.com/clients/index.php/announcements/565 Wed, 19 Oct 2022 17:26:00 +0000 RAM abuse, and sometimes CPU abuse, has become very rampant this year.
Many users have been consuming 50%+ of server RAM.

We have therefore introduced strict memory limits: the SOFT LIMIT is set to the advertised maximum, and the HARD LIMIT is double the advertised RAM for now, as people settle in.
Some CPU usage metrics are also now shown, along with "tasks" (processes).

You can see your RAM usage on the Welcome tab below quota & traffic, and the extra info on the Info tab, at the start of the stats.

These actions should ensure stable performance even with abusive users present on the same system.
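Soft/hard memory limits like these are commonly enforced with Linux cgroups; purely as an illustration (this is an assumption on our part, not a description of the actual implementation), a systemd resource-control drop-in for a user on a 4GB-advertised plan could look like:

```ini
; Hypothetical drop-in: /etc/systemd/system/user-1000.slice.d/memory.conf
[Slice]
; Soft limit = advertised RAM: the kernel reclaims/throttles above this.
MemoryHigh=4G
; Hard limit = 2x advertised: processes are OOM-killed above this.
MemoryMax=8G
```

With cgroup v2, MemoryHigh applies memory pressure without killing, which matches the "soft limit" behavior described above, while MemoryMax is a hard ceiling.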

Several dozen servers have been updated for this today.

At the same time, there is a general software update plus a new kernel for enhanced performance.

]]>
<![CDATA[Our website was DDOS attacked today]]> https://pulsedmedia.com/clients/index.php/announcements/564 https://pulsedmedia.com/clients/index.php/announcements/564 Sat, 15 Oct 2022 14:14:00 +0000
This has now been mitigated; sorry for the few minutes of inaccessibility or slow access to our website you may have experienced.]]>
<![CDATA[Helsinki datacenter: Electrical maintenance (Finished)]]> https://pulsedmedia.com/clients/index.php/announcements/563 https://pulsedmedia.com/clients/index.php/announcements/563 Mon, 03 Oct 2022 14:07:00 +0000
ETA to finish is under 30 minutes]]>
<![CDATA[Changes to Bonus Storage, balancing more towards euros paid as an inflation adjustment.]]> https://pulsedmedia.com/clients/index.php/announcements/562 https://pulsedmedia.com/clients/index.php/announcements/562 Fri, 30 Sept 2022 15:09:00 +0000
Hence we made following changes:
 * 0.5% for each month since service creation, was 1%
 * 0.1% for each month since billing profile creation, was 0.2%
 * 1% for each 62.50€ paid, was 50€

The rest remains the same; this moves the balance slightly towards those with higher-end services. The adjustment is calculated from the rough average of total inflation vs. industrial producer inflation, hence 25% is the rough average.
Remember that if you have multiple services, the euros paid count across all of them, and the same goes for billing profile age. See the examples in the wiki.

These changes also ensure slightly more even distribution.

The average user currently has 34.21% bonus storage.
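In code form, the adjusted bonus percentages above work out as simple per-month and per-euro increments. A sketch (simplified: it ignores any caps or other factors the full wiki policy may define, and the example inputs are hypothetical):

```python
def bonus_pct(service_age_months: int, profile_age_months: int, total_paid_eur: float) -> float:
    """Bonus storage %, per the adjusted policy (simplified sketch)."""
    bonus = 0.5 * service_age_months          # 0.5% per month of service age (was 1%)
    bonus += 0.1 * profile_age_months         # 0.1% per month of billing profile age (was 0.2%)
    bonus += 1.0 * (total_paid_eur // 62.50)  # 1% per full 62.50€ paid (was per 50€)
    return bonus

# e.g. a 24-month-old service, 36-month-old billing profile, 250€ paid in total:
print(round(bonus_pct(24, 36, 250), 1))  # 12 + 3.6 + 4 = 19.6 (% bonus)
```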

You can check full information of the bonus storage program at: https://wiki.pulsedmedia.com/index.php/Pulsed_Media_Free_Bonus_Disk_Policy]]>
<![CDATA[New service signup pricings updated, electricity price calculations and information]]> https://pulsedmedia.com/clients/index.php/announcements/561 https://pulsedmedia.com/clients/index.php/announcements/561 Fri, 30 Sept 2022 13:29:00 +0000 Only a few services survived without big markups -- those with the highest efficiency.

Many of the servers we use now cost as much as 77€ a month EACH in electricity -- and that's just for a 4-drive server. These used to cost merely 16€/month just a few months ago, an almost 5x increase. Some servers are even in the 200€/month range for power alone.
Even the lowest-power systems, where power barely entered the equation, are now more than 12€ a month in power at average use -- the ZEN MiniDedis, which we have sold plenty of in the 20-25€ price range.

How did we end up with these prices?
While our current electricity costs have "merely" almost quadrupled so far, that's not the only cost. Cooling adds a multiplier to that cost, and we had to estimate a slight increase over today's pricing as well. Going from a raw cost of, say, 0.10€/kWh with a PUE of 1.25, the actual price was 0.125€/kWh -- that is, an additional 0.025€/kWh for cooling. If you go to 0.35€ + 25% = 0.4375€/kWh, the cooling addition is now 0.0875€/kWh. We took the expected electricity costs for this week and added a few percent on top as a safety margin. We do, however, expect prices to go well beyond this over the winter. The market is now again peaking at 0.70€/kWh.

How is electricity price formulated?
We take the raw cost and then add all the infrastructure and maintenance related to power delivery. For example, we take the chillers with their expected lifetime and annualized expected maintenance costs.
We then add the annualized power consumption of cooling. Finally, we divide these total costs by the production capacity to get the kWh rate. We run a highly efficient datacenter, but even then it's quite a multiplier, since during summers we do have to run chillers.
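The worked numbers above are just the raw rate multiplied by the PUE (Power Usage Effectiveness); a small sketch reproducing them:

```python
def effective_rate(raw_eur_per_kwh: float, pue: float = 1.25) -> float:
    """Effective €/kWh once the cooling overhead (PUE multiplier) is applied."""
    return raw_eur_per_kwh * pue

# Old: 0.10 €/kWh raw -> 0.125 effective, i.e. 0.025 €/kWh for cooling.
# Now: 0.35 €/kWh raw -> 0.4375 effective, i.e. 0.0875 €/kWh for cooling.
for raw in (0.10, 0.35):
    eff = effective_rate(raw)
    print(f"raw {raw:.2f} -> effective {eff:.4f} (cooling share {eff - raw:.4f})")
```

The cooling share scales with the raw rate, which is why the same PUE hurts 3.5x more at the new prices.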


]]>
<![CDATA[Electricity prices continue to climb. Expect server migrations in near future]]> https://pulsedmedia.com/clients/index.php/announcements/560 https://pulsedmedia.com/clients/index.php/announcements/560 Tue, 13 Sept 2022 13:14:00 +0000
It is quite certain we will see another +50% next month.

----

Hardware is on order and we should receive it in a week or two; we will start migrating users to upgraded servers as soon as we can.

Some migrations may start happening already this week for server upgrades.]]>
<![CDATA[Inflation, Energy crisis and exchange rate changes are disruptive]]> https://pulsedmedia.com/clients/index.php/announcements/559 https://pulsedmedia.com/clients/index.php/announcements/559 Sat, 03 Sept 2022 12:25:00 +0000 Pulsed Media: Inflation, Energy crisis and exchange rate changes are disruptive

Pulsed Media is not immune to market conditions, and the market conditions have changed dramatically over the past year. Energy prices started to increase heavily a year ago, but are now changing wildly week by week. Market pricing indicates a significant multiplication of electricity prices, with some people already receiving up to 10x their previous rates; 5x has become commonplace.

Our electricity price has already nearly doubled, and we are afraid it could more than quadruple from there -- potentially 8x or more of the price we've enjoyed for the past 6 years in our current datacenter. Electricity is, and has been, one of our larger costs, but now it is the largest. Some people are already paying quadruple our current, already increased rate, and the best deals you can find right now are far more than quadruple our original electricity cost.

There have also been discussions about potential power outages during the upcoming winter. How long and to what level, if any, is yet to be seen. Finland currently runs an electricity deficit and has to rely on imports, so during a period of extreme cold without wind we could see power outages. Our datacenter is in Helsinki, next to a power plant, but even that does not guarantee we will avoid outages.

Electricity has always been a big part of our cost structure, and competitive electricity price and efficient cooling has meant that we have leaned very heavily on that advantage we've had in the past. Things have now changed with the energy crisis, and we are looking at very high electrical costs.

Our electricity is bought in bulk, in combination with many other companies and for the long term, but even that is not immune to market conditions. Typically this way of buying is a little more expensive than exchange spot rates, but stable and predictable; on the other hand, when electricity prices start going down, our cost goes down more slowly than the overall market. Current public offers for electricity are already more than 4x what we used to pay, with some companies charging vastly more. Offers priced over 0.50€/kWh are increasingly common by the week.

Europe is not alone in being affected by the energy crisis. The whole global economy is affected, albeit to a lesser degree than Europe.



Inflation, exchange rates and supply shortages

Inflation is very high as well: all operational expenses (OpEx) have increased over the past 2 years, and some costs have increased dramatically, not just energy. Official numbers hover around the 9% mark, and the 12-month Euribor interest rate is climbing at a rampant pace.

The USD-EUR exchange rate has changed dramatically this year, meaning all hardware now costs roughly 15% more. Every service priced on a USD basis costs that much more now. The good news is that those with USD as their billing currency are essentially getting a 15% discount.

Finally, we have supply shortages for some components; some things are simply not available at all, at any cost. In some cases we can find those parts at highly inflated prices, more than 2x the typical. Sometimes you are lucky to even find a "scalper" with the product you need in stock.

Our target market of low-end, extremely good value services has necessitated very precise calculation of all costs, and a very precise pricing structure, in order to remain a viable and profitable business for future expansion and to create ever better value proposition services year over year.

 

Some of the cost increase highlights over the past 2 years

* Electricity: +~89%; estimate for Q4/22-Q1/23 is +400-800%
* Staffing +~50%
* Networking/Transit: +~23%
* Fuel costs: +~60 - 100% (fluctuates heavily)
* Buildings/Spaces: +~90%
* Accounting/Admin overhead: +~75%
* Hardware costs due to currency exchange rates: +~15%


We have managed to find some efficiencies as well, which has boosted revenue without accompanying cost increases, but these cannot counteract even the inflation alone, never mind the energy crisis on top.



Price increases for services

Sadly, due to all these factors, we have to increase pricing all around, for new and pre-existing customers. These changes take effect immediately or within a few days. New pricing will apply on renewals and new orders; there are no mid-payment-term changes. If you already have an open renewal invoice, it is unaffected. New signup pricing takes a little longer to change, as we have to redo all the expense maths for all of the service ranges; these changes will happen by the end of September, or with rough estimates before that.

Relative to revenue, the most expensive accounts are the smallest ones. The sub-5€ group of services represents about ~31% of services but only ~10% of revenue, while being the source of about 80% of uncommon events. Seedbox costs are highly dependent on the number of users per performance domain (i.e. a server, or a RAID array), so catering to this group actually increases production costs as well. The higher-end service group is the complete opposite; therefore we will increase its pricing significantly less than the actual increase in operational costs (OpEx).
The price increases will therefore be linearly adjusted by service price, from 35% down to 15% on 10-30€ services -- lower percentages on higher-cost services -- plus another 3.50€ for all sub-5.50€ accounts, which are the most sensitive to OpEx changes.

The final pricing increases will be 35% to 15% scaling from 10€ to 30€ per month. Examples:

* 10€ current price, +35%, final price from 10€: 13.50€
* 20€ current price, +25%, final price from 20€: 25.00€
* 30€ or above current price, +15%, final price from 30€: 34.50€

This price increase will go to cover the increased operational costs.
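The examples above follow a linear interpolation of the increase percentage, from +35% at 10€ down to +15% at 30€. A sketch reproducing them; note that the flat 3.50€ surcharge for sub-5.50€ accounts comes from the earlier paragraph, and the handling of prices between 5.50€ and 10€ (clamped to +35%) is our assumption, not stated in the announcement:

```python
def new_price(current_eur: float) -> float:
    """Sketch of the announced increase (assumptions noted in the text above)."""
    if current_eur < 5.50:
        return current_eur + 3.50                # flat surcharge for the smallest accounts
    clamped = min(max(current_eur, 10.0), 30.0)  # assumption: clamp outside the 10-30€ band
    pct = 35.0 - (clamped - 10.0) * (35.0 - 15.0) / (30.0 - 10.0)
    return current_eur * (1 + pct / 100.0)

for p in (10, 20, 30):
    print(f"{p}€ -> {new_price(p):.2f}€")  # 13.50€, 25.00€, 34.50€
```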



Future plans and energy crisis mitigation options for the long term

Meanwhile, we are starting a program to heavily reduce our electricity consumption. Several plans are already being executed, and we will seek outside funding to achieve these goals if we have to. Pulsed Media is currently a net-debt-free company and has been revenue-funded until now.

Some of the plans are still aspirational, so we'll keep it brief, and it takes time to move things in a new direction. Think of it as a freight ship: even if you turn the rudder all the way now, it takes a long time for the ship to even start turning. Some of these plans have been brewing for more than a year, so progress on them has already started.

First, we got the keys to our second datacenter in August. Once we build it out, we plan to install solar panels, and the building owner is interested in buying our waste heat during the winter. This new datacenter is not very large either; we expect roughly 400kW final capacity. Solar panels require some bureaucracy, and we are not yet certain of the scale we can build, but aspirationally we would like to reach around a 600kW (+50% over the load) solar installation over the coming years, depending on the net metering we can negotiate. The site will have extreme economizer airflow volume, capable of cooling more than 200kW @ 22 Delta-C by itself during the hottest days of summer. This will take significant time to develop, with multiple contractors involved, and will be built out in phases. We expect a very small production volume, with compute-only centric customers, to begin during this winter.


Second, and easier to achieve: extreme cost cutting by upgrading servers. Newer platforms consume less energy while giving higher performance -- no news there at all -- but in some cases we are measuring even 110W per 1U server in electricity savings. Secondly, we are looking into advanced storage setups, such as effective NVMe caching, and already have the first servers up and running; this would allow us to have bigger performance domains // more users per performance domain. So far the test results have been _very_ promising, both for upgrading to newer platforms and for NVMe caching.

Newer platforms will directly cut electricity consumption, but will still take years to pay off. NVMe/SSD caching would allow more users per performance domain while increasing performance. HDD performance, especially random performance, does not increase as capacity increases, so the only ways to increase the number of users per HDD are caching, or smaller performance domains (next item).

We have already started this process, but it will likely take years to complete.


Third, we are looking to partially virtualize, using bigger, beefier servers split into multiple virtual machines for performance domain isolation -- saving a lot in electricity costs per unit while increasing performance. This has been on our internal development roadmap since earlier this year, and several very large storage servers are already on order with one of our server suppliers. It ties in with the hardware upgrades above to further cut electricity expenses. These will first roll out to the Dragon-R tier of services (20Gbps RAID10).


Fourth, some legacy services -- really old services where the end-user storage capacity is small -- we are looking to upgrade to SSD-based service. These account for only a few percent of our total electricity consumption, however. This also requires hardware investment, so it is a minor item and will be done at the speed of convenience.


Fifth, we are going to beta test using regular consumer motherboards in regular servers, as some very well known large brands in the dedicated server niche do. We will begin with 2U-4U sizes for easier cooling. Currently, the only consumer motherboards we employ as servers are in the ZEN MiniDedi lineup used for M1000 SSD seedboxes. They are absolutely brilliant for the task, but sadly currently out of stock everywhere. This has big potential for long-term performance increases, and therefore decreased electricity consumption.


Sixth, a UPS upgrade. We currently use Eaton PowerWare UPS systems. These consume a lot of electricity even when idling. Upgrading to modern transformerless hybrid inverters could potentially pay for itself through the increased efficiency alone, let alone using lithium batteries instead of lead-acid. Lead-acid batteries need to be replaced every few years at great expense, and can be trusted for only approximately 50% of their rated capacity. Further, this should greatly increase reliability. The move would not make sense without the huge increases in energy costs. It could also help if power outages happen, but sadly USA-to-Finland sea freight is currently rather slow, so there is no way we can get lithium batteries imported by this winter. A typical UPS has battery packs for mere minutes of operation, and loses some of that extremely limited capacity each time they are drained. The earliest we can do this replacement looks to be next summer.

We might be forced to take the legacy-type UPS units completely offline during the upcoming winter if we get a really bad electricity rate; there is no sense running a UPS that cannot last through a blackout while consuming huge amounts of energy. Our own experience, and reports from other datacenter owners, is that these units tend to cause more downtime than they prevent, by a big margin. Hence we were already considering replacing them -- but that does require shutting down some devices for the duration of the swap.


Finally, we have to change our whole modus operandi based on the increased OpEx and the risks associated with the current crises -- at least until we get our new datacenter running at capacity with sufficient solar arrays. This means targeting somewhat higher-end services, semi-dedicated and such, as this shifts the cost structure from OpEx-centric to capital expenditure (CapEx) centric. Unfortunately, that means the days of sub-10€ services might be mostly gone for a while -- unless we find new ways to control resource consumption to an acceptable level on the entry-level packages, which is not a trivial task on seedboxes, which are really hungry for those juicy IOPS! (IOPS = Input/Output oPerations per Second, a measure of storage performance.)

Rest assured, we are working hard on finding new efficiencies and will keep providing ever better services year after year. Necessity is the mother of invention.



Not all gloom and doom

It's not all gloom and doom; we expect performance on our services to increase gradually as these plans get into motion. As part of the solution to this energy and inflation crisis, a lot of long-time users will get service upgrades along with the hardware upgrades. It has been standard operating procedure -- though never advertised or publicly mentioned before -- to upgrade long-time users, after a threshold of several years, to a notch higher grade of service as the hardware and/or software (distro) gets updated. We really like our long-time users, hence we do that along with the magnificent bonus storage quota. Bonus storage quota averages about 23.44% of the allocated storage capacity; in other words, the average user gets 123.44% of the advertised storage capacity. Long-time users can have multiple times their original storage capacity.

We also have new service launches coming in Q4/2022; hopefully, all the effort needed to cut electricity consumption does not derail or further delay these plans.


You can discuss this on our Discord; link (valid for 7 days): https://discord.gg/buTTbezZ

]]>
<![CDATA[Extreme high electrical prices in Finland, at times over 1€/KWh]]> https://pulsedmedia.com/clients/index.php/announcements/558 https://pulsedmedia.com/clients/index.php/announcements/558 Wed, 10 Aug 2022 17:47:00 +0000
Some nuclear plants are under maintenance, and low solar power combined with low wind power is causing quite a bit of turbulence.

This had not affected us greatly until last month, when we received a near-100% electricity rate increase at our Helsinki DC. This was before these record-high price moments.

We are not yet adding this to our pricing; we had been expecting it for nearly 6 months.
Let's hope the energy market pricing stabilizes soon.

Finland is still coping much better than some other European countries, however; see this 7-day chart:
(Chart: Finland Nord Pool spot pricing over 7 days)]]>
<![CDATA[Warning; Using Microsoft email services such as Hotmail or Outlook is again dropping e-mail silently]]> https://pulsedmedia.com/clients/index.php/announcements/557 https://pulsedmedia.com/clients/index.php/announcements/557 Tue, 26 Jul 2022 12:19:00 +0000
Microsoft's e-mail services tell our MTA that they accepted our e-mail and forwarded it to the recipient, but in reality they just silently drop it -- breaking RFC standards.

We are not the only ones by far with this issue, for example PyPi issue here: https://github.com/pypa/pypi-support/issues/271



Solution: move to a reputable, reliable e-mail service provider -- none of the usual big ones, since all of them do shenanigans like this and like to spy on your e-mail. There are plenty of others to choose from, such as Tutanota, Proton, etc. We have not seen this happen with Yahoo or Yandex, so they might be workable among the big operators, but check their privacy policy first to see whether they are allowed to read your e-mail.

You can use a subaccount on your billing profile to add the e-mail yourself, create a new billing profile and have your services transferred over, or, with a strong security check (multiple factors), we can change your main e-mail address as well.]]>
<![CDATA[LT1 top of the rack switch issues, ZEN MiniDedis / M1000 SSD series affected [FIXED]]]> https://pulsedmedia.com/clients/index.php/announcements/556 https://pulsedmedia.com/clients/index.php/announcements/556 Wed, 13 Jul 2022 09:53:00 +0000
The switch has been having intermittent connection issues, packets being dropped severely.

All metrics look good however, optics are intact, next level aggregation switch reports links OK etc.


RESOLVED: The issue has been fixed; the root cause was a configuration error. The lt1 TOR switch also exhibited "half duplex"-like behavior, and fixing the config error on another switch resolved that as well. We will continue to monitor the situation, but current testing shows this has been resolved.]]>
<![CDATA[EUR-USD Exchange rate met parity today, affecting customers with USD billing accounts]]> https://pulsedmedia.com/clients/index.php/announcements/555 https://pulsedmedia.com/clients/index.php/announcements/555 Tue, 12 Jul 2022 13:48:00 +0000 EUR is down from approximately 1.188 USD per EUR to roughly 1.00 now.

That is a dramatic change over the course of one year.]]>
<![CDATA[Helsinki datacenter electrical issue: RESOLVED. Update: 07:26AM]]> https://pulsedmedia.com/clients/index.php/announcements/554 https://pulsedmedia.com/clients/index.php/announcements/554 Mon, 27 Jun 2022 01:25:00 +0000 There was an electrical issue at the building's electrical cabinet. It took a little while for the electrical contractor to arrive during the holiday Sunday-Monday night.
At roughly 01:45 AM local time one of the phases went down on one of our electrical feeds. This caused a cascade on the cooling side, since the 3-phase chillers need all three phases.
The issue was quickly isolated by roughly 02:30 AM and the building's on-call staff were notified. Around 05:10 AM servers started to come back online.

Root cause has been isolated and steps are being taken to ensure this does not happen again.

The issue has now been fixed, and almost all hardware is up and running.


There are still a couple of servers which need more attention and might take a day or two extra, as staff are exhausted after the middle-of-the-night alert.
The servers known with remaining issues are: sentnel, walton, greenwood, dumper

]]>
<![CDATA[LT5 rack uplinks fixed, capacity increased]]> https://pulsedmedia.com/clients/index.php/announcements/553 https://pulsedmedia.com/clients/index.php/announcements/553 Thu, 23 Jun 2022 10:15:00 +0000
We noticed this because, during the past couple of days, the rack hit its maximum uplink usage for the first time.
This is now fixed and there is much more uplink capacity on the rack.]]>
<![CDATA[NEW Ryzen SSD seedboxes with HUGE traffic ratio released!]]> https://pulsedmedia.com/clients/index.php/announcements/552 https://pulsedmedia.com/clients/index.php/announcements/552 Sat, 18 Jun 2022 17:14:00 +0000
All servers in this series use AMD Ryzen CPUs and come with big traffic ratios! We even managed to keep the number of users per server quite decent.

Check it all out at: https://pulsedmedia.com/m1000-ssd-seedbox.php]]>
<![CDATA[Network maintenance today]]> https://pulsedmedia.com/clients/index.php/announcements/551 https://pulsedmedia.com/clients/index.php/announcements/551 Tue, 31 May 2022 11:12:00 +0000 <![CDATA[Maintenance on some SSD nodes and ZEN MiniDedis [FINISHED]]]> https://pulsedmedia.com/clients/index.php/announcements/550 https://pulsedmedia.com/clients/index.php/announcements/550 Tue, 10 May 2022 11:41:00 +0000
At most 8 units should be affected, and expected downtime is in the 5-15 minute range.]]>
<![CDATA[Significant number of servers down on Helsinki DC]]> https://pulsedmedia.com/clients/index.php/announcements/549 https://pulsedmedia.com/clients/index.php/announcements/549 Tue, 26 Apr 2022 19:06:00 +0000
Staff is en route to the DC to check what is going on.

UPDATE 1: An AC unit had failed, which caused some servers to run their fans at full power, which in turn tripped single phases in a couple of racks, causing switches to go down and thus a larger number of servers to fail. An extra AC unit has been powered on, and repair of the failed AC unit has been ordered. Two more similarly sized AC units were already on order to increase the redundancy margin. All servers are back online now.]]>
<![CDATA[Support response times currently prolonged]]> https://pulsedmedia.com/clients/index.php/announcements/548 https://pulsedmedia.com/clients/index.php/announcements/548 Mon, 11 Apr 2022 09:40:00 +0000 <![CDATA[Potential information leak issue disclosed and fixed, being rolled out as we speak. Expect new software glitches.]]> https://pulsedmedia.com/clients/index.php/announcements/547 https://pulsedmedia.com/clients/index.php/announcements/547 Sat, 12 Mar 2022 16:27:00 +0000 Some things are almost guaranteed to break, and some things will become harder to manage.
These changes also allow server abusers to get away much more easily.

Someone publicized a small potential attack vector for information leaks on user files. These settings had been present for a long time, and were initially required for SSH authorized keys to function.
To exploit this you need to know the filename, and the file's permissions have to be set for OTHER read or write. Some applications may default to these permissions.
This has been fixed now, but it is possible some things will break as a result.
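As an illustration of the permission bits involved (not official Pulsed Media tooling; the example file is created just for the demo), stripping the "other" bits is a one-line fix any user can apply to their own files:

```shell
#!/bin/sh
# Demonstrate removing all "other" permission bits from a file,
# so unrelated users on the same server can no longer read it.
f=$(mktemp)            # throwaway example file
chmod 644 "$f"         # the risky state: world-readable
chmod o-rwx "$f"       # strip read/write/execute for "other"
stat -c '%a' "$f"      # prints 640: owner rw, group r, other nothing
rm -f "$f"
```

Running `chmod -R o-rwx` over a data directory applies the same fix recursively.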

The second issue is that process names can sometimes reveal another user's IP address. Further, seeing running processes made usernames easily visible for attempting the above.
Hiding processes does not explicitly disallow seeing other people's usernames, but does stop ProFTPD from leaking user IPs.
This will also likely cause all kinds of management headaches, and will also allow server abusers to hide just a little bit longer.

Further, this forces us to skip our normal rolling updates and software quality assurance, and might force us to spend the next 8 weeks updating every older server manually. We are investigating whether all these changes can be implemented without upheaving thousands of users all of a sudden.

You can see changes at: https://wiki.pulsedmedia.com/index.php/PM_Software_Stack

**UPDATE** This got fixed within 2 hours of the information reaching us, and within ~3 hours of public disclosure.


]]>
<![CDATA[Rack LE6 - 1phase is down **Fixed**]]> https://pulsedmedia.com/clients/index.php/announcements/546 https://pulsedmedia.com/clients/index.php/announcements/546 Fri, 11 Mar 2022 09:24:00 +0000 Some servers are down, others are up. Looks like an electrical issue.

On-site intervention underway.

UPDATE 14:03 11/03/2022: Issue fixed. A switch PSU had failed and caused the fuse for one phase to trip. Approximate usage was ~3.3A on a 16A circuit. The switch has dual PSUs, so the rest of the rack remained online.]]>
<![CDATA[Issues with one top of the rack switch, approximately 40 nodes affected [FIXED]]]> https://pulsedmedia.com/clients/index.php/announcements/545 https://pulsedmedia.com/clients/index.php/announcements/545 Wed, 02 Mar 2022 10:23:00 +0000
Expect very high packet loss on nodes through this switch.
Approximately 40 nodes are affected: mostly 1Gbps M1000 series, plus some V1000 series.

UPDATE: Switch replacement expected for roughly 19:00 GMT +2
UPDATE 2: It was just a fiber module issue; everything looked fine apart from the humongous packet loss. The failed fiber module has been replaced.]]>
<![CDATA[Community guides/scripts for: Sonarr, Radarr, Prowlarr, Cloudplow, Sabnzbd, Jellyfin, NZBGet, ZNC, Jackett]]> https://pulsedmedia.com/clients/index.php/announcements/544 https://pulsedmedia.com/clients/index.php/announcements/544 Sun, 13 Feb 2022 18:28:00 +0000
We have not tested these ourselves yet, so your mileage may vary.

People are now collating these, and we will either incorporate them as documentation OR use them as a basis to finally integrate all of these apps.

See:
https://lowendtalk.com/discussion/177200/pulsedmedia-seedbox-giveaway-details-inside#latest
https://old.reddit.com/r/seedboxes/comments/srgmmi/arr_app_installation_script_for_pulsedmedia/
https://old.reddit.com/r/seedboxes/comments/sqoory/app_install_scripts_or_guides_for_pulsedmedia/]]>
<![CDATA[Automatic use of credit on open invoices]]> https://pulsedmedia.com/clients/index.php/announcements/543 https://pulsedmedia.com/clients/index.php/announcements/543 Thu, 10 Feb 2022 14:28:00 +0000
During this one-month period we are listening for feedback from all users in either direction.
This would help a great many users, but for volume & reseller users it might cause some fuss with the need to submit cancellation requests early on, which is the original reason this was disabled: many resellers had issues with forgetting to cancel services they did not intend to keep.

However, since we are primarily business-to-consumer (i.e. end users), our view has been that we should primarily focus on the user experience for individual users in these kinds of matters.

If you have any opinions or feedback on this, please open a ticket and have your voice heard.]]>
<![CDATA[SSD Seedboxes restocked]]> https://pulsedmedia.com/clients/index.php/announcements/542 https://pulsedmedia.com/clients/index.php/announcements/542 Sun, 23 Jan 2022 11:54:00 +0000
Check them out at: https://pulsedmedia.com/ssd-seedbox.php]]>
<![CDATA[Changes in Bonus storage for seedboxes]]> https://pulsedmedia.com/clients/index.php/announcements/541 https://pulsedmedia.com/clients/index.php/announcements/541 Thu, 20 Jan 2022 12:34:00 +0000
  • Increased maximum bonus to 300% from 150%
  • Changed the bonus % gain threshold from one % per 28€ paid to one % per 50€ paid
  • Increased global max limit of bonuses in GiB gain per day
  • Extra ~250 000GiB or ~0.25PiB will be added for users over the next week
  • Bonus is given if the server has at least 3000GiB of actual storage free, no longer based on allocatable / new-user-provisionable storage -- this will ensure much more even distribution
These changes have taken effect immediately as of today.

Yes, some users are right now at the new maximum 300% level, or would even go significantly over the 300% maximum bonus, after just a few years of using our services.

The euros-paid threshold had to be increased: it has a multiplier effect, and inflation has struck at high levels in the past few years. This was last changed in 2016, and currencies always devalue over time; 1€ does not buy what it used to buy in 2016, especially after the past 2 years of very high inflation.
Importantly, the euros-paid metric also has a multiplier effect: it affects all your services the same, whether you have 1 or 100. If you have 100 services, each one gets the same euros-paid bonus storage.

With these changes, bonus storage accounts for nearly 25% of all storage we track. Yes, that means the average user gets almost 25% bonus storage on their services.

For further detail, and change history see our wiki: https://wiki.pulsedmedia.com/index.php/Pulsed_Media_Free_Bonus_Disk_Policy

]]>
<![CDATA[Helsinki power outage [UPDATE 2]]]> https://pulsedmedia.com/clients/index.php/announcements/540 https://pulsedmedia.com/clients/index.php/announcements/540 Mon, 17 Jan 2022 12:16:00 +0000 Our UPS units apparently ran out of power as well, and while our router is responding, nothing beyond it is.

The technician will be on site within ~15 minutes.

Update 1: A UPS unit failed, causing most of the network to be down despite dual power supplies. This was one of those cases where a UPS actually LOWERS reliability: the UPS unit itself failed to swap over to bypass once power was back, and failed to go back online, disabling all outputs. We are using Eaton Powerware units.

We are checking that all servers under our own monitoring are online. If you have a dedicated server and it is not back online now, please open a ticket.

Update 2: All but 19 seedbox servers are back online.

]]>
<![CDATA[Twelve99/Telia re-routing around]]> https://pulsedmedia.com/clients/index.php/announcements/539 https://pulsedmedia.com/clients/index.php/announcements/539 Tue, 04 Jan 2022 12:23:00 +0000
If you are still having throughput issues or dropping connections after roughly 24 hours, please contact support with your public IP and we'll check. It helps if you can run MTR in both directions as well.
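For reference, a report-mode MTR run is easy to paste into a ticket. The target address below is a placeholder, not a real Pulsed Media address:

```shell
# MTR in report mode: 100 probes, wide hostnames, AS numbers shown.
# Replace 203.0.113.10 with your seedbox's address; then run the same
# command from the seedbox toward your own public IP for the reverse
# direction (routing is often asymmetric).
mtr --report --report-cycles 100 -w -z 203.0.113.10
```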

We will then manually check the particular network you are on and if routing via Level3 or RETN would help.

]]>
<![CDATA[Twelve99/Telia routed around largely -- Network issue should be resolved for most]]> https://pulsedmedia.com/clients/index.php/announcements/538 https://pulsedmedia.com/clients/index.php/announcements/538 Thu, 16 Dec 2021 20:23:00 +0000 As announced earlier at https://pulsedmedia.com/clients/announcements.php?id=537 Twelve99/Telia has been having really bad congestion.
They also shape certain IP networks.

Most upstream connections have now been routed away from Telia, but those whose only route is via Telia still have to remain -- at least for now.

If you still have issues, please open a ticket with precise info on when it began, how much lower the throughput you are getting is, and preferably your public IP address.
That will add to our data and help our case with Twelve99, as they seem to be unaware that their network is congested.

A lot of companies in northern Europe are currently having this issue; it is not isolated to just us or just Finland. Twelve99's whole network seems to be badly congested, and sometimes the routings are just very peculiar as well.

]]>
<![CDATA[Issue on Twelve99/Telia network being congested]]> https://pulsedmedia.com/clients/index.php/announcements/537 https://pulsedmedia.com/clients/index.php/announcements/537 Thu, 16 Dec 2021 12:58:00 +0000
We do not normally have upstream routes via Twelve99/Telia, these reverted back a few days ago and are being fixed.

Further, Twelve99 has general widespread congestion issues they are working on.
Their network is a total mess; they do not seem to know their own network at all and are apparently unable to see the congestion.

On top of the congestion, Twelve99 has started throttling specific ASes (Autonomous Systems), in other words specific networks. They have occasionally been throttling traffic from our network as well.
Unfortunately, going through the Twelve99 network is sometimes unavoidable; for example, Telia broadband customers go through Twelve99.]]>
<![CDATA[Filemanager file download fix]]> https://pulsedmedia.com/clients/index.php/announcements/536 https://pulsedmedia.com/clients/index.php/announcements/536 Mon, 29 Nov 2021 16:07:00 +0000 Downloading very large files via the filemanager could essentially hang the process behind it.
This was due to excessive buffering.

This has been now fixed and is being rolled out to servers.

At the same time we optimized the filemanager download routines a little bit.

If you use that feature routinely, please contact support to get your server and account settings updated asap.
Otherwise, typical rolling update schedule.

]]>
<![CDATA[Payment method IBAN Wiretransfers removed]]> https://pulsedmedia.com/clients/index.php/announcements/535 https://pulsedmedia.com/clients/index.php/announcements/535 Fri, 15 Oct 2021 11:03:00 +0000
This is because users almost always opt for any other alternative wherever possible, leading to very low volume and more administration overhead on our side than the extremely low usage is worth.

If you are one of the very rare users using IBAN wire transfers, you can still use that until the end of the month, but please also open a ticket when you make a payment.
After the end of this month we will no longer accept IBAN wire transfers without prior approval.]]>
<![CDATA[Rclone webui integrated, new version rclone, syncthing, filebot etc.]]> https://pulsedmedia.com/clients/index.php/announcements/534 https://pulsedmedia.com/clients/index.php/announcements/534 Tue, 28 Sept 2021 19:11:00 +0000
Along with that, we also added GUI integration for rclone's web interface.

You can see changelog at https://wiki.pulsedmedia.com/index.php/PM_Software_Stack#Changes_2021

Rolling update as usual. Contact support if you want your server moved ahead in the queue.
<![CDATA[Rack switch failure [UPDATE: 23:02 -- RESOLVED]]]> https://pulsedmedia.com/clients/index.php/announcements/533 https://pulsedmedia.com/clients/index.php/announcements/533 Sun, 22 Aug 2021 20:38:00 +0000 One top of the rack switch has failed causing several dozen servers to be down at the moment.

Staff is en route to datacenter.

ETA for full resolution 3hours.

UPDATE 23:02 - Resolved, all servers back online. A Rittal PDU had failed. This is a known issue with this model of PDU: over time their fusing simply fails and can no longer be reset. However, the failure rate is low enough that we will not outright replace all units already in use.

]]>
<![CDATA[Big power outage in Helsinki [UPDATE 12:48 - RESOLVED]]]> https://pulsedmedia.com/clients/index.php/announcements/532 https://pulsedmedia.com/clients/index.php/announcements/532 Sat, 21 Aug 2021 08:00:00 +0000
This has caused several backend infrastructure servers and production servers to go down and into "halted" state where not even remote management responds.
Staff en route to datacenter.

Update 10:38 - Some of the infrastructure servers brought back online -- getting servers hard rebooted.
Update 10:59 - Certain types of servers essentially all had crashed into a state requiring physical power cycling. Waiting for servers to reboot, complete their filesystem checks etc. before we start checking servers one by one.
Update 11:20 - Almost all production servers back online, only a few remains.
Update 12:48 - Resolved. All production servers we can monitor are online. If your service is still down contact support. On dedis make sure that your server responds to ICMP Echo]]>
<![CDATA[Bandwidth upgrades]]> https://pulsedmedia.com/clients/index.php/announcements/531 https://pulsedmedia.com/clients/index.php/announcements/531 Tue, 15 Jun 2021 20:53:00 +0000
Multiple racks got uplinks doubled today.

We do not expect to see a dramatic difference in total bandwidth utilization, but on some rare cases this might stabilize throughput speeds for people with really long round trip latency, ie. far away from Finland.]]>
<![CDATA[One aggregation switch is having issues]]> https://pulsedmedia.com/clients/index.php/announcements/530 https://pulsedmedia.com/clients/index.php/announcements/530 Mon, 14 Jun 2021 17:50:00 +0000
This aggregation switch was already planned to be removed regardless, and only a few next tier switches were still behind this aggregation layer.

Intervention is scheduled for tomorrow, and expected to be fixed within 24hours.
In the meantime, the affected servers' network still functions, but you will experience lower throughput speeds.]]>
<![CDATA[One 10G Segment switch down]]> https://pulsedmedia.com/clients/index.php/announcements/529 https://pulsedmedia.com/clients/index.php/announcements/529 Sun, 13 Jun 2021 07:01:00 +0000
This is a new type of crash: the switch still responds just enough not to trigger typical test alerts, so monitoring still shows it as online. But it does not respond to management, and the servers behind it respond only randomly, just enough not to trigger alerts.

It is very rare for a switch to crash, let alone in this odd manner.]]>
<![CDATA[Value1000: Terrific Value For Chia Plot Storage]]> https://pulsedmedia.com/clients/index.php/announcements/528 https://pulsedmedia.com/clients/index.php/announcements/528 Wed, 19 May 2021 19:33:00 +0000 Value1000 Series Can Be Very Good Value For Chia Plot Storage

Value1000 L is 8TB at 16.99€ a month; with the 2+ year discount of 15% this drops to 14.44€/month, or ~1.81€/TB/month.

Add to that bonus disk storage you may get for service lifetime or euros paid: http://wiki.pulsedmedia.com/index.php/Pulsed_Media_Free_Bonus_Disk_Policy
and then potential volume discounts https://pulsedmedia.com/resellers.php

You might end up in a situation where you are getting up to 20TB for just 11.55€ a month with the maximum volume discount. Even without any bonus disk this would be 1.44€/TB/month.
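The per-TB figures above follow directly from the listed prices; a quick awk sketch reproduces them:

```shell
# Verify the quoted per-TB figures for the Value1000 L (8TB) plan.
awk 'BEGIN {
  base = 16.99                      # monthly list price, EUR
  disc = base * (1 - 0.15)          # 15% discount for 2+ year terms
  printf "%.2f EUR/mo, %.2f EUR/TB/mo\n", disc, disc / 8
  printf "%.2f EUR/TB/mo at 11.55 EUR for 8TB\n", 11.55 / 8
}'
# prints:
# 14.44 EUR/mo, 1.81 EUR/TB/mo
# 1.44 EUR/TB/mo at 11.55 EUR for 8TB
```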

]]>
<![CDATA[Storage servers & Storage CS2100 Availability: Still under 4 weeks]]> https://pulsedmedia.com/clients/index.php/announcements/527 https://pulsedmedia.com/clients/index.php/announcements/527 Thu, 13 May 2021 21:51:00 +0000 We have sold a lot of storage servers over the past few weeks.

We still have availability for the CS2100 96TB in less than the normal 4 week delivery schedule for the time being.
Please contact sales if you want one.

We might even still be able to fulfill 12x16TB / 192TB CS2100 orders at current pricing for 6+ month prepaid orders.
Ask sales whether we can still get close enough pricing on new drives.


** CS2100 Storage upgrades:
For additional storage we have several JBOD units available as well. A few Supermicro 45bay units, and a few older 16bay units.
Ask sales for options.


** Deliveries
Servers will be delivered mostly on a FIFO & prepayment basis, so the earlier you get your order in, the earlier you will get the unit.
The longer the prepayment term, the higher the priority we give to your order; for example, 3-month prepayments will mostly be delivered before month-to-month orders.

Mostly we have exhausted our inventory at this point, hence multiple weeks of wait as we get parts in.


** Long  term availability:

We have already several petabytes of storage on order, and few racks worth of storage servers. Many of these will arrive during the summer & fall, and all of those servers should be delivered by end of Q4.

Fast availability for hardware fluctuates constantly, but we will do our best to get deliveries made each and every week, including going on "storage hunts", visiting every local store that might have drives for pickup.

We are trying to buffer out the variability in costs as well, and we are doing our best to keep pricing the same. However, do mention during your order whether you want fast delivery (at potentially higher cost per unit) or are able to wait longer as our hardware purchases arrive.
Please also specify whether SMR or 5400rpm drives are OK for you (e.g. for Chia plot storage).

]]>
<![CDATA[Seedboxes as Chia farm storage -- Chia plotter servers coming]]> https://pulsedmedia.com/clients/index.php/announcements/526 https://pulsedmedia.com/clients/index.php/announcements/526 Fri, 07 May 2021 18:32:00 +0000
This is quite simple to do: simply use SSHFS to mount the seedbox.
Have one mount for the harvester and another for moving data. This dual connection ensures that, while you are filling up the seedbox with your Chia plots, the harvester has unhindered fast access to them.
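A minimal sketch of the dual-mount setup; the username, hostname, and remote path below are placeholders, not real account details:

```shell
# Two independent SSHFS mounts of the same remote directory:
# one for the Chia harvester, one for uploading new plots.
mkdir -p "$HOME/plots-harvest" "$HOME/plots-upload"
sshfs user@server.example.com:data/plots "$HOME/plots-harvest" \
    -o reconnect,ServerAliveInterval=15
sshfs user@server.example.com:data/plots "$HOME/plots-upload" \
    -o reconnect,ServerAliveInterval=15
# Point the harvester at ~/plots-harvest and copy finished plots into
# ~/plots-upload, so bulk transfers never stall harvester lookups.
```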

Chia plotter servers

We should have availability for chia plotter servers soon.
First version uses Ryzen 5900X 12C/24T, 128GB Ram and 8x SATA SSD.

We will release the pricing and specs as soon as possible and make them available for preorder.

Please note that there is a 2-week backlog, and we might need to restrict orders as well.

NVMe version(s) will become available several weeks after that.

Chia farm storage servers

You can use our storage server offers as farm storage, but more cost optimized "min/max" storage servers will become available soon.]]>
<![CDATA[ERR_TOO_MANY_RETRIES: Chrome & Opera login issues caused by LastPass extension]]> https://pulsedmedia.com/clients/index.php/announcements/525 https://pulsedmedia.com/clients/index.php/announcements/525 Sun, 18 Apr 2021 12:24:00 +0000
Disabling this plugin will stop this error from happening.
You might also want to report this to LastPass so they can fix the bug.

There are alternatives as well, such as KeePass, which stores everything locally but does not integrate tightly into the browser.]]>
<![CDATA[Access Issues? Don't use chrome]]> https://pulsedmedia.com/clients/index.php/announcements/524 https://pulsedmedia.com/clients/index.php/announcements/524 Sat, 10 Apr 2021 19:58:00 +0000 Chrome's latest update has brought some kind of issue within the browser itself.

If you are having access issues ("This site can't be reached" with error code "ERR_TOO_MANY_RETRIES") and are using Chrome, then try any other browser.

We are not sure what causes this, but it clearly looks like a Chrome bug -- hopefully they solve the issue shortly.

]]>
<![CDATA[Jellyfin install]]> https://pulsedmedia.com/clients/index.php/announcements/523 https://pulsedmedia.com/clients/index.php/announcements/523 Tue, 06 Apr 2021 20:18:00 +0000 definitelyliam on LET shared with fellow users basic instructions on how to set up Jellyfin:
  1. Download the .NET Core binaries: https://dotnet.microsoft.com/download/dotnet/thank-you/runtime-aspnetcore-5.0.4-linux-x64-binaries and extract it to $HOME/dotnet
  2. Add it to your PATH
  3. Download the combined portal version of Jellyfin: https://repo.jellyfin.org/releases/server/portable/stable/
  4. Extract Jellyfin and run the jellyfin binary inside.

Run it once, set it up, and go to the Networking tab to change the port. Then restart Jellyfin.
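The steps above can be sketched as a short script. DOTNET_TGZ and JELLYFIN_TGZ stand for the tarballs downloaded from the two links in the post; exact filenames, and the path of the binary inside the Jellyfin tarball, depend on the release you pick:

```shell
#!/bin/sh
# Sketch of the four setup steps; set the two variables to the
# downloaded tarball filenames first.
mkdir -p "$HOME/dotnet" "$HOME/jellyfin"
tar -xzf "$DOTNET_TGZ"   -C "$HOME/dotnet"     # step 1: extract the .NET runtime
export PATH="$HOME/dotnet:$PATH"               # step 2: add it to your PATH
tar -xzf "$JELLYFIN_TGZ" -C "$HOME/jellyfin"   # step 3: extract portable Jellyfin
"$HOME/jellyfin/jellyfin/jellyfin"             # step 4: run the binary (path may differ)
```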

]]>
<![CDATA[Debian 8 certbot no longer supported: Some servers SSL cannot be renewed]]> https://pulsedmedia.com/clients/index.php/announcements/522 https://pulsedmedia.com/clients/index.php/announcements/522 Mon, 29 Mar 2021 13:48:00 +0000 Certbot has decided to remove Debian 8 support altogether.

Since the bulk of our servers are still on Debian 8, we cannot renew certificates on some of them. In some cases it somehow wants to re-install certbot.
There probably is a workaround for this; we tried several with no luck.

This also looks a lot like typical Python issues, encountered when attempting another workaround by manually installing Certbot.

We might have to start moving people to Debian 10 based servers en masse rather soon; that is a daunting task. A simple dist-upgrade / full-upgrade is too risky an option for production servers.

If you have a workaround for this, please let us know.

Otherwise we will begin server migrations earlier than scheduled, starting with the servers on which certbot can no longer function.

]]>
<![CDATA[Helsinki DC outage (situation over)]]> https://pulsedmedia.com/clients/index.php/announcements/521 https://pulsedmedia.com/clients/index.php/announcements/521 Fri, 19 Mar 2021 22:41:00 +0000 Helsinki DC has an ongoing outage, and we are investigating the situation right now.

We will update this announcement as soon as we know more.

**UPDATE 1** Transformer maintenance on the premises. The premises manager failed to inform us about this, even though it was known since last fall. Staff is on site, and we are working to see if we can restore power to a number of servers. We were told the outage would last a maximum of 10 hours. After power is restored it will take a moment to get everything back online again and for the UPS batteries to recharge.

**UPDATE 2 Saturday 00:50** The premises manager failed to notify us of this. The transformer maintenance began at 23:00 Friday night and is supposed to last a maximum of 10 hours. Since they did not notify us, we could not arrange a generator on site either, and on a weekend night it is quite hard to get one; it should have been arranged no later than 18:00 Friday. They dropped the ball completely on this: other datacenters on the same premises were notified, but not us. These premises contain a lot of datacenters of many sizes, from companies of many nationalities, and to the best of our knowledge we are the only ones who were not informed. Rest assured, some very serious chats with the premises manager will ensue from this.

No battery backup (currently) in the world can last for 10 hours for a datacenter. So once our batteries ran out, so did the servers.

We have staff on site, getting well caffeinated for checking that everything comes back online normally. Once we either get power from someone else's generator or somehow miraculously get our own generator on site, we will start restoring services as quickly as possible. We estimate that most servers will come back online without a hitch, but a few services will likely linger until Saturday night for full restoration.

**UPDATE 3 Saturday 01:11** We got word that power might be restored in as little as 1-1½ hours.

**UPDATE 4 Saturday 01:59** Partial power restored. Routing restored

**UPDATE 5 Saturday 02:18** All power restored, we are now checking all servers startup as expected. Battery banks are recharging.

**UPDATE 6 Saturday 02:47** Almost all servers back online. A few remain to get back online, less than 50 now.

**UPDATE 7 Saturday 04:36** Every single server should be up and running. Please open a ticket if you are still encountering issues.

]]>
<![CDATA[Wiki is recoverable, one of the NS servers is not recoverable]]> https://pulsedmedia.com/clients/index.php/announcements/520 https://pulsedmedia.com/clients/index.php/announcements/520 Sat, 13 Mar 2021 18:53:00 +0000
One of the DNS servers was not recoverable -- we will move to replace it. DNS was never a worry because we have distributed our DNS over many providers.]]>
<![CDATA[OVH SBG Fire, photos, videos etc.]]> https://pulsedmedia.com/clients/index.php/announcements/519 https://pulsedmedia.com/clients/index.php/announcements/519 Sat, 13 Mar 2021 10:54:00 +0000 https://www.datacenterknowledge.com/uptime/fire-has-destroyed-ovh-s-strasbourg-data-center-sbg2

Good article with more information about it.

The process of restoring the remaining DCs to operational status has begun, but it will take weeks.

This does not affect Pulsed Media much. Very few of our services were affected: the wiki, one DNS server, and a small number of customer VMs. We have our own datacenter in Finland, and this is a good reminder about fire safety!]]>
<![CDATA[OVH had a fire in their SBG Facility. Our Wiki is down.]]> https://pulsedmedia.com/clients/index.php/announcements/518 https://pulsedmedia.com/clients/index.php/announcements/518 Wed, 10 Mar 2021 10:11:00 +0000 https://twitter.com/olesovhcom/status/1369478732247932929?s=21

They had a fire over there. SBG2 destroyed, part of SBG1 also affected.
SBG1 to SBG4 are all down due to this.

Our services affected: Wiki, 1x nameserver, 1x mail server.
The wiki is down, and we will wait for them to recover the site before hurrying to replace it.

]]>
<![CDATA[Network Congestion resolved and sales resumed]]> https://pulsedmedia.com/clients/index.php/announcements/516 https://pulsedmedia.com/clients/index.php/announcements/516 Thu, 11 Feb 2021 21:06:00 +0000
Sorry for the inconvenience.]]>
<![CDATA[Network congestion -- and limited sales of higher speed services]]> https://pulsedmedia.com/clients/index.php/announcements/515 https://pulsedmedia.com/clients/index.php/announcements/515 Mon, 01 Feb 2021 12:17:00 +0000 Due to network congestion we had to do the very hard choice of limiting sales of both Dragon-R and M10G series for this time.

Transit upgrades were ordered many months ago, and we are still waiting for the additional transit capacity to be deployed. Essentially, our transit capacity's 95th-percentile usage is now above 85% day after day.
Our plan is to increase transit capacity by roughly 70% in a single go.

During the past year the amount of network capacity being utilized has increased dramatically for various reasons, so despite this upgrade process having started roughly 8 months ago, our links have essentially become bottlenecked.
Rest assured, we are doing everything we can to hurry the transit upgrades.

Until that happens, we are saddened to say that we must limit Dragon-R and M10G sales.

We hope to have this situation fixed within a couple of weeks.

]]>
<![CDATA[Network maintenance completed with minimal downtime]]> https://pulsedmedia.com/clients/index.php/announcements/514 https://pulsedmedia.com/clients/index.php/announcements/514 Fri, 15 Jan 2021 20:34:00 +0000 The change over caused roughly 15 to 20 seconds of downtime, and has been now fully completed.]]> <![CDATA[Network maintenance today]]> https://pulsedmedia.com/clients/index.php/announcements/513 https://pulsedmedia.com/clients/index.php/announcements/513 Fri, 15 Jan 2021 15:31:00 +0000
Hopefully any potential interruption lasts only a few minutes, but please allow up to 30 minutes.]]>
<![CDATA[Limited number of Ethereum GPU mining servers available]]> https://pulsedmedia.com/clients/index.php/announcements/512 https://pulsedmedia.com/clients/index.php/announcements/512 Fri, 11 Dec 2020 20:05:00 +0000 We have several 6x RX 5700 GPU Ethereum mining systems available immediately.

These are already optimized for ETHash / ETCHash.

Servers run the latest SMOS and have already been optimized for what these particular GPUs are capable of. Speeds are generally 325 MH/s +/- 3%; some go as high as 337 MH/s, and the lowest current system is at 316 MH/s. The delivered system will be picked at random.
We will set up the rig with your chosen pool + wallet address, but the standard fee does not include SMOS access, for hardware qualification and warranty reasons: it is quite easy to brick a GPU with full access. Since these are set up manually, please keep pool changes to less than once a month.
Each server typically runs an AMD B350 or X370 motherboard with a Ryzen CPU and a 1250 W 80+ Gold or Platinum PSU, in a good-airflow rackmount chassis. A small number might run an Intel B250 chipset with a Celeron/Pentium CPU.

Service includes full hardware replacement and hardware support on an NBD+5 schedule, just like any other dedicated server we offer. Unlike our other dedicated servers, however, we will maintain the software, configuration and monitoring to ensure maximum uptime for you.

Pricing is ONLY for annual prepayment due to the nature of these machines, but renewal costs less.

Immediate availability for 3 systems, 3 more available next week. RX5600 XT based systems coming available by end of January.

Pricing is currently as follows, but may change even daily based on parts availability and current market pricing for the hardware. The price includes electricity and SMOS fees; you only pay the server rental, and it includes everything. This is an excellent choice for people who do not want to manage their own hardware and/or datacenter.

6x RX 5700 / RX5700 XT 320MH/s +/- 5% pricing:

  • Setup fee: 1000€, Renewal 2355,30€ annually. First payment: 3355,30€
  • Setup fee: 2000€, Renewal 2055,30€ annually. First payment: 4055,30€
  • Setup fee: 3000€, Renewal 1855,30€ annually. First payment: 4855,30€
  • Setup fee: 4000€, Renewal 1655,30€ annually. First payment: 5655,30€
  • Setup fee: 5000€, Renewal 1455,30€ annually. First Payment: 6455,30€ -2,5% discount on first year: 6293,92€


For your maths you should use a 97.5% availability rating. GPU mining rigs sometimes need frequent reboots, software has to be updated surprisingly often, and so on, all of which can cause downtime. Many times we can also simply swap your node to minimize downtime.]]>
<![CDATA[Changes to traffic limits: Minimum speed now 100Mbps aka "Guaranteed" speed, no limits on internal]]> https://pulsedmedia.com/clients/index.php/announcements/511 https://pulsedmedia.com/clients/index.php/announcements/511 Thu, 26 Nov 2020 14:07:00 +0000 Big, big changes to traffic limits!

We have changed how traffic limits work. In the past we only limited torrent speeds via rTorrent, to a tiny fraction of the speed; FTP, SFTP, rclone etc. remained unlimited.

But as times change, so do needs. We want to offer a lot more services, and this limitation did not really suit that goal. Internal traffic is unlimited, which can be hugely beneficial to many users, and this has allowed us to add Deluge support as well.

So the new limits work on all protocols and all applications, but only on external bandwidth. Internal (within our DC) bandwidth is unrestricted, and you will always have the full server link speed at your disposal.

To make things even better, the minimum speed is now 100Mbps instead of 10Mbps like in the past. This means that even if you sign up for a plan with a 4TB traffic limit, you could use more than 30TB each month.
The traffic limit only represents the unrestricted full-speed traffic; after that you get what some DCs call "guaranteed bandwidth".
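The "more than 30TB" figure follows directly from the 100Mbps floor; a quick sketch of the arithmetic:

```python
# How much data fits through a 100 Mbps link running flat out for 30 days?
mbps = 100                        # guaranteed minimum speed in megabits/s
seconds = 30 * 24 * 3600          # seconds in 30 days
bytes_total = mbps * 1_000_000 / 8 * seconds
tb = bytes_total / 1e12           # terabytes (decimal)
print(round(tb, 1))               # 32.4
```

So even at the guaranteed floor, a full month of sustained 100Mbps moves roughly 32TB.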

Traffic counting is still a rolling 30 days: there are no preset reset dates, just a continuously rolling check of the past 30 days.]]>
<![CDATA[Having rTorrent halts in BW? Avoid public trackers. +503 issues +Deluge support +Traffic Limitation changes / Guaranteed 100Mbps always]]> https://pulsedmedia.com/clients/index.php/announcements/510 https://pulsedmedia.com/clients/index.php/announcements/510 Sun, 08 Nov 2020 11:24:00 +0000 If you are having issues with rTorrent occasionally just "not doing anything", it is most likely due to the use of public trackers.

Public trackers constantly come and go, and many public torrents have very long tracker lists, because most of the trackers do not work anyway.

Unfortunately, rTorrent is not well multithreaded, and this can cause a situation where rTorrent's other connectivity is essentially blocked: nothing happens except GUI updates (the API connection is one of the very few multithreaded parts).

To solve it, go through your torrents, remove dead trackers and restart rTorrent.
We wrote an article about this a long time ago as well: https://blog.pulsedmedia.com/2017/10/faster-public-torrents-with-pulsed-media-seedboxes/

We are currently logging downed trackers to build a new filtering list. The automated scripts are ready to go as soon as we have a proper up-to-date list from the logs.
That script will clean up public torrents for known dead or problematic trackers.

We also released bounties on rTorrent fixes: https://wiki.pulsedmedia.com/index.php/Pulsed_Media#Development_Bounty_Program
And for the complete traffic halts we have had an open bounty for some time: https://github.com/rakshasa/rtorrent/issues/999


503 issues

This too is most likely caused by rTorrent triggering frequent PHP-CGI / FCGI crashes. This almost never happened in the past, so we had no fault tolerance checking whether user php-cgi processes are running.
This has been fixed, and check frequency is now just 1 minute!

The only lead as to the cause is the combination of the rTorrent + ruTorrent updates. Due to a remote vulnerability in rTorrent 0.9.6 (which, by the way, does not even compile on Debian 10: https://github.com/rakshasa/rtorrent/issues/1041 ) we were forced to update in great haste to 0.9.8 roughly a year ago.


Deluge support is coming!

We have done the groundwork to start supporting Deluge soon. If you want to use Deluge, you are now free to install and launch it yourself.
Official support + potential GUI integration is coming soon.

Deluge support is here!

If you do not see buttons for deluge yet in your welcome page, please contact support to have this latest update put on your server.

Traffic Limitation Changed to ALL services!

Deluge support is afforded by moving traffic limiting to the kernel / network stack level instead of the application level (rTorrent). An rTorrent update once again _broke_ the API, so traffic limits did not work at all. We built the new systems just lately, and this update is on many servers now.

In future, traffic limiting will be done on ALL protocols, including FTP, SFTP and even rclone/syncthing, instead of just torrents. This also curbs bandwidth abuse by users who did not really use torrents but were consistently above their traffic limits.

On the plus side, in the past the lowest bracket was only 10Mbps. Currently we are still allowing 100Mbps, making each and every service in essence what many market as "Unmetered", "Unlimited" or "Guaranteed".
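Conceptually, kernel-level rate limiting of this kind behaves like a token bucket: traffic passes at full link speed while tokens remain, and is throttled to the refill rate once the allowance is spent. Below is a minimal illustrative sketch of the idea in Python; it is not our actual implementation, and the class name and numbers are made up for the example:

```python
class TokenBucket:
    """Toy token bucket: rate is the refill rate in bytes/s, burst is the bucket size."""

    def __init__(self, rate, burst):
        self.rate = rate
        self.capacity = burst
        self.tokens = burst   # start with a full bucket
        self.last = 0.0

    def allow(self, nbytes, now):
        # Refill proportionally to elapsed time, capped at the bucket size.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True       # packet passes at full speed
        return False          # packet is delayed until tokens refill

bucket = TokenBucket(rate=1000, burst=1000)   # 1000 bytes/s sustained
print(bucket.allow(800, 0.0))   # True  (burst capacity available)
print(bucket.allow(800, 0.1))   # False (only ~300 tokens left)
print(bucket.allow(800, 1.0))   # True  (bucket refilled while idle)
```

The real thing is done in the network stack rather than in application code, which is exactly why it applies uniformly to every protocol.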

]]>
<![CDATA[Today's outage was due to electrical cabinet maintenance]]> https://pulsedmedia.com/clients/index.php/announcements/509 https://pulsedmedia.com/clients/index.php/announcements/509 Thu, 05 Nov 2020 12:36:00 +0000 We had a short outage today on some of our systems and switches.

We had to do some maintenance on one of the smaller electrical cabinets, and we stress-tested one of our UPS units and its recovery procedures.
Unfortunately not everything went as planned, and some switches and a few servers were shut down in the process.

Lessons were learned, and there will be some design changes to parts of our datacenter to increase power distribution fault tolerance and enhance recovery. The longest downtime was roughly 20 minutes on one of the switches, due to its power supplies failing during reboot; this is a known issue with this switch model as the power supplies age.

We are sorry to all of our customers affected by this.

]]>
<![CDATA[rTorrent performance issues or unresponsiveness?]]> https://pulsedmedia.com/clients/index.php/announcements/508 https://pulsedmedia.com/clients/index.php/announcements/508 Mon, 02 Nov 2020 19:04:00 +0000 If you are having traffic halts, it may be due to UDP trackers. It turns out UDP trackers can halt all traffic on the latest rTorrent.
We recommend removing those; you can use the retrackers option in settings to do so.

Unresponsiveness: a single config option we had set high, expecting it to perform well, can cause even the GUI to become unresponsive.

We have opened several issues about this on GitHub.

In either case, you can contact support.
We can remove all your UDP trackers and update your config. The updated config will roll out over time in any case.
As for the UDP tracker removal script, we are still considering whether to roll it out to everyone.

It will take several days regardless of what we decide.

You can follow rTorrent development at: https://github.com/rakshasa/rtorrent/issues/
We have released several bounties for bug fixes.

]]>
<![CDATA[Dragon-R restocked]]> https://pulsedmedia.com/clients/index.php/announcements/507 https://pulsedmedia.com/clients/index.php/announcements/507 Fri, 09 Oct 2020 20:30:00 +0000 Get your brand new fresh Dragon-R service while stock remains available!

This new server model features even more extreme performance, with 36-disk RAID10 arrays instead of 24-disk arrays.]]>
<![CDATA[Dragon-R restock status]]> https://pulsedmedia.com/clients/index.php/announcements/506 https://pulsedmedia.com/clients/index.php/announcements/506 Tue, 29 Sept 2020 13:22:00 +0000
Sorry for those having to wait for their opportunity to have a Dragon-R service.]]>
<![CDATA[New file manager]]> https://pulsedmedia.com/clients/index.php/announcements/505 https://pulsedmedia.com/clients/index.php/announcements/505 Sun, 13 Sept 2020 19:00:00 +0000
Normal rolling release, so it will be introduced to new servers over time slowly.

If you have feedback about the new filemanager, feel free to contact support.]]>
<![CDATA[Benefits of the Dragon-R Semi-guaranteed bonus disk]]> https://pulsedmedia.com/clients/index.php/announcements/504 https://pulsedmedia.com/clients/index.php/announcements/504 Sat, 05 Sept 2020 14:46:00 +0000 The average user on Dragon-R series servers has 45.21% extra bonus disk space associated with their service.
That is quite a tremendous amount!

So an average Dragon-R Mushu with a 3TB base disk actually receives, with the average bonus disk, 4.35TB of high-speed RAID10 storage on their service.
Not only that, but it means even the most entry-level plan in the series gets a big fraction of an HDD just for themselves, alongside the humongous RAID10 performance advantages.
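The arithmetic behind that figure is straightforward; a quick sketch using the numbers above:

```python
base_tb = 3.0          # Dragon-R Mushu base disk, in TB
avg_bonus = 0.4521     # 45.21% average extra bonus disk
total_tb = base_tb * (1 + avg_bonus)
print(f"{total_tb:.2f} TB")   # 4.36 TB, i.e. the ~4.35TB quoted above
```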

You can read more about our bonus disk policy in our wiki, at https://wiki.pulsedmedia.com/index.php/Pulsed_Media_Free_Bonus_Disk_Policy

]]>
<![CDATA[Having frequent rTorrent / ruTorrent crashes? Please let us know]]> https://pulsedmedia.com/clients/index.php/announcements/503 https://pulsedmedia.com/clients/index.php/announcements/503 Fri, 04 Sept 2020 16:31:00 +0000
It might be related to this:  https://github.com/rakshasa/rtorrent/issues/999

Please let us know if you are having frequent issues: e-mail support@pulsedmedia.com from your registered email address and give us as many details as possible.

]]>
<![CDATA[New Corona Relief Auction: Value1000 1TB Seedbox]]> https://pulsedmedia.com/clients/index.php/announcements/502 https://pulsedmedia.com/clients/index.php/announcements/502 Tue, 21 Jul 2020 13:17:00 +0000
This auction is available as usual at: https://pulsedmedia.com/seedbox-auctions.php

On offer is a Value1000 1TB seedbox with 4000GiB of traffic and 1Gbps/1Gbps speeds on a high-performance RAID0 disk array.]]>
<![CDATA[ruTorrent updated to V3.10]]> https://pulsedmedia.com/clients/index.php/announcements/501 https://pulsedmedia.com/clients/index.php/announcements/501 Mon, 20 Jul 2020 14:39:00 +0000
Further, we are again auto-updating user ruTorrent versions to this release.
This update has already been pushed to a few dozen servers.

If you want the update faster than the typically slow rollout, please open a ticket and request that your server be updated.]]>
<![CDATA[Dragon-R Restock approaching]]> https://pulsedmedia.com/clients/index.php/announcements/500 https://pulsedmedia.com/clients/index.php/announcements/500 Fri, 17 Jul 2020 14:20:00 +0000
Hardware has already been received from our vendors; only setup, testing and racking remain to be done.

Before the end of July there might be an odd slot or two available, however.]]>
<![CDATA[Development Bounty Program + a BIG bounty for ElectronJS development]]> https://pulsedmedia.com/clients/index.php/announcements/499 https://pulsedmedia.com/clients/index.php/announcements/499 Fri, 03 Jul 2020 19:22:00 +0000 We have started a development bounty program!

This program is for software development which benefits all users of Pulsed Media services. The current bounties are for rTorrent enhancements, as rTorrent is core functionality for our services, but we will expand this further over time. We are going to spend up to 1500€ monthly on these bounties (up to 3000€ as service credits!).

More details and open bounties: https://wiki.pulsedmedia.com/index.php/Pulsed_Media#Development_Bounty_Program

Big Bounty: ElectronJS development

Adobe AIR is getting a bit long in the tooth for our pulsedBox application: Adobe AIR never got much mainstream adoption and is quite niche. It was designed for Flash (!!!), and Flash is now at the point where Adobe themselves recommend uninstalling it. ElectronJS is much more mainstream; quite a few desktop applications use it. The task: convert this application to ElectronJS, with installers for Windows, MacOS and Linux, along with the following enhancements (only 2 known bugs!):
The current pulsedBox application was developed some 7 years ago and has seen ZERO updates, since most of it just works. The basic reason it just works is that it is 70% just a browser.

 * Reliably associate .torrent files with the OS (This has been hit and miss with the Adobe AIR application)
 * Failed login screen should be something else than blank page
 * Stretch 1: SFTP-mount the seedbox as a seemingly local drive (i.e. FUSE SFTP mount)
 * Stretch 2: Integrate OpenVPN. Get config by fetching the config from the server once logged in.
 * Stretch 3: Multiple seedboxes managed from single instance of pulsedBox, must be quite seamless and easy to use

The bounty for the conversion with the .torrent association reliability fixes is 500€, and each stretch goal adds 250€. The application needs to remain fully FOSS, and we retain the copyright for this conversion. The process needs to be fully documented for rolling out new installer packages.
You can obtain the source by installing the pulsedBox package and looking into its directory; what makes this easier is that it is already HTML + JS. There may not be any degradation in the application: everything now included must function. There might be some server-side change requirements; you can seek assistance from us directly to make those happen.

Maximum bounty with all stretch goals reached is 1250€ or 2500€ as service credit.
]]>
<![CDATA[Services status page]]> https://pulsedmedia.com/clients/index.php/announcements/498 https://pulsedmedia.com/clients/index.php/announcements/498 Sun, 21 Jun 2020 18:44:00 +0000
You can view the status page at: https://nodeping.com/reports/status/RIP4WW2JRY
This shows the total uptime % of a variety of backend services. Select production services are now on this status page as well, additionally monitored on top of our regular systems.

We are still working to add more monitors so that the lapse in response time does not re-occur. Multiple alert channels and a 3rd-party service as an additional layer should ensure that.]]>
<![CDATA[Downtime on a small number of servers [UPDATE 5]]]> https://pulsedmedia.com/clients/index.php/announcements/497 https://pulsedmedia.com/clients/index.php/announcements/497 Sat, 20 Jun 2020 13:12:00 +0000
Sometimes that causes a situation where remote reboot does not function: remote reboot keeps standby power for the chipset etc., which can leave the server in a state where it will not reboot remotely, since the power is not physically being hard-cycled on and off. On many built-in remote management systems it is more akin to just resetting the CPU & RAM. It is an interesting condition, but nothing we have not seen before. The resolution is simple: remove the power cable, then reinsert it to ensure all power is off.

A total of 25 servers were down; most of these are not rebooting via regular remote reboot.

It is also the midsummer festivities holiday in Finland, so we have been skeleton-staffed. This is a 4-day period when most people leave the cities to enjoy some quiet time.
Thus, for some reason, no alerts reached the person responsible for being on emergency call. We will look into ensuring that such alerts get through in the future.

We are right now scheduling physical on-site intervention on the remaining down servers.
Downtime began on the 18th of June, ~04:35 GMT+2. Expected ETA for full recovery is 3 hours, by 17:30 GMT+2.

UPDATE 1: A brownout had happened ~05:00 GMT+2 as per the UPS logs. One Rittal power distribution module failed as well and has been replaced. One switch was down, but it had mostly been used only for the management network, with very few servers attached to it.

UPDATE 2: All servers back online and that failed PDU has been replaced.

UPDATE 3: We have added a new server monitoring solution by NodePing for faster response times to avoid this kind of issue in the future, along with a new hire for remote hands / holiday emergency on-site contact. His job is to remain in the local area during holidays, ready to handle any alerts swiftly. The infrastructure status page is available at https://nodeping.com/reports/status/RIP4WW2JRY

Update 4: We hired a new staff member as a result, and added another 3rd-party monitoring solution, so there won't be such long lapses in response times in the future. Multiple methods of alerting multiple people are to be used going forward. The public status page is viewable, and we will integrate it better soon. We now have multiple layers of monitoring and multiple alerting methods, and the new staff member's #1 responsibility will be to stay in the local area on alert during holidays, weekends and the like. Essentially, our monitoring failed to reach the right person swiftly; the new monitoring should fix this. A lot of work remains, however, as we need to automate which servers are being monitored and add more monitors.

Another new layer of monitoring is still being planned; that will take a little bit of development work as well. It will add SMS alerts directly from our automation server for other kinds of data than simple uptime. Environmentals and infrastructure are similarly already monitored (i.e. we have about a dozen temperature monitors and several dozen wattage monitors).

Fortunately only 25 servers were down; the bigger impact was on remote management interfaces, as the failed Rittal PDU took down one switch (used almost solely for the management network). A few more servers were rebooted while fixing this.

We are really sorry to all of the people affected, and are applying SLA compensation swiftly. Total downtime for the affected servers was roughly 2 days and 14 hours.
We will strive to do better in the future. It is not easy to build, maintain and monitor your own datacenter; it requires a lot of dedicated people, and this time around, during the midsummer holiday, our monitoring failed.

Update 5: As part of this, the MDS standard SLA level has been upgraded to Silver, and Power and Power+ now have Gold.


]]>
<![CDATA[Several Dozen servers down, issues with 1 rack [UPDATED]]]> https://pulsedmedia.com/clients/index.php/announcements/496 https://pulsedmedia.com/clients/index.php/announcements/496 Mon, 25 May 2020 21:38:00 +0000
Initially we assumed a power distribution issue on one of the racks, which was partially correct: upon closer inspection, the TOR switch for that rack had a failed PSU.

That failed PSU has now been swapped and the switch is back online; monitoring reports that all nodes are back online as well.

]]>
<![CDATA[DDOS Attack on our billing [UPDATE 5]]]> https://pulsedmedia.com/clients/index.php/announcements/495 https://pulsedmedia.com/clients/index.php/announcements/495 Mon, 04 May 2020 13:05:00 +0000 This time our billing system is being attacked via a DDOS intended for resource exhaustion.

Hence our billing system is working a little bit slower than usual right now, we are working on it.

First the spam attacks, now this. This type of thing always tends to happen when we are running specials.

Current list of shame -- will update later; these are the first ones we have looked at and verified to have inordinate amounts of requests.
192.3.79.119
138.128.29.216
104.227.27.126
45.128.24.10
192.156.217.46
45.129.124.62
2.59.21.247
45.145.56.38
45.92.247.200
193.23.245.97
45.87.248.196

UPDATE -- LIST OF ALL MANUALLY CHECKED IPs

Once subnets based on this list were blocked, the attack ended. The whois details are rather interesting and point to a competing seedbox provider. This seedbox provider also has an IPv4 renting business. Some subnets are registered straight under them, some under their personal names, and some under a generic entity probably just holding the IPv4 addresses. Very few have rDNS set up; those which do look spammy or are, interestingly enough, related to a larger hosting business. Almost all of them are housed at Leaseweb, where this competitor's production servers reside almost exclusively. Quite frankly, everything seems to point towards them.

138.128.29.216
104.227.27.126
45.128.24.10
192.156.217.46
45.129.124.62
2.59.21.247
45.145.56.38
45.92.247.200
193.23.245.97
45.87.248.196
45.152.196.223
45.135.36.199
193.8.215.249
85.209.130.146
85.209.130.39
193.8.215.216
193.8.215.71
193.8.56.151
193.8.94.149
193.8.94.54
194.33.29.91
195.158.192.129
209.127.127.61
209.127.146.215
2.56.101.185
2.56.101.207
2.56.101.59
2.56.101.87
45.129.124.208
45.130.255.148
45.130.255.224
45.130.255.23
45.131.212.253
45.131.213.163
45.131.213.87
45.134.187.218
45.134.187.47
45.135.36.164
45.135.36.4
45.135.36.66
45.136.228.18
45.136.231.69
45.137.60.50
45.137.63.215
45.137.80.60
45.142.28.19
45.142.28.250
45.142.28.49
45.142.28.97
45.146.89.167
45.146.89.230
45.146.89.53
45.152.208.53
45.154.244.124
45.154.244.185
45.154.56.161
45.154.56.222
45.86.15.135
45.86.15.169
45.86.15.93
45.92.247.188
45.92.247.20
45.92.247.70
45.94.47.34
84.21.188.10
85.209.129.204
85.209.130.208
104.144.10.98
104.227.145.106
138.128.40.171
144.168.216.145
185.126.66.222
185.164.56.13
185.99.96.182
185.99.96.244
192.166.153.103
192.186.151.162
192.198.103.180
193.23.253.234
193.23.253.6
193.23.253.66
193.8.138.137
193.8.138.22
193.8.231.195
45.130.60.12
45.130.60.168
45.130.60.229
45.130.60.248
45.131.212.253
45.86.15.137
45.86.15.98
45.87.248.214
45.87.248.222
45.87.248.58
84.21.188.4


UPDATE 2

Those behind this promptly stopped communicating with us once we pointed out they were behind it. The competing provider is Rapid Seedbox. We were willing to give them the benefit of the doubt, but they stopped responding to any communication. Their initial response was unexpectedly swift, but once it was obvious everything points to them: zero responses.
You can verify this yourself by whois'ng some of the IPs, here are examples pointing directly to Rapid Seedbox: 45.145.56.38, 185.99.96.244, 185.164.56.13 and 185.126.66.222.

One of the owners of Rapid Seedbox is "[PRIVACY PROTECTION]"; many of these subnets are owned by "[PRIVACY PROTECTION]". Googling this name yields many interesting abuse-database results, with multiple claims of "hacking gmail", i.e. probably bruteforcing passwords, like in this attack.
The other owner is named on only one of the subnets as far as I can see. [PRIVACY PROTECTION] is named on multiple ones.
Almost none had rDNS set as of a few hours ago. Almost all of the IPs are hosted at Leaseweb, where Rapid Seedbox has their servers.

Giving them the benefit of the doubt, we did not release this immediately; it could have been that their whole network was compromised. However, they stopped communicating, and their supposedly 15-person full-time staff does not respond either, while they pride themselves on super-fast support response times. It smells fishy.

A normal attack like this, done by a botnet of compromised systems, would not be so blatantly obvious. Someone with access to a lot of subnets, who can spin up a lot of VMs on them at will, has to be behind this. Normal attacks are typically globally spread, and typically all IPs have reverse DNS set, with most owned by major ISPs. But all of these have fake-sounding names, mostly linked to this "greatworktogether" group, and a few of the subnets, with a very similar style of information, are clearly linked to Rapid Seedbox.

To best of our knowledge at this moment, no customer accounts were compromised.

We would like to be wrong, as we have just had lengthy negotiations about whether we would supply servers to them. On first contact they asked if we would sell Pulsed Media to them, but that was a very firm no. We then discussed whether the reverse was possible, but did not find common ground. After that we looked for opportunities to work together, with us supplying servers to them. That did not proceed anywhere either.

There is a total of 5977 suspicious IPs, but we have to make sure no legitimate user IPs ended up on that list before releasing the whole list.

All of the relevant and important subnets are on the above list. Here are the /24 subnets:
104.144.10.0/24
104.227.145.0/24
104.227.27.0/24
138.128.29.0/24
138.128.40.0/24
144.168.216.0/24
185.126.66.0/24
185.164.56.0/24
185.99.96.0/24
192.156.217.0/24
192.166.153.0/24
192.186.151.0/24
192.198.103.0/24
192.3.79.0/24
193.23.245.0/24
193.23.253.0/24
193.8.138.0/24
193.8.215.0/24
193.8.231.0/24
193.8.56.0/24
193.8.94.0/24
194.33.29.0/24
195.158.192.0/24
209.127.127.0/24
209.127.146.0/24
2.56.101.0/24
2.59.21.0/24
45.128.24.0/24
45.129.124.0/24
45.130.255.0/24
45.130.60.0/24
45.131.212.0/24
45.131.213.0/24
45.134.187.0/24
45.135.36.0/24
45.136.228.0/24
45.136.231.0/24
45.137.60.0/24
45.137.63.0/24
45.137.80.0/24
45.142.28.0/24
45.145.56.0/24
45.146.89.0/24
45.152.196.0/24
45.152.208.0/24
45.154.244.0/24
45.154.56.0/24
45.86.15.0/24
45.87.248.0/24
45.92.247.0/24
45.94.47.0/24
84.21.188.0/24
85.209.129.0/24
85.209.130.0/24

To block these subnets, add them to your firewall as-is, or without the /24 and with netmask 255.255.255.0.
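For example, firewall rules for these subnets could be generated along these lines. This is an illustrative sketch: the two sample subnets are taken from the list above, and iptables is just one common firewall to target; Python's standard ipaddress module converts between the CIDR and dotted-netmask forms mentioned.

```python
import ipaddress

# Two sample subnets from the list above.
subnets = ["104.144.10.0/24", "45.145.56.0/24"]

for cidr in subnets:
    net = ipaddress.ip_network(cidr)
    # CIDR form, as-is:
    print(f"iptables -A INPUT -s {net} -j DROP")
    # Equivalent address + dotted-netmask form:
    print(f"# {net.network_address} netmask {net.netmask}")
```

Running this prints one DROP rule per subnet, with the equivalent 255.255.255.0 netmask form as a comment.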

We have not moved to block these IPs on our production servers, as there could be actual seedbox end users on these subnets, and we would not like to punish Rapid Seedbox's customers.

We do not know yet if there is relation to the spam attacks lately as well.

If you have any relevant information, please do contact us.

UPDATE 3: Got a response from Rapid Seedbox very soon after releasing last update. Investigation continues.

UPDATE 4: Rapid Seedbox says all of these IPs are connected to a single customer, and requested that personal information be removed from the earlier update for privacy. Those names have now been changed to [PRIVACY PROTECTION].

UPDATE 5: Rapid Seedbox's response was that 1 customer was behind this, and that they have disabled access to pulsedmedia.com on their proxies. A proxy service? That's all the information they gave us.

Further, this attack had some email addresses correct; in fact, many accounts got failed login attempts from these IPs. How did the attackers have any idea which addresses to try? By random chance it is very difficult to get even 1 hit, never mind hundreds. A lot of them are fairly old (think decade-old) billing profiles too. That makes us think the attacker has some seedbox-niche email address database to try against.

In any case, we are still going through the list of targets, and will reset some people's billing profile passwords to be on the safe side. It is quite a bit of manual work, as we do not want to blanket-reset everybody's passwords; rather, we look at each profile manually and add them to a list so we may contact them if needed. We want to emphasize that we have no evidence of any compromised account so far. To the best of our knowledge from looking at the logs, zero accounts were compromised.

Make your own conclusions on this.

]]>
<![CDATA[Multiple spam attacks [UPDATE]]]> https://pulsedmedia.com/clients/index.php/announcements/494 https://pulsedmedia.com/clients/index.php/announcements/494 Mon, 27 Apr 2020 20:46:00 +0000
We have re-enabled e-mail verifications for now on the billing system as spammers are using 3rd party addresses for registration this time around.

UPDATE: Our wiki is almost a goner due to the spammers; it is just too much to clean up manually, and good tools do not exist. We are looking at options. It has become very apparent that keeping MediaWiki easy to maintain is questionable, and we might replace the wiki, as it has not really been used for community updates. A recent update re-enabled open editing and user registrations, which we missed, and that caused this spam issue. It looks like MediaWiki themselves admit that spam will forever be an unsolvable issue; MediaWiki does not ship with any anti-spam measures out of the box, everything is an extension that needs configuration and has limited functionality. Still considering options.]]>
<![CDATA[New service deliveries delayed: Automation system is lagging]]> https://pulsedmedia.com/clients/index.php/announcements/493 https://pulsedmedia.com/clients/index.php/announcements/493 Mon, 27 Apr 2020 12:16:00 +0000
This is due to a large volume on our backend, which has been lagging a little. This has now been fixed and the backend system is catching up.
Estimate a few hours to catch up on all backend actions, including provisioning.

Really sorry for the inconvenience!

If your new service has not been provisioned within a few hours from this announcement, please open a ticket and we will sort it out for you swiftly!]]>
<![CDATA[Moving a service to another client is easy]]> https://pulsedmedia.com/clients/index.php/announcements/492 https://pulsedmedia.com/clients/index.php/announcements/492 Tue, 21 Apr 2020 12:58:00 +0000
We added a KB article about this at http://pulsedmedia.com/clients/knowledgebase.php?action=displayarticle&id=76 and it is a really quick and simple process: just open a billing ticket. There will be a few replies back and forth, but it should be relatively easy.

]]>
<![CDATA[Network congestions everywhere in the world -- and now even on our own network]]> https://pulsedmedia.com/clients/index.php/announcements/491 https://pulsedmedia.com/clients/index.php/announcements/491 Tue, 14 Apr 2020 20:49:00 +0000 Google, Microsoft, Apple, Amazon, YouTube and everyone else are hitting bottlenecks in their networks - and so are we.

We are now averaging very close to our maximum transit capacity, so we are definitely seeing micro-bursts at roughly our maximum outbound capacity.
Inbound traffic has a little more than doubled from 2 months back as well.

This is twofold: a LOT of new users at once, and everyone using their services at once. New seedboxes always consume a lot more than, say, 2-year-old ones, which are more bursty in nature.
Everyone is staying at home and enjoying their seedboxes and other online services, so usage is peaking as well.

This usage level will subside over time and throughput will stabilize; however, if it continues, we are prepared to increase our transit capacity.
There is no serious bottlenecking yet on overall capacity, but individual TCP connections can run slower than expected due to a microburst (speed drops and slowly ramps back up). Some individual racks might also be seeing some bandwidth bottlenecking right now, which we will work on within the next 2 weeks; there is a backlog of server deliveries that needs to be cleared first.

Long term: we have recently invested in new gear which we are waiting to arrive. After it arrives we will slowly take it into operation, and then begin opening new links and peerings - but these are very slow projects.

]]>
<![CDATA[Global Free Bonus Disk Limit Increased by ~5%]]> https://pulsedmedia.com/clients/index.php/announcements/490 https://pulsedmedia.com/clients/index.php/announcements/490 Thu, 02 Apr 2020 13:57:00 +0000
The bonus disk global maximum also grows normally over time, and now equals roughly 18% of our capacity, making it a very significant boost to disk quotas for our userbase.]]>
<![CDATA[M1000 Corona special restocked]]> https://pulsedmedia.com/clients/index.php/announcements/489 https://pulsedmedia.com/clients/index.php/announcements/489 Mon, 30 Mar 2020 19:55:00 +0000
There are plenty of Value250 series available as well.]]>
<![CDATA[Internet route congestion all over Europe]]> https://pulsedmedia.com/clients/index.php/announcements/488 https://pulsedmedia.com/clients/index.php/announcements/488 Mon, 23 Mar 2020 15:01:00 +0000 As everyone is at home and spending time streaming videos all the time, there are a lot of ISPs with severe congestion in their networks right now.

There's been talk that Netflix would be going only 480p and Youtube defaulting to 480p in Europe, at least for some networks.

If you are right now having lower than usual FTP/SFTP speeds that might be the reason. Check first that it is consistently lower than usual before opening a ticket.
And no, speedtest.net is not valid proof that you have available bandwidth to anywhere on the globe. Every single connection has different routing, and speedtesting within your ISP's network does not prove that the ISP actually has bandwidth available outside their own network.
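If you want a meaningful number, measure the actual path to your own service rather than a nearby speedtest node. A minimal sketch using curl; the URL below is a placeholder (a local file:// URL is used just so the snippet runs anywhere):

```shell
#!/bin/sh
# Print the average download speed, in bytes/s, that curl measured over
# the exact network path to the given URL. Replace the placeholder with
# a large file on your own seedbox, e.g. https://your-server.example/test.bin
measure_speed() {
  curl -s -o /dev/null -w '%{speed_download}' "$1"
}

measure_speed "file:///etc/passwd"; echo " bytes/s"
```

Run it a few times at different hours; a consistently low figure toward your server, combined with a high speedtest result, points at congestion somewhere on the route in between.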

We have plenty of capacity available and are not seeing congestion in our own network at this time.

]]>
<![CDATA[The Rona Relief Special Re-stocked M1000 and added SSD option]]> https://pulsedmedia.com/clients/index.php/announcements/487 https://pulsedmedia.com/clients/index.php/announcements/487 Wed, 18 Mar 2020 21:27:00 +0000
An SSD special was added: 1-3 year terms with a compounded total 50% recurring discount.

Check them out at: https://pulsedmedia.com/specials.php]]>
<![CDATA[State of Emergency in Finland due to global pandemic]]> https://pulsedmedia.com/clients/index.php/announcements/486 https://pulsedmedia.com/clients/index.php/announcements/486 Mon, 16 Mar 2020 19:14:00 +0000 This is currently estimated to last until 13.04.

Freight still moves normally, most services are running normally etc. but gatherings of more than 10 people are not allowed and people should remain at home.

This situation means the following changes during this period for our services:
 * Custom server builds might be delayed by up to 5 weeks
 * Custom server builds with non-stock items (ie. ZEN MiniDedis often require new parts to be ordered in) might be delayed by as much as 5 weeks.
 * New stock for shared services is delayed by a significant margin

We have already had a spike in custom server orders as of late and stocks for many parts are low. Re-stocking for small quantities is limited now during this period. Every server will still get delivered, and there is no additional cost for our customers due to delays. Physical interventions with servers are likely going to be delayed as well.

For shared services we mostly have plenty of stock, but some newer offers might go out of stock during the next month, such as Dragon-R. For the M10G, M1000 and Value1000 series we have close to 1PiB of vacant capacity in total, so we do not expect them to go out of stock. The MDS series will get replenished as soon as possible; a couple of servers are pending to be put back into production - after those it might be 2 to 3 weeks before we get them back in stock. Of F19-MDS-XEON-12TB we have only 1 remaining and no new stock in the foreseeable future.]]>
<![CDATA[Emby on shared seedboxes]]> https://pulsedmedia.com/clients/index.php/announcements/485 https://pulsedmedia.com/clients/index.php/announcements/485 Sat, 07 Mar 2020 16:13:00 +0000 It is quite simple to set up. If you are on a higher end service, say Dragon-R, it's completely OK to run Emby on our servers even with transcoding - just remember to change ports.
Make sure there is plenty of free CPU time.

Installation is as simple as something of this sort; login via SSH and:

wget -O emby.deb https://github.com/MediaBrowser/Emby.Releases/releases/download/4.3.1.0/emby-server-deb_4.3.1.0_amd64.deb
dpkg -x emby.deb ~/

edit opt/emby-server/bin/emby-server (ie. using vim or nano): change APP_DIR to start with $HOME/ and set EMBY_DATA=$HOME/[DATA_DIRECTORY]
edit var/lib/emby/config/system.xml: change PublicPort, HttpServerPortNumber, PublicHttpsPort and HttpsPortNumber to new random ports

launch with: opt/emby-server/bin/emby-server
and browse to the port you chose. We recommend HTTPS.
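The port edits can also be scripted. Here is a minimal sketch that patches a simplified sample system.xml; the element names come from the instructions above, but the file layout shown is an assumption (the real file lives at var/lib/emby/config/system.xml under your home):

```shell
#!/bin/bash
# Sketch: pick random unprivileged ports and write them into the port
# elements named in the instructions. Uses a simplified sample file.
cfg=system.xml
cat > "$cfg" <<'EOF'
<ServerConfiguration>
  <HttpServerPortNumber>8096</HttpServerPortNumber>
  <PublicPort>8096</PublicPort>
  <HttpsPortNumber>8920</HttpsPortNumber>
  <PublicHttpsPort>8920</PublicHttpsPort>
</ServerConfiguration>
EOF

port=$(( (${RANDOM:-0} % 20000) + 20000 ))   # random port in 20000-39999
sed -i -e "s|<HttpServerPortNumber>[0-9]*<|<HttpServerPortNumber>${port}<|" \
       -e "s|<PublicPort>[0-9]*<|<PublicPort>${port}<|" \
       -e "s|<HttpsPortNumber>[0-9]*<|<HttpsPortNumber>$((port+1))<|" \
       -e "s|<PublicHttpsPort>[0-9]*<|<PublicHttpsPort>$((port+1))<|" "$cfg"
grep "Port" "$cfg"    # show the patched port elements
```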

We will make a better guide and preconfigurator at a later time.

]]>
<![CDATA[Network issue for some high latency transfers identified - ETA for fix]]> https://pulsedmedia.com/clients/index.php/announcements/484 https://pulsedmedia.com/clients/index.php/announcements/484 Fri, 14 Feb 2020 21:40:00 +0000 We have identified an issue with high latency transfer rates in some cases.

It turns out our traffic was exhausting the upstream router's fabric; even increasing fabric capacity did not solve this, nor did moving some of our links to another linecard.

We will move those links to a bigger router, closer to the next-hop fiber links and with increased capacity, sometime Monday after 16:00.
This should solve the issues with some high latency transfers. Essentially, a few packets drop here and there, causing the TCP window sizes (essentially how fast to transfer) to decrease significantly. This also causes fluctuation in transfer rates.
Not all higher latency transfers are affected, but many are.
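As a rough illustration of why a little loss hurts long paths so much, the well-known Mathis approximation of steady-state TCP throughput (rate ≈ MSS / (RTT · √loss)) can be computed in one line. The MSS, RTT and loss figures below are hypothetical examples, not measurements from our network:

```shell
#!/bin/sh
# Mathis et al. approximation of single-connection TCP throughput under
# random packet loss: rate ~ MSS / (RTT * sqrt(p)). Illustrative only.
tcp_estimate_mbps() {
  # $1 = MSS in bytes, $2 = RTT in seconds, $3 = loss probability
  awk -v mss="$1" -v rtt="$2" -v p="$3" \
    'BEGIN { printf "%.1f", (mss * 8) / (rtt * sqrt(p)) / 1e6 }'
}

# A 100 ms path with just 0.01% loss caps out near 11.7 Mbps per connection:
tcp_estimate_mbps 1460 0.100 0.0001; echo " Mbps"
# The same loss rate at 10 ms RTT allows ten times the throughput:
tcp_estimate_mbps 1460 0.010 0.0001; echo " Mbps"
```

This is why a few dropped packets barely register on short hops but visibly slow long intercontinental transfers.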

Our peaks remained at ~70-75% of maximum capacity. The router also seemingly, on paper, had more than enough capacity, but that particular version has essentially a "gotcha" with fabric configurations, and even distributing the load more did not solve this. It is not every day you exhaust chassis router fabric :)

---

We have also noticed more and more ISPs are throttling FTP traffic. If you are having FTP speed issues, please move to SFTP. Also ensure you use multiple threads if you are not achieving decent throughput.

]]>
<![CDATA[Upgrade options for ZEN MiniDedis]]> https://pulsedmedia.com/clients/index.php/announcements/483 https://pulsedmedia.com/clients/index.php/announcements/483 Fri, 14 Feb 2020 03:27:00 +0000 You can now upgrade your existing ZEN MiniDedi.

It has come to our attention that some users need to upgrade their machines over time, so we now allow this. Just view the service; there is an Upgrade/Downgrade Options menu item on the left side.
When we have stock these will be completed within the normal NBD+5 schedule, when no stock for the requested part and we have to get one from our vendor the typical 3-4 weeks turnaround applies.

Downgrades however are not possible; if you make a downgrade order, it will be reverted. The reason is that these changes take staff time, and it would be counterproductive to spend DC technician time on people going back and forth. If you need to downgrade, please order a new machine separately. The normal 3-4 week turnaround applies to all new orders.

]]>
<![CDATA[Did you know that you gain extra disk space over time?]]> https://pulsedmedia.com/clients/index.php/announcements/482 https://pulsedmedia.com/clients/index.php/announcements/482 Tue, 11 Feb 2020 11:09:00 +0000 You can see the amount of extra space you have on the welcome page, in small text under the meter.

The policy is listed here: https://wiki.pulsedmedia.com/index.php/Pulsed_Media_Free_Bonus_Disk_Policy

]]>
<![CDATA[Dragon-R Release! Top spec seedboxes with 20Gbps, RAID10, AMD EPYC processors, NVMe OS drives etc.]]> https://pulsedmedia.com/clients/index.php/announcements/481 https://pulsedmedia.com/clients/index.php/announcements/481 Fri, 07 Feb 2020 20:04:00 +0000 https://pulsedmedia.com/dragon-r-20gbps-rtorrent-seedboxes.php

Dragon-R series has been released!

This is a very top-of-the-line service, utilizing top end AMD EPYC CPUs, NVMe drives and huge RAID10 arrays. Top priority network throughput, semi-guaranteed bonus disk storage etc.!
Check more specs out at the product page.

Existing Dragon series users may contact support for upgrade options.

]]>
<![CDATA[Major software updates: Custom rTorrent configs, HTTPS SSL Certificates]]> https://pulsedmedia.com/clients/index.php/announcements/480 https://pulsedmedia.com/clients/index.php/announcements/480 Fri, 07 Feb 2020 13:10:00 +0000 You cannot override performance settings, but you can set up custom watch directories, schedules etc.

If your configuration has not been updated yet for .rtorrent.rc.custom support, please contact support to enable it for you.

HTTPS SSL:
We removed older TLSv1 support, and updated ciphers.
Further we have started adding signed certificates via Let's Encrypt to servers. It is first coming to newest servers before rolling out to older servers. All new servers should have a Let's Encrypt certificate as well.

If you want your server to skip ahead on the list for regular SSL certificate, please contact support and we'll make it happen.

The full rollout will take a long time, as there are limits on how many certificates per domain you can create weekly. Eventually all servers will have certificates, however.
This has been enabled by the great work of the Electronic Frontier Foundation (EFF) on making the automation better.

If your server does not yet have a Let's Encrypt certificate and you get a warning: it only means there is no "trusted root authority", ie. a central body who has approved that certificate. With self-signed certificates the data is encrypted just as well as with a signed certificate.

Rtorrent failsafes:
We added more failsafes against multiple rtorrent instances running from the same session etc. This was one of the 0.9.8 version update regressions.

As usual, you can read the full changelog at: https://wiki.pulsedmedia.com/index.php/PM_Software_Stack#Changes_2020]]>
<![CDATA[Storage series upgraded to 10Gbps and ZEN MiniDedi base price lowered]]> https://pulsedmedia.com/clients/index.php/announcements/479 https://pulsedmedia.com/clients/index.php/announcements/479 Fri, 24 Jan 2020 12:17:00 +0000 Storage series has been upgraded to the M10G server group for 10Gbps speeds. No other changes, prices remain the same but you get 10Gbps now instead of 1Gbps.
All existing storage series users may contact support to get moved to the M10G servers.
https://pulsedmedia.com/storage-seedbox.php

The ZEN MiniDedi server base price has been drastically lowered - cut in half, actually. Component prices remain the same. Further, we have doubled the annual prepayment discount to 20% from 10%. These changes mean that at the lowest cost you can get a dedicated NVMe server for just ~20.11€ per month!
https://pulsedmedia.com/zen-minidedi.php

]]>
<![CDATA[Regular bonus disk quota restored]]> https://pulsedmedia.com/clients/index.php/announcements/478 https://pulsedmedia.com/clients/index.php/announcements/478 Wed, 22 Jan 2020 12:32:00 +0000
Sorry about that. Distribution is now working again and there is about 100TiB more to be distributed among loyal service users. It will probably take a couple of weeks, as this is added little by little.

Premium service tiers (ie. Dragon) had their priority bonus disk quota distributed normally during this period.]]>
<![CDATA[Wiki restored and upgraded]]> https://pulsedmedia.com/clients/index.php/announcements/477 https://pulsedmedia.com/clients/index.php/announcements/477 Tue, 21 Jan 2020 13:41:00 +0000
Sorry for it being down for a few days - We use a 3rd party datacenter for services like this, so it was a little bit slow to get going.]]>
<![CDATA[Sources of electricity in our datacenter - Net carbon negative, and by accident?]]> https://pulsedmedia.com/clients/index.php/announcements/476 https://pulsedmedia.com/clients/index.php/announcements/476 Mon, 20 Jan 2020 12:15:00 +0000 This is a bit of a trivia thing, but an interesting one nevertheless: ~70% of the electricity we use is either nuclear or renewable: https://www.helen.fi/en/company/energy/energy-production/origin-of-energy

With the donation we gave to the Eden Reforestation Project this December, A LOT of trees got planted. Each tree stores ~0.5 to 5tn of CO2 during its approximated lifetime of 100 years - most of this during its early growth stages (https://granthaminstitute.com/2015/09/02/how-much-co2-can-trees-take-up/ , https://www.unm.edu/~jbrink/365/Documents/Calculating_tree_carbon.pdf , https://medcraveonline.com/FREIJ/FREIJ-02-00040.pdf , https://www.thenakedscientists.com/forum/index.php?topic=49841.0). Each 1kWh of electricity produced using coal emits about 0.94kg of CO2 (https://carbonpositivelife.com/co2-per-kwh-of-electricity/, https://www.quora.com/How-much-CO2-is-produced-per-KWH-of-electricity ). Some quick back-of-the-napkin maths shows that even in the worst case the datacenter's electricity use was offset for 9 years via that donation, and in the best case for 180 years.

It was a little hard to come up with simple answers; there are a lot of variables. How many trees the Eden Reforestation Project manages to plant per 100$ varies quite a lot: at best 1000, and at worst something like 130. What type of trees are being planted, and what the climate is like around that location, ie. how fast they can grow, is another variable. Further, emissions of specific coal power plant types vary heavily, so that 0.94kg/kWh (940kg / MWh) might be a bit high as well, as in Finland heavy emphasis is put on emission regulations.
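The back-of-the-napkin maths can be reproduced with the announcement's own figures. A small sketch; note the annual electricity consumption used below is a hypothetical placeholder, not our actual usage:

```shell
#!/bin/sh
# Years of coal-equivalent datacenter emissions offset by a tree donation:
# (trees * tonnes CO2 per tree * 1000 kg/tonne) / (kWh per year * 0.94 kg CO2/kWh)
offset_years() {
  # $1 = trees planted, $2 = lifetime tonnes of CO2 per tree, $3 = kWh/year
  awk -v trees="$1" -v tn="$2" -v kwh="$3" \
    'BEGIN { printf "%.1f", (trees * tn * 1000) / (kwh * 0.94) }'
}

# 1000 trees at the low-end 0.5 tn each, against a hypothetical 100 MWh/year:
offset_years 1000 0.5 100000; echo " years offset"
```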

Of course, this does not include the materials, people working etc. only the datacenter electricity consumption. If people are interested in this we can run the maths for this as well and make a full length blog post, for as much as we can find source material references.

There is more to climate than just CO2 as well - and a lot of scientists are certain that CO2 does not play a big role in reality, and even less so human activity. The major greenhouse gas is actually water vapor, accounting for roughly 96% of the greenhouse effect. Plants grow faster and in new places with increased CO2, and they need LESS water with higher CO2 (the Sahara used to be green when CO2 was high!). They also release less water vapor into the atmosphere when there is more CO2, as they can get the CO2 they need more efficiently. Further, none of that accounts for solar activity etc. TL;DR: it is a very, very complex subject which needs more research - objective research - and less politics. Climate change is very real, but that has always been the case for our planet Earth - and we've actually been enjoying the most stable climate in the geological record for more than 10,000 years. Atmospheric CO2 content has at times been 10x what it is now.

Here at Pulsed Media we've always been quite conscious about the environment and recycling. A lot of the servers we use are 2nd hand (where it makes sense), giving them another life cycle instead of becoming e-waste - and we try to hang on to them as long as we can. We recycle all cardboard packaging materials as well. We work really hard on optimizing power usage, despite enjoying a very low cost of electricity compared to many other European countries and a cold climate most of the year for cooling efficiency. Some of the heat removed by our cooling is also used to heat the rest of the building instead of being dumped into the environment.

]]>
<![CDATA[Late fees removed]]> https://pulsedmedia.com/clients/index.php/announcements/475 https://pulsedmedia.com/clients/index.php/announcements/475 Fri, 10 Jan 2020 01:06:00 +0000
We have removed that late fee charge. It is much more important that users are able to continue their services without excess costs, and that should serve our user base much better than charging those late fees.

If you have currently an open invoice with late fee which you wish to pay, contact support to get the late fee removed.]]>
<![CDATA[New Value1000 seedbox series - Updating the entry level offers with exceedingly incredible Value]]> https://pulsedmedia.com/clients/index.php/announcements/474 https://pulsedmedia.com/clients/index.php/announcements/474 Sat, 21 Dec 2019 19:16:00 +0000
This is quite an upgrade and change compared to the previous Value series offer, and it is quite price competitive for the resources you are getting! Full 1Gbps bandwidth access, both up and down, gives a huge boost to network speeds and allows you to download those torrents quite swiftly! This is made possible by the move to RAID0, which has no write performance penalty like RAID5 has; it also offers much increased capacity at the expense of redundancy.

For entry level seedboxes redundancy is not the most important factor - it is capacity and capability - so the change to RAID0 is acceptable; you can always pick a higher tier service when data redundancy is required, after all. Finally, the hardware has been upgraded: the standard server specification is now a 6-core Opteron with 48GB of ECC RAM and 4x8TB 7200RPM HDDs. This is quite the bump from the Value250 series, which started out with just 12GB of ECC RAM and 4x2TB 7200RPM HDDs.

All of the above means much, much higher capacity and capability than the previous Value250 series at exceedingly affordable rates. The top of the line Value1000 offers a whopping 8TB of storage with 8GB of RAM dedicated just to your rTorrent instance and Unlimited* traffic, at an incredibly low 23.99€ per month! That makes it only 3€ per TB of storage, without worries of going over your traffic limits. Of course there are also long term discounts of up to 15% for longer term purchases.

Combine all of the above with Pulsed Media's proven track record of delivering excellent services with ever better value for our users, an industry leading service level agreement, well defined policies (terms of service, privacy policy etc.), storage quotas that grow over time and solid servers, and you have one really good service package for entry level use.

The offers start from 1TB at 6.99€ a month, and you can check them all out at: https://pulsedmedia.com/value1000-seedbox.php

As usual, our help desk is open 24/7 for any questions you may have.




Note *: Unlimited traffic still has fair use and abusive use limitation, but for all practical purposes it is unlimited. The fair usage limitation is set to 60TB of traffic, which will require roughly ~200Mbps average bandwidth usage over the course of a month. If you need higher traffic limit, please contact support and ask for options.]]>
<![CDATA[rTorrent 0.9.8 regressions fixed + many servers updated]]> https://pulsedmedia.com/clients/index.php/announcements/473 https://pulsedmedia.com/clients/index.php/announcements/473 Sun, 08 Dec 2019 16:00:00 +0000 The last known regressions of 0.9.8/0.13.8 have been fixed and we have rolled out this update to quite a few servers today.

Please contact support if you notice more regressions.

]]>
<![CDATA[Regressions on rTorrent 0.9.8 update.]]> https://pulsedmedia.com/clients/index.php/announcements/472 https://pulsedmedia.com/clients/index.php/announcements/472 Sat, 07 Dec 2019 01:42:00 +0000 Watch directory is not functioning.
In some cases rutorrent settings window does not load and you get session directory UID error.

Settings pane & session directory UID has been fixed, watch directory not functioning has not been fixed yet. ETA for that is 2 business days.

If you get any of these, please contact support so we can queue your server for priority patching.


This is a good example of why we do rolling releases. Only a few servers have been updated so far, so the impact should be limited to several dozen users.

]]>
<![CDATA[Software updates: rTorrent + ruTorrent, OpenVPN fixes etc.]]> https://pulsedmedia.com/clients/index.php/announcements/471 https://pulsedmedia.com/clients/index.php/announcements/471 Thu, 05 Dec 2019 13:54:00 +0000
The main changes are a new rTorrent + ruTorrent setup, and OpenVPN issues have been fixed.

  • rTorrent version 0.9.8 / libtorrent 0.13.8
  • ruTorrent 3.9
  • OpenVPN: Sometimes routes were not set up for a long time after boot. This has been fixed
  • OpenVPN: Cipher updated to AES-256-CBC
  • Info tab now has a nice daily chart for traffic use
  • Sox and nzbget added by default
  • tcpsack vulnerability mitigations (disable sack, firewall small tcp framesizes)
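For reference, the tcpsack item refers to the 2019 "SACK Panic" family of Linux kernel vulnerabilities. The widely published mitigation is a config fragment along these lines (shown here only as an illustration of the kind of change applied; run as root, and details may differ from our exact setup):

```shell
# Disable TCP selective acknowledgements (SACK Panic / CVE-2019-11477 family)
sysctl -w net.ipv4.tcp_sack=0

# Drop inbound connections advertising abnormally small MSS values
iptables -A INPUT -p tcp --tcp-flags SYN SYN -m tcpmss --mss 1:500 -j DROP
```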

Some slightly older updates worth highlighting again:
  • Resilio sync added (rslsync)
  • Sonarr/nzbdrone installing reworked
  • BTSync defaults now to 2.2
  • Syncthing added
  • rclone updated to 1.47

As usual you can see full changelog at: http://wiki.pulsedmedia.com/index.php/PM_Software_Stack

After updating, we recommend rebooting the server.

If you have issues, please open a support ticket.]]>
<![CDATA[Black Friday is soon here! Get Ready!]]> https://pulsedmedia.com/clients/index.php/announcements/470 https://pulsedmedia.com/clients/index.php/announcements/470 Mon, 25 Nov 2019 18:09:00 +0000 They will be available at: https://pulsedmedia.com/specials.php

Target availability is from Thursday 12:00 GMT onwards.

There will be a lot of specials this year, some will be flash sales only available for an hour or two.]]>
<![CDATA[Seedbox Web GUI: HTTPS Tabs fixed]]> https://pulsedmedia.com/clients/index.php/announcements/469 https://pulsedmedia.com/clients/index.php/announcements/469 Sun, 17 Nov 2019 15:01:00 +0000 <![CDATA[GUI Not Loading today? No Wiki? Here is why.]]> https://pulsedmedia.com/clients/index.php/announcements/468 https://pulsedmedia.com/clients/index.php/announcements/468 Thu, 14 Nov 2019 13:19:00 +0000 We are big fans of distributing systems to separate servers and networks, including separate datacenters.

We host the static data for the Seedbox web GUI on such a separate server, along with our Wiki.
Today that 3rd party DC made a snafu - they disconnected the server for "non payment", for which they never sent us a warning; the server does not even appear on their server renewal page.
They've had this kind of glitchiness in their billing for as long as we've been there, which is approximately 9 years. In fact, this particular server is itself 8 years old now.

Now they are having issues reconnecting the server to their network. The server reads on their system as online and without any issues etc.
This has typically been a 1 minute automated task for them; apparently not this time. Even calling them could not get it solved.

Resolution

We have spun up a new server for the seedbox web GUI from backups; it is online now. It may take a little bit of time for some users, as DNS caches need to refresh.
For the most part Web GUI should be back.

If the issue is persisting for you; You may need to refresh your DNS cache.

You can also access rutorrent directly by appending rutorrent/ to your access URL.

Wiki

The Wiki will be restored soon; we have full backups of that as well, but it will require some configuration etc., and we are hoping the old server comes back online so we can take a fresh database backup of the wiki. Our current backup of the wiki database is missing the latest changes to one of the wiki pages.

So the Wiki will be restored within 48 hours.

Final words

This did not affect any of our core systems, only these 2: the "static" server for static seedbox GUI files, and the wiki. All services remain online.

We will update these servers now, and will probably separate the static files and the wiki onto different servers for the future at the same time.

]]>
<![CDATA[Single's Day Specials! 11.11 Is Here, Let's Kick It Off!]]> https://pulsedmedia.com/clients/index.php/announcements/467 https://pulsedmedia.com/clients/index.php/announcements/467 Sun, 10 Nov 2019 19:19:00 +0000
Check it out at: https://pulsedmedia.com/specials.php

More is coming tho, so keep checking ;)]]>
<![CDATA[ruTorrent 3.9 update; Daily traffic use charts]]> https://pulsedmedia.com/clients/index.php/announcements/466 https://pulsedmedia.com/clients/index.php/announcements/466 Fri, 08 Nov 2019 20:27:00 +0000 Most new accounts will get this version, but old accounts will not be updated just yet.

Daily traffic consumption charts added to info page as well on updated servers.

Rolling release as usual.
Full changelog at http://wiki.pulsedmedia.com/index.php/PM_Software_Stack

If you want to get to ruTorrent 3.9 early, contact support. We'll update your server and recreate your userspace.

]]>
<![CDATA[Network Maintenance: M10G series bandwidth increase]]> https://pulsedmedia.com/clients/index.php/announcements/465 https://pulsedmedia.com/clients/index.php/announcements/465 Mon, 04 Nov 2019 18:32:00 +0000
That should give a significant bump to throughput rates on this series of servers during usage peak hours.]]>
<![CDATA[ZEN MiniDedi: Example configs + extra info page]]> https://pulsedmedia.com/clients/index.php/announcements/464 https://pulsedmedia.com/clients/index.php/announcements/464 Fri, 01 Nov 2019 21:35:00 +0000 Check it out at https://pulsedmedia.com/zen-minidedi.php

All units are built to order. If you have a special request, contact sales.]]>
<![CDATA[ZEN MiniDedis: All pre-orders delivered]]> https://pulsedmedia.com/clients/index.php/announcements/463 https://pulsedmedia.com/clients/index.php/announcements/463 Thu, 24 Oct 2019 18:25:00 +0000
We will make a page and open regular orders for these servers shortly.
All ZEN MiniDedis are built-to-order.]]>
<![CDATA[Managed Dediseedbox Finland: Power of dedicated with shared ease of use!]]> https://pulsedmedia.com/clients/index.php/announcements/462 https://pulsedmedia.com/clients/index.php/announcements/462 Thu, 10 Oct 2019 15:04:00 +0000
You can check them out: https://pulsedmedia.com/managed-dediseedbox.php

Setup is INSTANT and all servers are in stock.]]>
<![CDATA[ZEN MiniDedi Pre-order discount extended!]]> https://pulsedmedia.com/clients/index.php/announcements/461 https://pulsedmedia.com/clients/index.php/announcements/461 Thu, 26 Sept 2019 21:37:00 +0000 http://pulsedmedia.com/clients/announcements.php?id=448

We have extended the pre-orders by a few more weeks, as the HDD based variants have not yet been delivered. All SSD based pre-order systems are now online, as announced at: http://pulsedmedia.com/clients/announcements.php?id=460

Note that these pre-order discounts will end when the last of the pre-order systems has been delivered. SSD based systems will have rather quick delivery, current estimate is less than 3 weeks.

To order one of these sweet Ryzen dedicateds, just follow this link: https://pulsedmedia.com/clients/cart.php?a=add&pid=221&promocode=2019zendedipreorder
Remember to use that coupon code 2019zendedipreorder for the discount.]]>
<![CDATA[ZEN MiniDedi Update: All SSD based systems online, HDD nodes delayed]]> https://pulsedmedia.com/clients/index.php/announcements/460 https://pulsedmedia.com/clients/index.php/announcements/460 Thu, 26 Sept 2019 16:40:00 +0000
There are a number of HDD based systems waiting for delivery still however. We made a mistake and had assumed the 2.5" slots are full height - turns out they were not. We are working on the solution already, but waiting for parts deliveries. It'll be a few weeks at a minimum before this is solved.
All the hardware, including the HDDs have been delivered to us however, so once we solve this last step it will be very quick to get them online as well.

**EDIT** Pre-orders extended http://pulsedmedia.com/clients/announcements.php?id=461

]]>
<![CDATA[More Bonus Disk Quota! ]]> https://pulsedmedia.com/clients/index.php/announcements/459 https://pulsedmedia.com/clients/index.php/announcements/459 Mon, 09 Sept 2019 12:48:00 +0000 http://wiki.pulsedmedia.com/index.php/Pulsed_Media_Free_Bonus_Disk_Policy

We increased how much bonus disk quota you get per euro paid, by lowering the threshold from 33€ per percentage point to 28€. This gives a big boost for each euro you have paid for your services. If you have multiple services, they all get this bonus, as it applies across all of your services.
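As we read the policy, the arithmetic is simply euros paid divided by the per-point threshold. A small sketch; the exact formula, rounding and caps are assumptions here - the wiki policy page above is authoritative:

```shell
#!/bin/sh
# Hypothetical reading of the bonus policy: every 28 EUR of payments
# grants one percentage point of extra disk quota.
bonus_pct() {
  awk -v paid="$1" -v per_point=28 'BEGIN { printf "%.1f", paid / per_point }'
}

bonus_pct 280; echo "% bonus quota for 280 EUR paid"
```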

Further, we have increased significantly the global maximum limit for all services, so during next few days a lot of people are going to get significantly more. Note that Dragon series has always been exempt from this limit.]]>
<![CDATA[Dedicated server line up update]]> https://pulsedmedia.com/clients/index.php/announcements/458 https://pulsedmedia.com/clients/index.php/announcements/458 Fri, 30 Aug 2019 20:46:00 +0000 https://pulsedmedia.com/dedicated-servers-finland.php

We have updated the example config lineup. Contact sales when you want one; depending on the model, delivery time varies from 1½ weeks up to 6 weeks. We can also custom tailor to your needs.

We also have some 1U Dual Xeon 5600 series servers available, which can be configured with up to 144GB of RAM, 4x16TB HDDs and 2x10Gbps.
<![CDATA[End of auctions: No new slots via auctions to be sold]]> https://pulsedmedia.com/clients/index.php/announcements/457 https://pulsedmedia.com/clients/index.php/announcements/457 Mon, 12 Aug 2019 19:57:00 +0000 We are stopping auction slots from being offered for now.

The reason is simple: there is a tiny minority of users, very vocal and extremely demanding, causing a lot of headache for our support staff in all manner of ways. Further, the new auction slots now account for 80-90% of our helpdesk load, causing delays in providing customer support for our regular service users.

Old, already created auction slots will continue to operate normally. But for normal, regular use we do recommend you pick a regular service.

]]>
<![CDATA[ZEN MiniDedi: The new Ryzen 3400G]]> https://pulsedmedia.com/clients/index.php/announcements/456 https://pulsedmedia.com/clients/index.php/announcements/456 Tue, 09 Jul 2019 09:32:00 +0000
The ZEN MiniDedi preorder is still going on: http://pulsedmedia.com/clients/announcements.php?id=448
The pre-order is extended until all preorder nodes have been delivered - we estimate this to happen by the end of August.]]>
<![CDATA[Auctions Update: New Value250 A2 limited to 28days]]> https://pulsedmedia.com/clients/index.php/announcements/455 https://pulsedmedia.com/clients/index.php/announcements/455 Mon, 01 Jul 2019 11:38:00 +0000 https://pulsedmedia.com/seedbox-auctions.php

We wanted to offer a higher number of slots at a faster pace, and avoid some misunderstandings with this offer.

We are targeting to make a lot of slots available for each 24hour period under this new Value250 A2, so expect some fast paced auction action! :)

There is a huge difference however: this one will auto-terminate after 28 days, and is limited to only 28 days. This avoids some misconceptions about the auction slots. A big reason we can offer them is to constantly load balance servers: offer some auction slots when there is free space available, and lower the number of auction slots when there is not, so that the regular offers remain available. A lot of people have skipped over the special terms and misconstrued auction slots as a permanent offer with all the premium service perks included. This has made offering them a bit more difficult, as so many have simply been ignoring the special terms over at http://wiki.pulsedmedia.com/index.php/Pulsed_Media_Auctions_Terms_And_Conditions

With the 28 day limit we get to constantly load balance the servers according to resources AND keep offering these auction slots.
The special terms still apply. With 28 day slots there should be no misunderstanding, however.

Hope you check it out, and if you have any feedback for us please feel free to contact support! :)]]>
<![CDATA[Superb M10G Series Update: Now downspeeds increased to 10Gbps as well!]]> https://pulsedmedia.com/clients/index.php/announcements/454 https://pulsedmedia.com/clients/index.php/announcements/454 Tue, 25 Jun 2019 14:34:00 +0000
From now on the M10G series will run at symmetric 10G/10G speeds, providing you those really nice and fast download speeds as well.

This change is being applied gradually to existing users of the servers (it necessitates an rtorrent restart etc.). If you want it applied now, just open a ticket.]]>
<![CDATA[Cooling issues with specific server model]]> https://pulsedmedia.com/clients/index.php/announcements/453 https://pulsedmedia.com/clients/index.php/announcements/453 Mon, 20 May 2019 14:53:00 +0000
We have talked about it before at http://pulsedmedia.com/clients/announcements.php?id=435 and http://pulsedmedia.com/clients/announcements.php?id=430

We have a list of priority fix nodes which we will begin working on within a couple of weeks. The issue is nowhere near as bad as a year ago, as we are now catching it early on. It's mainly a few random crashes here and there, and lower-than-expected performance on some other nodes. Most of those with failing remote management have also been replaced a long time ago.

If you suspect your server is one of those affected, please contact support and we can have a look. To identify it quickly: the affected server model runs 6-core Opteron CPUs, with either 24GB or 48GB of RAM, 4 drives, and 1Gbps connectivity.

We just purchased nearly 500 copper heatsinks to start installing on these, and are waiting for their arrival in Finland :) Once we have received them we will start working in larger batches. Every affected server will be individually notified about maintenance when it is happening.]]>
<![CDATA[ZEN MiniDedi Update]]> https://pulsedmedia.com/clients/index.php/announcements/452 https://pulsedmedia.com/clients/index.php/announcements/452 Mon, 20 May 2019 09:40:00 +0000 http://pulsedmedia.com/clients/announcements.php?id=448

The current state is that we have several of these nodes and have been stress testing them out.

We are currently working on cooling solutions, and the power delivery mechanism still needs to be ironed out so that we can use high-efficiency server PSUs instead of the "bricks" supplied with the DeskMini.

Each step takes a lot of validation time; even simple cooling tests take several days of stress testing per step. But we are progressing steadily.]]>
<![CDATA[Reminder: We do have affiliates system which pays recurring! Invite Your friends, earn money!]]> https://pulsedmedia.com/clients/index.php/announcements/451 https://pulsedmedia.com/clients/index.php/announcements/451 Tue, 30 Apr 2019 13:28:00 +0000
So when you invite your friends, you can also earn some money! :)

To activate it, just visit our billing portal and enable your affiliate account. It will provide you with referral links etc.


Example:
 - Refer 10 friends who each pay 10€ a month: that nets you 7.50€ a month for as long as they keep their services. If they each keep their services for an average of 40 months, you earn a total of 300€!
 - Refer a user who signs up annually at 119.88€: you get 8.99€ each time. If they keep the service for 6 years, you net 53.94€!!]]>
<![CDATA[Issues with seedbox GUI? Using Pi-Hole?]]> https://pulsedmedia.com/clients/index.php/announcements/450 https://pulsedmedia.com/clients/index.php/announcements/450 Mon, 29 Apr 2019 11:32:00 +0000
This might be because we load some JavaScript libraries from Google servers; we will look into this and move all libraries to our own servers at some point.]]>
<![CDATA[Software updates]]> https://pulsedmedia.com/clients/index.php/announcements/449 https://pulsedmedia.com/clients/index.php/announcements/449 Tue, 23 Apr 2019 17:39:00 +0000 "btsync" has been changed to point to btsync 2.2. You can still keep using version 1.4 as "btsync1.4".
rclone has been updated to v1.47

Many bug fixes have been made as well.

This is a typical rolling release, so your server might not have these yet. If you want them now, please contact support.
A bunch of servers are updating as we speak.]]>
<![CDATA[NEW ZEN MiniDedi Pre-orders are open! NVMe Dedicated from 27.69€ per month!]]> https://pulsedmedia.com/clients/index.php/announcements/448 https://pulsedmedia.com/clients/index.php/announcements/448 Sat, 13 Apr 2019 18:42:00 +0000 The NEW ZEN MiniDedi is HERE!

We have been working to bring to the market some super efficient and low cost new AMD ZEN based dedicateds with extreme performance and capability.

We have finally finalized the specifications for them and are ready to start taking in orders!
The preorder version will be fully hardware customizable! You can pick anything from a very low end to a fairly high end config with multiple NVMe and SATA SSDs. If there is some specific addition you need (M.2, 2.5" SATA or USB), please contact our support and we'll see if we can accommodate you. You can pick up to 4TB SSDs or 5TB HDDs, and up to 32GB of fast 3000MHz memory!

These are really amazing compact packages; there really is no sensible off-the-shelf server solution which could bring this kind of all-around efficiency and cost performance! For the price point these are extremely capable servers with very high CPU and I/O performance: the Ryzen 5 2400G scores above 700 in Cinebench and ~10,000 in PassMark! With the dual NVMe drives AND a couple of SATA SSDs combined, you will have ALL the I/O performance you need for any task. This kind of performance is usually reserved for servers costing 10 times as much! For example, this server can sustain roughly 5 simultaneous transcoded 1080p Plex streams.

When you combine all of that with our amazingly efficient datacenter operations and network, you have some truly capable servers at your disposal! The environmentalists among us can also rest at ease: we operate one of the greenest datacenters in the world, constantly making great strides toward an ever better PUE (Power Usage Effectiveness, a measure of infrastructure efficiency). We have discussed this before on our blog, and we continue to set new records every year.

This pre-order also comes with MAGNIFICENT DISCOUNTS BUILT-IN! All pre-orders get 20% OFF list pricing using the promocode below. You get to keep that discount for as long as you keep the server. On top of that, there is an additional 10% discount for annual payments. These two combined can make for a SUPER LOW COST NVMe server at just 27.69€ per month! :O On the higher end, you can have a 2x 2000GB NVMe, 32GB RAM, 2400G server for the insanely low price of 50.36€ per month! This config uses the brand new Intel 660p QLC NVMe drives, but you can also opt for the more usual Samsung 970 Evo TLC NVMe drives for even higher performance.

Deliveries will begin by the end of May. Pre-order pricing will be available until every single pre-order has been delivered, after which we will standardize on a few of the most common specifications.

Here are the base specs:

  • Chassis: ASRock A300 DeskMini - an amazing, super small form factor motherboard and case measuring ONLY 155x155x80mm!
  • M.2 x4 + M.2 x2/x4 slots!
  • 2x 2.5" slots for additional SSD or HDD!
  • 1Gbps Unmetered Volume Network
  • Any Linux or *BSD Distro of your choice
  • Full ROOT Access; Install anything you wish!
  • SLA Gold Level!
  • 14-day moneyback guarantee*
  • Hardware technical support: NBD+5 (ie. OS reinstalls, component replacements).

You can get yours at: https://pulsedmedia.com/clients/cart.php?a=add&pid=221&promocode=2019zendedipreorder
Use promocode 2019zendedipreorder




*) 14-day moneyback guarantee, pre-order special terms: you can request a refund at any time up to 14 days after delivery, but no later than 60 days after the pre-order. Depending on pre-order volume, however, we expect your server to be delivered well before this 60-day refund limit.
]]>
<![CDATA[Service Level agreement updates + auction special terms updates]]> https://pulsedmedia.com/clients/index.php/announcements/447 https://pulsedmedia.com/clients/index.php/announcements/447 Mon, 01 Apr 2019 11:50:00 +0000 We have some GREAT NEWS!

We have decided to give a major official bump to our SLA standards. For a long time we have already been giving greatly boosted compensation for downtime in certain cases. Now we have made it official! This is GREAT news for those on higher end services.

Platinum tier: Dragon, 2012 series: ratio 1:3! You get 3 times as many days of compensation as the downtime lasted.
Gold tier: Max250, Super250, M1000, M10G series. Ratio 1:2
Silver: Value250, Super50, Ratio is 1:1 (ie. same as old standard).
Bronze: Auctions and one off specials: Very limited SLA

The auction special terms have been updated to the Bronze SLA level instead of no SLA at all.

Check the full updates to SLA at http://wiki.pulsedmedia.com/index.php/Pulsed_Media_Seedbox_SLA_Policy
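As a rough illustration of how the ratios above work out (assuming compensation is simply downtime multiplied by the tier ratio; the linked SLA policy is the authoritative source), a minimal sketch:

```python
# Hypothetical sketch of the tier ratios described above: "days of
# compensation per day of downtime". The Bronze tier ("very limited SLA")
# is intentionally omitted, since its terms are not a simple ratio.

SLA_RATIOS = {
    "platinum": 3,  # Dragon, 2012 series: 1:3
    "gold": 2,      # Max250, Super250, M1000, M10G: 1:2
    "silver": 1,    # Value250, Super50: 1:1 (the old standard)
}

def compensation_days(downtime_days: float, tier: str) -> float:
    """Days of service credited for a given amount of downtime."""
    return downtime_days * SLA_RATIOS[tier]

# e.g. 2 days of downtime on a Platinum-tier Dragon plan
print(compensation_days(2, "platinum"))  # → 6
```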

]]>
<![CDATA[Scheduled network maintenance Friday 1st of March]]> https://pulsedmedia.com/clients/index.php/announcements/446 https://pulsedmedia.com/clients/index.php/announcements/446 Wed, 27 Feb 2019 19:08:00 +0000 There will be some network maintenance going on during Friday.

The rough schedule is between 10:00 and 21:00 GMT.

There may be several outages during this period of time, please be patient as we work.

]]>
<![CDATA[Issue with several nodes resolved]]> https://pulsedmedia.com/clients/index.php/announcements/445 https://pulsedmedia.com/clients/index.php/announcements/445 Wed, 27 Feb 2019 19:02:00 +0000
The issue was caused by a failed power supply tripping the breaker.]]>
<![CDATA[Changes to auction seedbox special terms]]> https://pulsedmedia.com/clients/index.php/announcements/444 https://pulsedmedia.com/clients/index.php/announcements/444 Sat, 23 Feb 2019 14:13:00 +0000 http://wiki.pulsedmedia.com/index.php/Pulsed_Media_Auctions_Terms_And_Conditions


The primary changes are:

"We reserve the right to re-provision your account at any time without prior notice, wiping your data and to move you to another server"
This means we might re-allocate you to another server, without data migration, at any given time, so always keep backups of your important data if you are using an auction seedbox. The reason is that at times we need to load balance servers, and the extremely low price point of auction slots really does not give us the resources to manually micromanage accounts like that. The alternative in the past has been to cancel your auction slot at the end of the billing term.

"No bonus disk quota"
This means you will not gain extra disk space over time with auction slots. This is a perk for long-time users, and we felt it was not fair to those on regular plans for bottom-priced auction slots to get the same perks as long-time regular users.]]>
<![CDATA[Switch issue (UPDATE 04:29 **RESOLVED**)]]> https://pulsedmedia.com/clients/index.php/announcements/443 https://pulsedmedia.com/clients/index.php/announcements/443 Tue, 01 Jan 2019 00:10:00 +0000 A significant number of servers are down due to an unknown switch issue. We are working on it right now.

Sorry for something like this happening at such an unfortunate time, in the middle of the New Year's celebrations!

Update 1 (00:35): All switches responsive and rebooted; the issue is not with the switches. 68 nodes are affected. On-site intervention scheduled; current estimate for full resolution is 1hr 45min.
Update 2 (01:11): Estimated recovery time 1hr 15min. Suspected denial-of-service attack exploiting a hardware weakness, related to the August issues.
Update 3 (03:51): 12 nodes still down, working on them.
Update 4 (04:29): Resolved. All nodes up and running. Related to the August issues. Diagnosis still ongoing.

Update 5:
Sorry for the downtime! All servers are currently responsive and signal good status without issues.

Diagnosis will continue for quite a while after the holidays. It would be highly unlikely for this to be a coincidence, given the near perfect timing to maximize downtime; this many servers simply do not crash all of a sudden without a reason.

]]>
<![CDATA[Bonus disk quota global maximum bumped up significantly]]> https://pulsedmedia.com/clients/index.php/announcements/442 https://pulsedmedia.com/clients/index.php/announcements/442 Tue, 04 Dec 2018 13:48:00 +0000
It is such a high number that it will probably take many weeks to allocate and spread.

Do note that it is still limited by the available resources of your particular server, and bonus disk quota is a bonus; it is not guaranteed in any fashion.
However, it is semi-guaranteed for the Dragon series, in the sense that they always get allocations beyond the global max limit if the server has free resources available.]]>
<![CDATA[Blog has been restored]]> https://pulsedmedia.com/clients/index.php/announcements/441 https://pulsedmedia.com/clients/index.php/announcements/441 Mon, 26 Nov 2018 08:13:00 +0000
https://blog.pulsedmedia.com/]]>
<![CDATA[10G Shared seedboxes in beta testing]]> https://pulsedmedia.com/clients/index.php/announcements/440 https://pulsedmedia.com/clients/index.php/announcements/440 Thu, 25 Oct 2018 16:30:00 +0000
This series uses a new server model as well as 10G uplinks. These servers should be easier to maintain and easier to extend in the future.

Look at all the specifics, and grab a beta test slot at: https://pulsedmedia.com/m10g-seedbox.php

]]>
<![CDATA[Billing system minor issues corrected]]> https://pulsedmedia.com/clients/index.php/announcements/439 https://pulsedmedia.com/clients/index.php/announcements/439 Sun, 07 Oct 2018 02:35:00 +0000
This caused a lot of empty emails to be mailed out, and some pages to not load properly.

This issue has been fixed now. Sorry for the inconvenience.]]>
<![CDATA[Beta testing seedbox auctions]]> https://pulsedmedia.com/clients/index.php/announcements/438 https://pulsedmedia.com/clients/index.php/announcements/438 Fri, 28 Sept 2018 11:15:00 +0000
Check them out at: http://pulsedmedia.com/seedbox-auctions.php

]]>
<![CDATA[Blog offline, work in progress]]> https://pulsedmedia.com/clients/index.php/announcements/437 https://pulsedmedia.com/clients/index.php/announcements/437 Sat, 22 Sept 2018 11:55:00 +0000
To distribute services around for emergency situations, we used a 3rd party we did not use for anything else, and this company, which hosted the blog, was not profitable and has closed its doors.

We are trying to get the latest copy of the blog, without the vendor-specific extras, to set it up elsewhere. If that fails, we have another backup which fortunately contains all posts etc., just not the latest code updates and statistics.]]>
<![CDATA[Dragon and SSD Series Updates!]]> https://pulsedmedia.com/clients/index.php/announcements/436 https://pulsedmedia.com/clients/index.php/announcements/436 Mon, 17 Sept 2018 10:15:00 +0000 Dragon and SSD Series Updates!

Both saw a really great update! :) We updated both series plans to match today's new server models, and increased the range of options from smallest to largest! :)

This saw the Dragon Mushu, the entry-level Dragon, increase to 1.5TB (1365GiB) of disk space at just 13.99€ per month, while the largest option is an incredible 12TB of RAID10 disk space! In some cases this means you get a dedicated server to yourself: 4x8TB RAID10 = 16TB, and if you have 12TB plus bonuses, there simply won't be space for anyone else. Thus availability is also very limited.
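As a quick illustration of the RAID10 arithmetic above (4x8TB yielding 16TB usable), here is a generic sketch; this is standard RAID10 math, not a description of any particular server:

```python
def raid10_usable_tb(drive_count: int, drive_size_tb: float) -> float:
    """RAID10 mirrors every drive, so usable capacity is half the raw total.
    The drive count must be even (drives work in mirrored pairs)."""
    if drive_count % 2 != 0:
        raise ValueError("RAID10 needs an even number of drives")
    return drive_count * drive_size_tb / 2

print(raid10_usable_tb(4, 8))  # → 16.0, the 4x8TB example above
```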

On the SSD series we turned the slider from bandwidth heavily towards space; in reality, very few people need those extreme traffic levels. That said, it still goes up to 60TB of traffic. The biggest available SSD plan is now a whopping 2TB! Best of all, it is still quite affordable at just 59.99€ a month. The series starts from just 5.99€ a month for 150GiB, but we recommend the 300GiB plan at 9.99€ a month.

]]>
<![CDATA[Issues with particular server model has stabilized now]]> https://pulsedmedia.com/clients/index.php/announcements/435 https://pulsedmedia.com/clients/index.php/announcements/435 Thu, 30 Aug 2018 11:47:00 +0000
We also now have plenty of parts in stock and will be doing the cooling enhancements as part of routine maintenance in the future to stop these issues from happening.

Further, we have purchased a lot of new servers as well, of different manufacturers and models, to decrease the percentage of servers of this particular model, even though we do not really expect the chipset cooling to be an issue anymore.]]>
<![CDATA[Software stack updates]]> https://pulsedmedia.com/clients/index.php/announcements/434 https://pulsedmedia.com/clients/index.php/announcements/434 Mon, 20 Aug 2018 16:30:00 +0000
Some highlights are:
 * Flexget installation fixed
 * Python fixes in general
 * UnionFS added
 * Sshfs added
 * S3fs added
 * GdriveFS added (may or may not work, 100% untested  .....)
 * FTP 16 connections max per user, doubled max instances per server to 60
 * rclone updated to v1.42
 * Genisoimage + xorriso added]]>
<![CDATA[Free seedbox offer temporarily disabled]]> https://pulsedmedia.com/clients/index.php/announcements/433 https://pulsedmedia.com/clients/index.php/announcements/433 Mon, 20 Aug 2018 13:04:00 +0000
This was far too often misconstrued as being fully fledged and equivalent to a 10-15€ a month service. It was not, and it came with exactly zero guarantees. It was not even based on our own DC or hardware. That 0.20€ payment was often misconstrued as making it a fully fledged paid seedbox, when in reality we did not receive any of that money; even if we refunded the payment, we ended up paying 0.16€ per registration, as the money was given to the end user but we only got 0.04€ of it back.

To top it off, with GDPR the legality of the e-mail marketing accompanying that offer became fuzzy, despite the disclaimers.

We are considering alternative "feel it out" offers. In the meanwhile, remember that we have a 14-day moneyback guarantee, so you can try out the service you want.

In a similar fashion, and partially for the same reasons, we have disabled signups for services costing under 6€ a month for the moment.]]>
<![CDATA[1 server down]]> https://pulsedmedia.com/clients/index.php/announcements/432 https://pulsedmedia.com/clients/index.php/announcements/432 Mon, 20 Aug 2018 13:02:00 +0000
This server has a completely failed motherboard and will be replaced soon. Affected users will get a very hefty SLA compensation.

We are still waiting for more hardware for the cooling enhancements; eventually the changes will be rolled out on every single server of this model.]]>
<![CDATA[0 servers down]]> https://pulsedmedia.com/clients/index.php/announcements/431 https://pulsedmedia.com/clients/index.php/announcements/431 Mon, 13 Aug 2018 23:19:00 +0000
We are hoping the situation remains so at least until we get more hardware in to apply the fixes.

A bunch more servers got the "janky" 120mm fix today, and the process is now honed in. It takes only roughly 30 minutes per server when processing in larger batches.

More hardware should arrive from Germany by the end of the week or early next week, and we will then begin the work with the servers with the lowest uptime (= crashiest). Some servers of this model are working perfectly fine, and their chipset temps are almost as low as on those with the cooling fixes.]]>
<![CDATA[One server Model Crashing "left and right". Is Your Server Down Right Now? Check this out]]> https://pulsedmedia.com/clients/index.php/announcements/430 https://pulsedmedia.com/clients/index.php/announcements/430 Sat, 11 Aug 2018 14:04:00 +0000 It has been a mad mad mad week over here at Pulsed Media!

One of the server models is having serious issues, and we have this server model in large numbers. They are now crashing "left and right" with no obvious reason, and remote management is extremely flaky on this model (American Megatrends MegaRAC), so doing hard reboots and debugging this has been a nightmare.

This began some time ago, but it was a very random, very rare occurrence. Since a node crashing again could take 4-5 months, and there have never been any error messages or solid leads, never mind a way to reproduce the issue, this has been really hard to debug. The only common factors are: it is this specific server model; the server is not idling and has some load on it; and the server sometimes comes back just enough to process some automation commands. Even then, the load could be as low as 50Mbps on average. And that coming back just enough to process some commands means we don't always catch them immediately, and a half-down server could linger on... We monitor actual responsiveness, not "ping": while that ensures the server is actually up and not "fake up" (ping responds, nothing else), it also means our downtime threshold is rather high, to avoid false alerts.
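As a rough illustration of the difference between a plain ping check and an actual responsiveness check, here is a minimal sketch; the function name and port are hypothetical, and our real monitoring is more involved than this:

```python
# Hypothetical sketch: verify that a service actually answers on a TCP port
# instead of trusting ICMP ping. A "fake up" host can answer ping while
# every service on it is hung.
import socket

def service_responsive(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True only if a TCP connection to host:port succeeds in time."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, unreachable, ...
        return False

# A "fake up" server would answer ping but fail this check on e.g. port 22.
```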

We have tried a myriad of kernel settings as well, and both older and newer versions of software. Complete hardware replacements too, of course, since everything points to a hardware issue, but generally that achieved exactly nothing. The only component remaining common across those setups is the drives, but even the models and sizes vary between crashy nodes. Our main lead at the time was that it must be capacitors failing on the motherboard, due to the sheer weirdness of the crashes and issues; failing caps can cause all kinds of weird problems.

Crashes were rare, but have been becoming more common in the past 6 months. There were no patterns until a month or so ago, when the same servers started crashing frequently. Roughly a week ago all this changed: we were facing large numbers of crashes, and despite knowing remote management was set up, tested and functioning correctly during the HW swap, many of these failed to respond. We had a lot of crashes, but "fortunately" there were some nodes that crashed every single day, or even within hours. Finally we could do proper testing!

The motherboard is a very obscure proprietary form factor; the manufacturer doesn't even list any BIOS updates for it. The "OEM" vendor has several BIOS versions, but even that was not a common factor.

Finally we found that some people with similar setups had issues with system chipset cooling: finally, a lead! We started looking into this and added direct chipset cooling on the crashiest nodes. This seemed to help; the worst offenders are still up many days later! Unfortunately, yesterday we did this for more nodes, and one of them crashed soon after. This is still statistically insignificant but worrisome; that node could actually have a bad motherboard, bad capacitors etc. But enter the typical Finnish problem: no one has any stock, and even common parts are hard to find! We have to wait for more 40mm fans to arrive all the way from Germany, because in Finland they are hard to get hold of without unknown delivery times or stupidly high prices. Can you imagine paying 15€ for the cheapest possible 40mm fan (this is Akasa's small fan; I recall it normally going for more like 1.50€!), and yet they don't even tell you how many they have for sale?! And we need something like 60 of them, by yesterday, thank you!

We can fortunately use 120mm fans in this server model with some modding, and albeit at stupidly high prices once again, at least some are available just a 1-hour drive away. Only about 4-5x the normal market price each.

How can a chipset overheat?! Never heard of this?!
Fairly simple, actually. Our normal maintenance and server build schedule includes swapping CPU thermal compounds; we use the best on the market that we can buy in larger tubes (Arctic MX-4 is available in 20g tubes) from our usual vendors. We also remove the 2nd CPU, because a seedbox has no need for that CPU power. However, the chassis fans set their speed according to CPU temps only, and even a CPU temp reading as high as 55C (100% load) does not increase the fan speeds. The fan curves cannot be modified. Meanwhile, the chipset has undersized heatsinks and thermal compound you cannot even call a compound; it has solidified completely and become so hard that it is actually very difficult to even remove.

Alternatively, it could be that the fan speeds are controlled by the chassis electronics and thus CPU temp does not affect them. Since we run quite a low ambient temperature (21-27C everywhere in the DC, averaging around 23C), the fans mostly "idle" (4800rpm) with the cold/hot aisle setup we have.

Thus the chipset overheats, and since remote management has to go via the chipset, which under thermal protection is unable to route signals...

Solutions, Solutions!
Replacing the chipset compound is either a long process or a bit of a janky solution. Just adding 40mm fans on the northbridge and southbridge gets temps down by 30C; they screw right into place and in fact look quite professional, almost like they could be from the factory. Using 120mm fans is a bit janky but cools a lot of other smaller chips too. However, each fan has to be modded a little to fit, and they sit ever so slightly higher, so they barely get any intake air and partially block a lot of the airflow through the case.

The janky solution is to just cut the plastic studs holding the heatsink on the motherboard, clean everything up, put a dab of MX-4 on the chip, glue thermal compound all around, and finally use hot glue to replace the studs. Very janky, a little like "arts & crafts", but this we can do fairly quickly. The proper solution would be to remove the motherboard completely from the chassis, use pliers to open the now very brittle studs, clean up, put in proper thermal compound, and hope the studs did not break and still hold; or alternatively replace the heatsinks with larger copper ones. But chipset heatsinks taller than 11mm with the right mount are a bit hard to find, even from China. We found some 40x40x11mm COPPER heatsinks from China, but they are a rather expensive solution and would take far too many months to arrive.

Replacing compounds + 40mm fans is a slightly difficult combination without waiting hours for the thermal compound glue to solidify, and as we know, hot glue is not exactly stiff; thus we would prefer the thermal compound glue to solidify fully before mounting the 40mm fans and closing the lid. The issue is the cabling: without the glue in place, the cables are stiff enough that they could potentially unseat the heatsink, as we need to stretch them to the fan header connectors.

Fortunately this motherboard ships with 2 unused fan headers :)

Why did we not notice chipset temps earlier?
We had noticed years ago that the chipsets run a bit hot, but we had never seen a chipset overheat, had no issues, and trusted the motherboard manufacturer to know what they are doing, assuming it must have a very high thermal limit etc.
On top of that, when servers crash remote management is not always available, and there were no alerts for chipset temperature! The few readings we do have remain below the alert threshold of 98C.

However, we also noticed during this past week that the temperature readings are off by about 10-15C, for the CPU at the very least.

Early test results?
Let's hope that one server was an anomaly! On the others we even saw hints of increased performance after adding the active cooling. All but one test server has remained stable so far; it could be something silly, like forgetting to plug in the fan power wire. During long days, mistakes do happen.

Alternative solution?
We are already testing a new server model which looks quite promising, but there are a lot of steps to qualification. Unfortunately, we have already found a design issue with that chassis which will take time to solve, as well as some configuration issues. Once we have figured it all out, it will be easy and fast to start rolling them out.

Want a replacement server?
If you know your server is affected by this and you want a replacement ASAP, contact support and we will arrange it for you promptly. If you do not need data migration, there are extra service days to be had.
Since we are swamped with all these fixes right now, be prepared to wait as long as 24 hours for a response. Most tickets still get sorted out in less than 12 hours, though.


EDIT: A couple more announcements relating to this:
0 servers down
1 server down

]]>
<![CDATA[Electrical issue at DC on one of the racks **RESOLVED]]> https://pulsedmedia.com/clients/index.php/announcements/429 https://pulsedmedia.com/clients/index.php/announcements/429 Wed, 20 Jun 2018 10:31:00 +0000

**RESOLVED**

The issue was with one of the power distribution modules (PDMs). These have built-in heat-based "fuses" (thermal fuses). The PDM being situated very near the server exhaust does not exactly help and significantly derates its rating, so the thermal fuse triggered despite the unit functioning for nearly 2.5 years at this load level; a load level which is nowhere near the thermal fuse rating, as the stats show, with plentiful headroom.

It seems this was just one of those cases; we use high quality and super expensive DC PDMs (Rittal), so they should be able to handle 24/7 operation up to their rating, but maybe this was just one of those odd glitches that happen every now and then. Just to be certain, we moved several servers from this PDM to another one.

]]>
<![CDATA[Bitcoin payments via Coinbase replaced with Coinpayments.net]]> https://pulsedmedia.com/clients/index.php/announcements/428 https://pulsedmedia.com/clients/index.php/announcements/428 Wed, 09 May 2018 16:35:00 +0000
Coinpayments.net is the current recommended gateway, you can use many other cryptos as well than just Bitcoin with them.

We do not intend to support Coinbase in the future, due to their tactics of spamming the Bitcoin network, refusing to implement SegWit, and promoting BCash because of issues they themselves created.]]>
<![CDATA[New AC units installed]]> https://pulsedmedia.com/clients/index.php/announcements/427 https://pulsedmedia.com/clients/index.php/announcements/427 Mon, 30 Apr 2018 12:28:00 +0000
At this very moment, as I am writing this, the UPS units have as much overhead as our entire cooling system is using. Simply incredible! Our PUE readings are back to what they were during the winter, roughly 1.15, which includes the UPS unit overhead (all overhead is included in a PUE reading, but usually the major contributor is cooling).

This makes our DC operations even greener than before, as we have managed to make great strides in increasing power utilization efficiency over the past 2 years.]]>
<![CDATA[M1000 Stock status]]> https://pulsedmedia.com/clients/index.php/announcements/426 https://pulsedmedia.com/clients/index.php/announcements/426 Sun, 01 Apr 2018 10:05:00 +0000
We expect to have more available roughly around Wednesday-Thursday.

In the meantime, check the great Value250 series offers out :)]]>
<![CDATA[Helsinki DC Outage: Fiber cut, RESOLVED: 18:29]]> https://pulsedmedia.com/clients/index.php/announcements/425 https://pulsedmedia.com/clients/index.php/announcements/425 Mon, 05 Mar 2018 16:35:00 +0000 Helsinki DC is experiencing an outage as of 15:54 local time due to a fiber being removed from one of the fiber cross-connect rooms.

This happened due to a 3rd party's outdated and incorrect documentation: while doing maintenance, they accidentally removed the wrong, supposedly unused fibers. A simple human error.

This is being fixed right now; the ETA is roughly 18:45 local time. There will be some documentation updates as a result, to prevent this from happening again.


Sorry for the delay in repair.


**Issue Resolved**

]]>
<![CDATA[Outage on several nodes (UPDATE 12:46)]]> https://pulsedmedia.com/clients/index.php/announcements/424 https://pulsedmedia.com/clients/index.php/announcements/424 Fri, 10 Nov 2017 11:31:00 +0000
Expected ETA several hours.
The number of affected servers is under 20.

UPDATE 12:41: The issue is with power distribution: a power distribution unit or module.
UPDATE 12:46: All nodes affected should be booting now.]]>
<![CDATA[Helsinki DC: Multiple nodes down]]> https://pulsedmedia.com/clients/index.php/announcements/423 https://pulsedmedia.com/clients/index.php/announcements/423 Tue, 18 Jul 2017 13:23:00 +0000
We are sorting this out now and getting servers back online.

EDIT: All nodes online, no issues noted. Please contact support if any questions etc.]]>
<![CDATA[Helsinki DC network maintenance **UPDATE 2**]]> https://pulsedmedia.com/clients/index.php/announcements/422 https://pulsedmedia.com/clients/index.php/announcements/422 Thu, 04 May 2017 14:08:00 +0000
Expected downtime is short.

UPDATE 05/05/2017 14:52 GMT: Some of the maintenance was moved to today. There may be short downtimes of up to 30 minutes.
UPDATE 05/05/2017 17:58 GMT: Maintenance done.]]>
<![CDATA[Debian 8: OpenVPN & NzbDrone fixed]]> https://pulsedmedia.com/clients/index.php/announcements/421 https://pulsedmedia.com/clients/index.php/announcements/421 Sat, 25 Mar 2017 14:31:00 +0000 OpenVPN now installs, configures and runs on Debian 8 based servers.

NzbDrone installation issues have now been fixed on Debian 8 based servers.

The usual rolling updates are taking place. If you want your server to skip ahead in the queue, please contact support.

]]>
<![CDATA[Helsinki DC Brownout. ]]> https://pulsedmedia.com/clients/index.php/announcements/420 https://pulsedmedia.com/clients/index.php/announcements/420 Wed, 15 Mar 2017 00:53:00 +0000
We are working to bring all downed servers back online right now, and checking servers as we go.

This is the first brownout in the history of having our own datacenter. There is a small storm ongoing that might have caused distribution issues, but such issues are extremely rare in Finland's capital city.

EDIT: UPS logs confirm a brownout. Electricity was cut, restored at low voltage, and eventually back to full voltage, all within a single second.]]>
<![CDATA[Have a French dedicated server or MDS and it's down?]]> https://pulsedmedia.com/clients/index.php/announcements/419 https://pulsedmedia.com/clients/index.php/announcements/419 Thu, 02 Feb 2017 16:30:00 +0000 Please contact support immediately, especially if you have not received an e-mail so far.

There is an emergency relating to these services, and we are handling it as quickly as we possibly can and sorting things out.

The affected services are the following series, if your server is located in France and was set up, upgraded, or migrated in the past 2 years:

MDS Series
PDS Series
Storage Server Series

]]>
<![CDATA[Bitcoin discount increased]]> https://pulsedmedia.com/clients/index.php/announcements/418 https://pulsedmedia.com/clients/index.php/announcements/418 Sun, 11 Dec 2016 10:30:00 +0000
We have noticed a sharp increase in transaction fees, and we want to pass the savings on transaction fees directly to the user! The overall transaction cost has increased by more than 20% in the past couple of years. In other words, it is very expensive to handle transactions other than Bitcoin.

So take advantage of this discount, and start using Bitcoin payments via Coinbase! :)]]>
<![CDATA[Debian 8 aka Jessie]]> https://pulsedmedia.com/clients/index.php/announcements/417 https://pulsedmedia.com/clients/index.php/announcements/417 Sat, 01 Oct 2016 20:11:00 +0000 We have several servers now running Debian 8.

As expected, there were several regressions in moving to Debian 8, mostly regarding server maintenance. All but OpenVPN have been resolved. OpenVPN is currently not working on the first Debian 8 systems, but it will be fixed as soon as the remaining maintenance-related aspects have been sorted out.

All new systems will be utilizing Debian 8 from now on.

]]>
<![CDATA[Bonus disk quota wiki page updated]]> https://pulsedmedia.com/clients/index.php/announcements/416 https://pulsedmedia.com/clients/index.php/announcements/416 Thu, 15 Sept 2016 16:20:00 +0000 http://wiki.pulsedmedia.com/index.php/Pulsed_Media_Free_Bonus_Disk_Policy

If this concerns you, please read the updated page which clarifies the terms and rules of the system.

]]>
<![CDATA[SSD Stock Status: Drives are on order]]> https://pulsedmedia.com/clients/index.php/announcements/415 https://pulsedmedia.com/clients/index.php/announcements/415 Sat, 20 Aug 2016 12:12:00 +0000 We are waiting for delivery of new drives; the ETA for having stock available again is within 3 weeks, best case within 1 week.

]]>
<![CDATA[SSD Base traffic amounts increased]]> https://pulsedmedia.com/clients/index.php/announcements/414 https://pulsedmedia.com/clients/index.php/announcements/414 Thu, 14 Jul 2016 14:47:00 +0000 SSD series traffic limits have been increased significantly.

40G: 2TB -> 3TB
200G: 10TB -> 15TB
600G: 30TB -> 45TB

These will be enabled for all existing users automatically over time.

We also introduced a 100G option.

]]>
<![CDATA[New Bitcoin payment gateway: Coinbase PLUS 3% Discount!]]> https://pulsedmedia.com/clients/index.php/announcements/413 https://pulsedmedia.com/clients/index.php/announcements/413 Tue, 28 Jun 2016 09:58:00 +0000
Additionally, we are now providing 3% discount when paid using Bitcoin! :)

The integration is still in testing; So if you notice anything out of place, please contact support immediately.]]>
<![CDATA[SSD Extra traffic prices SIGNIFICANTLY lowered - Now as low as 0.45€ per 1000 GiB!]]> https://pulsedmedia.com/clients/index.php/announcements/412 https://pulsedmedia.com/clients/index.php/announcements/412 Sat, 25 Jun 2016 16:48:00 +0000 We have just lowered the price of SSD series extra traffic significantly. The largest option is now a whopping 100 000 GiB (~100TB) for just 44.99€ per month!

Now that is some very cheap bandwidth!

Got a small amount of data that you need to push out in huge volumes? What better choice than our SSD Seedboxes.

]]>
<![CDATA[Bit Pay has suspended our account: No Bitcoin payments temporarily]]> https://pulsedmedia.com/clients/index.php/announcements/411 https://pulsedmedia.com/clients/index.php/announcements/411 Sat, 25 Jun 2016 14:17:00 +0000
We asked about the verification documents needed to increase our limits, which they make extremely hard to produce by the way, and their response was to lower us to "Tier 0", which limits us to the $100/day, $500/year bracket. We are not sure even new signups go that low. This is not the first time; we tried to go through their hoops a long time ago, only to stumble on the issues below.

We had completed many of their verifications, but our assumption is that they looked at our website and what we sell. As a result, they did this with barely any warning whatsoever, for the simple reason that we requested more information on this process.

The issue we are having: they already have our company details, but they also want personal social security numbers in the form of passport photos (a driver's license apparently is no good). They do not seem to accept photos, only scanned documents(?).

Second, they want chamber of commerce registration papers. Finland does not use this document anymore; all of that information is online for the whole public to see. It is not even sent by default when you form a company. Instead, you have to order it, which takes a few weeks to receive and costs a few hundred euros. As far as we know, they also want it in English and notarized.

Third, they want a utility bill for the company address, which needs to be in English. This is pretty much impossible to produce: being in Finland, all such invoices are in Finnish.

We are looking into Stripe, but the developer of the checkout module is not responding to e-mails for some reason.

]]>
<![CDATA[Billing portal updated]]> https://pulsedmedia.com/clients/index.php/announcements/410 https://pulsedmedia.com/clients/index.php/announcements/410 Tue, 14 Jun 2016 23:07:00 +0000
If you see any issues or faults, please contact support and we'll look into it.]]>
<![CDATA[SSD Series backups update: More frequent]]> https://pulsedmedia.com/clients/index.php/announcements/409 https://pulsedmedia.com/clients/index.php/announcements/409 Tue, 14 Jun 2016 12:49:00 +0000
This will make sure that when the inevitable disk failure happens the data is as fresh as possible.

We are currently looking at the possibility of incremental backups, but storage-wise that might turn out to be too costly.]]>
<![CDATA[Support records being broken!]]> https://pulsedmedia.com/clients/index.php/announcements/408 https://pulsedmedia.com/clients/index.php/announcements/408 Thu, 02 Jun 2016 11:31:00 +0000 In terms of support services we had a record breaking month!

Measured since 2013/01, we achieved both our best average response time and our best average closure time so far!

Excellent! :)

]]>
<![CDATA[Network migration COMPLETE, new servers]]> https://pulsedmedia.com/clients/index.php/announcements/407 https://pulsedmedia.com/clients/index.php/announcements/407 Wed, 01 Jun 2016 12:07:00 +0000 We have completed the transition out of Cogent to a new mix of Level3, TeliaSonera and RETN, with our own IPs etc.

Speeds have greatly increased as a result; some servers are now doing more than double what they used to.

]]>
<![CDATA[New transit migration almost complete]]> https://pulsedmedia.com/clients/index.php/announcements/406 https://pulsedmedia.com/clients/index.php/announcements/406 Sun, 29 May 2016 11:13:00 +0000
The Cogent connection & IPs will be shut down late Tuesday night or early Wednesday.


]]>
<![CDATA[Helsinki: New transit]]> https://pulsedmedia.com/clients/index.php/announcements/405 https://pulsedmedia.com/clients/index.php/announcements/405 Tue, 10 May 2016 10:53:00 +0000
We will need to change the IPs of every node once this link is up & running. DNS will be changed to reflect this; the only difference you will notice is if you have been using direct IP access.

We will inform you server by server. This process mainly begins tomorrow; today only a couple of test nodes will be moved.

]]>
<![CDATA[SSD Series update: Triple the upload slots]]> https://pulsedmedia.com/clients/index.php/announcements/404 https://pulsedmedia.com/clients/index.php/announcements/404 Thu, 21 Apr 2016 12:54:00 +0000
This should make SSD server performance more evident as the changes are applied over time. All new accounts will get the new settings; for the rest, they will be enabled slowly over time.

This means a standard 40G slot will have 54 upload slots at its disposal per torrent, and a 200G slot 216. The global maximums are 324 total for 40G and 1296 total for 200G.]]>
<![CDATA[Under attack: WPress Pingback reflection attack]]> https://pulsedmedia.com/clients/index.php/announcements/403 https://pulsedmedia.com/clients/index.php/announcements/403 Sat, 26 Mar 2016 11:42:00 +0000
This caused intermittent issues as we worked to optimize things in the backend. Things are now functioning faster than ever, despite the attack still continuing.
We may still need to do some more work to optimize things and get the traffic flow off the web server.

We are still checking why there also seems to be traffic amplification going on.

We will continue monitoring the situation and adjust things as needed.

This type of attack is well known; here is some further information found via Google:
https://isc.sans.edu/forums/diary/Wordpress+Pingback+DDoS+Attacks/17801
https://blog.sucuri.net/2014/03/more-than-162000-wordpress-sites-used-for-distributed-denial-of-service-attack.html
https://wordpress.org/support/topic/warning-xmlrpc-wordpress-exploit-ddos
https://www.trustwave.com/Resources/SpiderLabs-Blog/WordPress-XML-RPC-PingBack-Vulnerability-Analysis/

We have already traced this attack to the NForce network and reported it to them. The attack seems to be tapering off slowly; volume has dropped by approximately 40% from its peak.]]>
<![CDATA[Helsinki: Transit capacity maxed out]]> https://pulsedmedia.com/clients/index.php/announcements/402 https://pulsedmedia.com/clients/index.php/announcements/402 Fri, 25 Mar 2016 20:39:00 +0000
We are sorry we could not correctly anticipate the required capacity for this month.]]>
<![CDATA[Espoo to Helsinki Datacenter Move: Finished (nearly)]]> https://pulsedmedia.com/clients/index.php/announcements/401 https://pulsedmedia.com/clients/index.php/announcements/401 Mon, 21 Mar 2016 21:10:00 +0000
]]>
<![CDATA[Espoo datacenter move: Going ahead on schedule.]]> https://pulsedmedia.com/clients/index.php/announcements/400 https://pulsedmedia.com/clients/index.php/announcements/400 Mon, 21 Mar 2016 08:46:00 +0000
Expecting to have all servers online by end of the day.]]>
<![CDATA[Espoo datacenter move schedule]]> https://pulsedmedia.com/clients/index.php/announcements/399 https://pulsedmedia.com/clients/index.php/announcements/399 Sat, 19 Mar 2016 11:48:00 +0000
Several nodes will be moved up front around 09:00 GMT, and final tests will be done around 10:00 GMT, at which time the rest of the nodes will be shut down at Espoo before the move to Helsinki.
Nodes are expected to start coming online roughly around 16:00-17:00 GMT, finishing around 22:00 GMT.

]]>
<![CDATA[Espoo: Transit upgraded]]> https://pulsedmedia.com/clients/index.php/announcements/398 https://pulsedmedia.com/clients/index.php/announcements/398 Tue, 08 Mar 2016 19:52:00 +0000
However, it looks like this is not sufficient; we will do another upgrade on the 1st of April.]]>
<![CDATA[Espoo: Limited transit availability, fix is coming]]> https://pulsedmedia.com/clients/index.php/announcements/397 https://pulsedmedia.com/clients/index.php/announcements/397 Sun, 21 Feb 2016 00:56:00 +0000
The result is that Espoo is currently somewhat limited in transit capacity, as we are forced to cap total throughput to avoid hefty burst data-rate fees. A fix is coming on the 1st of March, with a big increase in transit capacity.

We are also preparing for the next network upgrade soon.]]>
<![CDATA[Bug with bonus disk quota + extra disk combo, fixed]]> https://pulsedmedia.com/clients/index.php/announcements/396 https://pulsedmedia.com/clients/index.php/announcements/396 Fri, 19 Feb 2016 02:18:00 +0000
This bug has now been fixed.
If you have encountered this, please open a ticket and we'll quickly get it handled for you :)]]>
<![CDATA[The sadness of the ST3000DM001]]> https://pulsedmedia.com/clients/index.php/announcements/395 https://pulsedmedia.com/clients/index.php/announcements/395 Thu, 11 Feb 2016 14:29:00 +0000
Their failure data matches ours almost exactly, except that we purchased our drives in Q1-Q4 of 2013 instead of 2012.

Seagate has since updated this drive model; the part number has changed, along with the serial numbers. We do not believe the updated drives exhibit as high a failure rate as the earlier ones, but we do not have data to confirm this.

Today, we still have some of these drives in production (quite a few, in fact, after a round of warranty replacements), but these are mostly in RAID5 arrays, and almost always with a Toshiba or Hitachi drive or two in the mix (yes, you can mix brands & models in a software RAID!). Failure rates have plummeted; only a couple fail each month on average at this time. We now largely run on drives other than the ST3000DM001 (Toshiba, Hitachi, and 8TB SMR Seagates).

Check out the backblaze post at: https://www.backblaze.com/blog/3tb-hard-drive-failure/

]]>
<![CDATA[Espoo DC switch upgrade in progress]]> https://pulsedmedia.com/clients/index.php/announcements/394 https://pulsedmedia.com/clients/index.php/announcements/394 Sat, 30 Jan 2016 04:43:00 +0000
The main work has been completed now, but with the increased port count we are going to connect more servers directly to the edge. This will cause intermittent outages lasting up to 1 minute on a few of the Espoo nodes.]]>
<![CDATA[DNS Resolver updates]]> https://pulsedmedia.com/clients/index.php/announcements/393 https://pulsedmedia.com/clients/index.php/announcements/393 Sun, 24 Jan 2016 14:13:00 +0000
We have begun to replace those resolvers with OpenDNS resolvers, and every tracker should resolve normally by tomorrow.]]>
<![CDATA[Entry20 available]]> https://pulsedmedia.com/clients/index.php/announcements/392 https://pulsedmedia.com/clients/index.php/announcements/392 Mon, 18 Jan 2016 01:38:00 +0000
Order here and remember to grab extras if you need them! :)

]]>
<![CDATA[Upgrading a seedbox: Customize any shared seedbox offer! :)]]> https://pulsedmedia.com/clients/index.php/announcements/391 https://pulsedmedia.com/clients/index.php/announcements/391 Mon, 11 Jan 2016 20:46:00 +0000
You can now change the amount of disk space, RAM and traffic on all of the services, including the older series.

On the higher end the prices are extremely cost-effective as well; if you need more disk but not more traffic, it is often cheaper to just upgrade the disk.

This also allows things like a 900GiB SSD service, a 4050GiB (~4½TB) Dragon, or a 9000GiB (~10TB !!) Storage series offer! :)
One of the amazing combos is SSD 40GiB + a 300GiB Storage addon: paid triennially, it gives you a 340GiB SSD Seedbox for just 13.05€ per month!! How sweet is that?

If you need larger than maximum extra disk space, ram or traffic, please contact support and we'll make it an option if it is possible.

To add extras:
Go to the My Services page, choose 'View Details' on the service you want to upgrade, and under the management options there is 'Upgrade/Downgrade'.

]]>
<![CDATA[Continuous network measuring]]> https://pulsedmedia.com/clients/index.php/announcements/390 https://pulsedmedia.com/clients/index.php/announcements/390 Thu, 07 Jan 2016 19:43:00 +0000
More will be implemented over the course of January. We will also be doing a lengthier blog post.

]]>
<![CDATA[Game of whack a mole and SPAM]]> https://pulsedmedia.com/clients/index.php/announcements/389 https://pulsedmedia.com/clients/index.php/announcements/389 Fri, 18 Dec 2015 03:15:00 +0000
This has transformed into a game of whack-a-mole: yesterday we had to disable 2 of those because Hotmail/Live was blocking their whole subnets.
We use inexpensive VPS services for these, and those subnets have hosted a spammer, hence those 2 had to be disabled.
This provider, one of the larger ones, does not offer the chance to swap IPs, so it is time to close those 2 and replace them with 2 new ones from elsewhere.

A month and a half ago we had to ask another provider to swap our IP immediately after setup for this same reason.

It's time to get yet another few virtual servers setup it seems.

We recommend you find an alternative free e-mail service to Hotmail. Today we still allow signups using Hotmail or Live e-mail accounts, but that may not be the case forever.

As usual, you can check your full e-mail history with us at: https://pulsedmedia.com/clients/clientarea.php?action=emails


Spam

It has come to our attention that some spammers are linking their "unsubscribe" button to a URL which references us. Nasty, nasty!

If you have received such spam, please use the "show original" feature of your e-mail client, copy-paste all of that text to http://pastebin.com/, and let us know via a ticket.]]>
<![CDATA[Espoo power outage UPDATE 2]]> https://pulsedmedia.com/clients/index.php/announcements/388 https://pulsedmedia.com/clients/index.php/announcements/388 Sat, 28 Nov 2015 15:50:00 +0000
A contractor is doing some electrical work. They promised there would be NO power outage, and hence did not even inform us of a specific time a power outage could happen.

As we can see, they failed on this.

We will be on site to check that everything is in working order; a subsection of servers did not boot automatically.
We run a completely lights-out facility, so there is not normally a person on standby, and as of yet we are unsure what has happened. We are hoping it is just a blown fuse due to a sudden spike in load, though our servers are set up for staggered boot.

UPDATE 1
A couple of switches had their automatic fuses blown; an easy fix.
Several servers apparently simply crashed.

The reason for the outage was that they had to move the whole building to a generator. They had matched the phases so that they could feed from the generator & mains simultaneously, but apparently forgot to increase the RPM of the generator up front, and the load peak was simply too large for the generator to ramp up fast enough.

The move back to mains power went smoothly and without issue.

We are now checking which servers are down and manually rebooting and checking them out.

UPDATE 2
Everything checks out and all nodes should be up & running normally.]]>
<![CDATA[FREE Seedboxes!]]> https://pulsedmedia.com/clients/index.php/announcements/387 https://pulsedmedia.com/clients/index.php/announcements/387 Sun, 15 Nov 2015 16:53:00 +0000 Signup at: http://pulsedmedia.com/free-seedbox.php

]]>
<![CDATA[Espoo Network Upgrades]]> https://pulsedmedia.com/clients/index.php/announcements/386 https://pulsedmedia.com/clients/index.php/announcements/386 Wed, 04 Nov 2015 19:17:00 +0000 Our edge router will be replaced by a very high-end model to support the growth in data traffic.

This will cause intermittent network downtime for an estimated total of 30 minutes, between 19:30 GMT and 23:00 GMT.]]>
<![CDATA[Hotmail / Outlook / Live e-mail users - Read this]]> https://pulsedmedia.com/clients/index.php/announcements/385 https://pulsedmedia.com/clients/index.php/announcements/385 Sat, 24 Oct 2015 19:41:00 +0000 This time they at least bothered to let us know.

However, the silent e-mail dropping has now resumed. Microsoft is The King of silently dropping e-mail.
There are no bounces, there are no warnings, and the messages do not go to the "spam" or "junk" folder either. They are simply dropped.

Our logs show that delivery was fine, but these messages never reach your inbox.

Our recommendation is that you switch to a non-Microsoft e-mail service if you are currently using a Microsoft-provided e-mail address.]]>
<![CDATA[E-mail delivery issues resolved]]> https://pulsedmedia.com/clients/index.php/announcements/384 https://pulsedmedia.com/clients/index.php/announcements/384 Mon, 19 Oct 2015 14:55:00 +0000 The e-mail delivery issues have been resolved for now. Only time will tell how much mail from the new IPs gets blocked.

What we did was acquire a couple of inexpensive virtual servers and we are now relaying e-mail via them.

Logs tell us that the e-mail not delivered earlier today has now been delivered, but due to the habit of silently dropping e-mail we have no means to verify that all of the recipients have been reached :(

Please let us know if you were expecting e-mail from us and it did not reach you.

Founder of Pulsed Media:
E-mail as a protocol and means of communication is in a very sad state when a small business's whole existence is threatened by anti-spam actions. These issues have been an ongoing dilemma for 8 months now, ever since our outgoing e-mail IP address changed. That change caused a 15% dip in revenue for that particular month. That is a lot of lost revenue, the fuel of any business.

Spammers continue spamming and acquiring new IPs, while whole subnets are getting blocked by anti-spam measures. Meanwhile, small businesses like ours pay dearly through delivery issues with legitimate, opt-in e-mail, even when sending out invoices! The real question remains: how many false positives are there really, when the big players simply drop e-mail silently and the only way to confirm a false positive is a customer complaining that they did not receive their invoice or login details?

We will continue to work hard to make sure our e-mail is delivered, even if it means adding dozens of IPs and hosts to make certain that happens!

]]>
<![CDATA[*SEVERE* e-mail delivery issues]]> https://pulsedmedia.com/clients/index.php/announcements/383 https://pulsedmedia.com/clients/index.php/announcements/383 Mon, 19 Oct 2015 11:53:00 +0000 "host mx4.hotmail.com [134.170.2.199]: 550 SC-001 (BLU004-MC1F22) Unfortunately, messages from 188.165.244.154 weren't sent. Please contact your Internet service provider since part of their network is on our block list. You can also refer your provider to http://mail.live.com/mail/troubleshooting.aspx#errors."

All e-mail going to any hotmail / live / outlook e-mail server is being blocked.
This is because a neighboring IP has been spamming.

We believe this spamming has been ongoing at least since February, when we moved to this server; we have had severe e-mail delivery issues ever since. This is despite the fact that our e-mail scores a full 10/10 with e-mail testing services!

We implemented everything we could think of, contacted all the biggest causes for concern, and checked all the possible blacklists we could find for any clues.
The good news is that this time there is at least an error message. Previously, the e-mail was seemingly accepted but silently dropped, never delivered to the user in question.

What does this mean?


We are forced to change our outgoing e-mail server in short order; this will cause another myriad of delivery issues. Big e-mail providers like Hotmail do not like new senders at all, so the first month will have severely limited reachability. We are not even sure if we should e-mail our monthly newsletter next month! We had severe issues getting it delivered this month as well.

However, we are working on it, and will make a solution happen as quickly as possible.

In the meantime, you can always check your e-mail log by logging into the billing portal; under the account submenu there is a mail history link. Direct link: https://pulsedmedia.com/clients/clientarea.php?action=emails

]]>
<![CDATA[Espoo short network maintenance]]> https://pulsedmedia.com/clients/index.php/announcements/382 https://pulsedmedia.com/clients/index.php/announcements/382 Sat, 17 Oct 2015 18:24:00 +0000 This will cause a short downtime on parts of the network.

Expected total interruption time is less than 30 minutes.]]>
<![CDATA[OpenVPN is here!]]> https://pulsedmedia.com/clients/index.php/announcements/381 https://pulsedmedia.com/clients/index.php/announcements/381 Mon, 12 Oct 2015 11:31:00 +0000 OpenVPN support has been added to the software stack. This feature is in Beta right now!


If your server has been updated, your welcome page will have a download link to the config package and a very short Windows install guide.
If you would like your server to be updated immediately, please contact support.

]]>
<![CDATA[ruTorrent Updated]]> https://pulsedmedia.com/clients/index.php/announcements/380 https://pulsedmedia.com/clients/index.php/announcements/380 Sun, 11 Oct 2015 11:27:00 +0000
Slow rollout as usual, to make sure there are no widespread issues.

A random selection of servers will be updated shortly to start the rollout process.]]>
<![CDATA[MKVToolNix + Flexget]]> https://pulsedmedia.com/clients/index.php/announcements/379 https://pulsedmedia.com/clients/index.php/announcements/379 Sun, 11 Oct 2015 09:36:00 +0000 http://flexget.com/
Flexget is a multi-purpose automation tool for content like torrents, podcasts, comics, nzbs, etc. It can use many kinds of sources, such as RSS, HTML pages, CSV files and search engines, and it is most useful with watch-directory applications such as rTorrent.

MKVToolnix: https://www.bunkus.org/videotools/mkvtoolnix/
A set of tools for handling Matroska files.

Both have been added to the standard software set.

If these are not yet present on your server but you would like to have them, simply open a support ticket requesting a server update.]]>
<![CDATA[Espoo network issues resolved]]> https://pulsedmedia.com/clients/index.php/announcements/378 https://pulsedmedia.com/clients/index.php/announcements/378 Wed, 07 Oct 2015 18:04:00 +0000 These were short outages, ultimately leading to a 25% loss of network performance.

We first suspected the transit link or the SFP+ transceiver, but the issue was not there.

Finally, it was found in one of the Brocade FESX448-2G interior distribution switches, which admittedly has caused some grief in the past too.

We have moved 1/4 of the nodes from that switch back to the old IDS switch, and rebooted the switch.
We will continue to monitor it, and if it does this again we will swap it for a couple of older-generation switches.

]]>
<![CDATA[Dediseedbox temporarily out of stock]]> https://pulsedmedia.com/clients/index.php/announcements/377 https://pulsedmedia.com/clients/index.php/announcements/377 Sat, 03 Oct 2015 23:03:00 +0000 The situation should be resolved within the next 24 hours.

New servers are already being installed, pending software setup & QA.]]>
<![CDATA[Our site has been updated]]> https://pulsedmedia.com/clients/index.php/announcements/376 https://pulsedmedia.com/clients/index.php/announcements/376 Sat, 03 Oct 2015 15:54:00 +0000
We have just released our new site, which should provide better navigation and present information more clearly.
Visit http://pulsedmedia.com/

Please let us know what you think! :)]]>
<![CDATA[Maintenance of backend API]]> https://pulsedmedia.com/clients/index.php/announcements/375 https://pulsedmedia.com/clients/index.php/announcements/375 Fri, 18 Sept 2015 13:50:00 +0000
Affected are all account functions for seedboxes: new service creation, terminations, and suspend/unsuspend.
These will be checked manually after the API is back online.

After that, there will be intermittent issues for a maximum of 2 hours while we upgrade some of the security protocols on the backend.

]]>
<![CDATA[Super100 Series Refresh, up to 2700 GiB now!]]> https://pulsedmedia.com/clients/index.php/announcements/374 https://pulsedmedia.com/clients/index.php/announcements/374 Fri, 11 Sept 2015 14:39:00 +0000 Pulsed Media Super100 Series Refreshed!

We have refreshed the Super100 seedbox series to offer better value than ever before!
Better performance than ever is also included; Super100+ 2.0 triples the previous RAM & upload slot allocation!

Good discounts for longer payment cycles available!

Here are the new plans; the prices listed are for the quarterly billing cycle:

Super20+: 500 GiB, 100Mbps/20Mbps, 7.99€ a month !
Super100 2.0: 1350 GiB, 1Gbps/100Mbps, 12.99€ a month !
Super100+ 2.0: 2700 GiB, 1Gbps/100Mbps, 24.99€ a month !

Get yours immediately! :)
View more specs at: http://pulsedmedia.com/super100-2.0-seedbox.php]]>
<![CDATA[Super100 series stock status]]> https://pulsedmedia.com/clients/index.php/announcements/373 https://pulsedmedia.com/clients/index.php/announcements/373 Tue, 08 Sept 2015 20:52:00 +0000 This should be fixed in the next couple of hours.

Sorry for the inconvenience.]]>
<![CDATA[Today's billing downtime of ~2½ hrs]]> https://pulsedmedia.com/clients/index.php/announcements/372 https://pulsedmedia.com/clients/index.php/announcements/372 Sun, 06 Sept 2015 12:20:00 +0000 Fortunately, the fix was quick and everything is back to an operational state.]]> <![CDATA[ownCloud rollout to existing users delayed due to software bug]]> https://pulsedmedia.com/clients/index.php/announcements/371 https://pulsedmedia.com/clients/index.php/announcements/371 Mon, 24 Aug 2015 12:32:00 +0000 Hence the setup files are missing for existing users even though updates were run on the server.

A patch is coming as soon as we can set aside some development time, and a normal slow rollout will ensue.

This does not affect new user accounts.]]>
<![CDATA[Espoo: New internal nameservers]]> https://pulsedmedia.com/clients/index.php/announcements/370 https://pulsedmedia.com/clients/index.php/announcements/370 Sun, 16 Aug 2015 18:58:00 +0000 Not anymore - we have now added our own nameservers.
You may switch to them and enjoy the less-than-1ms latency :)

IPs are:
149.5.241.10
149.5.241.11

It's as easy as editing the /etc/resolv.conf file :)
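As a quick sketch of that edit (the stand-in file path below is our own assumption, used so the example is safe to run anywhere; on a live server you would edit /etc/resolv.conf itself, as root):

```shell
#!/bin/sh
# Sketch: point the resolver at the new internal nameservers listed above.
# RESOLV_CONF defaults to a stand-in path for safe demonstration;
# on a real server the target file is /etc/resolv.conf.
RESOLV_CONF="${RESOLV_CONF:-/tmp/resolv.conf.example}"

# Keep a backup of the current file, if one exists.
if [ -f "$RESOLV_CONF" ]; then
    cp "$RESOLV_CONF" "$RESOLV_CONF.bak"
fi

# Write the two nameserver lines.
{
    echo "nameserver 149.5.241.10"
    echo "nameserver 149.5.241.11"
} > "$RESOLV_CONF"

cat "$RESOLV_CONF"
```

Note that on servers where DHCP or a resolvconf service manages /etc/resolv.conf, the file may be regenerated automatically, so the change might need to go into that tool's configuration instead.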

Please note, these are still in testing.]]>
<![CDATA[Espoo network maintenance **Finished]]> https://pulsedmedia.com/clients/index.php/announcements/369 https://pulsedmedia.com/clients/index.php/announcements/369 Thu, 13 Aug 2015 20:06:00 +0000 This announcement will be updated once the maintenance is over :)

Finished
Downtime was only a couple of minutes :)

]]>
<![CDATA[ownCloud coming to a server near you!]]> https://pulsedmedia.com/clients/index.php/announcements/368 https://pulsedmedia.com/clients/index.php/announcements/368 Tue, 11 Aug 2015 15:11:00 +0000
Many servers have already been updated with the new code. You can check for yourself whether setup-owncloud.php exists under the www directory by accessing http://servername.pulsedmedia.com/user-username/setup-owncloud.php
After that, visit setup-owncloud-finish.php to set up symlinks for access to the data directory AND the public/owncloud directory.]]>
<![CDATA[Bitpay suspension lifted]]> https://pulsedmedia.com/clients/index.php/announcements/367 https://pulsedmedia.com/clients/index.php/announcements/367 Tue, 21 Jul 2015 23:01:00 +0000
]]>
<![CDATA[Bitcoin via Bitpay payments temporarily suspended]]> https://pulsedmedia.com/clients/index.php/announcements/366 https://pulsedmedia.com/clients/index.php/announcements/366 Sun, 19 Jul 2015 14:16:00 +0000
They did NOT reach out to us beforehand, and went straight to suspension with just an e-mail sent to us.
Apparently, our website does not comply with their rules - whatever they may be. We've been using Bitpay for a long time to facilitate bitcoin payments, so this is odd.

This results in the bitcoin payment page not loading after clicking the button for bitcoin payment.

We have reached out to Bitpay to get things sorted out. ETA is unknown, and the compliance requirements are unknown.

It is a sad day to see something like this happen with a Bitcoin payment service provider, since one of the biggest supposed benefits of Bitcoin is that no one can halt your payments.

]]>
<![CDATA[Bonus disk quota bug fixed]]> https://pulsedmedia.com/clients/index.php/announcements/365 https://pulsedmedia.com/clients/index.php/announcements/365 Fri, 10 Jul 2015 13:01:00 +0000 This came about in the last update to the 50GiB-at-a-time limitation, so it was only happening for a few days, and only for a few users.

]]>
<![CDATA[Bonus disk quota single application change]]> https://pulsedmedia.com/clients/index.php/announcements/364 https://pulsedmedia.com/clients/index.php/announcements/364 Sun, 28 Jun 2015 13:18:00 +0000 This is because some people would get up to a few TiB at once, and others on the server would not get anything.

This makes the increases smoother, you receive them more stably over time, and everyone on a particular server gets their fair share :)

]]>
<![CDATA[Stats 25/6]]> https://pulsedmedia.com/clients/index.php/announcements/363 https://pulsedmedia.com/clients/index.php/announcements/363 Thu, 25 Jun 2015 00:15:00 +0000
FREE Disk space: 234 532GiB (-8 548GiB)
Active Torrents: 86 382 (+3 224)
FREE Bonus disk space: 136 201GiB (+31 443GiB)

Since the end of April we have almost doubled the amount of FREE bonus disk space given to our users!

]]>
<![CDATA[Downtime: DOS attack]]> https://pulsedmedia.com/clients/index.php/announcements/362 https://pulsedmedia.com/clients/index.php/announcements/362 Tue, 16 Jun 2015 11:05:00 +0000 The DC which hosts our website & DNS blocked our server in order to protect their network.

Only our website and DNS were affected.
We are still investigating this incident, but have already taken steps to minimize future impact if this kind of attack happens again.
Total downtime was less than 6 hours.

]]>
<![CDATA[Bonus disk quota algorithm simplified]]> https://pulsedmedia.com/clients/index.php/announcements/361 https://pulsedmedia.com/clients/index.php/announcements/361 Mon, 08 Jun 2015 11:56:00 +0000 We slightly simplified the bonus disk quota and at the same time increased the bonus amounts significantly.
Previously a final modifier of 70% (instead of 100%) was applied; this final modifier has now been removed.

Now you get extra disk space with the following very simple settings:

Service running time: 1.2% per month (previous 0.7% per month)
Paid in sum: 1% per 50€ (previous 0.7% per 50€)
Refunds remove from this bonus: 1% per 50€ (previous 0.7% per 50€)
Notice: If you have multiple services, the paid-in sum modifier affects all of the services. So in a way, you get that bonus multiplied by as many seedboxes as you have.


Examples
Case 1:
Super20 at 5.99€ a month, running for 12months, no previous services, no refunds.
71.88€ paid in gives 1.44% extra, 12 months gives 14.4%. Bonus diskspace: 15.84% or 36GiB (3GiB per month)

Case 2:
Super100 at 10.99€ a month prepaid for the next 6months, already running for year and half, previous services total sum 200€, no refunds.
Paid sum total: 9.27%, service running time: 21.6%, total: 30.87% or 228GiB (12.67GiB per month)

Case 3:
Super100+ at 16.99€ a month, prepaid for the next 3months, already running for 2 years. Previous services total sum 300€, 50€ refunds.
Paid in sum: 14.17%, service running time: 28.8%, total: 42.97% or 635GiB (26.46GiB per month)

Case 4 - Paid in sum "multiplier":
3xSuper100 at 9.99€ a month, prepaid for next 10 months, already running for 2 months. Previous services total sum: 300€, no refunds.
Paid in sum bonus for EVERY service: 7.20% - each service receives this for the paid in total.
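As an unofficial sketch (not Pulsed Media's billing code), the rates above reduce to a few lines; Case 1 reproduces the 15.84% figure:

```python
def bonus_percent(months_active: float, paid_eur: float, refunded_eur: float = 0.0) -> float:
    """Bonus disk quota per the announced rates: 1.2% per month of
    service running time, +1% per 50 EUR paid in, -1% per 50 EUR refunded."""
    bonus = 1.2 * months_active + (paid_eur - refunded_eur) / 50.0
    return max(bonus, 0.0)  # the bonus cannot go negative

# Case 1: Super20 running 12 months, 71.88 EUR paid in, no refunds.
print(round(bonus_percent(12, 71.88), 2))  # 15.84
```

The same formula reproduces Case 2 as well: 463.76 EUR total paid gives 9.28% and 18 months gives 21.6%.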

]]>
<![CDATA[ESPOO Switch upgrade]]> https://pulsedmedia.com/clients/index.php/announcements/360 https://pulsedmedia.com/clients/index.php/announcements/360 Tue, 02 Jun 2015 00:08:00 +0000 It has been very successful and we already see a dramatic increase in total bandwidth utilization - despite not expecting a big increase in total transit utilization, as it is an IDS.

Furthermore, this was simply to test the switch model; our edge router will be upgraded to the same series in the near future.
Our current edge has insufficient buffers to handle the 1Gbps<>10Gbps transitions well enough, causing slower than expected per-thread transfer speeds. The new model has 8 times as much buffering capability for transmitted data, allowing it to properly time the packets for a smooth dataflow.

We hope the new edge will sustain our bandwidth requirements for a year or more.

]]>
<![CDATA[Stats updates 31/05]]> https://pulsedmedia.com/clients/index.php/announcements/359 https://pulsedmedia.com/clients/index.php/announcements/359 Sun, 31 May 2015 13:28:00 +0000
Free Disk space: 243 080GiB (+57 012GiB)
Active torrents: 83 158 (-823)
FREE BONUS Disk space given out: 104 758GiB (+18 594GiB)

Wow, almost a quarter of a petabyte in just FREE unallocated disk space!]]>
<![CDATA[SSD Seedbox BETA]]> https://pulsedmedia.com/clients/index.php/announcements/358 https://pulsedmedia.com/clients/index.php/announcements/358 Thu, 14 May 2015 20:18:00 +0000
We are starting with 2 seedbox plans: 100GiB and 200GiB.
Both have 1Gbps upload + download, and 10TB or 20TB monthly traffic respectively.

We will run this for at least a few months. If we decide to cancel, the remaining time will be refunded pro rata.

Get SSD Beta 100G - 100GiB Storage, 10TB Monthly Traffic - 10€ a month
Get SSD Beta 200G - 200GiB Storage, 20TB Monthly Traffic - 20€ a month]]>
<![CDATA[Stats updates 11/05]]> https://pulsedmedia.com/clients/index.php/announcements/357 https://pulsedmedia.com/clients/index.php/announcements/357 Mon, 11 May 2015 10:34:00 +0000
Average users per Super100 series server (Super100, Super100+, Storage100+ etc.): 6.6

Current active torrents: 83 981  (+3 355)
Current Free disk space: 186 068 GiB (-7 933GiB)
FREE Bonus disk space given: 86 164GiB (+8 288GiB)
]]>
<![CDATA[Bonus disk & Other Stats for end of April]]> https://pulsedmedia.com/clients/index.php/announcements/356 https://pulsedmedia.com/clients/index.php/announcements/356 Mon, 27 Apr 2015 14:57:00 +0000
Extra FREE bonus disk space given is 77 876GiB at the moment, which equals roughly 29x 3TB disks, or roughly 37x 3TB disks as RAID5 setups.
On average, these users have been given 22.6% of FREE disk space.

Those servers in automation have 80 626 active torrents, and those servers have a total free diskspace capacity of 194 001GiB.

Changes from 8th of March stats:
+24 823 GiB bonus disk space.
+10 242 active torrents.
-5 800 GiB free disk space.


]]>
<![CDATA[Super100+ stock]]> https://pulsedmedia.com/clients/index.php/announcements/355 https://pulsedmedia.com/clients/index.php/announcements/355 Mon, 06 Apr 2015 13:12:00 +0000 5 slots are available to grab immediately.

]]>
<![CDATA[BIG Software update!]]> https://pulsedmedia.com/clients/index.php/announcements/354 https://pulsedmedia.com/clients/index.php/announcements/354 Thu, 02 Apr 2015 10:09:00 +0000 We moved to Nginx serving from userland lighttpd processes.

You can read all about it in our blog! http://blog.pulsedmedia.com/2015/04/massive-update-per-user-lighttpd-nginx/]]>
<![CDATA[Extra bonus disk - and other stats]]> https://pulsedmedia.com/clients/index.php/announcements/353 https://pulsedmedia.com/clients/index.php/announcements/353 Sun, 08 Mar 2015 18:07:00 +0000
Extra disk space given out free of charge is at the moment 53 053GiB. That is the equivalent of roughly 20x 3TB disks!

Across the servers in automation there is a total of 70 384 active torrents, and those servers have a total free disk space capacity of 199 801GiB!

These stats were taken from the bulk of Value, 2012, Super100 and Dragon servers and do not contain all services such as dediseedboxes, 2009+, 2009+ x2 series etc.

]]>
<![CDATA[Stock status]]> https://pulsedmedia.com/clients/index.php/announcements/352 https://pulsedmedia.com/clients/index.php/announcements/352 Fri, 06 Mar 2015 07:47:00 +0000
This will be fixed in the next 2 hours by the addition of a bunch of servers that are already being installed.

Sorry for the inconvenience.]]>
<![CDATA[Illustrated guide to setting up BTSync]]> https://pulsedmedia.com/clients/index.php/announcements/351 https://pulsedmedia.com/clients/index.php/announcements/351 Fri, 27 Feb 2015 16:46:00 +0000 Check it out at: http://blog.pulsedmedia.com/2015/02/btsync-setup-on-seedbox/]]> <![CDATA[Megatools added to the software package]]> https://pulsedmedia.com/clients/index.php/announcements/350 https://pulsedmedia.com/clients/index.php/announcements/350 Thu, 26 Feb 2015 16:58:00 +0000 Megatools has also been added to the tools package. Megatools is a command-line (CLI) utility for accessing the Mega.co.nz service.

List of tools available:
megareg: register & verify a new Mega account
megadf: show your storage usage and quota
megafs: mount Mega storage locally
megasync: sync a directory
megaget: download a single file
megaput: upload a single file
megals: list files on Mega
megamkdir: create a directory in Mega storage
megarm: remove a file from Mega
megamv: move and rename files on Mega

Remember to escape special characters such as ! with a \, for example replace ! with \!

By registering with the megareg tool you benefit from having true control over your encryption keys.

]]>
<![CDATA[pyLoad added]]> https://pulsedmedia.com/clients/index.php/announcements/349 https://pulsedmedia.com/clients/index.php/announcements/349 Thu, 26 Feb 2015 15:43:00 +0000 No automatic user-side configs etc. yet, but the packages are there so you can configure and use it manually.]]> <![CDATA[Hotmail issues]]> https://pulsedmedia.com/clients/index.php/announcements/348 https://pulsedmedia.com/clients/index.php/announcements/348 Mon, 23 Feb 2015 01:37:00 +0000 We are trying to reach Hotmail technical service to get this fixed ASAP.

In the meantime, Hotmail might accept e-mails from us if you add our primary e-mail addresses to your contact list:
billing@pulsedmedia.com
support@pulsedmedia.com
sales@pulsedmedia.com

Also check your spam/junk folder, blocked senders list and spam mail settings if you are expecting e-mail from us but not receiving it.

We are sorry for this situation and we are trying to find a solution to quickly have this fixed.]]>
<![CDATA[Support ticket and other emails]]> https://pulsedmedia.com/clients/index.php/announcements/347 https://pulsedmedia.com/clients/index.php/announcements/347 Sat, 07 Feb 2015 16:05:00 +0000 Thus if you have a support ticket and are expecting a reply, you might want to check via the portal whether a reply was made in the past 30-45 minutes.]]> <![CDATA[Web server upgrade ** COMPLETED]]> https://pulsedmedia.com/clients/index.php/announcements/346 https://pulsedmedia.com/clients/index.php/announcements/346 Sat, 07 Feb 2015 14:50:00 +0000
We will put the billing system into maintenance mode. DNS will be updated, as the new server has a new IP.
You may need to refresh your DNS for quicker access - most should update within 30 minutes. Unfortunately, some ISPs do not follow the RFC specifications and will refuse to update their DNS entries for a few days, causing intermittent access issues.


UPDATE 16:27 GMT+2: Web server upgrade has been completed.
]]>
<![CDATA[BTSync]]> https://pulsedmedia.com/clients/index.php/announcements/345 https://pulsedmedia.com/clients/index.php/announcements/345 Wed, 21 Jan 2015 16:09:00 +0000
At its simplest, it only requires the following after logging in to the shell:

wget http://download.getsyncapp.com/endpoint/btsync/os/linux-x64/track/stable
tar -zxvf stable
./btsync --dump-sample-config > sync.conf
mkpasswd
(copy the generated password to the clipboard for pasting into the config)

vim sync.conf
Edit the config file as necessary: press the Insert key to enter insert mode, Esc to exit insert mode, and :wq to write and quit when done.
Set the web UI port to something other than the default.
REMEMBER to set a username + password (hash) and disable login without a password.

./btsync --config sync.conf

Then open your browser at the server URL and your defined port, log in using the credentials you put into the conf file, and set up your shared folders :)

We will provide everything ready to go in a future PMSS release, but if you need the feature right now, it's this simple!]]>
<![CDATA[Espoo DC gateway issues [RESOLVED]]]> https://pulsedmedia.com/clients/index.php/announcements/344 https://pulsedmedia.com/clients/index.php/announcements/344 Mon, 19 Jan 2015 03:47:00 +0000 This announcement will be updated as inspection proceeds.

Update #1: Issue has been tracked to a routing issue which is being investigated.

Update #2 & Resolution: The initial remote misdiagnosis made it look like routing was at fault. The actual cause was a documentation mistake made by someone, resulting in a complete blackout of the site. Actions will be taken to ensure this does not reoccur.
If your server is still down - please open a ticket immediately. It will be sorted out as soon as possible, and SLA will be applied.]]>
<![CDATA[Upcoming web server and switch upgrades.]]> https://pulsedmedia.com/clients/index.php/announcements/343 https://pulsedmedia.com/clients/index.php/announcements/343 Fri, 16 Jan 2015 11:01:00 +0000 The exact schedule is still unknown, but it will be announced on our Twitter feed as well as here shortly beforehand.

At the DC a switch upgrade is coming, 2 IDS switches will be replaced by a new higher performance switch. Expected downtime is mere minutes per server.

]]>
<![CDATA[Bonus disk quota ratios increased]]> https://pulsedmedia.com/clients/index.php/announcements/342 https://pulsedmedia.com/clients/index.php/announcements/342 Sat, 10 Jan 2015 13:25:00 +0000 We have slightly increased the ratios for bonus disk quota.
We doubled up the amount for euros paid total.

Current rates are:
Service active time: 0.625% / Month
Total history paid: 0.625% / 25euro
Refunds deduct from this: 0.0625% / 25euro (bonus disk cannot be negative, however)

We have quadrupled the amount for every euro paid AND increased the base ratio :)

Original numbers were:
Service active time: 0.5% / Month
Total history paid: 0.5% / 100euro or 0.125%/25euro
Refunds deducted: 0.5% / 25euro

Please note, bonus disk is only applied if there is sufficient free space on your server and is in no way guaranteed. 
We reserve the right to change these numbers at any time.

]]>
<![CDATA[Having slow speeds?]]> https://pulsedmedia.com/clients/index.php/announcements/341 https://pulsedmedia.com/clients/index.php/announcements/341 Sat, 03 Jan 2015 13:36:00 +0000 We've been getting frequent tickets since the change of the year about lower than usual speeds on a selection of our servers.
These have all been at a specific 3rd-party datacenter.

If you are experiencing slower than usual speeds, please open a ticket with following information:

What speeds are slow? FTP, Torrent, SFTP?
Multi- or Singlethreaded?
Since when has this happened?
What kind of speeds are you experiencing and do these vary over the day?

The more information, the better - a simple "I have slow speeds" doesn't tell us anything at all.

We will jump right on it, and the more information we have, the easier it is for us to find the cause and potentially fix it.

]]>
<![CDATA[Happy Holidays!]]> https://pulsedmedia.com/clients/index.php/announcements/340 https://pulsedmedia.com/clients/index.php/announcements/340 Wed, 24 Dec 2014 17:08:00 +0000 We wish You Happy Holidays!


During the holidays support remains available, albeit with skeleton staff and slightly slower reply times than usual.]]>
<![CDATA[Maintenance & upgrades over at Espoo DC]]> https://pulsedmedia.com/clients/index.php/announcements/339 https://pulsedmedia.com/clients/index.php/announcements/339 Tue, 09 Dec 2014 23:58:00 +0000
Almost all nodes are now online; a couple of nodes remain. If your server is not online within the next few hours, please open a ticket.]]>
<![CDATA[Reminder: Espoo DC Scheduled Downtime *TODAY* 9th Of December]]> https://pulsedmedia.com/clients/index.php/announcements/338 https://pulsedmedia.com/clients/index.php/announcements/338 Tue, 09 Dec 2014 10:47:00 +0000
PDS Finland nodes will be down, a portion of Super100, Value, 2012 and Dragon series will be down as well.

This time is taken to do some electrical distribution upgrades and maintenance to the DC.

We are sorry that it forces a downtime, especially one which is going to take this many hours.

Servers will be brought back online one by one after the grid upgrades are done. If your server is not online by 01:00 GMT on the 10th of December, please open a ticket. Please refrain from opening a ticket before that time to lessen the burden on our support staff.

How do you know if your server is one of the affected ones? You can do a traceroute: if you see the Cogent HEL pop or gw-es1, your node is one of the affected ones.
If your node name begins with vnode, yours is one of the affected ones.

]]>
<![CDATA[Electrical work and upgrades scheduled 9th of December]]> https://pulsedmedia.com/clients/index.php/announcements/337 https://pulsedmedia.com/clients/index.php/announcements/337 Tue, 02 Dec 2014 12:39:00 +0000
An outage lasting from approximately 15:00 GMT to 22:00 GMT will affect a portion of PDS servers and seedbox servers.
Only Finnish datacenter servers will be affected, French datacenter servers will continue operating normally during this maintenance & upgrade period.]]>
<![CDATA[Stock status]]> https://pulsedmedia.com/clients/index.php/announcements/336 https://pulsedmedia.com/clients/index.php/announcements/336 Tue, 18 Nov 2014 19:13:00 +0000 Sorry, we have been slightly short on capacity as of late; we are trying to keep some availability at all times.

]]>
<![CDATA[Stock status]]> https://pulsedmedia.com/clients/index.php/announcements/335 https://pulsedmedia.com/clients/index.php/announcements/335 Sat, 15 Nov 2014 01:21:00 +0000 <![CDATA[Free slots status]]> https://pulsedmedia.com/clients/index.php/announcements/334 https://pulsedmedia.com/clients/index.php/announcements/334 Sun, 09 Nov 2014 04:17:00 +0000
We just received a hardware shipment with the required RAM modules to set up new hardware nodes, so we will have slots available around Tuesday-Wednesday once we finish stress testing the new batch of nodes.


]]>
<![CDATA[Free seedbox slots added]]> https://pulsedmedia.com/clients/index.php/announcements/333 https://pulsedmedia.com/clients/index.php/announcements/333 Mon, 20 Oct 2014 22:21:00 +0000 We hope there are now sufficient resources that slots are always available.]]> <![CDATA[Free seedbox offer updated]]> https://pulsedmedia.com/clients/index.php/announcements/332 https://pulsedmedia.com/clients/index.php/announcements/332 Sat, 18 Oct 2014 17:11:00 +0000 View the updated details at: http://pulsedmedia.com/free-seedbox.php

]]>
<![CDATA[Out of curiosity]]> https://pulsedmedia.com/clients/index.php/announcements/331 https://pulsedmedia.com/clients/index.php/announcements/331 Sat, 18 Oct 2014 13:39:00 +0000
We collected stats from just 50 servers and the result was rather interesting: 39 577 torrents loaded onto these 50 servers. We never expected to see such a high number - an average of 791.54 torrents per server!

The sample servers were a mixed selection from the Value, Super100, 2012 and Dragon series seedboxes.]]>
<![CDATA[Super100 - SUPER PRICE]]> https://pulsedmedia.com/clients/index.php/announcements/330 https://pulsedmedia.com/clients/index.php/announcements/330 Fri, 10 Oct 2014 21:16:00 +0000
For the moment, the Super100 seedbox with a 2-year subscription is just 8.99€ per month!
The Super100+ is likewise just 15.99€ per month!

We don't know how long these prices will last - so take advantage while we still have spare capacity to sell!

]]>
<![CDATA[Dragon 1Gbps Seedbox Launch]]> https://pulsedmedia.com/clients/index.php/announcements/329 https://pulsedmedia.com/clients/index.php/announcements/329 Wed, 08 Oct 2014 17:38:00 +0000 http://pulsedmedia.com/1gbps-dragon-box.php

A new 1Gbps Seedbox offering with awesome specs and pricing has been unveiled today!

Limited quantities available initially - more stock will be added within several weeks.]]>
<![CDATA[New servers online]]> https://pulsedmedia.com/clients/index.php/announcements/328 https://pulsedmedia.com/clients/index.php/announcements/328 Wed, 08 Oct 2014 15:03:00 +0000 We have now limited stock available for all seedbox ranges, super100, value, 2012 and dedicated.

]]>
<![CDATA[Stock status]]> https://pulsedmedia.com/clients/index.php/announcements/327 https://pulsedmedia.com/clients/index.php/announcements/327 Wed, 08 Oct 2014 02:15:00 +0000 We should have some slots available for Super and Value series (no starter) in the next 36hrs.]]> <![CDATA[Dedicated seedboxes]]> https://pulsedmedia.com/clients/index.php/announcements/326 https://pulsedmedia.com/clients/index.php/announcements/326 Mon, 06 Oct 2014 12:36:00 +0000
Starting from just 10.49€ per month, they are an extremely cost-effective, high-performance option for your seedbox needs.

See the new offers at: http://pulsedmedia.com/managed-dediseedbox.php

]]>
<![CDATA[Super100 series temporarily out of stock]]> https://pulsedmedia.com/clients/index.php/announcements/325 https://pulsedmedia.com/clients/index.php/announcements/325 Mon, 22 Sept 2014 16:22:00 +0000 <![CDATA[Espoo scheduled maintenance 21/09/2014]]> https://pulsedmedia.com/clients/index.php/announcements/324 https://pulsedmedia.com/clients/index.php/announcements/324 Wed, 17 Sept 2014 00:04:00 +0000 This maintenance is scheduled for 21/09/2014 between 03:00 CEST and 07:00 CEST. Outage/service interruption should not exceed 45 minutes.]]> <![CDATA[New rtorrent version]]> https://pulsedmedia.com/clients/index.php/announcements/323 https://pulsedmedia.com/clients/index.php/announcements/323 Sat, 13 Sept 2014 14:48:00 +0000
Ironically, these regressions only mattered because of our two-tiered redundancy on rTorrent: rTorrent put in wrong process names, and our redundancy couldn't find these instances anymore.

It took a bit of time to update, because the version where this regression was fixed has other regressions, and rTorrent would fail to compile, sometimes ending in a never-ending loop trying to compile it - however, it is now done and many servers are already updating to the newer version.

Most of the nodes with the previous rTorrent version should shortly be updated with this new one.]]>
<![CDATA[Prorata credit / service time clarifications, service pricing changes]]> https://pulsedmedia.com/clients/index.php/announcements/322 https://pulsedmedia.com/clients/index.php/announcements/322 Mon, 08 Sept 2014 11:58:00 +0000 It's time to make a clarification to the prorata rules with regard to the latest campaigns.

First of all, it is easiest to give prorata as credit. Note that credit is NON-REFUNDABLE. Many people ask us to send back the credit sum - but we are not a bank and we have never promised this service. Furthermore, there are potential money laundering laws etc. concerned in these things, as refunding credit would amount to operating a money service business. Bottom line: we will never refund or send your credit balance anywhere.

Prorata as credit is quickest to calculate, and you can choose what you spend it for.
However, transfer of time is also available upon request.


When can you get prorata?
When you UPGRADE plan value (price) wise. No other circumstances.
Not when opening a new service with a coupon code, or downgrading to smaller service.

Example:
You are on Value Small @ 9.99€ per month, and you take Value Starter with 10% discount code: No prorata, this is a downgrade.

Example 2:
You are on Value Starter @ 7.49€ per month, and you take Value Small @ 9.99€ a month for upgrade: You can get prorata. Just request via billing if this is what you want.

Example 3:
You are on Value Starter @ 7.49€ per month, and you take Value Starter @ 6.99€ and 20% discount for 5.59€ monthly pricing. No prorata, this is a downgrade value wise.
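The eligibility rule in the examples above can be sketched as a tiny helper (hypothetical, for illustration only - not Pulsed Media's billing code; `prorata_eligible` is an assumed name):

```python
def prorata_eligible(current_monthly_eur: float, new_monthly_eur: float) -> bool:
    """Prorata is available only when the new plan is more expensive
    (a value-wise upgrade); coupons and downgrades do not qualify."""
    return new_monthly_eur > current_monthly_eur

# Example 2: Value Starter 7.49 -> Value Small 9.99 is an upgrade.
print(prorata_eligible(7.49, 9.99))   # True
# Example 3: 7.49 -> 5.59 (discounted Starter) is a downgrade, value wise.
print(prorata_eligible(7.49, 5.59))   # False
```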

 

Service pricing changes

Service prices change all the time; today we also adjust them based on available free slots, not just what servers, bandwidth etc. cost today.
The general trend is downwards per GB of storage, less so for bandwidth. However, there is an absolute maximum to how many users can be put on a single server.

We have a lot of users. This means that if we lower the price of the most popular service by 1€, you will not get auto-adjusted, because that would quickly mean VERY big changes in overall sales in a month: we simply would not have the budget for it.
Furthermore, the server you are on has not become any cheaper; newer servers might be cheaper, but the cost of operating your service has not changed - only for future services may it have changed.

The same goes vice versa: if we increased pricing, almost no one would accept being auto-adjusted; many cancellations would occur, many complaints etc. So why demand lower pricing when prices fluctuate, if you are not ready to accept a price increase for the very same reasons?

Hence, all pricing is grandfathered. It remains the same as when you ordered the service.

Most of the time, lower pricing and special campaigns are about getting new users in: the more we grow, the more economies of scale we can offer, e.g. our own DC, where internal traffic grows with the number of users.

These are also some of the reasons why prorata is not available on downgrades.

]]>
<![CDATA[French DC Contract changes]]> https://pulsedmedia.com/clients/index.php/announcements/321 https://pulsedmedia.com/clients/index.php/announcements/321 Sun, 07 Sept 2014 15:07:00 +0000 Furthermore, starting next month they are going to force automatic billing, with no way to opt out.
We consider this unacceptable, since in the past they have also charged without prior consent, and there have been constant billing issues.

This means that a selection of servers will be migrated to other servers in the next 4-8 weeks.
Users on each server will be contacted prior to any changes.]]>
<![CDATA[rTorrent / libtorrent update to 0.9.4/0.13.4]]> https://pulsedmedia.com/clients/index.php/announcements/320 https://pulsedmedia.com/clients/index.php/announcements/320 Sun, 31 Aug 2014 15:52:00 +0000
With a slow rollout we can catch regressions, and if there are any, they will only affect a minority of users before being fixed.]]>
<![CDATA[1Gbps / 2012 seedboxes back in stock]]> https://pulsedmedia.com/clients/index.php/announcements/319 https://pulsedmedia.com/clients/index.php/announcements/319 Wed, 06 Aug 2014 10:40:00 +0000
The capacity added consists of ultra-high-performance dual-CPU nodes.
To celebrate this, for just one week we are running a nice special!
For Large and XLarge seedboxes, get a 25% one-time discount on ANY subscription cycle! Use coupon code: 1408nstock-1gig
Valid for just 1 week and only a few slots, so act now!]]>
<![CDATA[Hotmail/Live.com e-mail users: Please read this]]> https://pulsedmedia.com/clients/index.php/announcements/318 https://pulsedmedia.com/clients/index.php/announcements/318 Fri, 18 Jul 2014 15:28:00 +0000 We have made some changes which should help with this, along with opening a ticket with MSN.

You can also try adding our important e-mail addresses to your contact list and safe senders list; that may help:
billing@pulsedmedia.com
sales@pulsedmedia.com
support@pulsedmedia.com

It seems MSN also uses your contact list in determining which e-mail to let through.

Remember to check your junk mail folder, blocked senders list and your junk mail settings, as these may have an effect as well.

]]>
<![CDATA[Hotmail/Outlook/Live (Microsoft) email users - ATTENTION]]> https://pulsedmedia.com/clients/index.php/announcements/317 https://pulsedmedia.com/clients/index.php/announcements/317 Sun, 06 Jul 2014 20:53:00 +0000 silently.
They do not go to spam or junk, and there is no bounce message giving an error either.

This does not seem to affect all Microsoft email service users globally, but it does affect some of them.

In case you are not receiving e-mail from us, please let us know and, if at all possible, change the e-mail address you use with us.

]]>
<![CDATA[Couple Dell Enterprise servers available]]> https://pulsedmedia.com/clients/index.php/announcements/316 https://pulsedmedia.com/clients/index.php/announcements/316 Thu, 12 Jun 2014 10:36:00 +0000 We have a couple of Dell Enterprise nodes available for immediate sale; the nodes are already tested, racked and stress tested. Delivery within 3 business days.

Configuration options:
CPU: 1 or 2x Intel Xeon L5520 4x2x2.26GHz
RAM: 12GB or 24GB ECC
Disks: 3x 500GB, 1TB, 2TB, 3TB or 4TB
Network: 100Mbps guaranteed, 1Gbps Burst, 1Gbps Guaranteed

Pricing examples:
CPU: 1xL5520, RAM: 12GB ECC, 3x500GB, 100Mbps: 42,99€ a month
CPU: 1xL5520, RAM: 12GB ECC, 3x2TB, 100Mbps: 57,99€ a month
CPU: 2xL5520, RAM: 24GB ECC, 3x500GB, 100Mbps: 59,99€ a month
CPU: 2xL5520, RAM: 24GB ECC, 3x3TB, 100Mbps: 142,49€/Mo Quarterly term, or 154,99€ a month

Please contact sales for inquiries and orders.

]]>
<![CDATA[Pricing updates]]> https://pulsedmedia.com/clients/index.php/announcements/315 https://pulsedmedia.com/clients/index.php/announcements/315 Wed, 04 Jun 2014 20:57:00 +0000
Value series continues to offer superior value for your money with a good range of plans. Super100 series offers a bit more punch with traffic limits.

]]>
<![CDATA[Automatic seedbox provisioning beta testing]]> https://pulsedmedia.com/clients/index.php/announcements/314 https://pulsedmedia.com/clients/index.php/announcements/314 Sun, 25 May 2014 23:15:00 +0000 We estimate a few weeks before taking it into use with normal paid seedboxes.

Features completed so far are:
  • Automatic account creation
  • Account creation: Server selection, based on free resources
  • Account creation: Strong random password generator
  • Account creation: welcome e-mail sent automatically
  • Account creation: username picked and sanitized automatically
  • Service details page: Direct link to log in to seedbox automatically
  • Account suspension
  • Account suspension: E-mailing the reason of suspension
  • Account Unsuspend
  • Account Unsuspend: Warn staff if unsuspension failed
  • Automatic recheck on server resources
New features such as automatic remote server updates on schedule, server monitoring, account termination and password reset are all rather trivial to develop at this point, since the hard work (the foundation and framework) has been completed.]]>
<![CDATA[bad weeks happen even in DCs ...]]> https://pulsedmedia.com/clients/index.php/announcements/313 https://pulsedmedia.com/clients/index.php/announcements/313 Tue, 20 May 2014 16:36:00 +0000
Many months went by without a single disk failure, but the last 1½-2 weeks have been maddening!
During these 2 weeks 4.6% of all disks failed. Yes, a whopping 4.6%!
This happened across multiple disk models, manufacturers and includes both SSD and HDD.

On top of this, new storage arrays failed to perform due to a firmware/driver issue with LSI MegaRAID on Linux; we lost a lot of precious time on that.

I'm happy to say that the total damage is rather small though, aside from some maddeningly long downtime for a few nodes.
During this ordeal we only lost about 25.15TiB of data capacity, which was approximately 60% utilized, resulting in just 15.09TiB of actual data lost.
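A quick worked check of the figures above (values taken from the announcement):

```python
capacity_lost_tib = 25.15  # array capacity lost during the failures
utilization = 0.60         # arrays were ~60% utilized
actual_data_lost_tib = capacity_lost_tib * utilization
print(round(actual_data_lost_tib, 2))  # 15.09
```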

Right now a total of 50.31TiB (~56TB of "Sales" capacity) is still rebuilding, and all other arrays with failed disks during this period have already finished their resync / recovery cycles.

Everything should be back in order after the new arrays finish stress testing in 24-48hrs.

Research work on clustering the storage to avoid issues during these situations has already started and new storage arrays are already in the plans.
Systems with local data drives were less affected, and systems with complete local storage were completely unaffected. Data graphs show only a 20% dip in outgoing bandwidth from this period.]]>
<![CDATA[Client portal login fixed]]> https://pulsedmedia.com/clients/index.php/announcements/312 https://pulsedmedia.com/clients/index.php/announcements/312 Sun, 11 May 2014 12:50:00 +0000 The issue was caused by a new extra security measure.]]> <![CDATA[Issue resolved]]> https://pulsedmedia.com/clients/index.php/announcements/311 https://pulsedmedia.com/clients/index.php/announcements/311 Tue, 06 May 2014 21:29:00 +0000 <![CDATA[Issue with one of the providers]]> https://pulsedmedia.com/clients/index.php/announcements/310 https://pulsedmedia.com/clients/index.php/announcements/310 Tue, 06 May 2014 20:13:00 +0000 We are having an issue with one of the providers we have been utilizing, and we have been trying to reach them for hours to no avail. They usually reply within a couple of hours, even on holiday weekends.

In the past 4-6 months there have been many outages with them, but it has always been a single server at a time. Today they shut down all servers without ANY warning, and we are trying to reach someone there to get the nodes back online, which should happen by tomorrow morning at the latest.

We are already working to provide replacements in case this issue cannot be resolved amicably with this German provider.

]]>
<![CDATA[Internal networking: Free 1Gbps between all nodes]]> https://pulsedmedia.com/clients/index.php/announcements/309 https://pulsedmedia.com/clients/index.php/announcements/309 Sun, 04 May 2014 21:23:00 +0000 Espoo Internal Network

Some of you may not realize this, but there is an internal network at Espoo to which all the nodes are connected.
This network is built with fast 10Gbps switches, and all client nodes connect to it via 1Gbps.

You may utilize this internal network to transfer data from node to node without compromising public internet speeds, for example to run a separate database server and a separate web frontend server.

This network has very little congestion, so speeds will be great. All links within the internal network operate at less than 20% average usage.

To use the internal network for your data transfers, just check with ifconfig what interfaces there are; the one with an address beginning with 10. is your internal network IP. This is quite likely to remain the same between reboots as well.
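As a rough illustration, that interface check can also be scripted. A minimal sketch in Python; the interface names and addresses here are made-up examples, and on a real node you would gather the pairs from ifconfig or `ip -o -4 addr show` output:

```python
import ipaddress

def internal_addresses(addrs):
    """Return the (interface, ip) pairs inside 10.0.0.0/8,
    i.e. the internal Espoo network addresses."""
    net = ipaddress.ip_network("10.0.0.0/8")
    return [(iface, ip) for iface, ip in addrs if ipaddress.ip_address(ip) in net]

# Hypothetical data for one node:
sample = [("eth0", "198.51.100.7"), ("eth1", "10.0.3.42"), ("lo", "127.0.0.1")]
print(internal_addresses(sample))  # [('eth1', '10.0.3.42')]
```

Using the ipaddress module avoids false matches that a plain string prefix check could allow.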
]]>
<![CDATA[Scheduled maintenance for Espoo Network]]> https://pulsedmedia.com/clients/index.php/announcements/308 https://pulsedmedia.com/clients/index.php/announcements/308 Thu, 24 Apr 2014 22:05:00 +0000
]]>
<![CDATA[New PDS France range of servers]]> https://pulsedmedia.com/clients/index.php/announcements/307 https://pulsedmedia.com/clients/index.php/announcements/307 Thu, 24 Apr 2014 17:48:00 +0000 Limited quantities for now, but fast setups.

Starting from just 19,99€ a month for 100Mbps servers without traffic limits! :)

Check them out at: http://pulsedmedia.com/personal-dedicated-servers-france.php]]>
<![CDATA[Outage today]]> https://pulsedmedia.com/clients/index.php/announcements/306 https://pulsedmedia.com/clients/index.php/announcements/306 Mon, 24 Mar 2014 10:24:00 +0000
This time the attack was a brute-force attack from Serbia; since WHMCS has neither built-in brute-force detection nor rate limits, it eventually led to a resource-exhaustion DoS.

We have tightened security, made the appropriate reports, and are planning further enhancements to resource control and brute-force detection.]]>
<![CDATA[Short outage on one of the storage arrays]]> https://pulsedmedia.com/clients/index.php/announcements/305 https://pulsedmedia.com/clients/index.php/announcements/305 Tue, 18 Mar 2014 19:56:00 +0000 This left a couple dozen nodes needing a hard reboot, which caused a short 15-20 minute downtime for the nodes in question.

If your node is still down, please open a support ticket as soon as possible.]]>
<![CDATA[Short website outage]]> https://pulsedmedia.com/clients/index.php/announcements/304 https://pulsedmedia.com/clients/index.php/announcements/304 Fri, 14 Mar 2014 03:33:00 +0000 Server security was not compromised; it was only resource exhaustion.

Our website is fairly heavy and runs on rather old hardware (if it ain't broken, don't fix it!), so the fast pace of attempts to find an exploitable security vulnerability led to heavy CPU and disk usage. When our automated backoffice routines kicked in on schedule, the combination caused resource exhaustion once swapping began.

We will be putting resource limitations in place to avoid this happening again, along with limiting maximum pageloads per client, and ultimately a systems upgrade to give an even higher margin for heavy resource consumption.

Sorry for the inconvenience.]]>
<![CDATA[DIY Blade update]]> https://pulsedmedia.com/clients/index.php/announcements/303 https://pulsedmedia.com/clients/index.php/announcements/303 Mon, 10 Mar 2014 03:13:00 +0000 Earlier models were too weak, with poor layer-to-layer bonding etc., so we had to make it significantly sturdier for a steady print.
The downside of making it sturdier is that printing a complete blade at prototype quality now takes 12 hours. For production we need to move to a smaller nozzle size, smaller layer height, higher infill etc., which leads us to believe we are now looking at well over 24 hours of printing time *per blade* in production.
That means we need significantly more printers operating 24/7 when production time arrives.

Next up is finding out how well the motherboards sit in the blade, how the attachments go etc., and then printing a final production-quality blade as the final early prototype before moving on to the 4U chassis final design: actual physical mockups instead of just renderings.

There are still many, many iterations to go. For example, we realized we need to add a 5V supply to the magnetic connectors as well, which also means we need to switch ground instead of +12V for the main power supply, which necessitates some electrical redesign work. This is because the picoPSU pins are typically rated for rather low amperage, and we don't want to risk running the disk +5V lines through them: each 3.5" disk requires 0.75A, so dual disks need 1.5A, and we are not sure those tiny pins are rated for more than 1A.
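That current budget is easy to sanity-check with a quick sketch; the 0.75A-per-disk draw is from the figures above, while the 1A per-pin rating is our assumption:

```python
AMPS_PER_DISK_5V = 0.75  # +5V draw per 3.5" disk, per the figures above
PIN_RATING_AMPS = 1.0    # assumed conservative rating for a single picoPSU pin

def five_volt_budget(disks, pins=1):
    """Return the total +5V draw and whether it exceeds the pin rating."""
    draw = disks * AMPS_PER_DISK_5V
    return draw, draw > pins * PIN_RATING_AMPS

print(five_volt_budget(2))  # (1.5, True): dual disks exceed a single 1A pin
```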



]]>
<![CDATA[IP Geolocation]]> https://pulsedmedia.com/clients/index.php/announcements/302 https://pulsedmedia.com/clients/index.php/announcements/302 Fri, 28 Feb 2014 07:54:00 +0000 IP geolocation is set correctly for the IP block, but 3rd-party databases still show old information. Only these 3rd parties have the power to update it, and we do not know when it will start showing correctly in most 3rd-party geolocation databases.

So currently the location displays as US most of the time, while some databases show it as Germany or Switzerland.

Sorry for the inconvenience.]]>
<![CDATA[HELLO WORLD]]> https://pulsedmedia.com/clients/index.php/announcements/301 https://pulsedmedia.com/clients/index.php/announcements/301 Wed, 12 Feb 2014 14:24:00 +0000
TODAY, it said its HELLO WORLD to the network for the very first time with:

temperature: 24 °C -- humidity: 33 %


IT'S ALIIIVEEE!!!

It's a milestone achievement in the development process; it's good to continue from here and add the required features.

]]>
<![CDATA[Affiliate program]]> https://pulsedmedia.com/clients/index.php/announcements/300 https://pulsedmedia.com/clients/index.php/announcements/300 Tue, 04 Feb 2014 13:47:00 +0000
The default payment plan is 7.5% from recurring payments, for the whole lifetime of the referred customer, any plan they purchase now or in the future. Top affiliates also get a bump to 8.5%.

It's as easy as 1...2...3 to start using our affiliate system!
Register with our billing system if you haven't done so already; after logging in, click the affiliate tab. There you will find your links etc.
Post the links on your website, and you are doing affiliate marketing :)

]]>
<![CDATA[PDS14-2G Backlog caught up! Super100 availability increased]]> https://pulsedmedia.com/clients/index.php/announcements/299 https://pulsedmedia.com/clients/index.php/announcements/299 Tue, 04 Feb 2014 00:08:00 +0000
Super100 availability has increased, and new capacity is coming this month with the addition of more than 80 slots worth of nodes!
The prices reflect the fact that we are now moving fast towards our own hardware in Super100.


]]>
<![CDATA[PDS14 8G and 16G Backlog caught up!]]> https://pulsedmedia.com/clients/index.php/announcements/298 https://pulsedmedia.com/clients/index.php/announcements/298 Fri, 31 Jan 2014 19:34:00 +0000 After many, many months of huge backlogs it's a sweet relief to finally catch up. In fact, we even have a couple of spares left over! :O

So a couple of PDS14-8G servers are available for order as of now :)]]>
<![CDATA[Dual Opteron Hexcore 16G and Dual Xeon Quadcore Servers available]]> https://pulsedmedia.com/clients/index.php/announcements/297 https://pulsedmedia.com/clients/index.php/announcements/297 Tue, 28 Jan 2014 12:01:00 +0000 Tested, racked and ready for provisioning no later than 21st of February.

Disk options are 1-3x 3.5" HDDs; sizes available are 1TB, 2TB and 3TB for SATA 7200RPM, and 146GB for SAS 15k RPM.
Network options: 1Gbps best-effort, or 1Gbps business (full 1Gbps 24/7 guaranteed).

Pricing starts at 75€ a month with a single 1TB drive and 1Gbps best-effort.
1Gbps business + 3x 3TB SATA 7200RPM for only 349,90€ a month.

Contact sales@pulsedmedia.com for a quote on your desired specification!

Dual Xeon L5520 72GB servers are also available upon order, with up to 4x 3.5" SATA 7200RPM drives; lead time for delivery is 4 weeks freight + 1 week for final testing.
Pricing for these starts at 99,90€ a month with a single 1TB drive, or 4x 3TB for 199,90€ a month. Business 1Gbps + 4x 3TB for 399,90€ a month.

Both models have IPMI available upon request.]]>
<![CDATA[Bitcoin as a payment method]]> https://pulsedmedia.com/clients/index.php/announcements/296 https://pulsedmedia.com/clients/index.php/announcements/296 Tue, 28 Jan 2014 09:37:00 +0000
We've accepted Bitcoin for many years now; during payment you just need to choose it to pay using bitcoins instead of PayPal or wire transfer.

Payments are handled by BitPay at the current exchange rate.]]>
<![CDATA[Redirecting efforts from Smart servers to Bare hardware (PDS series) and lessons learned]]> https://pulsedmedia.com/clients/index.php/announcements/295 https://pulsedmedia.com/clients/index.php/announcements/295 Sun, 29 Dec 2013 04:27:00 +0000 What we tried to accomplish with the PDS series and our own DC was essentially Super Smart Servers. We've been working furiously to make it a reality for the past 6 months, but never got to realize even half of it, while the backlog kept piling up. This has been to date the most expensive project Pulsed Media has undergone, and the most extensive as well, encompassing work from electronics, power distribution and networking all the way to the actual motherboards, software, storage etc., and I'm sad to say we have to change course.

For the time being, we are going to redirect our efforts to bare hardware setups. This means local drives, PXE setup etc.: things to redevelop in our infrastructure. It will *greatly* simplify our setup and stabilize things.

What is a Smart Server then?
Basically: in the cloud era, virtual private servers are deployed from SAN, which provides absolutely painless management, reinstalls, migrations from one HW node to another etc. There is no server provider with such features for bare hardware. A Smart Server essentially provides bare hardware with the flexibility of a VPS and the performance of a dedicated server. This is what we tried to achieve. Some of the proof-of-concept work was done all the way back in 2011.

The biggest thing we needed was to achieve storage flexibility, scalability and steady performance. We wanted to start simple and build our way up as clustering software matured to a point where we could utilize it. We hadn't realized there is no such thing as simple and cost-effective in the storage industry.

Reliability, performance and cost are all important to us; in order to compete with the big vendors we needed to achieve exemplary storage efficiency. Our plan was therefore to use an array of inexpensive SATA drives with an SSD caching layer. Naturally, with SSD capacity equal to 1% of the pool size, we expected a minimum 1% hit rate at SSD performance levels (40k+ IOPS, 400MB/s+ in our maths). Nothing could be further from the truth.
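That expectation can be put into a simple weighted-service-time model. A sketch with assumed figures: the 40k IOPS per SSD is from the text above, while the ~1000 IOPS HDD array (roughly ten 7200RPM disks at ~100 IOPS each) is a hypothetical baseline. It shows why a small cache only pays off if the algorithm targets the hottest blocks well:

```python
def blended_iops(hit_rate, ssd_iops=40_000, array_iops=1_000):
    """Average request rate when hit_rate of requests are served by SSD
    and the rest by the HDD array (weighted average service time)."""
    avg_service_time = hit_rate / ssd_iops + (1 - hit_rate) / array_iops
    return 1 / avg_service_time

print(round(blended_iops(0.01)))  # 1010: a 1% hit rate barely helps
print(round(blended_iops(0.50)))  # 1951: even 50% only roughly doubles it
```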

SSDs are a joke in the server environment.

I'm serious. They are an absolute joke. Their failure rate is so insanely high that at the slightest issue with a node, the instant assumption is that the SSD has failed. Even when it's brand new, out of the box and has been in use for less than 10 minutes. Of course, many of them do work, but their failure rate is stupendously high.

SSD performance in a big array and as a cache is pathetic.

The bottom line is that almost no caching software has any kind of sane algorithm behind it; it's basically random blocks in the cache unless you have SSD caching in the range of 5%+. And even when the correct blocks are cached, the SSDs slow to a crawl because no caching software implements TRIM/DISCARD functionality. They mystically expect the server admin to do that for them (?!?). This is true for the convenient methods suitable for our use; we couldn't use something like bcache, which needs a reformat to take into use, potentially leading to reliability issues. Reliability is a big question when you are running potentially hundreds of end users from a single array (20+ disks with big SSD caches).

SSDs are good if you can dedicate them fully to a task, like a RAID10 array of SSDs for DBs, or in desktop computers; in those scenarios they are absolutely brilliant.

We tried ZFS L2ARC, EnhanceIO and Flashcache, among others. Lastly we tried autonomous tiering, but CPU utilization + latency became an issue with that: performance was simply brilliant until a core was fully utilized, but since the software is not properly multi-threaded and apparently uses CPU inefficiently, it didn't work for a big array with a random access pattern. For sequential loads ZFS performance was absolutely the best we've seen; however, ZFS falls flat on its face on random access, and the Linux implementation on reliability as well.

A further issue was that all but L2ARC wanted to route all writes through the SSDs, whether or not in writeback mode. When you are expecting continuous write loads of up to 300MB/s+, that becomes an issue.
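To put that write load in perspective, a back-of-envelope endurance calculation; the 300MB/s figure is from the text, while the 480GB cache SSD size is purely a hypothetical example:

```python
def drive_writes_per_day(write_mb_per_s, ssd_gb):
    """Full-drive writes per day implied by a sustained write stream."""
    daily_gb = write_mb_per_s * 86_400 / 1_000  # seconds per day, MB -> GB
    return daily_gb / ssd_gb

# 300MB/s sustained, funneled through a hypothetical 480GB cache SSD:
print(drive_writes_per_day(300, 480))  # 54.0 full drive writes per day
```

Consumer SSDs of the era were typically rated for well under one drive write per day, which goes some way to explaining the failure and wear-leveling problems described here.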

All but the tiering method suffered from wear-leveling issues: wear leveling was left to the firmware, which meant you could utilize at most about 75% of the SSD, because activity was so high that 10-15% was not sufficient for wear leveling. Firmwares struggled to do proper wear leveling as well; even the best SSDs occasionally failed at this. We tried most of the major brands. With the tiering method we went overboard and set aside 30% of the SSD capacity for wear leveling, so it's not entirely certain whether it held up because of the tiering software or because so much was set aside that performance doesn't degrade over time. There's no easy way to test this either, and we prefer to err on the safer side when it comes to production systems.

ZFS: Anything but what it's advertised for

ZFS L2ARC warm-up times were ridiculous; the warm-up could take 2 weeks (seriously). Further, the ZFS Linux implementation fails very badly on reliability. On our first, and last, ZFS box we used the best-feeling and best-looking premium SATA cables we had, and solid-feeling power connectors, but those premium SATA cables were actually the worst I've ever seen: they had a failure rate of 80%+. Further, this batch of HDDs had a 60% failure rate, along with some of the power connectors failing miserably.
This led to intermittent connection issues to the hard drives, sometimes causing a 10-20 second "freeze", sometimes making a drive disappear from the OS.
Unlike any RAID array we've ever encountered, ZFS didn't go into read-only mode. W.T.F?! It continued writing happily to the remaining disks; only manually reading the array status revealed it was degraded.
When the drive was reconnected, it would start resilvering (resyncing) with now-faulty data. Most data (80%+) was readable up until that point, but during the resilver/resync process the ZFS Linux implementation would corrupt the remaining data, and there was no way to stop or pause that process. We ended up losing a huge amount of customer data.

ZFS random access performance was ridiculous as well. Our node had 13 SATA drives + a couple of latest-model big SSDs for L2ARC, and it could peak at only 3 disks' worth of random access. We were baffled! After all, big vendors were advertising ZFS as the de facto highest-performance option around. After a little research and lurking on the ZFS on Linux mailing list, it turned out that the way ZFS is designed, it doesn't work for random access *at all*: every single read or write engages every drive in the array.
The only way to mitigate this is to have multiple vdevs, i.e. mirror or ZFS RAID pools. This means the highest random access performance (our load is 95% random) would be 50% of the hardware performance, while only having 50% of the capacity in use. This is a total no-go in an environment like ours: customer demand is high storage with 1-2 disks' worth of performance. We would need to double the disk quantity, which would mean a huge cost increase, not just in outright HW purchases but also in electricity consumption, failure rates etc.
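The trade-off can be sketched numerically. The per-disk IOPS and disk size below are assumptions; the model simply follows the rule of thumb above that random IOPS scale with vdev count, not disk count:

```python
def pool_estimate(disks, layout, per_disk_iops=100, disk_tb=2.0):
    """Rough capacity / random-IOPS estimate for a ZFS pool layout."""
    if layout == "mirror":    # one vdev per pair of disks
        vdevs, usable_tb = disks // 2, disks * disk_tb / 2
    elif layout == "raidz2":  # one wide vdev with 2 parity disks
        vdevs, usable_tb = 1, (disks - 2) * disk_tb
    else:
        raise ValueError(layout)
    return vdevs * per_disk_iops, usable_tb

print(pool_estimate(14, "mirror"))  # (700, 14.0): 7x the IOPS, half the capacity
print(pool_estimate(14, "raidz2"))  # (100, 24.0): full capacity, 1 disk's IOPS
```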

Further, the failure in reliability was a total no-go. A RAID array should by default be sane enough to stop all writes if drives are missing.

L2ARC was not usable performance-wise either: the maximum benefit from each SSD, the fastest on the market at the time, was slightly over one SATA 7200RPM disk's worth of speed, occasionally peaking to almost two drives' worth. The good thing with ZFS + L2ARC was that certain things were actually insanely fast; things like our GUI loaded as fast as the end user's internet connection + browser could manage. In that sense, the ZFS L2ARC caching algorithm is by far, orders of magnitude, better than anything we've seen since.

iSCSI Backend

We started utilizing ISTGT because it was the default on some big-name vendors' products, which were based on *BSD. However, after a while it was clear ISTGT is anything but ready for major production: on Linux we couldn't load new targets online etc. but had to restart the daemon, and it was going to be hard work to hack ISTGT to run multiple daemons to increase performance etc.
Every time we added or removed a vnode, there was a high probability that some or all other clients would go into read-only mode due to the time the restart took. This obviously is a no-go. ISTGT has a method for reloading targets, but it does not work under Debian 7; after researching this, it looked like that functionality was never finished.
ISTGT also lacked any proper documentation.

Hindsight 20/20: in the end, ISTGT has so far been the most stable, highest performing and easiest to manage of them all.

We moved towards LIO, using targetcli, since it was built into the Linux kernel, came with some great performance promises, and on paper had all the features we needed, including thin provisioning. For a new node thin provisioning doesn't really matter, so we didn't look into it initially, but after a storage node has been running for a month or two it's a must-have. There were also some performance regressions. One would expect that something built into the Linux kernel would have at least semi-decent documentation and be fully open source.

Looking into the documentation: there is not a single open piece of documentation about thin provisioning explaining how to enable it. Judging from the software, it requires some obscure-seeming parameters which are not explained anywhere.

Same goes for the management API: something we definitely need, since as-is targetcli/LIO requires HUGE amounts of manual typing, mostly repeating yourself; something a script could do with a one-line command, e.g.: ./setupVnodeIscsi.php vnodeId vnodeUser vnodePass dataStorageInGiB osTemplate
Right now we have to create the files manually and type something like 30 lines for each vnode by hand. A little of this can be copy-pasted, reducing the risk of typos, but typos still happen. There are no examples of how to use the API; the only documentation is a reference manual of commands, with nothing on how to actually utilize it. The documentation is at: http://linux-iscsi.org/Doc/rtslib/html/
Hindsight: looking into configshell and its code might have helped in figuring it out.
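The kind of wrapper described above could be as simple as generating the repetitive targetcli command sequence from the vnode parameters. A sketch: the command paths follow the targetcli reference style, but the exact syntax, the IQN prefix and the file locations here are illustrative assumptions, not our production setup:

```python
def vnode_iscsi_commands(vnode_id, initiator_iqn, password, size_gib):
    """Generate the targetcli commands for one vnode (illustrative only)."""
    iqn = f"iqn.2014-01.com.example:vnode{vnode_id}"  # assumed naming scheme
    backing = f"/storage/vnode{vnode_id}.img"          # assumed file location
    tpg = f"/iscsi/{iqn}/tpg1"
    return [
        f"/backstores/fileio create name=vnode{vnode_id} file_or_dev={backing} size={size_gib}G",
        f"/iscsi create {iqn}",
        f"{tpg}/luns create /backstores/fileio/vnode{vnode_id}",
        f"{tpg}/acls create {initiator_iqn}",
        f"{tpg}/acls/{initiator_iqn} set auth userid={initiator_iqn} password={password}",
    ]

for cmd in vnode_iscsi_commands(42, "iqn.2014-01.com.example:client42", "s3cret", 500):
    print("targetcli", cmd)
```

Even a generator this simple replaces the ~30 hand-typed lines per vnode with one command and removes the typo risk.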

Don't be fooled by the URL. It says linux-iscsi.org, but really it was RisingTide Systems LLC's site, now owned by Datera. As a businessman, I congratulate them on their brilliant marketing work: they have marketed themselves as the de facto Linux iSCSI open source solution, despite the reality that "community edition" users are just beta testers for them.

We approached RisingTide Systems and later on Datera; before the Datera acquisition we didn't even get a reply. When Datera finally acquired them we got a reply. We were curious where the Core-ISCSI files had gone; the only links I found pointed to kernel.org, but the files were not there anymore. It turns out they had made Core-ISCSI paid-only closed source, part of RTS Director/RTS OS. Datera didn't even bother to tell us the price. I doubt the price would make any sense.

RTS OS, without the cluster management features of RTS Director, would have had a price tag of 950 euros per node. Considering that most of our storage nodes would be 24-disk arrays with at most 8 SSDs, for a combined maximum hardware cost of around 6000€ per storage node, that would mean roughly a 1/6th price increase even in that case; and since the bulk of our storage nodes were at this point 10 disks max with 1-2 SSDs, the cost is unacceptable. Never mind that there was no proper information on whether it would benefit us at all, whether we would receive support, whether we could utilize SSD caching, what features it includes etc. On top of that, the reply I got from them carried, to my mind, a certain attitude; it made me feel like the attitude was "Muah! You are so scr**** now that you are using our products, you are just forced to pay for it!". I also got the sense that the only way to get proper performance out of LIO was with Core-ISCSI; they wouldn't bother to optimize for the way Open-iSCSI communicates.

At the suggestion of a friend we looked into Open-E, but it has a price tag of more than 2000€ per node per year, though it at least offers a 60-day evaluation option. We decided against it, as it would force us to use minimum 45-disk pods, and it is too risky to put that much business on a single storage node at our current scale. But since it uses SCST in the backend, which apparently has very high performance characteristics, we decided to look into SCST. We never got iSCSI authentication to work due to the lack of proper documentation, and this is the point where we are now.
SCST has a multitude of ways of configuring it, and the official documentation is basically copy-pasted snippets of code, so really no help at all. Never before in my life have I been left confused and dizzy after reading documentation; SCST was the first. Obviously a no-go: it is a high-maintenance endeavour with custom kernels, compiling tools from the repo etc.

Managing storage nodes: Not as easy as it sounds

There are many dangerous aspects. Just last week we had a disk fail while the RAID was rechecking the array, resulting in a completely broken array, since the failed disk was different from the one being resynced. A caveat of RAID5.
A resync would always cause the storage node to crawl, and without forcing it to run at high speed (making the end nodes crawl) it would take weeks of no redundancy(!!!).
The obvious conclusion from this is that legacy RAID methods don't work for what we are doing.
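As a back-of-envelope illustration of why the weeks-long resync window is so dangerous, take the 4.6%-in-two-weeks failure figure from our bad-weeks announcement and assume, purely for the sake of the sketch, that it applies independently to each remaining disk:

```python
def second_failure_probability(disks, resync_weeks, p_fail_2wk=0.046):
    """P(at least one remaining disk fails before the resync finishes),
    assuming independent failures at the observed two-week rate."""
    weekly = 1 - (1 - p_fail_2wk) ** 0.5           # two-week rate -> weekly rate
    survive = (1 - weekly) ** ((disks - 1) * resync_weeks)
    return 1 - survive

# A 10-disk RAID5 resyncing at low priority for 3 weeks:
print(round(second_failure_probability(10, 3), 2))  # 0.47, nearly a coin flip
```

During a normal failure-rate period the number would be far lower, but the point stands: a RAID5 rebuild measured in weeks leaves an unacceptably long window with no redundancy.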

Looking into the future: Ceph

We are waiting for some developments in Ceph before we start testing it on the side; the features should be released by summer 2014. However, there are serious performance considerations with Ceph. Going by the scarce benchmarks available, Ceph performance looks very bad for what we are trying to achieve, but we will not know until the time comes.

We are hoping that by then the tiering software has become better or better SSD caching solutions are available. If Ceph management is good, we would build gateway machines with RAID10 SAS 15k drives + SSD caches and huge amounts of RAM between Ceph and the SAN clients.

This would mean we would have 2 discrete performance and reliability domains: the Ceph cluster and the gateways. The Ceph cluster would provide the bulk storage at a premium on performance, and the gateway machines would provide the performance. SAS 15k drives deliver stable performance at 2.5-3x the random IOPS of SATA 7200RPM, the smaller models don't cost *that much*, and with stable high performance they will far outperform SSDs, with the downside of a larger electrical draw.

If SSD caching software with a proper algorithm and sane logic comes along, it could prove to be the key to driving end-user performance to where it needs to be. We want to be able to offer every single node 4 SATA 7200RPM disks' worth of random IO performance; currently we achieve more like 1 disk's worth.

With the management features of Ceph we can have multiple failure zones and storage domains. By defining "CRUSH maps", we can ensure that the redundancy copies are in another server room, or even in an entirely different building, for maximum data reliability. With the upcoming erasure coding features, we can have, say, 10 disks for redundancy out of 100 disks, and a simultaneous failure of 10 disks would result in no data loss whatsoever. Thanks to Ceph's automation, it would immediately begin making new redundancy copies without waiting, so within a day or two, 10 of the remaining 90 disks would again hold redundancy data, the only drawback being less free storage.
Complete freedom of storage capacity and redundancy will ease management by a lot. In theory, with sufficient redundancy, we don't need to react to disk failures at all; we can just go and swap the failed disks once a week, instead of immediately, taking the burden away from our support staff.
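The storage overhead of that erasure-coded example works out as a simple ratio; the disk counts are the ones from the paragraph above:

```python
def storage_overhead(data_disks, redundancy_disks):
    """Raw disks consumed per disk of usable capacity."""
    return (data_disks + redundancy_disks) / data_disks

# 90 data + 10 redundancy disks, tolerating 10 simultaneous failures:
print(round(storage_overhead(90, 10), 2))  # 1.11
# versus plain 2-way replication, which only tolerates 1 failure per pair:
print(storage_overhead(50, 50))            # 2.0
```

That difference in overhead is what makes erasure coding so attractive for bulk storage compared to replication.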

Further, we can then add storage as we go: for example, just rack 10x 24-disk chassis ready to accept disks, put 1 disk in each, and have a 10-disk array ready for use; as storage gets used, just go and add the quantity of disks required. This would make things so much more efficient.

For example, in that scenario we could just rack all of the 100 or so nodes currently waiting for assembly + testing + racking and start putting them online; as nodes get taken into production, just add more disks.

Since performance is made on the gateway machines, the cluster's disk quantities don't affect end-user performance enough to matter. We could simply maintain a healthy 30-60TiB buffer of free storage and utilize whatever disks happen to be on hand or on the shelf at our vendors.

*This is what we wanted to achieve to begin with!*
But too many unexpected issues arose, so we never got around to coding the management system, and too many caveats were discovered in key pieces of software to make it a reliable reality. As a small business we simply lack the resources to hire developers to build something like Ceph ourselves with the performance characteristics we need. That would require a dedication of several thousand man-hours at the very least, i.e. a 3-4 person development team to achieve it in a sensible time frame, and those 3-4 people would all need to be A players in their respective fields.

Other hardware issues

We had plenty of other hardware issues as well. Some of them were funny, most of them totally unexpected. For example, failed power buttons. Duh! One would never expect a *power button* to fail :)

Some motherboards would fail to network boot unless hard power cycled. This is a totally weird issue that happens every now and then on all types of motherboards; power cycling solves it. They would simply not even see that there is PXE firmware on the NIC.

Pierced riser cables: occasional networking failures, disappearing NICs etc. It took a while to notice what was causing this. Since we currently stack the nodes close to each other, sometimes the pins of the bottom components on the upper node's motherboard would pierce the riser cable of the lower node, causing the NIC PXE firmware to load, poor networking performance, random crashes etc. This one was hard to troubleshoot, as the cables always looked just fine, and it required a bit of luck to even notice it was happening; some of the mobos were just 2-7 millimeters too close to each other. Since then we have begun putting hot glue on the riser cables as extra protection and no longer assemble the mobos so close to each other.

Poor networking performance: bad NIC chips or overheating. Some RTL chips have very weak performance; the a, b and c revisions of the RTL 1Gbps NIC chip are a no-go, and only revision e is now accepted. A few times, poor networking performance was the result of a NIC chip overheating. Duh! One would expect that if such a large process node creates significant power draw (we found out up to 4W!!), they would put even a small heatsink on it. Ever since, we've been adding tiny heatsinks to chips which don't have one, even on the latest-model Intel GT NICs, which are "green" and very power-efficient.

Overheating chipsets: some mobos don't have a heatsink on the chipset; some that came to us used were all brown around the chipset's huge chip. The odd thing was that this issue was first noted on *Intel* motherboards, duh! Initially we thought: "OK! They designed these for passive cooling in a cramped space and decided that no cooling is required." Eventually, though, we started adding heatsinks on each and every one of them.

Heatsinks: since those two issues above, we've been adding heatsinks to some chips despite there being no obvious need. With some motherboards we put them under load, used an infrared thermometer to gauge chip temperatures, and added heatsinks to chips heating beyond 40°C. Some ran as hot as nearly 60°C after just 5 minutes or so!

NICs, NICs, NICs... We had to go through 10+ models until we found what we wanted to use: the best Intel has to offer. This too took months, to realize that despite RTL's advertised specs, they are not up to the task at all. Now we use only 2 models of Intel cards for the bulk of our mobos: Intel PRO/1000 GT and Intel PRO/1000 MT. These cost several times more, but fewer headaches and better performance make it all worth it. We still use the latest RTL 1Gbps chip on the PCI-E adapters, but most mini-ITX motherboards only have a 32-bit PCI connector.

Mobos: initially we assumed that all mobos delivered are working and usable. We found out that is not true; we had per-stack failure rates as high as 60%! Even new Intel mobos tend to have significant initial failure rates, but then again, those were an experimental model to begin with, and many of the delivered units were engineering / tech samples. Eventually Intel stopped production of the DN2800MT motherboard. Shame! We really, really, really liked this mobo; the only thing it lacked was a second 1Gbps network adapter.

Network booting (PXE booting): adding to the per-stack failure rates is a bizarre Linux network booting bug: after loading the initramfs, Linux uses DHCP to get an IP address + network boot info (iSCSI target info). On some mobos this fails on any of the NICs; on some mobos it fails on only one NIC. Fortunately, this issue rarely happens on an add-on network adapter card and is usually a symptom of another issue (see riser cables above). Certain mobo models would claim PXE support but fail at various stages of the boot-up. Some mobos would refuse to load the add-on NIC's PXE firmware. Most often these issues were with Foxconn-brand mobos, which is weird since Foxconn manufactures so many of the mobos in production!

RAM: This mostly pertains to actually acquiring the modules, since most Atoms still use DDR2, and DDR2 memory modules are in very scarce supply nowadays as manufacturing has stopped or nearly stopped. This means 2G modules are *very* expensive at retail. We have acquired a lot of memory via eBay since our local vendors' supply dried up really fast, and we need 2G modules for the bulk of the mobos. Most of the memory ads on eBay are a scam in one way or another; they go above and beyond any sane effort to trick you into buying server RAM or 2x1G kits instead of the seemingly advertised 2G modules. Now we have useless 1G modules in the *hundreds* on the shelf, and useless server memory modules in the dozens. Duh!
On the other hand, we have found that second-hand memory modules have a freakishly low failure rate, lower than brand new ones: the grand total of failed used modules is in the vicinity of 4 or so, while new modules have failed at least twice that often!

SATA CABLES! Huh! Don't get me started on this topic! The bottom line: the cheapest-looking red cables seem to be the most reliable. If it's a premium cable, it's trash; just cut it with side cutters and throw it away. You don't want that pain. I'm serious. If it's a normal cable, in fact, the cheaper the better, locking clip or no locking clip. The more expensive it is, the more likely it is to be crap. But hey, that's science for you! That's why the Mythbusters love doing what they do (apart from getting to blow stuff up): the totally unexpected results. This is one of them. We didn't do proper science here (i.e. didn't write down the exact failure rates), but the verdict is beyond obvious: we now throw certain types of SATA cables away immediately. For example, those black ones with a white stripe and metal locking clips: almost none of them actually work.
If you have periodic disk slowdowns, or disks disappearing: swap the SATA cable first. If the problem persists, swap the power connector, and the issues are very likely gone. If not, the disk is probably failing; Seagates fail gracefully like this, giving symptoms before eventually dying, while other brands tend to just go dead.

Hard drives! Out of the many models and brands we've tried, Seagate Barracuda offers the best ratio of performance, price and reliability. Our first batch had a 60% failure rate, but it was obviously damaged in freight (visible physical damage on connectors etc.); the remainder has been very reliable. Performance is also greater than any WD anywhere near the price. Only the WD Black can compete among today's drives; it's slightly faster, but the power draw is much higher and the price is 50-60% more, so it makes no sense to get them. Seagates also seem to be the only ones failing "gracefully", i.e. giving symptoms before failure. On RMAs, we have found it is enough to say the drives don't work under our workload and they get swapped, no hassle, no fuss. If we say a drive is having symptom X, it gets replaced even if they can't recreate it.

There are tons of hardware issues I've now forgotten; these were the most memorable ones. There have been dozens upon dozens of little gotchas, like how to route power wiring, or power LED and button wiring and their attachment (we now hot-glue them in place). We also need to modify all the picoPSUs for the right connectors, and all the PSUs for the connector types we use, etc.
There have been network cabling gotchas as well. By the way, we've spent thousands on wholesale network cabling alone! :O There's just so freakishly much cabling going around: many colors, many lengths, managing them all, etc. It's hard work to run the network wiring for a stack of nodes!

Software issues have been plentiful as well, many of which are described above with the storage issues. One of the most stupendous and annoying ones is Debian installer bugs, though! Since we still have to install Debian on so many nodes with local drives, it's been annoying as hell.
The Debian installer is nowadays so full of bugs it's insane it ever passed into the distribution! First of all, you need to make sure you are installing on sda, or at least that your boot sector ends up on sda. You can forget installing Debian on 2Tb or larger disks: pretty much guaranteed failure. UEFI BIOS causes quite a bit of grief as well. The partitioner sometimes doesn't let you create the partitioning needed for a bootable system, and installing on RAID1 seems to be a crapshoot too.
For someone with deeper knowledge, it's easier to partition by hand in a rescue system and debootstrap the system than to even try the installer.

So what now?

We are going to take our time developing the smart server system. We tried to achieve too many things at once; too many things were in flux and moving constantly, so that no one could keep track of it all. Management of the storage system has been a total pain; we've been in such a hurry to get production up and running, but all the little problems have eaten all the time we have. We are basically putting delivery of these on hold, and going back to traditional setups with local drives.
If you have an open order: don't worry, your node will eventually be delivered. We just don't want to make any hasty mistakes on the final storages, in order to free up the maximum number of disks.

We would have LOTS of capacity to deliver LOTS of suboptimal smart servers, but we don't see that as a good choice: bad manageability, performance not up to our standards, bad reliability.

So we are moving in short order to the local-disk variety, since it simplifies the setup by removing the second NIC, and each node having its own local drives creates what I call smaller reliability zones: on a disk failure, fewer nodes are affected (just the one in question, unlike right now, where 10 might fail in a bad occurrence). This means we need to develop some things before we get up to speed: we want to finish our DIY blade design for one, and we need a PXE installer setup as well. Fortunately, for PXE server installation there is readily available software, both open source and closed source commercial, but all of this takes time.

In the meantime, we have enterprise varieties you can order: Dual Opteron and Dual Xeon options, with a known lead time based mostly on freight time from the USA. All of them can come with IPMI/DRAC for remote management, which means you can opt to install the OS yourself, etc. It's essentially a barebones leased server option, where everything is tailored to the customer: 1 to 4 disks of your chosen type and size per system, SATA or SAS, HW RAID, 100Mbps or 1Gbps, in Guaranteed and Bulk volume varieties. We also have a limited supply of 20- and 24-disk options and a few 16-bay DAS units.

Just contact sales if you need such.
We still also offer leased servers from 3rd party DCs just like we used to, just contact our sales with the specifications you need.

The custom DIY blade design is expected to be finished around March or so, at which time we will not need to offer preorders: all orders should be deliverable within 1 business day.

]]>
<![CDATA[Super100 in stock + Superb 2012 1Gbps XMAS Promo!]]> https://pulsedmedia.com/clients/index.php/announcements/294 https://pulsedmedia.com/clients/index.php/announcements/294 Sun, 22 Dec 2013 14:33:00 +0000
Setups before xmas if ordered by Monday afternoon!

http://pulsedmedia.com/super100-seedbox.php

1Gbps 2012 Series Superb XMAS Promo!

33% off ANY 2012 Series seedbox when ordered for 3 months or longer!
The discount is recurring, so the price is yours to keep for as long as your service stays active! :)

Use code: 1gbpsXmas2014
Check out all the 2012 Series offers: http://pulsedmedia.com/1gbps-seedbox-2012.php

]]>
<![CDATA[Super100 pricing]]> https://pulsedmedia.com/clients/index.php/announcements/293 https://pulsedmedia.com/clients/index.php/announcements/293 Sat, 21 Dec 2013 09:47:00 +0000 Unfortunately we had to raise the price of the Super100 series due to rising costs. This series was based on a very special server arrangement which is no longer available, and as of yet we don't have a steady supply to set these up in our own DC.

Further, it was based on very slim profit margins, and despite being very high performance and very high value, the series has significant turnover. Since we have to allow a few extra days for potential payments to arrive before termination, plus other administrative tasks, too frequently we end up getting less for a server than it costs us.
An additional factor is the limited availability of servers for this range, combined with the big demand.

Therefore, we unfortunately had to raise the price for new signups to the Super100 series for now. This increase in pricing will allow us to acquire new servers for this range more efficiently, and eventually have it available on demand.

]]>
<![CDATA[Stock status update + order information]]> https://pulsedmedia.com/clients/index.php/announcements/292 https://pulsedmedia.com/clients/index.php/announcements/292 Tue, 03 Dec 2013 08:20:00 +0000
We should now be back on a within-2-business-days setup schedule for these services! :)

Super100 continues to be out of stock.

PDS14 dedis still have a major backlog unfortunately.

No more preorders - no more wait queues - no more backlog: NO BACKORDERS!

We have decided that we will no longer accept orders before capacity is online - we'd rather just put things out of stock - even though accepting orders before servers are online made business sense by eliminating wasted server resources and keeping servers constantly full.

We have now closed sales on all services which don't have immediate stock, or beyond-a-shadow-of-a-doubt availability within 2 business days, up to the amounts we have listed.

If a backorder option becomes available, it will be clearly marked as BACKORDER.

If you want to place backorder just e-mail sales@pulsedmedia.com and we'll set you up.

]]>
<![CDATA[Stock status update]]> https://pulsedmedia.com/clients/index.php/announcements/291 https://pulsedmedia.com/clients/index.php/announcements/291 Mon, 25 Nov 2013 02:33:00 +0000 We are basically out of almost every single service we've got.
Super100 series is heavily backlogged - but this series is also getting the most new servers, so a few slots remain open.
Value series is completely full, and 2nd in the queue to get new servers set up.

PDS14 -> 2Gs are best in stock, so we allow purchases for those, but do expect a 1-month or longer turnaround at worst. 4Gs have no stock, and only a few 8G & 16G models remain.
As soon as storage is worked out we should catch up on the full backlog of PDS14 nodes, minus those with custom distro requests.


2009+ Series: We actually have some free spots *right now*. Turnaround is a few business days until setup; setups are done once a week.

]]>
<![CDATA[Slow progress at our DC, but light at the end of tunnel]]> https://pulsedmedia.com/clients/index.php/announcements/290 https://pulsedmedia.com/clients/index.php/announcements/290 Sun, 24 Nov 2013 03:51:00 +0000 Due to many setbacks, progress has been fairly slow at Espoo right now, despite hardware being plentiful.
As soon as we solve these setbacks we should fairly easily be able to *double* the quantity of nodes running.

In the meantime, we have decided to do a small batch of local-drive nodes, to get something new online ASAP. This is a bunch of 8G, 16G and 2G nodes.
Those nodes lose the main benefits of our setup - but they get online ASAP with guaranteed performance.

Some of the setbacks have been borderline ridiculous: for our new type of storage mobo + CPU combo, building the first unit meant going through 4 CPUs, 3 mobos and a few sets of RAM before finally identifying the issue: despite the CPU being on the supported CPUs list, it is not actually supported, and as soon as there is significant CPU load the system crashes. This wasted more than a week of our time!

Bahamut: Our new model of big storage unit has already taken a ridiculous time to get online, only for it to turn out that I personally made a configuration mistake, so we have to migrate all nodes out of it to reconfigure the whole array and system. Duh! Further, we noticed that the SMART serial numbers and the on-drive serial numbers of the OCZ SSDs we are using do not match - we need to check those in any case, so that we are able to replace failed drives.
Oh well, we'll upgrade it a bit in the process.

The good news is we have plenty of hardware!
There are about 50 2G nodes ready to be assembled and deployed (a bunch already tested, waiting for final assembly + racking), 10 or so 4G nodes waiting for assembly + testing, 10+ E350 nodes for 8G/16G, and more than 10 unused nodes already racked, waiting for storage :)

RAM, we seriously have A LOT of it: a shipment of 65 2G DDR2 modules just arrived, and we have some 30+ 1G DDR2 modules on the shelf, about 30x 4G DDR3 modules, about 10x 2G DDR3 SODIMM, 10x 4G DDR3 SODIMM (for the i3s), some 8G DDR3 SODIMMs, 20x or so 2G DDR2 SODIMMs, and so forth.

Riser cables for probably 100 nodes or so, NICs for something like 40 nodes, and picoPSUs for 35 nodes, plus some awaiting delivery.

Drives: we have some 20x SAS 15k 146Gb, 10x 2Tb SATA and 8x 3Tb SATA waiting on the shelf. Ramuh is waiting to be finalized and has 10x 3Tb + SSD cache. Bahamut is still well under-utilized and can host 10 more nodes after the fixes are done. In total we have storage waiting to be taken into production for 50+ nodes or so.

Our custom blade design has progressed a bit as well; we are waiting for the first prototypes to be 3D printed. We also designed some misc pieces to be 3D printed for our own use, and on that front ordered an oversized 3D printer so we can print multiple blade trays at once.

Cooling upgrades are under way again, to increase the flow so that the air of our whole colocation room is circulated every 3 minutes. Let's see how that goes! :)

We have been planning the future: after this huge batch of nodes we will probably settle at roughly 20-30 new nodes a month, increasing at a slow pace. That's not too many, but a steady supply is a good thing.

]]>
<![CDATA[Espoo network test file]]> https://pulsedmedia.com/clients/index.php/announcements/288 https://pulsedmedia.com/clients/index.php/announcements/288 Mon, 11 Nov 2013 04:10:00 +0000 You can download the following file: http://static.pulsedmedia.com/1GiB.bin
Or ping/traceroute: static.pulsedmedia.com / 149.5.241.66

Ping results:
FR/Ovh: rtt min/avg/max/mdev = 43.792/47.357/51.838/2.647 ms
FR/Ovh server 2: rtt min/avg/max/mdev = 43.709/48.599/52.730/2.528 ms
FR/Intergenia: rtt min/avg/max/mdev = 34.359/37.893/41.736/2.342 ms
NL/Leaseweb: rtt min/avg/max/mdev = 31.649/37.162/40.665/2.137 ms

Test downloads FROM Espoo:
FR/Ovh: 9.03 MB/s
FR/Ovh server 2: 8.53 MB/s
FR/Intergenia: 11.1 MB/s

Both OVH servers were obviously shaped, and Leaseweb was shaped to oblivion, despite each of these being a low-network-usage server.

]]>
<![CDATA[large shipments of nodes, 3d printing etc.]]> https://pulsedmedia.com/clients/index.php/announcements/287 https://pulsedmedia.com/clients/index.php/announcements/287 Wed, 06 Nov 2013 09:15:00 +0000 Yesterday we received 2 large shipments of nodes and gear for the DC.
This had a total of 57 motherboards, most of which are Atoms, plus 3 for storage nodes.

Several high-end 80+ Platinum Seasonic PSUs, big bags of RAM, a bunch of NICs (20+ or so), a bunch of SATA 7200RPM drives, some small RAID adapters, etc.

We are expecting a couple more big shipments during this week or early next, which should contain 10+ AMD E350s and a bunch more Atoms. Then come the smaller shipments of 10-20 items each: NICs, picoPSUs, etc.

We should have pretty much everything in hand to build about 50 nodes now, including storage for them. The latest big storage node is still almost empty, Solo 7 hasn't been taken into production yet, and the remaining parts for Solo 8 are about to arrive this week or next.

Further, the NIC used in all future nodes has been upgraded, and we are also updating the older nodes, stack by stack. Almost every single node will receive a high-quality Intel NIC from now on, with very few exceptions - mostly the PCI-e NICs, for which we have also tried to source higher-quality parts.

We are going to be insanely busy assembling all these units! We're prepping for a night when a bunch of friends come to help, so that we can properly mass-produce these, meaning the motherboards get assembled really fast for testing. For example: one guy unwraps the mobo and installs RAM, the next installs buttons + LEDs and hot-glues them in place, the next puts in the riser card + NIC and writes down the MAC address.
The finished pile is then taken to a testing station, where about 5 boards are powered up at a time and booted to see if everything functions. This software testing phase takes about 5 minutes per board, minimum.

50 nodes to be tested, so the time taken to do all that:
Testing, 7.5 min average: 375 minutes
Unwrapping everything: 3 min each, 150 minutes
Placing all components on the board: 5 min each, 250 minutes
Hot glueing and attaching the components firmly: 5 min each, 250 minutes
Total before racking: ~17 hrs

Building the stacks and writing documentation, 8 nodes per stack, 6 stacks, 60 minutes each: 6 hours
Prepping the PSUs: 4 PSUs, 60 minutes each: 4 hrs
Prepping the relay boards, 3 boards, 90 minutes each: 4.5 hrs
Racking and wiring: 30 minutes per stack, 3 hrs total

So we are looking at about 34.5 hours total, minimum, before we even get to start booting them up.
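As a sanity check, the estimate above can be recomputed; this is a throwaway sketch where the per-step minutes are taken straight from the list and everything else follows from them:

```python
# Recompute the build-time estimate above (minutes taken from the list)
NODES = 50
per_node_min = 7.5 + 3 + 5 + 5          # testing, unwrapping, placing, glueing
assembly_min = NODES * per_node_min      # 1025 min, i.e. ~17 hrs before racking

stacks_min = 6 * 60      # 6 stacks of 8 nodes, documentation included
psus_min = 4 * 60        # 4 PSUs
relays_min = 3 * 90      # 3 relay boards
racking_min = 6 * 30     # 30 min per stack

total_hrs = (assembly_min + stacks_min + psus_min + relays_min + racking_min) / 60
print(round(total_hrs, 1))  # → 34.6
```

The exact figure is 34.6 hours; the text's 34.5 comes from rounding the assembly phase down to 17 hours first.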

That is still excluding the networking work, building and testing the storages, rack preparation, cooling preparation etc.
So before we have all these units online, we are easily looking at 2 weeks of work.

But hopefully, by the end of November, we'll have a nice surplus for same-day deliveries - except that by then our resellers will most likely want to take all but a few of the available units.

But then comes December, with MORE nodes entering production, the next big storage unit, and more big shipments of hardware! :)


3d Printing:
Our "homebrew" 3D printer made its first successful test prints yesterday! Calibration and optimization work remains; speed and print quality need to be brought up significantly. Then we can begin prototyping our DIY blade chassis.
We are contemplating making the designs open source and/or selling finished units when we reach mass production on those, expected around January.
The initial design will incorporate 16 units per 4U. In this 4U there are 6x cooling fans, 1x PSU and 1x relay board. The blade trays will be cooled via negative pressure, i.e. the fans suck air through the trays; this way we get more even airflow through all the blades (given the trays are sufficiently restrictive) and can have fewer fans while maintaining adequate airflow for low-temperature operation.

The airflow exits into the middle of the blade chassis, where there are venting holes for pushing the hot air upwards through the middle, all the way to the top of the rack.

This is the initial design, only testing and production will show us what needs improvements.

]]>
<![CDATA[static content server update]]> https://pulsedmedia.com/clients/index.php/announcements/286 https://pulsedmedia.com/clients/index.php/announcements/286 Mon, 04 Nov 2013 02:40:00 +0000 There should not be any downtime, or any other signs of this change, except perhaps slightly different load times.
DNS should update within the next 30 minutes to reflect this change.

If you notice graphics missing or something of the sort - please contact support immediately.]]>
<![CDATA[vnode testing, relay boards, cooling upgrades etc.]]> https://pulsedmedia.com/clients/index.php/announcements/285 https://pulsedmedia.com/clients/index.php/announcements/285 Sun, 03 Nov 2013 17:15:00 +0000 We did an initial version of the vnode final-testing software - this basically boots the newly built HW node to see if it's alright. It's rather simple, and the last step in testing.
We also further developed our testing procedures for new HW nodes to ensure that when we rack a stack, each and every node is ready for production.

On the remote-reboot relay boards a bit of progress has been made, and we'll be prototyping more next week, to maybe finally bring that online for a set of 16 HW nodes.

We also did some cooling upgrades in our room - nothing fancy, basically utilizing the cool and clean air readily available to us. Basic air circulation to lower the cost of cooling. It was a big success, and the AC now turns on far, far less. We will continue to upgrade the air circulation in the near future with some extra high-capacity fans, targeting the full room's air circulated every 2 minutes or less.

Some stacks had their vnodes migrated to another stack so we can take the stacks down for much-needed repairs. Most of the repairs are just NIC replacements (bad models or broken units), but a few have CPU or BIOS issues preventing their use as a vnode. One such stack has already been taken back into production successfully.

A new model of big storage unit entered production, with better utilization of SSD. This storage node has 60Tb of raw storage combined with 1Tb of SSD. So far the performance figures have been good; we'll know more when it's more utilized, and that will show the direction to take with further big storage units. In the meantime we are trying a larger size of the Solo type storage, doubling the quantity of disks and multiplying the amount of RAM.

Next week 4 big shipments of hardware are expected to arrive: tens of motherboards, hundreds of gigs of RAM, PSUs, tens of cases, etc. It's going to be a really busy couple of weeks assembling, testing and racking all of that!

It's going to be an exciting month! So many more nodes will enter production.
The past month was mostly wasted on logistics issues, a multitude of components deemed unusable for us, some firmware failures, etc., but all of that is now history, and we are hoping November will be our first mass-production month.

]]>
<![CDATA[Vnode ICMP ECHO REPLY aka PING]]> https://pulsedmedia.com/clients/index.php/announcements/284 https://pulsedmedia.com/clients/index.php/announcements/284 Tue, 29 Oct 2013 22:03:00 +0000
]]>
<![CDATA[SSDs even in RAID: So overrated]]> https://pulsedmedia.com/clients/index.php/announcements/283 https://pulsedmedia.com/clients/index.php/announcements/283 Mon, 28 Oct 2013 20:39:00 +0000
Now we are doing final testing with a bigger SSD RAID array, with 25% of the space left unprovisioned, and using only ~30% of the capacity for the tasks which most require the performance boost - and it's still not looking worth the effort and cost. In fact, the only scenarios where I've seen SSDs actually perform anywhere near the promised land are single-drive and RAID0, never as part of a cache, only as sole storage.

Therefore, we have purchased a set of 146Gb SAS 15k drives for testing purposes, to see if these provide the kind of performance boost we are looking for in a RAID10 setup. Ironically, you can get up to 6x SAS 146Gb drives for the price of 1 SSD. A 15k drive performs at roughly 350 IOPS no matter what - stable performance, like all magnetic drives. That means with a proper setup we gain about 2100 IOPS for the price of 1 SSD - which we have not seen provide more than 500 IOPS in production, at the cost of higher electrical consumption and fewer ports for the big storage.

16x 7200RPM drives provide about 1920 IOPS, but in a RAID50 setting (8x RAID5 + 8x RAID5 in RAID0) you get approximately 85% of that, leaving roughly 1600 IOPS *max*, in practice more like 1500 IOPS. So in a chassis of 24 drives, adding 8x SAS 15k will add 2800 IOPS, in the real world probably 2500 IOPS, almost tripling the performance - of course, only when properly set up.
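The arithmetic above can be sketched quickly; the per-drive IOPS figures here are the rough assumptions implied by the text, not benchmark results:

```python
# Back-of-envelope IOPS figures from the text (assumed per-drive numbers)
SATA_7200 = 120    # implied by "16x 7200RPM ~ 1920 IOPS"
SAS_15K = 350      # "roughly 350 IOPS no matter what"

raw_sata = 16 * SATA_7200             # 1920 IOPS raw
raid50_sata = raw_sata * 0.85         # ~85% efficiency in 2x RAID5 striped
sas_added = 8 * SAS_15K               # 8x SAS 15k on top

print(raw_sata, int(raid50_sata), sas_added)  # → 1920 1632 2800
```

These are best-case theoretical numbers; as the text notes, real-world figures land around 1500 and 2500 respectively.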

We will see how that goes; we get to do some preliminary tests at the end of November, when the sea freight container arrives with the new storage chassis.

]]>
<![CDATA[Getting a DC running: Way harder than anyone would expect]]> https://pulsedmedia.com/clients/index.php/announcements/282 https://pulsedmedia.com/clients/index.php/announcements/282 Mon, 28 Oct 2013 00:22:00 +0000
Granted, we did not take the easy route - doing it with money. First, our customer base is not like that; second, we want to remain as self-financed as possible.

Things have taken months longer than expected to get up & running, and everything takes a lot of time: from small things such as individual power connectors, to larger questions like "how do we route this huge bunch of network cabling?" - all of it taking much more time than expected.

Unexpected things wind up quite expensive: we have spent on Cat5e/6 network cabling alone to the tune of 1500€, and on power plugs and misc connectors to the tune of 800€, at wholesale prices. And these are the small expenses.

We are now at close to 60 nodes up & running, and readying to bring tens more online simultaneously, with about 60 nodes on backorder, 15 racked and ready to be switched on, and 10 nodes waiting to be racked. That's about 145 nodes total we are targeting to have up & running by mid-December, most of them online by mid-November. It takes time to put more storage online, since we virtualize the storage.

It's a lot of work - insane amounts of work.

Gladly we are nearing a time when mass production can begin and we can start to concentrate on making things better instead of just getting online.

I'll be trying to post more updates in the future.

-Aleksi]]>
<![CDATA[Bitcoin payments fixed]]> https://pulsedmedia.com/clients/index.php/announcements/281 https://pulsedmedia.com/clients/index.php/announcements/281 Thu, 17 Oct 2013 13:35:00 +0000 This has now been fixed, and all payments since then have been manually entered into the system.

]]>
<![CDATA[Espoo: New storage nodes, quite a bit of hardware inbound]]> https://pulsedmedia.com/clients/index.php/announcements/280 https://pulsedmedia.com/clients/index.php/announcements/280 Sun, 13 Oct 2013 13:31:00 +0000 All 6 Solo nodes are now up & running, and almost fully provisioned. The next big one has been installed, and software setup and testing is beginning. This new one will include automatic SSD storage tiering, so it will require a bit of extra testing before enrolling fully in production. This storage node also ramps everything a notch or two higher in all hardware regards. Let's call that one Jabba3.

We are already planning Jabba4, which is a bit different: this one will experiment with a custom storage chassis in the Backblaze pod style. For Jabba5, the chassis is already ordered and on sea freight, pending delivery at the end of November. For the sea freight arriving at the end of January / early February, we are planning to get 3 20-24 disk storage chassis, and a couple of Rackable Systems 3U 16-disk arrays for testing.

If the Jabba3 software is a success, we will start utilizing 4Tb disks with even heavier SSD utilization than before. That will finally start realizing the level of operational cost we have been targeting, while more than meeting the performance targets.
We are also planning to bring online 2 more Solo type storage nodes, but with a few more disks than usual.

We are also toying with the idea of building an almost SSD-only storage node for OS data only; it would feature several Tb of SSD, plus smaller magnetic drives in RAID10 to maximize performance.

We've got 27 nodes waiting to be brought online, 25 of which are waiting for new PSUs; for those we will probably source overpriced ones locally to get them online ASAP. We will roll them into production slowly to see how the new storage model copes. All 27 have already been sold and are pending delivery to customers.

Some 20 nodes are already purchased and awaiting delivery, and at the end of the month we will order a bunch of AMD E350 mobos + latest-gen Atoms (~30 total), along with a quantity of older-model Atoms for the 2G series. We are targeting to bring up to 50 nodes online next month, depending on the storage progress.

We also have inbound on sea freight a high-end Dell dual quad-core Xeon server with 72Gb of RAM for testing; if the power-to-performance-to-RAM ratio is good, we might start offering this model with 1Gbps unmetered at the beginning of the year.

Right now, by far the biggest expense is the storage: to maintain high performance we need to seriously overshoot the performance characteristics, but once we get the software and hardware designs honed in, we hope to reach a sensible storage-to-nodes cost ratio.

Right now the operational costs are FAR, FAR more than the revenue from the Espoo DC, so we are trying to ramp up the production schedule of new nodes as much as we can. This is not easy work; there are so many things which need development in both software and hardware, and we actually need to come up with new hardware designs to be able to compete with big-budget DCs. Thinking outside the box is a must to arrive at a sensible cost for the services. Some big-budget DCs even have almost free electricity, and their space costs are a tiny fraction per m2 compared to ours; overcoming these obstacles requires a lot of creative thinking.

Current bandwidth usage is very low as well; we are happy to say we are operating a BY FAR uncongested network: http://i.imgur.com/8X95di8.png
In the graph the directions are reversed, so inbound is actually outbound. Measurement is from the Cogent-side port.

]]>
<![CDATA[Regressions after update on our billing]]> https://pulsedmedia.com/clients/index.php/announcements/279 https://pulsedmedia.com/clients/index.php/announcements/279 Thu, 10 Oct 2013 03:07:00 +0000
View invoice could give an "okpay" module missing error DESPITE this module being disabled; this has been fixed.

The affiliates page was showing only a few referrals -> there is now pagination on it, but the old template we were using didn't include the pagination links, so we changed the template to the default, hoping for no further regressions.

Let us know immediately if you see any errors etc. so we can fix them ASAP.]]>
<![CDATA[Password resets]]> https://pulsedmedia.com/clients/index.php/announcements/278 https://pulsedmedia.com/clients/index.php/announcements/278 Fri, 04 Oct 2013 18:52:00 +0000 https://pulsedmedia.com/clients/pwreset.php

We are in progress to change service passwords for seedboxes now. You will receive new login details e-mail after your password has been reset.]]>
<![CDATA[WHMCS Fatal security flaw exposed]]> https://pulsedmedia.com/clients/index.php/announcements/277 https://pulsedmedia.com/clients/index.php/announcements/277 Fri, 04 Oct 2013 15:59:00 +0000 WHMCS had a very serious security flaw published yesterday.

In question is an SQL injection where any string is passed directly to the DB without sanitization, if the user input is prefixed with a certain string. Sounds almost like an intentional backdoor.
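To illustrate the class of bug (a generic sketch, not the actual WHMCS code): when user input is concatenated into the SQL string, the attacker controls the query; a parameterized query treats the same input as plain data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

evil = "x' OR '1'='1"

# Vulnerable: input concatenated into the statement; the OR clause matches all rows
rows = conn.execute("SELECT * FROM users WHERE name = '" + evil + "'").fetchall()
print(len(rows))  # → 1 (every row leaked)

# Safe: the placeholder binds the input as data, so nothing matches
rows = conn.execute("SELECT * FROM users WHERE name = ?", (evil,)).fetchall()
print(len(rows))  # → 0
```

The fix is always the same regardless of language: bind user input as parameters, never build SQL by string concatenation.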

WHMCS sent a security advisory last night, but this flaw was exploited before we could react to it.
We had to restore a backup from just prior to the attack - a few transactions may have been lost; we will manually input them by tomorrow, but if you don't see your payment by Monday, please open a ticket.

As a precaution, all WHMCS passwords and service passwords will be reset over the weekend. This is purely a precaution, we do not have evidence of any password leakage, but it's better to be safe than sorry in this instance.
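For the curious, generating strong random replacement passwords in bulk is straightforward; this is a hypothetical sketch, not our actual reset tooling:

```python
import secrets

def new_password(nbytes: int = 12) -> str:
    # 12 random bytes encode to exactly 16 URL-safe base64 characters
    return secrets.token_urlsafe(nbytes)

batch = [new_password() for _ in range(5)]
print(all(len(p) == 16 for p in batch))  # → True
```

Using a CSPRNG source such as `secrets` (rather than a plain PRNG) is the important part of any reset like this.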

You will receive a password reset e-mail for your service(s) and billing portal via e-mail soon.

]]>
<![CDATA[Super100 supply woes solved]]> https://pulsedmedia.com/clients/index.php/announcements/276 https://pulsedmedia.com/clients/index.php/announcements/276 Thu, 03 Oct 2013 15:25:00 +0000 Super100 has had immensely long setup times over the past few months; now that has been partially solved.

We have managed to resolve the issues with our old supplier, which provided suitable servers for the Super100 series. Some new servers will be acquired from them to keep the queues shorter. The main focus is still getting these up & running on our own hardware, however, so this means the queue simply gets shorter, and is sometimes even eliminated, as we work on getting more of our own servers running; we will only take servers from the 3rd party supplier when the queue grows too long or no new servers are in sight.

 

]]>
<![CDATA[Reminders about reaching for support]]> https://pulsedmedia.com/clients/index.php/announcements/275 https://pulsedmedia.com/clients/index.php/announcements/275 Sun, 29 Sept 2013 16:59:00 +0000 It seems many of you need a reminder about how to open a support ticket, and about support response times.

We might have days with more than 200 tickets, some of which might take an hour or longer to solve. When there are tickets which do not explain the problem, or a multitude of them about the same issue, it can become extremely frustrating.

First of all:

We are human beings as well, and we are a small team; we need some sleep, time off, dinner etc., which means we cannot always cover every single minute of the day.

During weekends we work on a skeleton staff - only critical issues are handled; there is usually no service provisioning and definitely no billing support or other non-time-critical work.

Making a ticket seems to be hard. Very hard at times.

Many of you send us tickets like "not working" - what is not working? What needs fixing? These tickets get largely ignored, because testing everything is simply not possible every single time a ticket like this is opened, and usually the same people open multiple tickets: it severely slows us down.

So please, let us know in detail WHAT is not working: what error message you are receiving, what you are not seeing, or what you are seeing that is not supposed to be there.

We do not provide support for autodl-irssi or other power user features - if we don't have a KB or Wiki article about it, you seriously need to Google it and learn how to use it. This applies to irssi in general (autodl-irssi is actually a plugin script for irssi), shell usage, screen usage etc. We provide support for the web GUI, web client, pulsedBox, FTP, SFTP etc. - the normal, everyday features everyone uses.

We are not teachers. This goes with the power user features - we cannot spend hours per user teaching the very basics of IRC, irssi, shell, Linux, terminal, screen etc. to those who want to use these features but do not know how; these can become lifelong topics of learning. What we do, however, is try to create KB and wiki articles for the most common questions.

Do not ever make multiple tickets over the same issue - ping the existing ticket if you need to. Keep it short and simple.

Do not send in a ticket 5 mins (or even 3 hrs) after a service purchase asking "where is my server?" - we are working on it, but since we do it manually, your ticket just wasted time which could have been spent on service provisioning. We usually do service provisioning in batches to sort orders out quickly and efficiently.

Do not ask us to provide trials or free seedboxes - we have a 14 day moneyback guarantee and a free seedbox offering which is granted on a random basis - read the terms instead.

To open a ticket via e-mail: you can just send us an e-mail - but do it from your registered e-mail address; otherwise, if it's an account or service specific question, we have no idea what you are talking about. E-mails sent to the support address from an unregistered e-mail address will be rejected completely.

 

What is IRC, how to use it?

IRC is not a 24/7 live chat with immediate responses. It's not a support channel with a person constantly waiting just for you. It's a hangout; you can get some support for generic questions, but no account specific support.

If you come to IRC and ask a question, please wait patiently; likely someone will reply to you once they check up on IRC. IRC is a lot like Skype, MSN etc.: you simply keep it open in the background and check in now and then - in some cases, people might be away for weeks.

Do not PM or /msg staff - and especially, if you do, don't just quit IRC after waiting 1 min. Nothing is more irritating than having 20 PMs to close every time you check on IRC; if you reply to them, 80% of the senders have already left, and you have just wasted 1 hr replying to questions whose askers will never see the answers.

IRC can be a nice hangout and for general chat - but if you need priority support, the only proper way is to open a ticket.

]]>
<![CDATA[Traffic usage / fair share]]> https://pulsedmedia.com/clients/index.php/announcements/274 https://pulsedmedia.com/clients/index.php/announcements/274 Wed, 18 Sept 2013 13:47:00 +0000 It has again come to our attention that people have a really hard time following the fair share rules; multiple servers were again spotted where a single user is using 90%+ of the bandwidth - not just for a day, but for weeks.

Several users are now going to get a ticket about the issue - if people don't start paying attention, we will need to take harsher action than just notifying them and, at most, limiting upload slots.

Everyone on a shared service has equal right to the bandwidth available for the server - and it's simply not right for a single user to be using 90%+ constantly and not letting others get any bandwidth.

The Super100 series is already planned to get traffic limits in the near future, when the series starts migrating to our own servers in Espoo and gets upgraded to 1Gbps downstream speeds.

]]>
<![CDATA[Super100 available again]]> https://pulsedmedia.com/clients/index.php/announcements/273 https://pulsedmedia.com/clients/index.php/announcements/273 Sat, 14 Sept 2013 16:45:00 +0000 Super100 series offerings are available again - the full backlog for this series was just cleared!

In fact, we have a few spots left over for immediate setup.

]]>
<![CDATA[Espoo: Slow progress]]> https://pulsedmedia.com/clients/index.php/announcements/272 https://pulsedmedia.com/clients/index.php/announcements/272 Fri, 13 Sept 2013 12:47:00 +0000 Due to flu, progress at the DC this week has been extremely slow - almost no progress at all.

Next week the hardware router module should arrive, and a couple of storage nodes are ready to be taken into production, along with a couple of stacks of nodes.

 

]]>
<![CDATA[Espoo BW usage]]> https://pulsedmedia.com/clients/index.php/announcements/271 https://pulsedmedia.com/clients/index.php/announcements/271 Sat, 07 Sept 2013 13:19:00 +0000 Espoo bandwidth usage has reached new heights after changing to Intel CPUs, even though connections still aren't as fast as they should be. New records are being broken daily.

We are waiting for a module for the new switch, no exact ETA, but looks like max 2 weeks for it to arrive. In the meantime we will continue prepping more hardware to get online.

Currently there are a couple of cages racked and ready to be deployed, and two storage nodes completely empty and waiting. Next week we should receive a new 20 bay chassis, for which we will use Adaptec RAID cards with BBU + SSD cache; we will likely do RAID6 on 20 drives, plus 5 or 6 SSD drives in RAID6, and tier the storage utilizing btier. It's a bit of an experiment, but one we hope will be well worth the effort; production on that node will be started up slowly to see how it goes.

We also have a significant number of new nodes on order.

]]>
<![CDATA[New router: SFP+ module unsupported]]> https://pulsedmedia.com/clients/index.php/announcements/270 https://pulsedmedia.com/clients/index.php/announcements/270 Thu, 05 Sept 2013 18:38:00 +0000 The SFP+ modules we got were supposed to be Dell compliant - this was not the case, and the module is not working. We will try to get a new module tomorrow; hopefully we can find a suitable one from local vendors.

]]>
<![CDATA[Espoo new router arrived]]> https://pulsedmedia.com/clients/index.php/announcements/269 https://pulsedmedia.com/clients/index.php/announcements/269 Thu, 05 Sept 2013 14:20:00 +0000 The new router arrived today; we will be testing it shortly, and hopefully taking it into production by tomorrow afternoon.

]]>
<![CDATA[Espoo router plan, other updates]]> https://pulsedmedia.com/clients/index.php/announcements/268 https://pulsedmedia.com/clients/index.php/announcements/268 Mon, 02 Sept 2013 14:25:00 +0000 We are ordering a hardware router, a nice Dell PowerConnect L3 unit. It should arrive within 2 weeks.

In the meantime, we will still try to fix the software router, but we are not keeping our hopes up for the moment.

 

A LOT of disks arrived, which means that after the new switches have been configured we can roll out a lot of new nodes - however, we will not do that until network speed is good; then we will roll out tens of new nodes at once, with tens more to follow in a week.

By the end of the month we want to have at least 100 nodes online, i.e. to double our production.

 

]]>
<![CDATA[router maintenance]]> https://pulsedmedia.com/clients/index.php/announcements/267 https://pulsedmedia.com/clients/index.php/announcements/267 Sun, 01 Sept 2013 12:36:00 +0000 Router maintenance was completed and some speed increase was achieved. Unfortunately it was achieved in an unexpected way, and we still consider performance suboptimal.

We will be looking into it again, and potentially changing to a hardware router after all.

]]>
<![CDATA[Espoo router maintenance in a few hours]]> https://pulsedmedia.com/clients/index.php/announcements/266 https://pulsedmedia.com/clients/index.php/announcements/266 Sat, 31 Aug 2013 15:52:00 +0000 We will be doing router maintenance at Espoo in a few hours, this will cause intermittent downtimes. Completion ETA is unknown.

 

]]>
<![CDATA[Espoo storage, estimates]]> https://pulsedmedia.com/clients/index.php/announcements/265 https://pulsedmedia.com/clients/index.php/announcements/265 Thu, 29 Aug 2013 13:44:00 +0000 Lots of new disks arrived; this means new storage nodes will be built within a couple of days, and a lot of new servers will enter production during the weekend after the router fixes.

By the looks of it we will be spending Saturday afternoon onwards fixing the router and getting new nodes online.

Currently racked and waiting for delivery:
8x2G
8x4G
2x8G

Awaiting assembly:
10x4G Atom
5+ Misc Atom

On order:
15xAMD E350
9xAtom D410
Few misc atoms
24x3Tb Seagate Barracuda Disks
5xMicron SSDs for booting
5xOCZ SSDs for caching
etc.

]]>
<![CDATA[Espoo routing issues continue and upstream issues resolved]]> https://pulsedmedia.com/clients/index.php/announcements/264 https://pulsedmedia.com/clients/index.php/announcements/264 Wed, 28 Aug 2013 13:59:00 +0000 Espoo routing issues are getting worse - basically the worst option. We had been wondering why roughly the same amount of upstream or downstream bandwidth always seems to be in use - it really is that, for some reason, the router won't route more traffic, and as load continues to grow, single connections get slower.

We are going to swap the routing software for a commercial application if a solution cannot be found within the next few days - there shouldn't be an issue like this with Linux; the router itself sits at about 99.9% CPU idle but won't route more traffic. Ultimately, we will swap to Dell PowerConnect gear if no other solution can be found.

Upstream provider issues resolved

The last of the services has been brought online, except for a very few dedicateds which were cancelled or for which there is still no user communication.
95%+ have been up since Sunday; just a few accounts remained to be handled during the past couple of days.

 

]]>
<![CDATA[State from upstream provider issues]]> https://pulsedmedia.com/clients/index.php/announcements/263 https://pulsedmedia.com/clients/index.php/announcements/263 Tue, 27 Aug 2013 14:58:00 +0000 All of the 2009+ series was restored quite quickly.

PDS + Dediseedboxes are pretty much complete; a few servers remain a question.

The 2012 series is halfway there.

That provider changed the terms once again during the weekend, making matters even worse. Very nice of them. Fortunately, only a few months remain until we can move production out of there, and this kind of thing is exactly why we've been slowly moving out for a year now.

]]>
<![CDATA[Espoo, supply chains, september estimates]]> https://pulsedmedia.com/clients/index.php/announcements/262 https://pulsedmedia.com/clients/index.php/announcements/262 Mon, 26 Aug 2013 12:18:00 +0000 Our own hardware supply chain is getting sorted out. We've had issues getting significant amounts of hardware delivered and online in the past month, and we are now settling on several hardware vendors.

We are also approaching this from the angle of leasing some of the hardware, since the need for new hardware greatly exceeds what we can cover from our own budget alone for the next month.

For the next month we have already scheduled to get online at least:
15x2G
10x4G
15x16G

Most of these have already been sold and assigned. We also need to build the storage to go with them, which is approximately 110Tb and a constant 10k+ IOPS.

If we find the development time, we will also be testing some distributed storage options on a small scale; however, they are not ready for prime time due to the need to triple or quadruple the amount of storage we have, so it will remain small scale testing just to see how the performance works out.

We are also going to test some new pieces of hardware this upcoming month to see how much higher they can drive our IOPS capabilities, along with some new software solutions for automatic storage tiering.

So it's going to be another busy month.

We are also finally working on our own blade chassis, and estimate that the first prototypes will arrive by the end of the month.

Infrastructure investments have been much, much larger than anticipated, and so has storage expenditure. It has also come to our attention that offering a 2G model might not be in our best interests - RAM costs are very low right now, and RAM provides such a boost to I/O performance that we will be considering the options in the upcoming weeks.

 

]]>
<![CDATA[2009+ fully operational again]]> https://pulsedmedia.com/clients/index.php/announcements/261 https://pulsedmedia.com/clients/index.php/announcements/261 Thu, 22 Aug 2013 16:56:00 +0000 All 2009+ series services have been handled in regard to the upstream provider issue: some got replacements, and a very few got pro rata credit refunds (X2) to choose a replacement service. In almost every migration case, the user was migrated to a newer, better server.

We are now working on bringing the remainder of the dedis and the 2012 series online.

NOTE: This does not affect all our services; about one third were affected. We are really sorry about the past 12 hrs, and this is exactly why we are working to get onto our own hardware - stability and reliability are what we strive for.

]]>
<![CDATA[Continuing to get services back online]]> https://pulsedmedia.com/clients/index.php/announcements/260 https://pulsedmedia.com/clients/index.php/announcements/260 Thu, 22 Aug 2013 15:54:00 +0000 We are working hard to get services up again any way we can. We are waiting for a response from many of the dedi customers whose renewal date is nearby, regarding their desired course of action.

Some servers will be replaced altogether with new ones, as that is the best way to do things right now. We are sorry for all the inconvenience; rest assured, we are working constantly to resolve the situation.

This situation is solely caused by the French upstream provider giving us no notice whatsoever of the changes in terms, and giving us no flexibility in solving this. In fact, the last time we called them they were quite angry; it felt like the customer care rep wanted to attack us through the phone.

All these issues are why we are working towards getting onto our own hardware - so that quality and reliability are in our own hands, not someone else's, especially someone as far away as France.

]]>
<![CDATA[No flexibility on the issue, PDS extra renewals, migrations]]> https://pulsedmedia.com/clients/index.php/announcements/259 https://pulsedmedia.com/clients/index.php/announcements/259 Thu, 22 Aug 2013 13:55:00 +0000 A response has been received: no flexibility will be granted on the issue other than "pay later and continue for 2 months". In other words - no flexibility unless servers are continued for at least 2 months.

It seems we have no option to renew under these terms for just 1 month, but we will find that out.

PDS/PDS13/DEDISEEDBOX/1GBPS/100TB:
If you make an early renewal for a server, please use "add as credit" if no open invoice exists; then, when the invoice is created, you can apply the credit or ask us to apply it. It's faster to manage that way and ensures everything gets correctly accounted for.

Migrations
We are doing a lot of emergency migrations right now for the 2009+ series; you will receive a couple of e-mails if yours is among the emergency migrations. For 2009+ we happened to have plenty of stock; next we are checking 2009+ Starter, X2, and finally the 2012 series. Value and Super100 are in another DC without these issues.

]]>
<![CDATA[Update on downtime]]> https://pulsedmedia.com/clients/index.php/announcements/258 https://pulsedmedia.com/clients/index.php/announcements/258 Thu, 22 Aug 2013 12:52:00 +0000 There are signs of flexibility, but not the kind we need: they decided on no payment now, but payment 2 months later, i.e. forcing all servers to be kept for 2 months whether we need them or not. We explained to them that this is not going to work with how our business operates, simply because we have no idea whether our customers will renew for that long, and no idea what servers will be needed in two months' time.

]]>
<![CDATA[Downtime continues]]> https://pulsedmedia.com/clients/index.php/announcements/257 https://pulsedmedia.com/clients/index.php/announcements/257 Thu, 22 Aug 2013 11:59:00 +0000 The upstream provider is so far unwilling to work with us - in the meantime we have begun the migration process to move off those servers. They have never been known for their flexibility, and it gets worse by the month. This time the flexibility is insanely bad - customer care in fact got angry that we were asking them to do their job and try to get some kind of flex on this. We have to call there every 30-60 mins to verify they are on the task - and still the only thing they will do is e-mail the headquarters/customer care mailing list.

The issue this time is the renewal terms. Our process has always worked on a weekly basis - check weekly for cancellations and renew the remaining servers weekly. This has helped minimize the price of service for our customers, as there is less extra time we need to pay for, fewer things to check up on etc.

Last night was the time for the weekly renewal of servers - but this time around the minimum term was 30 days. Since we didn't know this upfront, we hadn't reserved that kind of sum on the account, as we try to keep the balance quite low for security purposes. On top of that, most dedis are due for renewal well before the 30 days are up, many servers are to be migrated out, etc., making this extremely complicated. A bunch of servers we did renew for a month - dedis with a due date of the 18th of September or later, multiple shared servers etc. - but a large portion remains to be handled.

We will be communicating how this issue progresses.

]]>
<![CDATA[Downtime on a big number of servers]]> https://pulsedmedia.com/clients/index.php/announcements/256 https://pulsedmedia.com/clients/index.php/announcements/256 Thu, 22 Aug 2013 11:19:00 +0000 There is currently downtime on a large number of servers due to terms changed by the upstream provider without any kind of warning - we are currently trying to get a resolution to this situation, but it seems hard to find a decent resolution within an hour or two.

For now, the upstream provider's customer care is not interested in solving this issue either - but we will keep contacting them until a resolution is found.

]]>
<![CDATA[Espoo router]]> https://pulsedmedia.com/clients/index.php/announcements/255 https://pulsedmedia.com/clients/index.php/announcements/255 Tue, 20 Aug 2013 08:11:00 +0000 We need to update the Espoo router soon; it seems there is a software issue causing single connections for some machines to not run fast enough. Basically, testing from the router we achieve 10x the speed compared to some of the client nodes. Some client nodes achieve good speeds, which is very weird.

We will do a software update and see if this solves it, giving all nodes good speeds on single connections; if not, we will upgrade to Dell PowerConnect gear by the end of the year.

Even if a particular node cannot achieve good single connection speeds, it has been noted that with enough concurrent traffic it can saturate its link no problem.

]]>
<![CDATA[Espoo status update]]> https://pulsedmedia.com/clients/index.php/announcements/254 https://pulsedmedia.com/clients/index.php/announcements/254 Thu, 15 Aug 2013 14:25:00 +0000 Solo3 (Storage): All components swapped, but still giving CANNOT IDENTIFY DRIVE errors -> the PSU is probably broken; it will be swapped today and we will see if we can get this into production.

Solo4 (Storage): Getting built in the coming days; this is a new config which might require extra days to test. It will have 10x3Tb drives and a Samsung 840 PRO 256Gb SSD cache drive.

Solo1 (Storage): SSD caching enabled in write-around mode -> seems to give roughly 1 magnetic drive's worth of performance boost when utilizing eio. Not happy with the performance increase. Planning on tiered storage for the future.

Network infra:
2xExtreme Networks Summit200-48 switches purchased for 100Mbps.
2xCisco Catalyst 48 port + 2xGBIC switches acquired for 100Mbps, and the corresponding GBIC modules purchased.
Still got 1xExtreme Networks Summit200-48 waiting to be deployed.
1xBrocade switch for the SAN infra (48x1Gbps + 4x10GbE) has arrived at the mid warehouse.

Contemplating acquiring a Dell PowerConnect 6224F -> 24xSFP+ 10Gb ports, plus same series 48xGbit + 2xSFP+ switches, for December delivery from the States.

Purchased a lot of network cables, over 600 euros' worth - now we've spent some 900 euros on cables alone! :O On top of that, all the accessories for cable management.

3xCore i3 are waiting to enter production. 4xAMD E350 are waiting to enter production. 4xAtom D410PT are waiting to enter production. A misc cage of 2G & 4G Atoms (D510, D525, N2800), 8 units, is racked and waiting to enter production.

Still waiting for racking & wiring is a cage of 8xAtom D510/D525 with 2G each; the next cage of 8 Atoms is waiting for misc accessories before it can be built.

We are starting to run out of ports on the internet access side -> adding one of the Summits during the weekend.

Shipping 3x3Tb drives + 2xCore i3s to warranty today. Waiting for 3x3Tb drives back from warranty. We suspect 2 more drives are waiting for rechecking before being sent to warranty. If they have indeed failed, that makes 8 drives sent to warranty already (yikes!)

Waiting from the States:
1xBrocade switch, a few SSD drives, about 20 nodes, lots of RAM, a couple of high-end Seasonic PSUs, high-end Adaptec SAS/SATA controllers (16 ports each!) etc.
One vendor today shipped a batch of 10xAtom D2500, 10x92mm cooling fans, 3x120mm fans and 6xPSUs.

Currently in stock: about 20 more pico PSUs, 40 NICs (each node needs 2), 10+ 3Tb drives, a few 2Tb drives (spares), a bunch of 1Tb drives (which will probably never enter production), 18x120mm fans, some 30x80mm Papst fans etc. Today we are receiving a shipment of the correct plugs and other accessories for assembly.

Contemplating a purchase of a lot of 16x AMD E350s. There are cheaper models available as well, but they are out of stock right now, so we are considering these more expensive ones.

At the end of the month we should receive the first 20 bay hotswap chassis for review - we are anxious to receive it. Contemplating tiered storage for it. To do that, however, we think we need to intentionally wear some SSDs first so that their failure times will not be close to each other.

We are also wondering how long the cooling will last us, but fortunately an upgrade to a secondary cooling unit is scheduled for the winter. Our current cooling unit has just 9.4kW cooling capacity, but since it's the only one we do not dare to load it much beyond 6kW; after adding the second one our capacity should be around 15kW or more. It does help, however, that we run the DC "hot", at just shy of 30C ambient (fluctuating between 28-29). It's nowhere near as hot as Google runs, however, and our metric is disk temperature -> keeping drives below 40C is what we target.

Lots of software work remains to be done: most of the monitoring, new distro templates, better identification of physical units and better management of them, and automatic per node monitoring + reboot (along with the hardware!).

Storage is stable and performing. We are seeing higher peak BW usage daily.

 

The immediate backlog is about 30 nodes, plus 10 migration nodes.

]]>
<![CDATA[PDS Finland series updated]]> https://pulsedmedia.com/clients/index.php/announcements/253 https://pulsedmedia.com/clients/index.php/announcements/253 Thu, 08 Aug 2013 16:46:00 +0000 PDS14 / PDS Finland offers have been updated.

8G and 16G models got changed for AMD E350 and lower price!

Storage addons were added!

See all the details: http://pulsedmedia.com/personal-dedicated-servers-finland.php

]]>
<![CDATA[Network issues page]]> https://pulsedmedia.com/clients/index.php/announcements/252 https://pulsedmedia.com/clients/index.php/announcements/252 Thu, 08 Aug 2013 13:41:00 +0000 Since we now have our own network, we are starting to move network issue information to the network issues page; this will include all kinds of things happening at Espoo, including network, storage and per node issues.

You can view it at: https://pulsedmedia.com/clients/networkissues.php

For now we have scheduled Jabba1 migrations.

Let's see if this way of informing is better than using announcements for this :)

]]>
<![CDATA[istgt crash UPDATE 4]]> https://pulsedmedia.com/clients/index.php/announcements/251 https://pulsedmedia.com/clients/index.php/announcements/251 Wed, 07 Aug 2013 19:05:00 +0000 istgt had crashed, forcing a reboot (the only way to fix istgt connection failures!), which resulted in some ZFS regressions.

We are letting the storage resilver for a while before bringing everything online again.

Work has begun to bring solo1 online.

UPDATE

Jabba2 array1 decided to resync as well; the ETA for completion at the current speed is about 5 hours.

However, we will simply increase the minimum sync speed and bring things online far before that.

UPDATE2

Solo1 is building its array, ETA ~7 hrs.

Jabba2: We are using a QLogic/Dell NetXen card for its 10Gb link, and a driver issue makes it unavailable every now and then on reboot. It took a while to recognize that it was yet again failing to load, which is why bringing things back online is taking longer than expected.

Jabba1: Solo1 will be imported to Jabba1 for migrations; we are unsure whether we will update the OS images for the new IP or keep things as they are, going through Jabba1 for a while. Probably the latter for the first couple of weeks.

UPDATE 3

Some nodes are up. For some reason nodes from the Jabba1 ZFS pool are not getting online, and we are seeing some worrying things from that pool (3 disks resilvering while parity is 2??). We'll know more shortly; it might just be badly overloaded due to the resilvering process.

Solo1 Array1: very large variance in disk performance; swapping 3 of the 5 disks in an attempt to get drives with more similar performance into the array.

Solo2: Being installed.

Jabba2 Array1: Still resyncing (resilvering) at limited speed; all nodes utilizing it are up & running though.

Jabba2 Array2: Is doing initial sync (5x2Tb).

Recap: The storage coming online is two 5x3Tb arrays and one 5x2Tb array.
Solo3 might get built today as well (5x3Tb array)

UPDATE 4

We are manually running fsck on the machines which did not boot automatically - some of them have some corruption, but so far every single one has booted up fine after a manual fsck.

We are soon fully back in production.

Migrations will begin by tomorrow on the newly built arrays.

]]>
<![CDATA[Espoo storage plans]]> https://pulsedmedia.com/clients/index.php/announcements/250 https://pulsedmedia.com/clients/index.php/announcements/250 Wed, 07 Aug 2013 15:07:00 +0000 We are going to add several new arrays and several new storage servers during the next few days. We are now targeting smaller individual array sizes with individual SSD caches, to achieve more stable performance and less effect on the rest of the nodes.

Solo1:
5x3Tb 7200RPM with small SSD cache drive, RAID5
Jabba2:
5x3Tb 7200RPM with small SSD cache drive, RAID5
Solo2:
5x2Tb 7200RPM, RAID5

Jabba1:
Will import the array from Solo1 to migrate the remainder off the initial ZFS pool, which is underperforming.

This will vacate a bunch of 3Tb drives, which will be used to create a RAID6 array on Jabba1 with a 256Gb SSD cache in writeback mode. After that, in a couple of weeks, we will likely go for 12x3Tb in RAID6 or 2x5x3Tb in RAID5 for Jabba1.
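As a back-of-the-envelope comparison of the two layouts mentioned above (ignoring filesystem overhead and marketed-vs-binary capacity differences), the usable space works out as follows - RAID6 gives up 2 drives to parity, RAID5 gives up 1 drive per array:

```python
def usable_tb(drives: int, drive_tb: float, parity: int) -> float:
    """Usable capacity of a single parity-based array: total minus parity drives."""
    return (drives - parity) * drive_tb

# Option A: one 12-drive RAID6 array (2 parity drives, survives 2 failures).
raid6 = usable_tb(12, 3, parity=2)       # 30 Tb usable from 12 drives

# Option B: two 5-drive RAID5 arrays (1 parity drive each, 1 failure per array).
raid5x2 = 2 * usable_tb(5, 3, parity=1)  # 24 Tb usable from 10 drives

print(raid6, raid5x2)  # prints: 30 24
```

So the single RAID6 yields more usable space and better fault tolerance per array, at the cost of tying all 12 drives to one array's rebuild behaviour.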

We have reserved 6 of the small "Solo" type storage servers. These each have 4G of RAM for cache and 2xGbit links, able to sustain ~220M/s disk speeds (roughly triple what such an array can sustain in heavy random I/O), and each has 6 drive slots, 1 of which will be used for an SSD boot + cache drive, with the remainder in a RAID5 array.
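As a rough sanity check of the ~220M/s figure for a bonded 2xGbit link - note that the 0.88 efficiency factor below is an assumption (typical Ethernet/TCP overhead) chosen to illustrate how the quoted number can arise, not a measured value:

```python
# Theoretical line rate: 1 Gbit/s = 1000 Mbit/s = 125 MB/s per link.
links = 2
mb_per_s_per_link = 1000 / 8          # 125.0 MB/s
raw_mb_s = links * mb_per_s_per_link  # 250.0 MB/s for the bonded pair

# Assumed ~12% loss to framing/protocol overhead (illustrative assumption).
practical_mb_s = raw_mb_s * 0.88
print(round(practical_mb_s))          # prints: 220
```

So ~220M/s is consistent with two fully utilized gigabit links after realistic overhead.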

Each of these "Solo" arrays will be used for a maximum of 8 nodes, depending on the setup and target - most likely 5-6.

The initial Jabba1 pool has now suffered a 38% disk failure rate - all of them were new drives, so it was a very bad batch that we got for the first array. This, combined with ZFS basically being falsely marketed as outperforming RAID5, has been the culprit for most of the issues.

istgt as the iSCSI target will be dropped. We will soon be testing whether the RisingTide Systems iSCSI target can support the special characters in the passwords used, or whether we should manually fix all the vnode OS images with new passwords in order to migrate to it.

The past month's performance and stability have been completely unacceptable, unsustainable and incomprehensible. Murphy's law is at play here; when it rains, it pours. No one on our team has ever experienced such a run of weird issues as we have had to endure during the past month: failing new fans, power Y-cables randomly losing connection, SATA cables giving issues, a 38% first month disk failure rate from a single batch (while other batches have 0%), and almost none of the software working as expected and advertised, causing random issues - with logging so sparse that it is useless for debugging the issues in question.

But we have learned a lot during the past month and are heading in the right direction; several key portions of the infrastructure are now very stable, and we can build on that.

As a backup, we are setting up several systems with local disks - just at the high end, and just a handful.

I'm fully confident that we will reach full stability and mass production during August.

]]>
<![CDATA[cache fixed]]> https://pulsedmedia.com/clients/index.php/announcements/249 https://pulsedmedia.com/clients/index.php/announcements/249 Wed, 07 Aug 2013 14:08:00 +0000 The cache was fixed on the fly - in fact, it was slowing down the array, with too much activity being forwarded to the SSD drive.

 

]]>
<![CDATA[Flashcache, storage additions]]> https://pulsedmedia.com/clients/index.php/announcements/248 https://pulsedmedia.com/clients/index.php/announcements/248 Wed, 07 Aug 2013 13:06:00 +0000 After adding flashcache to one of the arrays, it is performing extremely slowly. It will be checked today.

This will lead to some nodes having downtime as we look into it, but we hope to keep it under 2 hrs.

We will be adding several arrays today, and hopefully bringing a bunch of nodes into production today as well.

]]>
<![CDATA[storage maintenance over]]> https://pulsedmedia.com/clients/index.php/announcements/247 https://pulsedmedia.com/clients/index.php/announcements/247 Wed, 07 Aug 2013 03:54:00 +0000 Sorry, this took a little longer than expected; we did some additional hardware-level maintenance while at it.

Servers are booting back online, and we are now targeting that the only downtimes for the next several weeks will be data pool migrations.

]]>
<![CDATA[caching addition -> short downtime max 15mins]]> https://pulsedmedia.com/clients/index.php/announcements/246 https://pulsedmedia.com/clients/index.php/announcements/246 Wed, 07 Aug 2013 03:22:00 +0000 We are adding SSD cache to one of the pools; this will cause a maximum of 15 mins downtime.

]]>
<![CDATA[back online]]> https://pulsedmedia.com/clients/index.php/announcements/245 https://pulsedmedia.com/clients/index.php/announcements/245 Wed, 07 Aug 2013 03:08:00 +0000 All nodes were set to boot back online a few moments ago, and failed disks have been replaced.

The FS is reporting data corruption errors on a couple of nodes, but fsck will most likely handle that without data loss.

]]>
<![CDATA[espoo servers down]]> https://pulsedmedia.com/clients/index.php/announcements/244 https://pulsedmedia.com/clients/index.php/announcements/244 Wed, 07 Aug 2013 02:55:00 +0000 The initial storage pool server suffered yet another failure today - an extremely bad batch of drives, now reaching a 50% failure rate within the month.

The first pool is resilvering right now, but the FS is warning that data corruption may have occurred. It will be brought back online shortly, along with the nodes.

 

]]>
<![CDATA[storage maintenance over]]> https://pulsedmedia.com/clients/index.php/announcements/243 https://pulsedmedia.com/clients/index.php/announcements/243 Mon, 05 Aug 2013 19:38:00 +0000 Maintenance is over and network speeds have been increased.

]]>
<![CDATA[Storage maintenance running long]]> https://pulsedmedia.com/clients/index.php/announcements/242 https://pulsedmedia.com/clients/index.php/announcements/242 Mon, 05 Aug 2013 18:04:00 +0000 This maintenance is taking longer than expected due to some driver issues etc. with the planned upgrades.

Exact ETA is unknown, but everything will be back operational within several hours.

]]>
<![CDATA[Storage maintenance right now]]> https://pulsedmedia.com/clients/index.php/announcements/241 https://pulsedmedia.com/clients/index.php/announcements/241 Mon, 05 Aug 2013 15:10:00 +0000 Storage maintenance for the next 45mins.

Almost all Espoo nodes will be affected.

]]>
<![CDATA[Espoo storage update and planned maintenance coming]]> https://pulsedmedia.com/clients/index.php/announcements/240 https://pulsedmedia.com/clients/index.php/announcements/240 Mon, 05 Aug 2013 07:11:00 +0000 The initial storage cluster nodes have now been built, and we have been doing extensive testing with them.

We've worked on the software side of things, and finally we have found a way to deliver both performance and flexibility; bigger-scale stability testing will ensue soon.

We've managed to get raw device performance over the network without any special "hacks" etc., and in a stable manner - in fact, with the new software on the backend, some nodes have been working and in use for a while now.

We are starting now to build up the arrays, which are initially going to be 12Tb each.

We had to downgrade our redundancy level significantly to get the performance required to pull this off, but we will restore the effective redundancy by using smaller pools, so if a pool is lost it will affect a maximum of 20 servers or so.

Eventually we will also provide optional backup services on a per node basis.

There will be a maintenance starting today as we upgrade one of the SAN gateways to faster network access. The exact time is not yet known, but the maintenance is being done so that we will not need to touch that particular machine for a long time to come. This will affect a fraction of the currently running nodes.

]]>
<![CDATA[vnode storage upgrade]]> https://pulsedmedia.com/clients/index.php/announcements/239 https://pulsedmedia.com/clients/index.php/announcements/239 Mon, 29 Jul 2013 23:36:00 +0000 All vnodes should feel a performance increase today, since storage was upgraded with an additional array of disks. We are setting up another array of disks tomorrow, and hopefully several more during the weekend.

All vnodes will be migrated to the new arrays to vacate the current one, so we can pull the 3Tb disks from it and build a nicely performing array out of them.

Most initial arrays *will* get SSD caching; later on we will add SSD cache on an *as needed* basis.

We are working to build a sustainable solution for the future: a cluster of storage servers which we initially build out before building up. This means we will keep adding storage cluster nodes with minimal disks at first, before starting to upgrade them to bigger machines.

]]>
<![CDATA[gateway maintenance upcoming]]> https://pulsedmedia.com/clients/index.php/announcements/238 https://pulsedmedia.com/clients/index.php/announcements/238 Mon, 22 Jul 2013 19:51:00 +0000 Gateway maintenance during the next 1½ hrs; it will cause sporadic network outages.

]]>
<![CDATA[Maintenance over]]> https://pulsedmedia.com/clients/index.php/announcements/237 https://pulsedmedia.com/clients/index.php/announcements/237 Sat, 20 Jul 2013 22:00:00 +0000 Progress: ZIL, Nada, Zip.

Redirecting efforts to building new storage bricks.

]]>
<![CDATA[Maintenance upcoming today]]> https://pulsedmedia.com/clients/index.php/announcements/236 https://pulsedmedia.com/clients/index.php/announcements/236 Sat, 20 Jul 2013 14:08:00 +0000 Today we are going to do maintenance on multiple parts.

The primary router will be changed to more suitable hardware, which will result in approximately 45 mins of downtime. Basically, we are downgrading it. Yes, downgrading. We over-configured the gateway by a vast margin, making it more expensive and more power hungry than required, so we will do a tiny downgrade. CPU processing capacity will remain the same, which is the only thing that matters: even during the heaviest load so far we noticed less than 1% CPU peaks due to routing.

Storage will be reworked today; due to flaws in ZFS's design, this will cause intermittent downtimes. We will move one vnode at a time, which means that vnode is down for the duration of the copy operation; we will restart it once it has been copied to another RAID array. After that we build a new storage brick and another array, to which we copy from this interim array - again causing intermittent downtimes.

The storage work is likely to last 8+ hrs due to the sheer volume of data to be moved. After this maintenance we expect multiple times the storage performance we currently see, and this should be a setup to stay with for a long period of time. We will start to roll out the storage network and cluster after we verify this gives us the "holy triangle of storage": cheap, fast and reliable.

This time around we are going to rely on an older, proven, well-known FS: EXT4 + RAID5/6. Sure, we lose snapshotting, checksumming and inline compression, but via bcache we should achieve a better usable cache for our systems and *solid* performance.

The problem is that ZFS was designed with only one goal in mind: data integrity. This means a RaidZ[1,2,3] volume only *has* single-disk performance - despite being advertised by many (including one big old-time enterprise which has always been at the forefront of industrial computing) as even faster than RAID5/6. This simply is not true. Your only other options are weaker redundancy, losing 50% of your storage by doing mirrored pairs, or a multitude of RaidZ vdevs, and thus losing much more than 50% of performance. Neither scenario is acceptable. If I have understood correctly, even with mirrored pairs you still lose 50% of performance. So this translates to roughly 2 options: 50% storage + 50% performance loss, OR 20-25% storage + 80-90% performance loss.

Bottom line: ZFS is good for backup and streaming loads. It has magnificent write speeds but poor read speeds; the cache warms up too slowly and cache expiry is vastly too long for our type of load (hotspots change frequently), and IOPS figures are very poor both in RaidZ and cache-wise. There are also absolutely zero pool reshaping features apart from expanding by swapping in larger disks. The nice features such as inline compression and checksumming unfortunately do not outweigh the lack of performance. In any case, on our load the data volumes have a 0% compressible ratio.

EXT4 + mdadm RAID5/6: RAID5 achieves near-hardware read speeds while suffering on writes, and RAID6 is not much slower than that. We expect to see 75%+ of raw hardware performance while losing 25% of the storage initially. EXT4 is overall a well-performing filesystem, especially with a little tuning; its feature set is limited, but what it does, it does well. Mdadm is slightly harder to manage (ZFS failed disks are very easy to replace!), and while write speeds suffer, writing is not a heavy load for our use: generally a 1:25 write-to-read ratio, meaning 25 times more is read than written!
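To put rough numbers on the trade-offs discussed above, here is a back-of-the-envelope sketch. It simply encodes the claims from this post (a RaidZ2 vdev delivering roughly single-disk random-read performance, mirrored pairs the quoted 50%/50% split, RAID6 around 75% of raw performance) - a rough model for comparison, not a benchmark.

```python
# Rough capacity/performance model for the layouts discussed above.
# The numbers encode this post's own claims, not measured results.

def layout_summary(n_disks, layout):
    """Return (usable_fraction, relative_random_read_perf), where 1.0
    performance equals the aggregate of n_disks independent drives."""
    if layout == "raidz2":            # single RaidZ2 vdev: ~single-disk IOPS
        return (n_disks - 2) / n_disks, 1.0 / n_disks
    if layout == "zfs_mirrors":       # striped mirrored pairs
        return 0.5, 0.5               # the post's "50% storage + 50% performance loss"
    if layout == "raid6_ext4":        # mdadm RAID6 + EXT4
        return (n_disks - 2) / n_disks, 0.75   # "75%+ of raw hardware performance"
    raise ValueError(f"unknown layout: {layout}")

for name in ("raidz2", "zfs_mirrors", "raid6_ext4"):
    cap, perf = layout_summary(8, name)
    print(f"{name:12s} usable={cap:.0%} read_perf={perf:.0%}")
```

With an 8-disk array this gives RAID6 75% usable capacity at 75% relative read performance, versus 75% capacity at roughly 13% read performance for a single RaidZ2 vdev - which is the whole argument for the switch.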

Further, ZFS on Linux fails to compile on the latest kernel; there are some hacks to get it to compile, but we haven't attempted them, while EXT4 is built into the kernel. ZoL documentation is also extremely weak, and it still has some worrying regressions. EXT4 has no regressions, and its documentation, while rarely needed, is vast.

BtrFS: it's getting more mature by the day. Reading the BtrFS site, the only major drawback we noticed is the lack of the redundancy levels we need - we'd like to use parity 2 or 3. BtrFS is in the kernel, so we will look into it someday.

After we validate that the storage is performing top-notch, we will start to roll out more machines again. We've got tens of nodes on standby to be built, tested and configured, and several already up but not yet being utilized.

Also, we are about to purchase a big lot of hard drives within several weeks. Initially, for every disk we put online we can put 1-1.5 servers online, adding the missing storage on an as-required basis. This should quickly rise to a 1:2-3 ratio as we grow the pool and add more SSD caching. Once storage starts to be fully utilized, we estimate a total need of 1.25 disks' performance per node, but the scale-out architecture we are planning allows us to trivially keep adding storage on an *as needed* basis, which lets us offer these extremely competitively priced services.

1.25 disks' performance per node is relative to modern 7200 RPM SATA disks, whether the performance (IOPS) is delivered via SSDs or magnetic drives.
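The provisioning arithmetic above can be sketched as follows; the 300-server and 120-HDD figures come from our other announcements and are used here only as worked examples.

```python
# Disk provisioning arithmetic from the paragraphs above.
import math

STEADY_STATE_DISKS_PER_NODE = 1.25   # estimated long-term need per node
INITIAL_NODES_PER_DISK = 1.5         # upper end of the initial 1:1-1.5 ratio

def disks_needed(nodes, disks_per_node=STEADY_STATE_DISKS_PER_NODE):
    """Disk-equivalents needed once storage is fully utilized."""
    return math.ceil(nodes * disks_per_node)

def nodes_at_rollout(disks, nodes_per_disk=INITIAL_NODES_PER_DISK):
    """Servers that can go online during the initial rollout."""
    return math.floor(disks * nodes_per_disk)

print(disks_needed(300))      # 300 servers -> 375 disk-equivalents
print(nodes_at_rollout(120))  # 120 HDDs -> up to 180 nodes initially
```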

]]>
<![CDATA[storage performance regression]]> https://pulsedmedia.com/clients/index.php/announcements/235 https://pulsedmedia.com/clients/index.php/announcements/235 Wed, 17 Jul 2013 21:34:00 +0000 ZoL refuses to use the cache drives, and for some reason even the magnetic drives are performing abysmally on the storage server, so we are going on-site to reboot the machine and see if we can get the custom kernel going for extra performance.

New hardware for new storage nodes is on order and will arrive by late next week; until more performance is achieved, no new nodes will be provisioned.

]]>
<![CDATA[PDS14 delivery schedules, backorders, progress state]]> https://pulsedmedia.com/clients/index.php/announcements/234 https://pulsedmedia.com/clients/index.php/announcements/234 Wed, 17 Jul 2013 16:10:00 +0000 PDS14 setups will take at least several weeks right now, and the number of backorders is only expected to grow over time.

Things are still so much a work in progress that we cannot set up new machines any faster :( We have already maxed out parts of the infrastructure and have to beef it up before new machines can be set up. That is going to happen sometime next week in all likelihood.

We think we will get new storage nodes up & running by the end of next week; it'll take several days of building, testing etc. before they are ready, and then we will put a lot of servers online each day.

We currently have such great demand that this situation will in all likelihood remain for months to come. We have more than 60 servers on backorder alone right now, and we need a total of 300+ servers for our own use; it's likely we will set these up at an 80:20 ratio - 80% customer dedicated orders, 20% our own.

A lot of development remains to be done on the management backend as well: the remote reboot code is still largely undone, and identifying nodes and their image counterparts, managing the network storage etc. all need a lot more effort put in.

We only have a handful of customer dedis operating right now, and this has given valuable feedback on where to improve our infrastructure; the most important bits regarding building new servers have now been developed.

]]>
<![CDATA[maintenance over]]> https://pulsedmedia.com/clients/index.php/announcements/233 https://pulsedmedia.com/clients/index.php/announcements/233 Wed, 17 Jul 2013 05:55:00 +0000 Maintenance over.

In the end we reverted back to the old config, as iscsitarget was not providing the performance expected and had several authentication issues.

We will be looking at alternative options, but for the time being storage will remain in a configuration freeze; no changes will be made to it for a while.

]]>
<![CDATA[storage maintenance starts shortly]]> https://pulsedmedia.com/clients/index.php/announcements/232 https://pulsedmedia.com/clients/index.php/announcements/232 Tue, 16 Jul 2013 23:10:00 +0000 Storage maintenance will start shortly, perhaps in less than an hour.

Everything is prepared, so hopefully the downtime is less than 30 mins.

 

]]>
<![CDATA[storage maintenance today]]> https://pulsedmedia.com/clients/index.php/announcements/231 https://pulsedmedia.com/clients/index.php/announcements/231 Tue, 16 Jul 2013 15:19:00 +0000 In several hours there will be some storage maintenance ongoing.

We need to reboot all machines. The reason is that on the storage side several drives have dropped in speed, which means the whole array is crawling right now. We have no idea why the SATA link is being renegotiated to a lower speed - ALL the SATA cables are high quality and new. Further, the Samsung 840 PRO SSD drive is again giving grief with hugely degraded performance, doing only 28M/s read peaks!
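For diagnosing renegotiated links, a small helper of this kind can scan dmesg-style output for "SATA link up" messages below the expected speed. This is a sketch only - the sample lines are illustrative, not actual logs from the affected array.

```python
# Flag SATA links that negotiated below the expected speed, based on
# kernel log lines of the form "ataN: SATA link up X.X Gbps (...)".
import re

LINK_RE = re.compile(r"(ata\d+(?:\.\d+)?): SATA link up ([\d.]+) Gbps")

def downgraded_links(log_lines, expected_gbps=6.0):
    """Return [(port, speed_gbps)] for links slower than expected_gbps."""
    found = []
    for line in log_lines:
        m = LINK_RE.search(line)
        if m and float(m.group(2)) < expected_gbps:
            found.append((m.group(1), float(m.group(2))))
    return found

sample = [
    "[  1.2] ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)",
    "[  1.3] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 300)",
]
print(downgraded_links(sample))  # -> [('ata2', 1.5)]
```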

While at it, we will likely also update the kernel, and likely finally change over to the IET iSCSI target.

All of this will take several hours and occur over multiple reboots etc.; production will be restored momentarily between steps as we prep the next change.

These storage maintenance breaks *will end* as soon as the cluster is up & running. We estimate that to happen in early August.
At that point we can work on individual nodes without interference on the user-facing side, as long as it's not something to do with the SAN gateway machine.

 

 

]]>
<![CDATA[some real world benchmarking]]> https://pulsedmedia.com/clients/index.php/announcements/230 https://pulsedmedia.com/clients/index.php/announcements/230 Tue, 16 Jul 2013 02:01:00 +0000 The tested system configs are an Atom N2800 with 4G RAM, and an AMD E350 with 8G RAM.

Both systems show full bandwidth utilization, and the AMD one also shows heavy inbound traffic.

Despite this, the GUI has *never* been this fast. I've never seen the file manager load up so fast, nor ever seen ruTorrent load up so fast. It's insane!

Tomorrow there is some fine-tuning work to be done and some broader changes as well: moving from istgt to IET and getting the only performance drawback fixed - latency. Sometimes there is high latency on a system which is not utilized, on data which is not utilized. The intent is to fix this by trying out different disk schedulers etc.

Plus, using IET BlockIO mode should enhance things further as well.

The disk array is working fast as well. We are seeing very nice speeds with barely any load whatsoever on the array, and the SSD caches sit mostly idle writing up new cache data -> the cache never really gets warmed up yet either!

Let's hope that as production moves forward and we start our work on the storage cluster, things keep looking this good!

 

]]>
<![CDATA[istgt issues]]> https://pulsedmedia.com/clients/index.php/announcements/229 https://pulsedmedia.com/clients/index.php/announcements/229 Mon, 15 Jul 2013 16:35:00 +0000 istgt requires a restart every time LUNs are changed, which is very cruel.

Also, any connectivity issue very easily causes the client node to remount as read-only, for unknown reasons.

Therefore, we will possibly change the iSCSI daemon within the next 48 hrs to another one which was noted to handle errors much more gracefully in tests done earlier.

istgt was chosen because it was the daemon of choice in some enterprise solutions, and we already had some experience with it.

Things are still a bit of a work in progress, and I'm sorry for the issues. This is quite a bit of research and development, as this kind of use case just isn't happening elsewhere the way we are doing things.

 

]]>
<![CDATA[Storage - another bottleneck]]> https://pulsedmedia.com/clients/index.php/announcements/228 https://pulsedmedia.com/clients/index.php/announcements/228 Sun, 14 Jul 2013 05:13:00 +0000 Today after the maintenance we noticed bad client-side performance - unlike before, when performance was fine and good.

The file system itself seems to be going really fast right now: we are seeing 1.6G/s (YES, gigabytes per second) writes and reads, with random reads still above 540M etc., with iozone -a -s 16g -r 4096 -T test.

This needs more diagnosis, but by the looks of it the culprit now is the iSCSI daemon, istgt.

There have been reliability issues as well for the past day: one volume got corrupted and several machines required reboots to regain access to storage.

]]>
<![CDATA[Storage maintenance over]]> https://pulsedmedia.com/clients/index.php/announcements/227 https://pulsedmedia.com/clients/index.php/announcements/227 Sat, 13 Jul 2013 17:27:00 +0000 Storage maintenance took much longer than expected, but the configuration error has been fixed now and performance should be increased very significantly.

All vnodes are back online, and we are prepping to bring a lot more online during this weekend.

]]>
<![CDATA[Storage maintenance today]]> https://pulsedmedia.com/clients/index.php/announcements/226 https://pulsedmedia.com/clients/index.php/announcements/226 Fri, 12 Jul 2013 13:39:00 +0000 We need to do some storage system maintenance to gain even more performance out of it.

This will happen today, from approximately 18:00 GMT, and will take a couple of hours.
We will try to keep client nodes up & running during this period, but we cannot guarantee it. Downtime is likely to occur.

]]>
<![CDATA[Production update]]> https://pulsedmedia.com/clients/index.php/announcements/225 https://pulsedmedia.com/clients/index.php/announcements/225 Fri, 12 Jul 2013 12:24:00 +0000 We have progressed slightly in taking new servers into production.

The first dedicated server has now also been delivered, along with some seedbox servers, for a total of 5 servers in production now.

We are already seeing peaks of about 300Mbps and 30+ kpps on the network side - and this was with just 3x100Mbps servers in production. Our gateway is performing admirably: at those rates CPU idle sits nicely at or very near 100% :) We wonder what happens when dynamic routing is enabled, but even then this gateway should be capable of sustaining 20Gbps+.

The storage server held a surprise: the CPU was maxing out - and this is not a slow CPU either - but the cause was quickly figured out and fixed.

Currently 16.5T is allocated from the storage. New storage nodes + drives are already on order.

]]>
<![CDATA[Production begun]]> https://pulsedmedia.com/clients/index.php/announcements/224 https://pulsedmedia.com/clients/index.php/announcements/224 Thu, 11 Jul 2013 02:42:00 +0000 Today the template got pretty much finished, and we are now operating 3 nodes already in full production, with 1 more to be set up.

Tomorrow several more nodes will be setup, and we are expecting a shipment of several nodes this week.

Hopefully next week we reach 20+ servers online, with a lot of the preorders already delivered.

 

]]>
<![CDATA[nearing production on own hardware]]> https://pulsedmedia.com/clients/index.php/announcements/223 https://pulsedmedia.com/clients/index.php/announcements/223 Wed, 10 Jul 2013 03:12:00 +0000 We finally achieved a bootable and usable system booted off iSCSI, and the base Debian 7 template is now done.

Initial production system testing on the SAN:
Core2Duo
Intel Gbit adapter
Single connection, Single Session
No jumbo frames

108M/s throughput on first attempt and 3200IOPS.

Comparison: a WD Black using the same tests achieved only 160 IOPS.

]]>
<![CDATA[delays in provisioning]]> https://pulsedmedia.com/clients/index.php/announcements/222 https://pulsedmedia.com/clients/index.php/announcements/222 Tue, 09 Jul 2013 01:16:00 +0000 New service provisioning is lagging behind due to the network and hardware build-out. It makes no sense to acquire more servers from the old locations just to migrate them out shortly after.

We are days away from production.

]]>
<![CDATA[Storage]]> https://pulsedmedia.com/clients/index.php/announcements/221 https://pulsedmedia.com/clients/index.php/announcements/221 Fri, 05 Jul 2013 02:44:00 +0000 Redundancy and performance are especially important for storage, so today we were at the DC brainstorming this.

We ended up deciding to build a big cluster of small machines and keep upgrading them on an as-needed basis. This will give much better redundancy, as things will rely on many more individual machines, and scaling out is easier on our budget.

This doesn't mean any changes to the current offerings, just to how they are going to be provisioned a month from now. After this initial expenditure, this allows us to put much more budget up front into the nodes, followed by weekly storage upgrade routines.

We are targeting a one-year goal of 120+ HDDs in the cluster with 40+ SSD caches, with a steady throughput capacity of 40Gbps+, likely to be upgraded quite fast to as high as 400Gbps, depending on need.

Also, we are going to experiment with local SSD caching sooner than expected, and have purchased a lot of mSATA SSD drives for testing purposes.

Our IP block has now been assigned, and tomorrow people are working towards getting things online.

]]>
<![CDATA[Transit up tomorrow]]> https://pulsedmedia.com/clients/index.php/announcements/220 https://pulsedmedia.com/clients/index.php/announcements/220 Thu, 04 Jul 2013 15:36:00 +0000 Our first transit link should be up and operational tomorrow. Our IP Block got assigned already.

Netboot/iSCSI boot is still undone, but the first storage brick is now stable and in its final configuration.

The i3s require 19VDC power input - an oversight in design, as it was assumed they had wide input as well - so these will be delayed until late next week to get up & running.

There are circa 15x2G (all sold already), some 9x4G models (most already sold), and 3 i3 8-16G machines for the first nodes to be set up during these first days; more will be set up weekly, on Wednesdays or Fridays.

 

]]>
<![CDATA[Getting your own DC running: Lots of unexpected things]]> https://pulsedmedia.com/clients/index.php/announcements/219 https://pulsedmedia.com/clients/index.php/announcements/219 Wed, 03 Jul 2013 02:37:00 +0000 When you are doing a build-out like ours - so many new things, and taking certain things to extremes - there are bound to be unexpected things happening.

After several days of issues with a storage brick, we noticed the SATA RAID controller was not compatible with this specific SSD make and model. Then basic SATA power connectors started failing one after another; we needed not just Molex-to-SATA Y-cables but actually soldered our own tri-cables, as no shop was open (past 9PM) and we wanted to get it up & running ASAP.

Then we get to the fun part: switches. Those mystical black boxes. We are starting to understand why people pay a premium for support contracts and keep a tight check that their model is still supported. A bunch of our switches are Brocade / Woven Systems 48xGbit + 4xInfiniband 10G, and whoa, has it been a job to get them to function at any level!

After a tedious search with very few results, we found a supposedly correct manual, only for half the example commands and half the documentation to turn out to be completely wrong - though not totally off base. It took hours to even get the switch to respond to its IP via management etc.

Just to notice that our HP Infiniband adapters are not compatible with this type of switch. The HPs are compatible with the Cisco core switches (those beasts with 960Gbps fabric), but not with the Brocades. Fortunately, we had some Myricom cards which are not really even Infiniband - sure, they utilize the same switches and the same fabric, but only support IPoIB - and these at least work with the Brocades :)

Lots of things like this. Even small pieces like rack nuts can give you a headache: we had a bunch of nuts which used smaller bolts, and a huge pile of nuts which were way too loose to be used oO;

Personally, I've never used fiber links anywhere. Yes, I've seen the stuff, but actually using it meant a little bit of research and wondering what the ??? am I supposed to do with this connector :) Thing is, I like to learn this stuff; even though we've got people on our team who know it, as "The Man"/"The Boss" I ought to know at least the basics :)

And all the way to Brocade being very sensitive to cable quality, not bringing an interface up at 1Gbps if the cable in question is not top notch. Well, I guess that's a good thing! :)

Plenty of progress and long nights at the colo room, but things are moving; we expect our transit link to be up & running in a few days.

Br,
Aleksi

]]>
<![CDATA[Colocation: Cogent port assigned, PtP remains to be done, SAN almost complete]]> https://pulsedmedia.com/clients/index.php/announcements/218 https://pulsedmedia.com/clients/index.php/announcements/218 Mon, 01 Jul 2013 21:04:00 +0000 The SAN is almost complete now; some configuration remains to be done, but the physical layer is complete.

Cogent assigned our 10Gbase-LR port today and is waiting for demarcation information. After that, the PtP connection between the buildings will be done and we can set up the public network.

Further, we developed our own temperature & climate monitoring system based on a Raspberry Pi in order to further lower costs. To this system we can add an unlimited number of sensors for roughly 3€ each.
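The announcement does not name the sensor model, but cheap 1-Wire probes such as the DS18B20 fit the ~3€ price point; with the Linux kernel's w1_therm driver each sensor shows up as a pseudo-file whose second line ends in `t=<millidegrees C>`. A minimal parser, assuming that sensor type:

```python
# Parse a w1_therm "w1_slave" payload into degrees Celsius.
# Sensor model (DS18B20) is an assumption; the driver's output format
# is two lines, the first ending in "YES" when the CRC check passed.

def parse_w1_temp(raw: str) -> float:
    """Parse the contents of a w1_slave file into degrees Celsius."""
    lines = raw.strip().splitlines()
    if not lines[0].endswith("YES"):          # CRC check failed
        raise ValueError("bad CRC from sensor")
    millideg = int(lines[1].rsplit("t=", 1)[1])
    return millideg / 1000.0

# Example payload in the shape produced by the driver:
sample = (
    "72 01 4b 46 7f ff 0e 10 57 : crc=57 YES\n"
    "72 01 4b 46 7f ff 0e 10 57 t=23125\n"
)
print(parse_w1_temp(sample))  # -> 23.125
```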

 

]]>
<![CDATA[Upgrading to PDS14]]> https://pulsedmedia.com/clients/index.php/announcements/217 https://pulsedmedia.com/clients/index.php/announcements/217 Mon, 01 Jul 2013 14:04:00 +0000 If you have an older generation PDS, or another dedicated server, you may upgrade to it. The process is as follows:

- Order and pay for the new server
- Once the server is set up, we will adjust your service due date accordingly
- You may now migrate over to the new one, or we can attempt a mirror copy
- Once migrated, you may opt for a pro-rata refund or have days transferred from the old server to the new one; the cutoff is weekly, Wednesday-Friday. Open a ticket a couple of days early at this point.

The cutoff is based on our payment cycle: we usually pay for the servers weekly, and we will provide the pro-rata refund according to that date.

 

]]>
<![CDATA[PDS14 / PDS-FI Available for order!]]> https://pulsedmedia.com/clients/index.php/announcements/216 https://pulsedmedia.com/clients/index.php/announcements/216 Fri, 28 Jun 2013 20:45:00 +0000 They are finally here!

The final specification of our own dedicated servers, with a lot of hardware already acquired and ready to be put online as soon as the network infrastructure is finished!

We estimate the first servers will be delivered before mid-July.

Pricing is also very competitive, and setup fees are waived on annual or longer orders!

See them at: http://pulsedmedia.com/personal-dedicated-servers-finland.php

]]>
<![CDATA[Super100 upgrade on our own hardware]]> https://pulsedmedia.com/clients/index.php/announcements/215 https://pulsedmedia.com/clients/index.php/announcements/215 Mon, 24 Jun 2013 18:14:00 +0000 It's been decided that Super100 will be upgraded to 1Gbps downlink when we move to our own hardware.

The hardware will be our "standard specification": Core i3/Xeon E3, 16G RAM. We will put more users per node but provide vastly more disk I/O resources; we are thinking about 15 to 25 users, depending upon performance. As traffic limits are also introduced at this time, we believe there will be no performance issues. If there are, we will upgrade the SAN connection or SAN setup, or add local cache disks.
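As a quick sanity check on the planned density, dividing a 1Gbps downlink evenly across the user counts under consideration gives the worst-case per-user share:

```python
# Worst-case per-user bandwidth share on a shared 1 Gbps downlink.

def per_user_mbps(link_mbps, users):
    """Even split of the link under full contention."""
    return link_mbps / users

for users in (15, 20, 25):
    print(f"{users} users -> {per_user_mbps(1000, users):.0f} Mbps each")
```

Even at the densest 25-user configuration each user still gets 40Mbps under full contention, which is why the 1Gbps uplink makes a higher user count viable.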

]]>
<![CDATA[Cogent link ETA]]> https://pulsedmedia.com/clients/index.php/announcements/214 https://pulsedmedia.com/clients/index.php/announcements/214 Thu, 20 Jun 2013 14:22:00 +0000 The target date for setup is the 28th of June, but it's not certain. It might take a couple of weeks from then before things are operational.

After that is operational, we will be building nodes as fast as we can, and new orders for the Super100 series will be set up on our own machines.

The first migrations will be 2009+ Starter, as for that we can utilize 2nd-hand hardware. Next up will be Super100, and these migrations will likely go in parallel.

We are doing Super100 in batches of 3-4 servers at a time.

]]>
<![CDATA[Colocation agreement signed]]> https://pulsedmedia.com/clients/index.php/announcements/213 https://pulsedmedia.com/clients/index.php/announcements/213 Wed, 19 Jun 2013 21:58:00 +0000 Colocation agreement has been signed and management network setup.

Tomorrow we will finish setting up the necessary access cards, and after midsummer festivities we will start to rack some servers.

The Cogent link is expected to be up by the 1st of July if timetables hold; there is a 3rd party which provides the transport from the teleroom we connect to, to the equipment Cogent has, and this will ultimately determine the schedule.

 

]]>
<![CDATA[Colocation progress update]]> https://pulsedmedia.com/clients/index.php/announcements/212 https://pulsedmedia.com/clients/index.php/announcements/212 Wed, 19 Jun 2013 00:34:00 +0000 We should be getting the keys, contracts etc. done within a couple of days.

A lot of hardware is arriving from a lot of different places; a couple of storage servers have been built so far, and misc servers are lying around.

We decided to do a nice upgrade to the SAN plans: we acquired several Cisco switches to act as the SAN core, each with a non-blocking 960Gbps fabric. We will have a couple of these in production, plus a spare.

Also, for the SAN side we are going to use way-over-spec cabling. Cat5e is able to do 1Gbps, but we decided to go for Cat6a and Cat7 S/FTP, which are both way over spec, Cat7 being primarily for 10GbE. This will ensure the best speeds possible, as there will be a lot of cables bunched together and it's important to eliminate as many sources of interference as possible. The shielded cabling will also have much lower crosstalk etc. Cat5e is barely able to do 1Gbps - that's not something you want in your SAN.

The next step in storage servers will feature 60 to 80Tb of raw storage each, before we finally step up to the 135 to 180Tb models.

It'll be after midsummer festivities before we can rack them etc., and we might have production capabilities by the start of July.

]]>
<![CDATA[Good news about colocation.]]> https://pulsedmedia.com/clients/index.php/announcements/211 https://pulsedmedia.com/clients/index.php/announcements/211 Fri, 14 Jun 2013 11:22:00 +0000 Good news!

We are getting keys to the place finally next week and are going to start setting things up.

Routing and networking setup has to be done prior to 10.7, because the network expert is leaving for vacation then, so we are hurrying up our schedule somewhat.

 

]]>
<![CDATA[Super100 delivery estimate]]> https://pulsedmedia.com/clients/index.php/announcements/210 https://pulsedmedia.com/clients/index.php/announcements/210 Mon, 10 Jun 2013 14:27:00 +0000 We've had quite a few orders again on this series, and once again there are server delivery delays.

A few are being set up right now, but that doesn't cover the full backlog yet.

We estimate all current orders will be fulfilled by Thursday, but this is dependent on the DC getting new servers set up.

From now on, we will try to keep a few servers on hand, ready to fulfill new orders, so that DC delivery delays do not hinder provisioning.

All this despite our contract with the DC having a 24-hour delivery clause - but we do understand their situation.

 

 

]]>
<![CDATA[Initial storage server results]]> https://pulsedmedia.com/clients/index.php/announcements/209 https://pulsedmedia.com/clients/index.php/announcements/209 Sun, 09 Jun 2013 17:29:00 +0000 Very initial results are in.

Storage server:

FX-6100 AMD CPU (6 core)
16G DDR3 ECC Kingston
4x3Tb Seagate Barracuda 7200RPM (Integrated SATA3 controller)
1xSamsung 840 PRO (Highpoint SATA3 RAID Controller)
Integrated 1Gbps network connection

Via 2x home-grade 1GbE switches (cheap)
Using below-spec network cabling (poor condition)

Testing server:
Intel QX9770
8Gb DDR2
1Gbe integrated network
Background load level 10% CPU, 64% RAM, 3% network

Unrar performance: roughly 60M/s
The I/O test ran with a background unrar active on the file server 50% of the time.

Despite stacking up this many factors working against performance, the first results were promising:
4 workers, 32kb 84% random, 85% read, 95% of workload + 256kb 50% read 50% write 95% sequential, 5% of workload.
This resembles 4x rTorrent instances + 4x unrar instances, loaded 95% rTorrent, 5% unrarring.

1620IOPS (8 magnetic drives worth)
52.19M/s  (More than double the monthly average of 100Mbps servers)
Average response time: 8.9ms  (less than expected from magnetic drives in the real world)
Maximum response time: 337.5ms  (most likely caused by the suboptimal network)

Runtime: 120 seconds, ramp up 10secs
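As a quick sanity check on the numbers above (a back-of-the-envelope sketch using the reported figures; the even per-drive split is an assumption):

```python
# Rough check of the reported aggregate: 1620 IOPS spread over the
# 8 active magnetic drives works out to ~200 IOPS per drive, which is
# plausible for 7200RPM disks under a mostly-random workload.
iops_total = 1620
magnetic_drives = 8
iops_per_drive = iops_total / magnetic_drives
print(iops_per_drive)  # 202.5
```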

More results to follow.

]]>
<![CDATA[Super100 delivery expectations]]> https://pulsedmedia.com/clients/index.php/announcements/208 https://pulsedmedia.com/clients/index.php/announcements/208 Fri, 31 May 2013 20:25:00 +0000 We are expecting a server or two still today, and more during the weekend; the backlog should be cleared by Sunday night.

 

]]>
<![CDATA[Changes on the upcoming dedis plan]]> https://pulsedmedia.com/clients/index.php/announcements/207 https://pulsedmedia.com/clients/index.php/announcements/207 Fri, 31 May 2013 00:05:00 +0000 We have decided it makes absolutely no sense to invest in last-generation or older hardware, despite its cheap buy-in cost: it offers both lower performance and higher electricity consumption, along with a very short lifespan.

Therefore the upcoming 2G plan will be in very restricted supply and not part of the main offerings.

Also, we intend to invest in the best of the current generation at each hardware level: the latest Ivy Bridge CPUs, only the largest low-voltage memory modules that can be utilized, only 80Plus Platinum PSUs, and so on. The buy-in might be steeper, but the savings in electricity and the prolonged lifespan should be well worth it.

Further, we have decided to push the launch of dedis back a little to ensure seedboxes, internal servers etc. are migrated first. The delay we are talking about is only a few weeks; basically it's about prioritizing the workload we are under right now. Never in Pulsed Media's history has there been such a high workload for such a prolonged period, but by fall things should start settling back into a routine.

We will also rework the pricing with updated electrical consumption figures from testing to be done during June, and with the updated lifespan expectations.

Pre-orders will open within several weeks, if not earlier. More pre-order details will be released at that time, along with updated schedule estimates.

 

]]>
<![CDATA[Super100 delivery delays]]> https://pulsedmedia.com/clients/index.php/announcements/206 https://pulsedmedia.com/clients/index.php/announcements/206 Thu, 30 May 2013 19:35:00 +0000 Sorry, we have not been able to set up new Super100 orders in the past couple of days, as we are waiting for new nodes to arrive.

Usually new nodes are ready within 24 hours, but the DC ran out, so we must wait for new ones.

I know you are anxiously waiting, and we hope the backlog will be cleared by the end of this week.

]]>
<![CDATA[Colocation storage server testing begun]]> https://pulsedmedia.com/clients/index.php/announcements/205 https://pulsedmedia.com/clients/index.php/announcements/205 Tue, 28 May 2013 12:17:00 +0000 After more than a month of waiting, the last of the testing gear finally arrived: 3 big boxes of precious hardware.

The two testing nodes are minimally configured: AMD 6-core 3.3GHz CPU, 8Gb DDR3 ECC, 5x3Tb Barracudas, 1x256Gb SSD cache, 2x32Gb SSD boot drives, a Highpoint RAID controller for the cache drives and a basic RAID controller for the boot drives.

Final configuration will be 32G DDR3 ECC, up to 14 magnetic drives, and 4 cache drives. The two configurations we intend to have at launch are 14x3Tb + 4x256G and 14x4Tb + 4x256G. Both will operate in parity 3 mode, allowing 3 drive failures before data is in danger. By my math, during the first 6 months there will be one disk swap across the two nodes, and during the 3-year designed lifespan 2-3 drives in total will fail. If the failure rate exceeds 4 drives across the two nodes during those 3 years, then cooling and anti-vibration need to be enhanced.
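For reference, the usable space under that triple-parity layout can be sketched as below (an illustration assuming parity consumes exactly three drives' worth of capacity, RAID-Z3 style; the cache SSDs are not counted):

```python
# Usable capacity when parity consumes 3 of the N data drives
# (assumption: RAID-Z3-style overhead; SSD cache drives excluded).
def usable_tb(drive_count, drive_tb, parity_drives=3):
    return (drive_count - parity_drives) * drive_tb

print(usable_tb(14, 3))  # 33 Tb usable on the 14x3Tb node
print(usable_tb(14, 4))  # 44 Tb usable on the 14x4Tb node
```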

Now we are testing with a total of 10 drives, both nodes running parity 1. Each drive is mounted with rubber insulation, and the drive caddy is mounted on rubber insulators for anti-vibration. We then placed all the drives next to each other to maximize heat build-up and drive them to the edge during testing.

At our colocation room the storage servers will sit next to the cold air vent, getting the best cooling. The target temperature at the hottest spot, at ~1.5 meter height in the colocation room, will be 30C. So yes, we will be running the hardware hot, but as testing by Google has shown, this does not necessarily translate into higher failure rates. If it does, we will drop the temperature 1C at a time until failure rates are at an acceptable level. In the end, we have built in redundancy by design, and it boils down to financial math: is it better to let one extra unit fail per 6 months, or to lower the temperature and pay X amount extra in cooling?

This is why we test things. For one, we need to get extra-short lockable SATA cables and better SATA power cables - the ones used with the Corsair PSU take up too much space with SSD drives - and modify the cases to accommodate an extra 120mm fan or two, maybe some additional 80mm fans too. Also, someone (me) forgot to purchase low-profile video cards!

Thankfully my apartment has a spare room for fiddling with these before we get the colocation room; they take up surprisingly much space during assembly!

Some pictures: http://imgur.com/xlmRlpO,kn15CWr,BZRXtN3

Now the testing may begin. Things we are testing: heat build-up, PSU efficiency, performance (of course) and reliability. Our intention is to push these to the limits and test all kinds of malfunction scenarios. Just assembling these nodes revealed a couple of drawbacks in the design - for example, with a standard-size ATX motherboard the last 5 hard drives are ridiculously hard to get into or out of their slots, so we might need to drop the storage server size down to just 8 drives.

No infiniband tests yet.

- Aleksi

]]>
<![CDATA[PDS-2G and PDS-m2G]]> https://pulsedmedia.com/clients/index.php/announcements/204 https://pulsedmedia.com/clients/index.php/announcements/204 Thu, 23 May 2013 10:52:00 +0000 We have restricted sales of these 2 machines, as so many newly ordered machines were being used as seedboxes, despite these models not being suitable for that usage.

If you still want one, and can assure us it's not to be used as a seedbox, please open a sales ticket.

]]>
<![CDATA[Reliability benefits of our own hardware, potential additional features]]> https://pulsedmedia.com/clients/index.php/announcements/203 https://pulsedmedia.com/clients/index.php/announcements/203 Wed, 22 May 2013 11:11:00 +0000 Reliability benefits of our own hardware

With our own hardware we can provide things which were not previously possible.

Reliability will be vastly increased; some benefits will be realized immediately upon transfer, some over time.

The instant benefits are:

  • Data redundancy: 99% of data loss cases should be eradicated, all servers will have redundant storage
  • HW redundancy: We can simply boot you onto another HW node if available, hastening recovery from CPU/RAM/Mobo/PSU failures.
Over time we can potentially also add dedicated server upgrades/downgrades, quick dedicated server storage upgrades, a pure backup service, and a full mirroring backup feature to an off-site location.

The last one is tricky and might take more than a year to realize, as the off-site location also needs to be connected with at least 10Gbit; even at 1Gbit, the vast amounts of data we need to transfer would take far too long to be relevant. A 1Gbit link can transfer only around 300-310Tb a month, so the initial sync for the first servers alone would likely take more than a month - never mind when we reach full production capacity...
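That monthly figure follows from simple arithmetic (a sketch assuming a 30-day month and a fully saturated link; real-world protocol overhead lowers it further):

```python
# Data moved by a fully saturated 1Gbps link in a 30-day month.
bits_per_sec = 1e9                  # 1Gbps
bytes_per_sec = bits_per_sec / 8    # 125 MB/s
seconds_per_month = 30 * 24 * 3600  # 2,592,000 s
tb_per_month = bytes_per_sec * seconds_per_month / 1e12
print(round(tb_per_month))  # 324 (decimal TB; ~295 TiB in binary units)
```

That lands in the same ballpark as the 300-310Tb cited once units and overhead are accounted for.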

We will likely also proceed to offer storage-oriented services, such as a backup service as a VPS for professional usage, likely starting from just 10€ a month with 1Tb of storage - 1/10th the price of Rsync.net.
In this service there would be no restrictions on how many computers or what kind of data, unlike with the "unlimited" services which limit you to 1 computer and exclude large, especially incompressible, data files such as VMware images, large video files etc.

There really is no service like this currently on offer, and we would have gladly used something like it in the past to provide data redundancy at a sensible price. Of course it would scale up to 30Tb, and later on up to petabytes.


]]>
<![CDATA[Delay on our own network and hardware]]> https://pulsedmedia.com/clients/index.php/announcements/202 https://pulsedmedia.com/clients/index.php/announcements/202 Wed, 15 May 2013 08:58:00 +0000 The colocation room provider informed us that the old tenant needs at least a couple more weeks to move out, more likely a month.

This will delay the launch until July.

This actually gave us a sigh of relief, as the schedule was previously so extremely tight that not all hardware would have arrived in time. The 10G switches are expected to arrive in the first or second week of June, for example, and the remainder of the storage server drives and components are not expected before mid-June.

]]>
<![CDATA[PMSS Update: Autodl-irssi required packages]]> https://pulsedmedia.com/clients/index.php/announcements/201 https://pulsedmedia.com/clients/index.php/announcements/201 Sat, 11 May 2013 13:38:00 +0000 We have added the packages required by autodl-irssi to PMSS.

We are slowly rolling out this release. You may now manually install autodl-irssi if your server has the /etc/autodl.cfg file.

]]>
<![CDATA[Autodl-irssi]]> https://pulsedmedia.com/clients/index.php/announcements/200 https://pulsedmedia.com/clients/index.php/announcements/200 Thu, 09 May 2013 11:00:00 +0000 Autodl-irssi

Has been tested to work on PMSS with minor modifications.

Namely, some packages need to be installed as root and, depending on usage, /etc/autodl.cfg created. Nothing fancy is really required.

Autodl-irssi will be added to the system when traffic limits are in effect, and only for series with traffic limits. The reasoning is very simple: autodl-irssi quickly leads to massively higher loads due to constantly added torrents, especially when configured too greedily. On an unlimited-traffic seedbox this would quickly lead to a situation where there simply is no bandwidth left for those not using autodl-irssi, and the overall experience would be much worse for everyone.

We will likely make an "enable" button for autodl-irssi and then provide a quick tutorial on how to use it, including for normal IRCing. We might even go as far as adding a web-based terminal for connecting to it.

 

]]>
<![CDATA[Building network: So many tasks]]> https://pulsedmedia.com/clients/index.php/announcements/199 https://pulsedmedia.com/clients/index.php/announcements/199 Wed, 08 May 2013 11:08:00 +0000 Building network: So many tasks

It feels like an insurmountable task, with so many different things to take care of. Many would think "Oh, just throw in a router and do the BGP", but in fact there is a plethora of other things you need to account for.

DNS servers: These need to be properly configured recursive servers, and 2 are required. They have to be secure, hardened, and include filtering against possible DNS attacks (poisoning, amplification etc.).

DHCP/PXE: In our case a DHCP and PXE server is also needed, with dynamic on-the-fly idempotent configuration.

Monitoring, web: A web server for displaying monitoring data - out-of-band, on a discreet connection via separate physical links and a separately routed network. This same server could serve multiple purposes with a multi-IP setup, such as inbound VPN etc. However, for security we will probably keep all tasks separate.

Monitoring, calculation: A server which does the actual calculations and draws the monitoring graphs, which are then synced to the display server. Due to the sheer volume, even "simple" calculation takes a lot of effort, and this way multiple nodes can be dedicated just to calculating graphs in order to keep up. Munin will not work here either, and safeguards are needed so that data corruption does not happen (Munin is extra happy to corrupt your data at high volume).

Management: There needs to be a management server which is primarily connected out-of-band.

These are just some of the servers to configure; of course there are other concerns as well, such as the electrical infrastructure, UPS, the racks themselves, and building and testing the servers. Never mind that all configurations need to be thoroughly tested before being taken into production.

Business-wise these are annoying tasks, as traditionally none of them are actually marketable. They are so-called "hidden" features people normally do not care about unless they break down - but it's the marketable things which bring in the business to pay for all of this.

]]>
<![CDATA[New Super100 Seedboxes!]]> https://pulsedmedia.com/clients/index.php/announcements/198 https://pulsedmedia.com/clients/index.php/announcements/198 Sun, 05 May 2013 12:48:00 +0000 740Gb for as low as 11.99€ per month and 1480Gb for as low as 21.99€ per month!
Initially only 400 Super100 signups and 200 Super100+ signups will be accepted. Grab yours while you can! :)

 

See all the juicy details at http://pulsedmedia.com/super100-seedbox.php

]]>
<![CDATA[New dedicated server additional options]]> https://pulsedmedia.com/clients/index.php/announcements/197 https://pulsedmedia.com/clients/index.php/announcements/197 Thu, 02 May 2013 08:33:00 +0000 New dedicated server additional options

One of the so far unanswered questions is upgrades for the dedicated servers. The answer is yes and no.

We need to keep the HW configuration as identical as possible, which limits the upgrades and additional options.

For the 16G model you can upgrade to 1Gbps at 50-60€, which will likely require a reboot onto another hardware node.

You can also upgrade storage. At this moment we are not sure whether we can expand the base storage or whether it needs to be delivered as another virtual device. It's likely we can create a system where the existing file system is expanded, but that might result in some downtime as the data is migrated.

Likely prices for additional storage upgrades are:
1Tb: 12€
2Tb: 20€
4Tb: 30€
8Tb: 50€

At that pricing you could have a 10T server for as low as 75€ per month, making it absolutely the cheapest high-storage solution on the market. With the 16G model it makes 16T at 105€ a month, or 1Gbps 16T at 155€ a month!
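The 75€ / 10T figure is simply the base 2G model plus the largest storage upgrade (a worked example combining the prices listed in these announcements):

```python
# 10T for 75€: PDS14-2G base (2T at 25€) plus the 8Tb upgrade (50€).
base_price_eur, base_storage_tb = 25, 2
upgrade_price_eur, upgrade_storage_tb = 50, 8
total_eur = base_price_eur + upgrade_price_eur
total_tb = base_storage_tb + upgrade_storage_tb
print(total_eur, total_tb)  # 75 10
```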

You can also migrate directly to better hardware at will - say you have the 4G model and want to upgrade to the 16G model. That results in a reboot and, at worst, a little bit of downtime as the storage is expanded. How fast the migration happens depends, of course, on whether we have hardware nodes available at that point.

Initially we will have very limited quantities of servers available, as we already have reseller requests for more than 300 servers.

]]>
<![CDATA[Preliminary new seedbox offerings]]> https://pulsedmedia.com/clients/index.php/announcements/196 https://pulsedmedia.com/clients/index.php/announcements/196 Wed, 01 May 2013 09:47:00 +0000 In the last announcement we spoke of probable server configurations at http://pulsedmedia.com/clients/announcements.php?id=195

We have also been giving thought to new seedbox offerings, and these look promising. With the introduction of strict traffic limits on torrents (FTP, SFTP and shell not counted) we can do something quite special. The base offer will likely be hosted on a Core i3 with 16-32G RAM.

The entry level plan could be:

750Gb Storage (non-burstable)
3Tb traffic limit (4xStorage)
1Gbps Up + 1Gbps Down
Fully redundant storage, optional backup features
9,99€ per month.

Storage upgrades priced at: 750Gb 5€, 2Tb 10€, 4Tb 15€
Traffic upgrades: 3Tb 9€, 6Tb 15€, 15Tb 30€, 30Tb 50€

These are just preliminary numbers, and we reserve the right to change them at any time.

The initial offering will be monthly with a setup fee, with longer cycles free of the setup fee. The initial offering will also be limited to 300 accounts.
With the upgrade options it becomes flexible for whatever you need most.

Shared-server seedboxes will be load balanced quite efficiently. We want to develop a transparent server-to-server migration system to make sure performance stays top notch for all users.

Semi-dedicated offers (4 users max per server) are not yet confirmed, but will likely be hosted on an Atom N2800 with 4G RAM, with the following specifications:
1Tb Storage
5Tb Traffic limit*
100Mbps Up + Down
Fully redundant storage, optional backup features
9,99€ per month

* Actual traffic may vary depending on the level of congestion at that particular network switch (semi-dedi offers are based on an upgraded PDS14-4G).

]]>
<![CDATA[Likely dedicated server configurations, reseller info]]> https://pulsedmedia.com/clients/index.php/announcements/195 https://pulsedmedia.com/clients/index.php/announcements/195 Wed, 01 May 2013 08:01:00 +0000 Dedicated server configurations

We will likely have configurations like these, or very close to them, at the following prices.
All of these servers will come with unlimited* traffic. Please note that when we offer 1T of storage it's actually 1024G, not ~917G.

PDS14-2G
Atom D410
2G Ram
2T Storage
100Mbps Bandwidth
25€ per month

PDS14-4G
Atom N2800
4G DDR3 Ram
3T Storage
30-64Gb local ssd cache
100Mbps Bandwidth
35€ per month

PDS14-8G
Core i3
8G DDR3 Ram
4T Storage
30-128Gb local ssd cache
100Mbps bandwidth
45€ per month

PDS14-16G
Core i3
16G DDR3 Ram
6T Storage
60-128Gb local ssd cache
100Mbps
55€ per month
1Gbps upgrade +50-60€ per month

 

It is likely we will have the 2G, 4G and 16G models from the start. We intend to keep as many units as possible of certain types to bring economies of scale into play.

The local SSD cache is something we'd like to do, but it might not make the initial release and would rather be added later. There are security concerns around cleaning data from the SSD cache drives, and thus around their endurance; we need to develop a secure-erase system which does not unnecessarily wear the drives.

Bandwidth has been set so that the fair amount per server is 30Mbps+ on 100Mbps plans, and at least 150Mbps on 1Gbps plans. We will maintain at least this amount of bandwidth capacity per server at datacenter scale.
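At that fair share, the aggregate uplink needed once the room fills can be estimated as follows (a sketch assuming every server pushes its fair share simultaneously; the 1000-server figure comes from the scalability section below):

```python
# Aggregate bandwidth needed to sustain the stated fair share
# for a full room of 100Mbps servers.
servers = 1000           # initial room capacity
fair_share_mbps = 30     # minimum fair share per 100Mbps server
aggregate_gbps = servers * fair_share_mbps / 1000
print(aggregate_gbps)  # 30.0 Gbps
```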

It is also possible that monthly and quarterly signups will entail a setup fee initially.

Scalability?

Initially we will have room for roughly 1000 servers, with an IP block of /21 (2048 IP addresses). We can rather easily acquire extra space for up to 5000 servers at this location.
We will initially have only 1 fiber pair connected; installation of new fibers will take several weeks, up to a month.
We are acquiring an AS number so we can rather easily add peering and multi-homing during the 2013/2014 winter.
We also intend to acquire separate connectivity clearly aimed at Nordic and eastern (Russia, Asia) traffic; this is already in the plans.

IP addresses will be the biggest limiting factor; if nothing else, we have the potential to use PA blocks from our upstream providers.

Storage-wise, scalability in fact keeps getting more efficient: as we reach greater scale we can drive even more performance, even more reliability and even higher cost efficiency through the system.

Launch?

We expect to be online by the 15th of June. Initially we will bring in our web servers, our own seedbox servers etc., plus a limited number of test dedicated servers. We hope that by the 1st of August we can start delivering dedicated servers at scale, and we hope to fill this room by spring 2014.

 

Resellers

We will also publish clearly defined reseller guidelines at this time.
These are what we are considering right now, and they closely resemble what we intend to do:

Bronze level
5+ servers
300€ in monthly fees
5% discount

Silver level
20+ servers
1000€ in monthly fees
10% discount
Free SMS server monitoring

Gold level
50+ servers
2000€ in monthly fees
15% discount
Free SMS server monitoring

Platinum level
150+ servers
4000€ in monthly fees
20% discount
Free SMS server monitoring
Direct phone line concierge support
Skype personal support
Reseller personalized statistics monitoring page

]]>
<![CDATA[Migration data transfer status]]> https://pulsedmedia.com/clients/index.php/announcements/194 https://pulsedmedia.com/clients/index.php/announcements/194 Fri, 26 Apr 2013 09:43:00 +0000 The data transfers have to be reinitiated at least once an hour, and that is being optimistic.

Sometimes the rsync only runs for a few seconds before the DC firewall drops the connection.
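One pragmatic workaround for a link that keeps dropping is a retry loop around rsync that resumes partial transfers - a minimal illustrative sketch, not our actual migration tooling; the helper name and exact flags are examples:

```python
# Retry rsync until it completes: --partial keeps partially transferred
# files, so each retry resumes instead of starting over.
# Illustrative sketch only - not the actual migration tooling.
import subprocess
import time

def retry_rsync(src, dst, retry_delay=10):
    # Keep re-running rsync while it exits non-zero (dropped connection).
    while subprocess.call(
        ["rsync", "-a", "--partial", "--timeout=60", src, dst]
    ) != 0:
        time.sleep(retry_delay)  # brief back-off before resuming
```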

DC has not done anything to this so far.

We have now stopped new migrations, and those for which we are still attempting to transfer data might be cancelled and returned to the old servers. This situation with the new DC is completely unacceptable.

This isn't the only issue we are having with this DC; there are a myriad of billing and management issues as well. We hope that once our own DC is completed, migrations out will be fast and effortless.

]]>
<![CDATA[Migration data transfer cut off reason]]> https://pulsedmedia.com/clients/index.php/announcements/193 https://pulsedmedia.com/clients/index.php/announcements/193 Thu, 25 Apr 2013 14:32:00 +0000 The cut-off reason, according to the DC, is inbound DoS.

Simply put, several rsyncs are triggering their overzealous anti-DoS measures, which then cut the connections - according to them. Also according to the DC, inbound data rates were much higher than should be possible.

This is their explanation, and we are baffled as to how a 100Mbps server could send that much faster, especially over a TCP connection.

]]>
<![CDATA[Migration data transfer issues]]> https://pulsedmedia.com/clients/index.php/announcements/192 https://pulsedmedia.com/clients/index.php/announcements/192 Thu, 25 Apr 2013 10:02:00 +0000 It has come to our attention that it is more the norm than the exception for data transfers from the old server to the new server to halt.

This is a completely new kind of issue, and it is most likely caused by the new DC.

We are working on this, and new migrations have been halted until it can be solved. It does not always happen, but there are now 3 different servers from which data migrations are near impossible.

If you have been migrated, and still not seeing all your data, please open a ticket.

]]>
<![CDATA[Price reduction in Value series]]> https://pulsedmedia.com/clients/index.php/announcements/191 https://pulsedmedia.com/clients/index.php/announcements/191 Mon, 15 Apr 2013 10:01:00 +0000 Price reduction in Value series

We have reduced the price for long term signups on the Value series significantly. You can view the new pricing at http://pulsedmedia.com/value-rtorrent-seedbox-100mbps.php
All monthly rates remain the same. This price change reflects lower pricing acquired through upfront investments.

All pricing is grandfathered and this is in effect only for new signups.

]]>
<![CDATA[Instant direct Bitcoin payments now accepted]]> https://pulsedmedia.com/clients/index.php/announcements/190 https://pulsedmedia.com/clients/index.php/announcements/190 Tue, 09 Apr 2013 21:59:00 +0000 Instant direct Bitcoin payments now accepted

We have accepted bitcoins for years - initially manually, then via OKPAY - but now also via BitPay for faster and more efficient service.

You will see a Bitcoin payment option on invoice views and checkouts.

]]>
<![CDATA[OVH + PMSS Issues]]> https://pulsedmedia.com/clients/index.php/announcements/189 https://pulsedmedia.com/clients/index.php/announcements/189 Tue, 09 Apr 2013 11:28:00 +0000 OVH + PMSS Issues

OVH has decided to add Apache2 to the Debian 6 template in a manner that cannot be cleanly removed: init files, binaries and configs all remain even after removing the Debian package.

So in order to install PMSS you need to first:

/etc/init.d/apache2 stop
killall -9 apache2
rm -rf /etc/init.d/apache2
rm -rf /etc/apache2
...

to clean apache2 out of the way. Otherwise the apt line with lighttpd will fail, and subsequently the rTorrent compilation.

]]>
<![CDATA[OVH Network speed caps and throttling information]]> https://pulsedmedia.com/clients/index.php/announcements/188 https://pulsedmedia.com/clients/index.php/announcements/188 Sun, 07 Apr 2013 22:49:00 +0000 OVH Network speed caps and throttling information

There has been widespread discussion lately about speed capping, throttling and shaping in the OVH network. They silently introduced new rules and regulations which basically say "great bandwidth, unless you use it". They have specifically stated that seedboxes are excluded from all bandwidth guarantees.

It looks like some are already being hit by this. However, this does not affect Pulsed Media and is not expected to. The Value series, for example, doesn't have a single server in OVH DCs.

Nevertheless, we are hastening our plans to make certain that no OVH seedbox servers remain by the end of the year. This process started months ago, and we will keep going with our long-term plans.

]]>
<![CDATA[PMSS Now with ALL dedicated servers free of charge!]]> https://pulsedmedia.com/clients/index.php/announcements/187 https://pulsedmedia.com/clients/index.php/announcements/187 Sat, 06 Apr 2013 16:12:00 +0000 PMSS With ALL Dedicated servers free of charge!

Now you can opt for PMSS to be setup on your dedicated server FREE OF CHARGE!

This includes the full multi-user torrenting system, and even pulsedBox is included!

Read more about PMSS at: http://wiki.pulsedmedia.com/index.php/PM_Software_Stack
Ready for a server? Check out PDS series at http://pulsedmedia.com/personal-dedicated-servers.php or 1Gbps lineup at http://pulsedmedia.com/1gbps-dedicated-servers.php

]]>
<![CDATA[Several hour downtime due to transformer failure and cascade software issue]]> https://pulsedmedia.com/clients/index.php/announcements/186 https://pulsedmedia.com/clients/index.php/announcements/186 Thu, 07 Mar 2013 00:42:00 +0000 We had a couple of hours of downtime today due to a transformer failure which was hard to replace. The whole server room was dark for several hours. Furthermore, DNS went down due to a cascading issue in the DNS cluster plus the DNS GLUE records.

Everything has been resolved now and should function as expected. A DNS flush may be necessary for some.

]]>
<![CDATA[VAT Increase implemented - EU customers price change]]> https://pulsedmedia.com/clients/index.php/announcements/185 https://pulsedmedia.com/clients/index.php/announcements/185 Mon, 31 Dec 2012 11:28:00 +0000 VAT Increase to 24% has been implemented now.

EU customers - those who have to pay VAT - had their service price lowered just enough that with the new VAT the price remains the same; no change.

Future orders, however, will not be discounted.

I'm really sorry about this move by the Finnish Government to raise the VAT rate, but ultimately we cannot do anything about it.

]]>
<![CDATA[Migration from 2011 servers complete]]> https://pulsedmedia.com/clients/index.php/announcements/184 https://pulsedmedia.com/clients/index.php/announcements/184 Fri, 28 Dec 2012 11:06:00 +0000 Migration from 2011 servers complete

I'm happy to announce that migration out of 2011 servers is pretty much complete now :)

The last few data transfers are still going on, but otherwise it is complete. It was a heck of a project, and many thanks to the users who made their pick upfront, easing our job.

It got costly for us - more servers and higher costs, while almost all users moved to cheaper plans - but such is the life of a provider. We are just happy the troubles are now over and we can concentrate again on making our offerings better for our users.

]]>
<![CDATA[2012 XLarge available again]]> https://pulsedmedia.com/clients/index.php/announcements/183 https://pulsedmedia.com/clients/index.php/announcements/183 Wed, 26 Dec 2012 10:23:00 +0000 The 2012 1Gbps seedbox series dedicated hard drive option is again available!

For a rather low fee you can get a dedicated drive seedbox with 1Gbps connectivity!

See all 2012 offerings at: http://pulsedmedia.com/1gbps-seedbox-2012.php

]]>
<![CDATA[2011 is hard to keep online - Please upgrade]]> https://pulsedmedia.com/clients/index.php/announcements/182 https://pulsedmedia.com/clients/index.php/announcements/182 Sat, 15 Dec 2012 07:30:00 +0000 2011 series is hard to keep online

The providers hosting the 2011 servers are extremely hard to keep online, because providers with 4-disk 1Gbps 100Tb/unlimited server offers tend to be extremely hard to work with, whether from NL, UK, Germany or elsewhere.

This time some servers are offline due to a couple of DMCA claims, and this provider is not even allowing us to check up on the servers, demanding that we file a counter-claim or terminate the user(s) in question - even though in the past many of these DMCA requests have turned out to be bogus.

This is a European company called Swiftway - we have been arm-wrestling for the past year over which law takes precedence within Europe: US or European law.

We've always handled all DMCA and copyright notices as per European and Finnish law as we are a Finnish operator and services provider. You can read about Finnish copyright laws from http://en.wikipedia.org/wiki/Lex_Karpela and http://www.finlex.fi/fi/laki/kaannokset/1961/en19610404.pdf or by googling "finnish copyright law".

Because Swiftway does not give us legal options to handle these requests, or at all we do have to drop them - Unfortunately this is the last provider with 2011 series type of servers, and replacement of the servers is very hard as typically only 2 disk systems are available from providers at the price range we are looking for. Some places do offer servers which could work as direct replacement but does fall very short on other areas (such as basic maintenance of servers) to provide the quality we seek for. With 2011 series we simply cannot lower the disk quantity, nor raise the cost per node.

Some of the trouble with the providers is that we require that we can use the service to the full extent and they are not faulty - it's odd to find that many providers are unable to make basic server installations to the spec with software raid, and other oddities which we find to be ultimately basics.

Cost limits further restrict the choice of providers, favoring those that compete on price rather than service.

So if you are still on a 2011 plan, we urge you to upgrade to an up-to-date service. More information on how long the 2011 series will remain available will follow within the week, and we hope to resolve the current issues amicably and bring the servers in question back online.

]]>
<![CDATA[VAT Increase on 1.1.2013]]> https://pulsedmedia.com/clients/index.php/announcements/181 https://pulsedmedia.com/clients/index.php/announcements/181 Wed, 12 Dec 2012 08:23:00 +0000 The Finnish government has decided to increase VAT by 1% at the start of the year.

Payments made before 1.1.2013 will still use the old VAT rate; the new rate applies from the turn of the year.

Unfortunately there is nothing we can do about this tax increase :(

We will announce later how we intend to handle the VAT increase for existing accounts and payment subscriptions.

]]>
<![CDATA[Today's downtime]]> https://pulsedmedia.com/clients/index.php/announcements/180 https://pulsedmedia.com/clients/index.php/announcements/180 Thu, 06 Dec 2012 15:16:00 +0000 There was some downtime on pulsedmedia.com and the DNS servers today. A number of customer servers were also down for a few hours.

This has now been fixed. It was a simple human clerical error, compounded by some server crashes and supplier management system maintenance that prevented us from taking corrective action. All sorted out now.

However, if you made a payment during this period (PayPal subscription), please check that it was accounted for; if not, please open a ticket.

We are really sorry for any inconvenience this may have caused.

]]>
<![CDATA[New dediseedboxes!]]> https://pulsedmedia.com/clients/index.php/announcements/179 https://pulsedmedia.com/clients/index.php/announcements/179 Sun, 04 Nov 2012 11:24:00 +0000 New dediseedboxes!

New range of dedicated seedboxes has been unveiled!
All servers are now 100Mbps Full-Duplex dedicated link and unlimited traffic!

All have dual 7200RPM S-ATA drives. An optional one-time setup fee gives a lower monthly rate.
All now come with a full PMSS license, including pulsedBox and yet-to-be-unveiled features.

Prices ranging from 47.99€ to 74.99€

Check the new servers out at http://pulsedmedia.com/managed-dediseedbox.php

]]>
<![CDATA[New Personal Dedicated Servers]]> https://pulsedmedia.com/clients/index.php/announcements/178 https://pulsedmedia.com/clients/index.php/announcements/178 Thu, 01 Nov 2012 08:13:00 +0000 New Personal Dedicated Servers

The new line of PDS offerings has been revealed! Check them out at: http://pulsedmedia.com/personal-dedicated-servers.php

Now starting from just 16.49€ per month, and first dual disk option for just 28.99€ per month!

]]>
<![CDATA[Big PMSS update coming - please prepare]]> https://pulsedmedia.com/clients/index.php/announcements/177 https://pulsedmedia.com/clients/index.php/announcements/177 Tue, 30 Oct 2012 18:41:00 +0000 Big PMSS Update is coming

Next PMSS update will do some major, long overdue refactorings in key parts.

If you have your own server utilizing PMSS, there are several things to keep an eye out for:
Cron jobs are changed altogether, and this may result in your cron jobs being overwritten during the update, so please set up your custom cron entries in the /etc/cron.d directory.
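
As a sketch of what that looks like (the file name and script path below are hypothetical, and a demo path stands in for /etc/cron.d), a custom job placed in /etc/cron.d survives crontab rewrites; note the extra user field that /etc/cron.d entries require:

```shell
# Hypothetical custom job; on a live server this file would be
# /etc/cron.d/pm-custom-backup (demo path used here for illustration).
cat > /tmp/pm-custom-backup <<'EOF'
# m  h  dom mon dow  user  command
30   3  *   *   *    root  /usr/local/bin/custom-backup.sh
EOF
cat /tmp/pm-custom-backup
```

Unlike a user crontab, each /etc/cron.d line names the user the command runs as, and the file survives PMSS replacing the system crontabs.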

If it is an OVH server with Debian 6, your kernel will be swapped to the Squeeze repository kernel, as not all features work with the OVH kernel. You can change it back by editing /etc/default/grub, setting the default to 0, and then executing update-grub. If the repository kernel is already present, no changes to GRUB will be made.
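
For illustration only (a temp file stands in for the real /etc/default/grub), the revert described above amounts to resetting GRUB_DEFAULT and regenerating the menu:

```shell
# Demo copy of /etc/default/grub; edit the real file on a live server.
cfg=/tmp/grub-default-demo
echo 'GRUB_DEFAULT=2' > "$cfg"

# Point the default boot entry back at the first menu item (index 0).
sed -i 's/^GRUB_DEFAULT=.*/GRUB_DEFAULT=0/' "$cfg"
cat "$cfg"

# On the real server, regenerate the boot menu afterwards:
# update-grub
```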

Some major backend upgrades are present in terms of logging and ensuring quality of service.

NOTE: This update has not yet been released - that will happen soon. For the Pulsed Media series, the usual rolling update will be done.

As usual, you can see the full changelog at: http://wiki.pulsedmedia.com/index.php/PM_Software_Stack

]]>
<![CDATA[PDS Series to be updated- last chance!]]> https://pulsedmedia.com/clients/index.php/announcements/176 https://pulsedmedia.com/clients/index.php/announcements/176 Mon, 29 Oct 2012 09:58:00 +0000 PDS Series being updated

The PDS series will be updated this week; this is the last chance to get the higher-CPU-spec 16G and 24G models.

The new offerings go live by the end of this week. The higher-end models get a CPU downgrade, and no 24G model will be available anymore. The new 16G model will be cheaper, with dual 1TB drives but an i3 CPU instead of an i5; it will be the best model available.

On the other hand, the low end gets an addition: the Mini 2G, featuring just a 500GB drive and a Celeron or Atom CPU, at a lower price of 13.99€ per month. The standard 2G will from now on always have an Atom CPU and a 1TB drive, at the same price as before.

These changes come from our supply chain, and we have no power over those decisions.

]]>
<![CDATA[Dediseedbox offerings]]> https://pulsedmedia.com/clients/index.php/announcements/175 https://pulsedmedia.com/clients/index.php/announcements/175 Thu, 25 Oct 2012 14:51:00 +0000 Dediseedbox offerings

Unfortunately, we had to stop sales of the current dediseedbox offerings.

New options will be introduced soon. We are still considering the range of options we want to offer in the renewed managed dedicated seedbox series; candidates run from 30€ a month all the way to more than 1000€ a month per server.

]]>
<![CDATA[NEW Value Seedbox Available!]]> https://pulsedmedia.com/clients/index.php/announcements/174 https://pulsedmedia.com/clients/index.php/announcements/174 Sat, 20 Oct 2012 15:19:00 +0000 NEW Value Seedbox Available!

A very high value seedbox has been made available! These use very strong AMD Opteron servers with a sweet 16Gb RAM allocation!

Extremely high disk quota makes this a very sweet deal!

For just 11.99€ per month you can get a 230GB disk quota and unlimited traffic! And at the high end, you can get 610GB for just 20.99€ per month! Now how sweet is that?

Check the new offers out at: http://pulsedmedia.com/value-rtorrent-seedbox-100mbps.php

]]>
<![CDATA[Changes in offerings are coming]]> https://pulsedmedia.com/clients/index.php/announcements/173 https://pulsedmedia.com/clients/index.php/announcements/173 Wed, 17 Oct 2012 19:45:00 +0000 Changes are coming

A lot of changes will be happening over the next couple of months, with new suppliers and new datacenters, along with planned software upgrades. Our plan is to do an overall update of everything Pulsed Media: lean out operations and make things better and more stable, so we can offer ever greater services.

New basic seedbox plans are to be released during this month - exactly when, we are not sure; we are still pending some final checks on a new supplier and new hardware, and fine-tuning everything for the optimal BitTorrent experience.

After that we will concentrate hard on our 1Gbps line; the planned updates are meant to stabilize it and lower operational costs while providing better performance.

More about these will follow in an upcoming newsletter.

]]>
<![CDATA[One of the reasons account sharing is not allowed]]> https://pulsedmedia.com/clients/index.php/announcements/172 https://pulsedmedia.com/clients/index.php/announcements/172 Fri, 05 Oct 2012 15:41:00 +0000 One of the reasons account sharing is not allowed

We do not allow account sharing. Unfortunately, some people do not listen and share accounts anyway. Because of that we now have to build monitoring for it, a custom-tailored malware scanner, and a global blacklist.

In the past weeks there have been several instances of malware destroying a customer's GUI; it is always a specific user, and it occasionally reoccurred after rebuilding the user space. It was finally determined that in some of these cases the cause was account sharing: tens of different IPs were seen accessing those accounts, leading to a leaked FTP password.

The index.php is changed via FTP - that is the most visible symptom - but a few other files are also changed to inject the counter.php malware.

If you are sharing your account with others, please stop. Contact support immediately to change your password if that is the case.

NEVER post your login details on any forum or chat, and NEVER give them to any 3rd party.

 

]]>
<![CDATA[IRC Support etiquette]]> https://pulsedmedia.com/clients/index.php/announcements/171 https://pulsedmedia.com/clients/index.php/announcements/171 Wed, 26 Sept 2012 15:26:00 +0000 IRC Support etiquette

People should familiarize themselves with IRC etiquette and how the medium functions.
Good places to start are:
http://www.livinginternet.com/r/ru_chatq.htm
http://mriet.wordpress.com/2012/06/21/proper-irc-etiquette/

So in short:

  • Be polite
  • Be patient. IRC is not an instant medium, despite looking like one (see IRC idling)
  • Do not send personal messages to obviously busy people (i.e. PM Staff); they tend to hate it
  • Don't ask your question and drop off 3 minutes later: you will never get an answer that way (see Be Patient). Most people do this; we are not watching 24/7, every single minute of the day, in case somebody happens to ask a question.
  • Ask your general questions in public chat; if it's a private/per-account question, a ticket is the only proper way
  • PM Staff get the same basic questions constantly, worst of all in private messages. Ask in the channel and/or see the KB/Wiki. SEE the two points above.
  • IRC is for hanging around and chit-chatting with fellow PM users; it is not the official support channel. Although, most likely your questions will get answered there if you ask in public and are patient. SEE ALL the points above.
  • Most people keep IRC open 24/7 and check what's going on a few times per day, or maybe even just every other day.
So please, come to IRC and hang around, but be mindful of proper IRC etiquette. In your seedbox account irssi is already set to join the channel, and you can use screen to keep it open and see what went on while you were AFK (Away From Keyboard).

]]>
<![CDATA[PMSS Update: new ruTorrent]]> https://pulsedmedia.com/clients/index.php/announcements/170 https://pulsedmedia.com/clients/index.php/announcements/170 Sat, 22 Sept 2012 14:48:00 +0000 PMSS Update

As usual you can check the full changelog at: http://wiki.pulsedmedia.com/index.php/PM_Software_Stack

We have now updated ruTorrent and added new plugins, highlights:

 

  • Screenshots plugin added and ffmpeg added to pkg lists
  • getfile plugin added
  • Welcome page quota parsing has been fixed for LVM based setups

 

]]>
<![CDATA[rTorrent update hiccups - patch coming]]> https://pulsedmedia.com/clients/index.php/announcements/169 https://pulsedmedia.com/clients/index.php/announcements/169 Thu, 20 Sept 2012 22:43:00 +0000 rTorrent update hiccups

This is exactly the reason why we do rolling upgrades - as usual there are problems.

The problems are not showstopper level, however - mostly an inconvenience - and a patch is scheduled to be developed and begin distribution tomorrow, 21st of September.

]]>
<![CDATA[rTorrent update]]> https://pulsedmedia.com/clients/index.php/announcements/168 https://pulsedmedia.com/clients/index.php/announcements/168 Mon, 17 Sept 2012 14:28:00 +0000 rTorrent update

Yesterday we upgraded PMSS to the newest official release of rTorrent. A few servers have already been updated, and we are doing a rolling update as usual to catch problems our QA didn't. As per usual, some new things broke with rTorrent in this upgrade, but these were deemed so minor relative to the enhancements that we decided to go ahead.

This new version supports magnet links properly: it loads them without a problem and without causing a crash. In the earlier version, magnet links worked only occasionally.

Please report any trouble you notice after the rTorrent version update on your server. You can tell the update has been done when the ruTorrent bottom bar shows rTorrent 0.9.2/libTorrent 0.13.2.

 

]]>
<![CDATA[Choosing payment method]]> https://pulsedmedia.com/clients/index.php/announcements/167 https://pulsedmedia.com/clients/index.php/announcements/167 Sun, 16 Sept 2012 12:00:00 +0000 Choosing payment method

In case you haven't noticed, you can choose a payment method by viewing the invoice and using the drop-down to select which method to use.

We currently accept: PayPal, OKPAY and IBAN wire transfers.

We also accept Bitcoin payments manually; just open a ticket with billing if you want to pay with bitcoins.

]]>
<![CDATA[Problems with PPTP not loading all pages under Windows? Here's solution]]> https://pulsedmedia.com/clients/index.php/announcements/166 https://pulsedmedia.com/clients/index.php/announcements/166 Thu, 13 Sept 2012 14:55:00 +0000 PPTP (VPN) Connection not loading all web sites?

The cause is a too-high MTU value; this is quite easy to solve with a single command on the command line.

Unfortunately this cannot be set server-side. We will look into fixing it directly on the server, but for now a quick workaround is to lower the MTU value to 1250.

Click the Start button and type "cmd" in the search field; you will see a black console icon. Right-click it and choose "Run as administrator".

You will then see the console and there type in this command:

netsh interface ipv4 set subinterface "VPN Connection" mtu=1250 store=persistent

Replace "VPN Connection" with the name you used for the VPN connection. You can also list the connections with:

netsh interface ipv4 show subinterfaces

 

]]>
<![CDATA[OKPAY Support has been dropped]]> https://pulsedmedia.com/clients/index.php/announcements/165 https://pulsedmedia.com/clients/index.php/announcements/165 Mon, 30 Jul 2012 17:40:00 +0000 OKPAY Support has been dropped

OKPAY made it practically impossible to get our account verified, with ridiculous demands; the only way to register at any sensible cost would have been to incorporate in the United States.

Their verification process is by far the hardest we have encountered, and impossible to satisfy as a Finnish company. A verification process we previously thought hard seems child's play compared to this one.

Therefore we are dropping OKPAY support for that reason and several others:

* Hidden premium on Bitcoin payments
* WHMCS Module is broken and automatic accounting of payments does not work
* The actual fees are rather high

On the plus side, their customer service is actually reachable; a shame they chose not to be helpful.

New alternative payment methods will be introduced in near term.

One such alternative was Bit-Pay, but their customer service is nearly inaccessible, and in a quick review we found multiple potentially fatal flaws in their WHMCS API implementation. We are not sure whether these problems extend to their internal system. As a customer we also noticed several usability flaws. Due to the security concerns we opted not to use Bit-Pay.

With Payza, formerly known as AlertPay, we are going through the verification process and will implement it after we ensure complete usability.

 

]]>
<![CDATA[Today's downtime was an oldschool attack]]> https://pulsedmedia.com/clients/index.php/announcements/164 https://pulsedmedia.com/clients/index.php/announcements/164 Mon, 30 Jul 2012 10:57:00 +0000 Today's downtime has been solved

Today's attack used a rather old method: a mail bomb.

The curious part is how it passed Google's filters, as we use Google for support e-mail and the like precisely for proper spam filtration. Not only did it pass Google's filters, but search could find the threads while the built-in custom filters did nothing. Thousands upon thousands of e-mails arrived in a very short period, and neither Google nor Yandex had any filtration for such an occurrence.

Further, it stuns us how WHMCS lacks any protection whatsoever: WHMCS not only checks these bombed mails and then sends a reply, but the POP import processes keep piling up. There are no locks, no "alive"/"already running" checks, nothing to verify whether the e-mail import job is already running, nor even rudimentary filtration such as a threshold of e-mails from a single address/domain/IP.

We'll need to code our own layer on top of the WHMCS e-mail import to add these checks and stop those processes from piling up.
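
The missing lock can be sketched with flock(1); this is one possible approach under our assumptions, not what WHMCS ships, and the lock file path and import command are hypothetical:

```shell
LOCK=/tmp/whmcs-pop-import.lock
(
  # Non-blocking lock on fd 9: if a previous import is still running,
  # exit immediately instead of letting processes pile up.
  flock -n 9 || { echo "import already running, skipping"; exit 0; }
  echo "running POP import"   # placeholder for the actual import command
) 9>"$LOCK"
```

The lock is released automatically when the subshell exits, so a crashed import never leaves a stale lock behind.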

The end result of this mail bombing was that the system was overwhelmed by the sheer number of processes, which in turn overwhelmed MySQL. All the errors and e-mails then filled up the system partition, stopping anything else from working.

What stuns us most is that this 90s-style attack can still work today: with all the work against spam and all the tools available to us, it still gets through and remains a viable attack vector. To stop things like this we specifically chose Google as the e-mail host for pulsedmedia.com domain accounts, thinking their filtration system was excellent and that they have enough hardware online to stop even the largest attacks of this kind. This attack was rigged for exponential growth solely by exploiting Yandex, Google and our resources.

The situation is now alleviated and some failsafes have ALREADY been implemented; more will follow in the near future.

We are very sorry for the inconvenience this may have caused to you.

]]>
<![CDATA[Bandwidth graphs]]> https://pulsedmedia.com/clients/index.php/announcements/163 https://pulsedmedia.com/clients/index.php/announcements/163 Sat, 28 Jul 2012 15:34:00 +0000 Bandwidth graphs

We do not usually publish these kinds of metrics, but they are indeed followed and monitored :)

We took a small random selection of 2009+ Starter and 2012 series servers; here are the graphs. As you can see, our services offer very high value and stable performance. Some of the variation is due to users changing on the servers, and these graphs do not show short bursts, as they are per week or per month.

2009+ Starter

2012 Series

 

]]>
<![CDATA[Got a site? A lot of people listen to you? Try our affiliate program!]]> https://pulsedmedia.com/clients/index.php/announcements/162 https://pulsedmedia.com/clients/index.php/announcements/162 Fri, 04 May 2012 11:25:00 +0000 We have an affiliate system which pays out GREAT! 7.5% recurring revenue.

That's quite a great earning rate, especially since it's recurring and also applies to further sales by the referred customer.

So try it out now!

 

The minimum withdrawal amount is 40€/AUD/USD, which you may take as credit or as a cash payment - so you can use it to renew your services, or take it as cash. The affiliate payment delay is only 15 days, so you get your earnings quite fast after a payment.

]]>
<![CDATA[New wiretransfer payment account]]> https://pulsedmedia.com/clients/index.php/announcements/161 https://pulsedmedia.com/clients/index.php/announcements/161 Sun, 22 Apr 2012 14:15:00 +0000 New payment account

Please note that if you are using SEPA wiretransfer to pay your invoices our bank account has changed to OP Pankki, and you should use this account in future.

]]>
<![CDATA[Affiliate earnings!]]> https://pulsedmedia.com/clients/index.php/announcements/160 https://pulsedmedia.com/clients/index.php/announcements/160 Sat, 21 Apr 2012 12:10:00 +0000 Did you know that you can earn with us 7.5% recurring revenue?

We offer an affiliate system with a 10€ signup bonus, a 40€ withdrawal limit and 7.5% commission on recurring payments!
It doesn't even need to be a sale you directly referred: as long as the customer signed up using your affiliate link, all of his or her further orders also give you 7.5% commission! That's quite cool!

Say you refer a total of 10 people who buy servers and high-end seedboxes at an average monthly charge of 30€: that gives you 22.50€ per month of effortless, recurring income!

And if you are a serious website owner with, say, 500k unique monthly visitors: if even 0.5% of them purchase, mostly at the lower end for a 20€ average, that is 2,500 sales for a wicked awesome 3,750€ in monthly recurring revenue! Now *HOW COOL IS THAT*? Just imagine if your website also happens to serve the target audience: techies, P2P aficionados and the like!
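
The figures quoted above are plain arithmetic on the 7.5% commission rate and can be checked quickly:

```shell
# Verify the affiliate-earnings scenarios quoted above (7.5% commission).
awk 'BEGIN {
  printf "10 referrals x 30 EUR x 7.5%% = %.1f EUR/month\n", 10*30*0.075
  printf "500000 visitors x 0.5%% = %d sales\n", 500000*0.005
  printf "2500 sales x 20 EUR x 7.5%% = %.0f EUR/month\n", 2500*20*0.075
}'
```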

Payouts via PayPal or SEPA wire transfer, monthly or twice a month upon request.

Extra support for serious referrals, such as creating banners, campaigns, promotions etc. to further increase your earnings.

]]>
<![CDATA[Plenty of PDS-24G 1Gbps in stock]]> https://pulsedmedia.com/clients/index.php/announcements/159 https://pulsedmedia.com/clients/index.php/announcements/159 Sun, 01 Apr 2012 22:29:00 +0000 PDS-24G 1Gbps in stock

We currently have plenty of PDS-24G servers with a 1Gbps network connection & speed in stock, all of which also have an upgrade to a Xeon-series CPU.

Price is 79.90€ per month.
Get yours now!

]]>
<![CDATA[rTorrent version update pending]]> https://pulsedmedia.com/clients/index.php/announcements/158 https://pulsedmedia.com/clients/index.php/announcements/158 Tue, 13 Mar 2012 16:32:00 +0000 rTorrent version update pending

We are now at the final testing steps for the next rTorrent version. We are choosing between the official 0.8.9 and a slightly newer repository version with some bug fixes, aiming for the most stable option.

The 8th of October, 2011 version had issues with autotools, while the official release from 4 months earlier has those working, but there are important bug fixes between the two versions. You can see the changelog at: http://libtorrent.rakshasa.no/log/

The final decision will be made by the end of the week, and some Debian 6-specific updates will go into the same patch. Servers running rTorrent 0.9.0 will be prioritized for patching, and it will be a rolling upgrade, updating only a few servers at a time as usual.

We are starting to deprecate Debian 5 support and looking to target Debian 6 only. The changes are not big: just a few paths, and a couple of extra steps during installation which must be done for Debian 6. We are targeting mid-April for Debian 6 to become the primary distribution for PMSS.

]]>
<![CDATA[rTorrent version]]> https://pulsedmedia.com/clients/index.php/announcements/157 https://pulsedmedia.com/clients/index.php/announcements/157 Mon, 05 Mar 2012 16:50:00 +0000 rTorrent version

During last month's patch we opted to use the latest rTorrent version available, 0.9.0 with libTorrent 0.13.0. What else would make sense to choose?

Little did we know that many trackers do not accept the latest rTorrent version, and that autotools are broken in it, among other things.

We attempted to roll back to the one-step-older version, which compiles just fine but does not work: it is missing some code required to run, and crashes with an error on start.

We are now considering either downgrading all the way back to the roughly 1½-year-old version, or testing different SVN revisions to find one which works.

If you can suggest a version combination which actually works as expected, please do not hesitate to contact support and let us know. After we have chosen a new version and finished QA testing, it will be distributed to servers within 72 hours.

Problems caused by something as simple as this are the reason we opted not to push rTorrent updates for a very, very long time. Past experience with rTorrent updates has shown that things get broken left and right in each update.

]]>
<![CDATA[pulsedBox working with 2012 and a short downtime with pulsedBox logins]]> https://pulsedmedia.com/clients/index.php/announcements/156 https://pulsedmedia.com/clients/index.php/announcements/156 Thu, 01 Mar 2012 15:40:00 +0000 pulsedBox updates

PulsedBox has been updated to work with the 2012 series offerings, so you can now use it in conjunction with 2012 servers.

During this time we had to do some DNS debugging, which resulted in a short downtime for pulsedBox logins. For certain tools only, lookups for entries that were not found got .com appended to the query and were redirected to search.com, which at first looked like a DNS poisoning attack. In the end the culprit was the OS resolver, not BIND itself: without a search line in resolv.conf, it appended the .com. This was not happening on any other machine, so it was quite curious and took a while to figure out.
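
As an illustration of the mechanism (the domain name below is a placeholder, and a demo file stands in for the real /etc/resolv.conf), an explicit search line controls what suffix the OS resolver appends to unqualified names instead of leaving it to resolver defaults:

```shell
# Demo resolv.conf; the real file lives at /etc/resolv.conf.
cat > /tmp/resolv-demo.conf <<'EOF'
nameserver 127.0.0.1
# With no "search" line, the suffix appended to unqualified lookups is
# left to resolver defaults; pinning it makes the behaviour explicit.
search example.internal
EOF
grep '^search' /tmp/resolv-demo.conf
```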

]]>
<![CDATA[Setting reverses on massive scale?]]> https://pulsedmedia.com/clients/index.php/announcements/155 https://pulsedmedia.com/clients/index.php/announcements/155 Tue, 21 Feb 2012 14:34:00 +0000 Setting reverses on massive scale?

Some of our customers have a lot of IPs, and some require reverse DNS set for all of them. You can now submit a list of the reverses you want set up, even for hundreds of IPs at once.

]]>
<![CDATA[Stock status updates, ending of US Premium VPS]]> https://pulsedmedia.com/clients/index.php/announcements/154 https://pulsedmedia.com/clients/index.php/announcements/154 Mon, 20 Feb 2012 20:03:00 +0000 Stock status updates

We have just updated the stock status for various services.

For some services we are considering adding a lot of capacity by early next week.

Deliveries currently are slightly backlogged as new servers are being setup.

All 2009+ Starter series servers are full, and the same goes for 2012 (except one vacant 2012 Large slot), so these services are backlogged until new servers have been set up and tested. The normal schedule for these should resume by the end of the week.

Of the PDS series we have only a few spares left before the next batch of servers arrives later this week. We hope that by mid-March we can again deliver PDS-2Gs within 24 hours, as in the past.

Ending US Premium VPS Line

The US Premium VPS line of services never saw high demand, so it will be discontinued, and sales of the series have been stopped. Migration of existing US Premium VPS to the PVS series has already begun. Servers will be shut down by the 1st of April at the latest, possibly as early as the 1st of March.

]]>
<![CDATA[Restoration state]]> https://pulsedmedia.com/clients/index.php/announcements/153 https://pulsedmedia.com/clients/index.php/announcements/153 Fri, 17 Feb 2012 19:08:00 +0000 Restoration state

15 servers have been reinstalled; 18 have been confirmed affected in total, so only 3 servers remain to be reinstalled.

If your server was one of those affected according to our lists, a ticket has already been opened for you. You will receive e-mails with new login credentials etc.

Also, this does not affect seedboxes, VPS, 1Gbps servers, or the newer personal dedicated servers. Potentially affected services are only the PDS series and the dedicated seedbox series, and even those only in part: access was not granted to all servers with this tech administration account.

]]>
<![CDATA[Restoration going really fast]]> https://pulsedmedia.com/clients/index.php/announcements/152 https://pulsedmedia.com/clients/index.php/announcements/152 Fri, 17 Feb 2012 18:17:00 +0000 Servers restored really fast

Only 15 servers have been confirmed as affected so far. 8 of these have already been reinstalled, with new logins e-mailed. The remaining 7 are currently reinstalling. A few are having issues with netbooting, so they are delayed and being inspected by DC technicians.

So according to the currently confirmed status, the reach of the breach was a tiny fraction of all servers: very serious, but fortunately just a small subset was affected.

Please contact support if you suspect your server was affected, or if your server was affected and you have not received new details within 2 hours.

 

]]>
<![CDATA[Security breach via a mail forwarder]]> https://pulsedmedia.com/clients/index.php/announcements/151 https://pulsedmedia.com/clients/index.php/announcements/151 Fri, 17 Feb 2012 16:36:00 +0000 Security breach via a mail forwarder

We have a separate account for tech admins of our dedicated server customers; this way our dedicated customers have limited access and separate logging, and important messages can be easily forwarded to our helpdesk.

Unfortunately, this e-mail account also forwarded mail to the same ex-employee who took action against us in late November; his was the only forwarder. This allowed the attacker to change the login password for this tech account. Access was made at 14:03 GMT from IP 123.243.98.195; access was limited again at 15:50, and all servers were removed from this tech account by 16:30 GMT.

Unfortunately, during this time he had sufficient time to queue several dozen dedicated servers for reinstallation. Restoration has already begun for those dedicated servers which were reinstalled, but this process will take a little time. So far it seems roughly 20 servers were reinstalled.

Restoration will be done as swiftly as possible, but dedicated seedboxes and Windows servers are very slow to reinstall.

Please contact support if you suspect your server was a victim of this attack; the process is first to do a hard reboot if there is no reinstallation ticket on file, and then to verify the state.

All users affected will be compensated for the loss of service time.

 

]]>
<![CDATA[PMSS Bug fix release]]> https://pulsedmedia.com/clients/index.php/announcements/150 https://pulsedmedia.com/clients/index.php/announcements/150 Fri, 10 Feb 2012 17:20:00 +0000 PMSS Bug fix release

We have made a bug fix release today, fixing the most notable bugs.

  • Torrent creation on newest ruTorrent
  • Quota plugin on newest ruTorrent
  • Recycle bin for ajax file manager
Among other smaller fixes and refactoring.

]]>
<![CDATA[New dediseedboxes now unmetered]]> https://pulsedmedia.com/clients/index.php/announcements/149 https://pulsedmedia.com/clients/index.php/announcements/149 Thu, 09 Feb 2012 14:26:00 +0000 New dediseedboxes now unmetered

Dediseedboxes ordered after the afternoon of 9th February will have unmetered traffic @ 100Mbps. Existing dediseedbox owners can open a support ticket to have their server updated to unmetered traffic at the new price.

Dediseedboxes on a 1Gbps link will be capped at 32TB upstream. Downstream and internal traffic will remain unlimited.

View all the dediseedbox offers to see if one suits your needs! :)

]]>
<![CDATA[Known bugs in PMSS currently after yesterdays patch]]> https://pulsedmedia.com/clients/index.php/announcements/148 https://pulsedmedia.com/clients/index.php/announcements/148 Sun, 05 Feb 2012 18:26:00 +0000 Several known bugs

Some of these are older bugs that keep generating tickets (a patch is planned), but the ruTorrent + rTorrent update of yesterday introduced several new ones. Only a small number of servers have been updated to the new ruTorrent + rTorrent. We are working on all of these, but fixes cannot be done per user or within minutes; they take development effort, and time for this is scheduled in the upcoming week.

rTorrent + ruTorrent is by far the superior software for this usage type; however, the maintenance hurdles are painstaking, and upgrades at our scale are a big undertaking, not only because backwards compatibility is usually broken but also because each upgrade introduces a plethora of new bugs.

  • Browser cache has to be cleared manually due to updated JS in some cases
  • Torrent creation from ruTorrent is broken for unknown reason
  • Remove + Delete on ruTorrent doesn't work in all cases
  • Traffic plugin is broken for some users, but not all
  • ajaXplorer data deletion results in "Permission denied" in most cases
  • Quota display plugin broken due to new ruTorrent
All the ruTorrent errors are caused by the upgrade breaking things (just as rTorrent updates break its config so that rTorrent won't even launch). They have changed things around in the core of ruTorrent, which results in these not working.

In details:
Creating torrents plugin: even the old version doesn't work on the new ruTorrent. No errors are given, and when it does do something it produces a corrupted torrent. No understandable error messages are shown. Defaults reportedly worked on the earlier version.

Quota plugin: This is the same as before, no changes whatsoever done: Some JS function has been broken on new ruTorrent release.

Remove + Delete: No error messages, no explanation or anything. Simply broken, does nothing.

Browser cache: the simple solution would be to append something like ?version=RUTORRENT_VERSION to the script URLs in the HTML, but as the URLs are currently unchanged, browsers keep serving the cached JS files, so some users have to flush their cache manually.
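As a minimal sketch of that cache-busting idea (the file name and version string here are hypothetical, not ruTorrent's actual layout), the JS URLs in a page can be rewritten to carry a version query parameter so each release gets a fresh URL:

```shell
# Hypothetical example: append a version parameter to .js URLs so browsers
# refetch the scripts after an upgrade instead of serving a stale cache.
VER="3.0"                                  # assumed ruTorrent version string
cat > /tmp/index.html <<'EOF'
<script src="js/webui.js"></script>
EOF
# Rewrite js references: js/webui.js -> js/webui.js?version=3.0
sed -i "s|\.js\"|.js?version=${VER}\"|g" /tmp/index.html
cat /tmp/index.html
```

Because the URL changes with every release, the browser treats the script as a new resource and no manual cache flush is needed.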

Traffic plugin: apparently it simply does not save data for some users, but works for others. No error messages.

ajaXplorer: this is an older bug (the only one on this list) and more deeply rooted. A fix is in the works, but it is quite a big update.


ruTorrent doesn't produce human-readable (easily, at least) error messages, but big, unorganized, cluttered XML dumps, so reading the error log is very hard without parsing it programmatically.

It surprises us how many things were broken by the ruTorrent upgrade without being noticed in testing.

This is the reason upgrades are not done as soon as they are released: the rTorrent + ruTorrent stack is generally maintained in a way that breaks backwards compatibility and other things without any obvious, logical reason. In rTorrent's case it feels like "just because we can break things": every single time, the configuration format has been changed so that rTorrent fails to start with "unknown parameters" instead of giving a warning. That is just one example of why these upgrades take quite a bit of time.

]]>
<![CDATA[PMSS Updates: new rTorrent + ruTorrent + plugins]]> https://pulsedmedia.com/clients/index.php/announcements/147 https://pulsedmedia.com/clients/index.php/announcements/147 Sat, 04 Feb 2012 16:15:00 +0000 PMSS Software Release 04/02/2012

PMSS has been updated as of today with a great deal of new features and upgrades, including some highly anticipated ones.

  • rTorrent + ruTorrent patched to latest release
  • mediainfo added
  • php5-geoip extension added: flags etc. on peer/downloader list
  • New ruTorrent plugins: cpuload + theme and others!
  • Resellers: Preliminary work for whitelabeling
Plus other small, and nice features! See the full change log at: http://wiki.pulsedmedia.com/index.php/PM_Software_Stack

A small number of servers will be upgraded today, and updates to servers will be done in a rolling manner. If you want your server to be updated early, please open a ticket.

We used the official Rakshasa releases for this one, not an SVN/GitHub version. You can get the versions at:

]]>
<![CDATA[Has network performance fluctuated for you lately? If so, read on]]> https://pulsedmedia.com/clients/index.php/announcements/146 https://pulsedmedia.com/clients/index.php/announcements/146 Fri, 03 Feb 2012 15:21:00 +0000 Usually we receive very few tickets about transfer speeds, but lately the number has bumped up, along with other related network issues. These are still minimal, but enough to warrant further inspection.

So if you are currently experiencing packet loss or lower-than-usual transfer rates, please submit a ticket with traceroute and ping results, along with what you were expecting to see. We will submit these to the DC for inspection. We also need to know who your ISP is.

Thanks.

]]>
<![CDATA[Electrical work maintenance window on some servers]]> https://pulsedmedia.com/clients/index.php/announcements/145 https://pulsedmedia.com/clients/index.php/announcements/145 Thu, 02 Feb 2012 16:03:00 +0000 Electrical work on a small group of our servers will be conducted during Sunday-Monday night, 23:00 GMT to midnight.

Downtime shouldn't be greater than 1 hour on these servers, and this affects only a really small fraction of servers (a handful).

 

]]>
<![CDATA[PMSS Update released]]> https://pulsedmedia.com/clients/index.php/announcements/144 https://pulsedmedia.com/clients/index.php/announcements/144 Sat, 28 Jan 2012 16:37:00 +0000 This release adds some highly desired features such as:

 

  • SSL Support for web interface, simply change from http to https
  • WebDAV via SERVERNAME.pulsedmedia.com/webdav-USERNAME/ ** Warning: beta, your mileage may vary. Will not work with the Windows integrated client; for Windows use BitKinex, WebDrive or some other application.
  • SFV checking via cfv
  • Removed SSL key creation from the installer (deprecated method)
  • The update automatically creates an SSL certificate
Initial work has also begun on other security improvements.
As usual, all you need to do is run /scripts/update.php

 

]]>
<![CDATA[2012 going strong!]]> https://pulsedmedia.com/clients/index.php/announcements/142 https://pulsedmedia.com/clients/index.php/announcements/142 Sat, 21 Jan 2012 13:30:00 +0000 Take a look at our new 2012 series, which packs major disk capacity into a 1Gbps server at quite an affordable price point.

These offer far more value per euro than the 2011 series, along with far more CPU power and RAM.

See them at: http://pulsedmedia.com/1gbps-seedbox-2012.php

Use coupon code 2012AnnounceJan21 for a 25% discount on the first month!

]]>
<![CDATA[Support response times enhanced]]> https://pulsedmedia.com/clients/index.php/announcements/141 https://pulsedmedia.com/clients/index.php/announcements/141 Sun, 11 Dec 2011 13:47:00 +0000 Support response times

Due to recent staffing changes, support times have improved greatly; response and resolution times are a fraction of what they used to be. For example, the median first response time has dropped to 1hr 8min, which exceeds by a big margin the gold standard set by certain large providers for their price-included support. In fact, it matches their very expensive SLA-guaranteed response times.

The average is still a little too high for our taste at 3hr 43min, but improvement is coming in January with new staff hired. Closure time, meaning the time between first and last reply, is down to 14hr 59min.

All these metrics are much better than just a month ago; this is one of the best periods in Pulsed Media's history, and certainly the best of 2011.

]]>
<![CDATA[100Tb 1Gbps Dedicated Servers Release!]]> https://pulsedmedia.com/clients/index.php/announcements/140 https://pulsedmedia.com/clients/index.php/announcements/140 Fri, 09 Dec 2011 08:25:00 +0000 100Tb 1Gbps Dedicated Servers Release!

We have now released 1Gbps 100Tb traffic servers! These offer a true 100Tb where 1 kilo = 1024, not 1000; downstream is not counted against the limit, and internal traffic is free.

This trumps the usual 100Tb offerings, where 1000 is used instead of 1024 and all traffic, including DC-internal traffic, is counted against the limit.
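The difference between the two accounting methods is easy to quantify with a bit of bash arithmetic (a back-of-the-envelope sketch, ignoring the upstream/downstream accounting differences):

```shell
# 100Tb counted with 1024-based units vs 1000-based units
binary=$((100 * 1024**4))    # 100Tb where 1 kilo = 1024 (TiB-style)
decimal=$((100 * 1000**4))   # 100Tb where 1 kilo = 1000 (TB-style)
# Extra allowance the 1024-based accounting gives, expressed in 1000-based Tb:
echo $(( (binary - decimal) / 1000**4 ))   # integer part; ~9.95Tb extra in full
```

In other words, 1024-based accounting alone grants roughly 10% more usable traffic than the same "100Tb" counted with 1000-based units.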

All servers have a 1Gbps full-duplex dedicated link and are located in several different state-of-the-art datacenters in France. Extra options include up to 256 extra IPs per server, making these an excellent platform for VPS hosting. You may also opt for an external 2Tb USB drive. Multiple Windows versions are also available.

All servers are ping-monitored by default and rebooted automatically if they stop responding to ping.

Prices start from 199.95€, but since you are reading this, get 25€ off the first month with coupon code 100tbAnnounce  (valid till the 24th, any number of servers, limited total uses)

]]>
<![CDATA[New 2009+ Servers - stock status is GOOD]]> https://pulsedmedia.com/clients/index.php/announcements/139 https://pulsedmedia.com/clients/index.php/announcements/139 Fri, 09 Dec 2011 06:54:00 +0000 We are again able to get some new 2009+ Servers, so stock is good!

The specifications are as good as our best 2009+ servers', but they do cost slightly more. However, the increase in cost will not affect end-service pricing.

This service offering is still the best value on the market, with its big traffic allotment per user, generally delivering at least twice as much traffic per euro as the next best offerings, along with good 24/7 average bandwidth throughput.

]]>
<![CDATA[PDS-2G / Small Dediseedbox - First month up to double traffic!]]> https://pulsedmedia.com/clients/index.php/announcements/138 https://pulsedmedia.com/clients/index.php/announcements/138 Wed, 07 Dec 2011 09:36:00 +0000 Up to double traffic on the PDS-2G / Small Dediseedbox for the first month!

Because we have a lot of servers and a pool of standby servers, it is possible to get up to 10Tb of traffic free of charge in the first month with PDS-2G. This is valid for all orders; no coupons or anything needed.

It is based on chance, but 90% of new orders get at least an 8Tb allowance for the first month!

You can view your traffic reset date and used traffic on the service details page.

]]>
<![CDATA[Dedicated traffic stats]]> https://pulsedmedia.com/clients/index.php/announcements/137 https://pulsedmedia.com/clients/index.php/announcements/137 Fri, 02 Dec 2011 07:28:00 +0000 For most dedicated servers you can now view your bandwidth usage and link speed from the services view. The reboot button has also been fixed for the same servers.

]]>
<![CDATA[1Gbps 100Tb and 1Gbps Unmetered servers market status]]> https://pulsedmedia.com/clients/index.php/announcements/136 https://pulsedmedia.com/clients/index.php/announcements/136 Fri, 02 Dec 2011 01:42:00 +0000 Prices for servers with a sensible amount of storage have skyrocketed in the past 2 months, partially because the floods have caused HDD prices to skyrocket.

The minimum price for a 4-HDD system from the provider that previously offered the best quality-to-value ratio has jumped by 100€ per server. Some outlets still sell at year-old pricing, but they are almost constantly out of stock.

In other words, 100Tb servers with more than 2 drives are not really available on the market right now, especially as WD Black drive prices have doubled or more, if you can even find them.

Also, the only provider who offered 1Gbps unmetered dedicated servers at reachable prices seems to have dissolved now, not completely unexpectedly. My personal opinion was that what they were promising could not be true; unfortunately, it seems I was right.

What this means is that new 2011-series nodes are unavailable for an unknown period of time. The only nodes in stock with any provider, of any quality, are priced above our charging rate.

Even at the cheapest, we would have to pay the same monthly fee for the servers and still buy our own drives, raising the cost of bringing up a new node to roughly 1500€. In practice it would raise per-node costs by more than 100€ per month on an annual basis.

Therefore we are going to release a new 1Gbps seedbox series within the next couple of weeks. It will not seem as good an offer, but it is not without its merits. The specifics are still a little open, but in all likelihood we are going to use 2x2Tb servers, which means the maximum number of users will be limited to 12 or below. These will also have lower traffic limits per node, with accounting of 60Tb per server, capped at 80Tb per server, which I believe is still vastly more than is achievable in practice.

The unspoken truth that 1Gbps unmetered providers do not usually talk about is that, in general, 99% of servers will use 60Tb or less. That is also why having plenty of drives is hard, or very expensive, for those offers: drives directly increase the amount of data expected to be transferred. In torrenting, the general rule of thumb is that one HDD is capable of 25Tb a month at most, in practice more like 15Tb. Exceptions to this rule do occur, however.

We are going to beta test the new offering shortly. Optimally I would like to put 16 users on such a node, but that requires testing to validate performance; it may simply be too much, even with our new measures for stabilizing performance. The problem is that a certain percentage of users will try to use vastly more than their fair share, to the degree that they would use the whole server themselves unless restrictions were in place. These are the "power users", who auto-unpack everything, use RSS/IRC auto-fetching to grab all the latest torrents, and load up hundreds of torrents.

We have already experimented with code to restrict those users as well, so that they simply cannot go beyond their fair share unless there are vacant resources, but that is not ready yet, so we still have to work with limits on the number of users per node.

Ultimately, we have set performance standards, and anything not up to those standards on average will not be used in the long term.

Best Regards,
Aleksi
Pulsed Media

]]>
<![CDATA[Gift certificates available!]]> https://pulsedmedia.com/clients/index.php/announcements/135 https://pulsedmedia.com/clients/index.php/announcements/135 Thu, 01 Dec 2011 07:30:00 +0000 We have added support for gift certificates! This way you can give your friends or special someone the gift of Pulsed Media services!

You simply choose the gift certificate you want; then, from your gift certificate listing, you can e-mail it or copy the certificate code and give it to someone.

When a certificate is redeemed, its value is added to the credit balance, which you may use to pay invoices. Gift certificates bought at a discount may not be redeemed by yourself. Some gift certificates can be split into smaller ones of your preferred value, and higher-value gift certificates even come at a discount!

If you need multiple gift certificates or a different value, contact sales.

 

]]>
<![CDATA[Unexpected issues during recovery]]> https://pulsedmedia.com/clients/index.php/announcements/134 https://pulsedmedia.com/clients/index.php/announcements/134 Tue, 29 Nov 2011 03:06:00 +0000 Some issues remained on some of the nodes, for example crontab information missing for a lot of users. The crontab issue will be addressed in the next PMSS update, and as tickets arise.

There have also been lighttpd configuration errors and the like, for example a placeholder page being shown. If you are seeing a placeholder page, try a force refresh. If that does not help, contact support.

So if you are having any kind of trouble with your service, please do not hesitate to contact support@pulsedmedia.com

The web proxy is currently disabled due to security concerns; after these have been addressed, the proxy will be restored. In the meantime you can use an SSH SOCKS5 proxy. If you need help setting that up, contact support.
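An SSH SOCKS5 proxy needs no server-side setup; a minimal sketch (USERNAME, SERVERNAME, and the local port are placeholders you substitute with your own account details, not specific Pulsed Media values):

```shell
# Open a dynamic (SOCKS5) forward on local port 1080 through your seedbox.
# -D 1080  : listen locally on port 1080 as a SOCKS proxy
# -N       : no remote command, forwarding only
ssh -D 1080 -N USERNAME@SERVERNAME.pulsedmedia.com
# Then point your browser's SOCKS5 proxy setting at localhost:1080.
```

All traffic sent through the proxy is tunneled over the encrypted SSH connection, so it also works as a stopgap for anything the web proxy was used for.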

Please remember that support is there to help you, but we cannot help you unless you let us know there is an issue. So e-mail support@pulsedmedia.com!

]]>
<![CDATA[DNS cluster updates lagging]]> https://pulsedmedia.com/clients/index.php/announcements/133 https://pulsedmedia.com/clients/index.php/announcements/133 Tue, 29 Nov 2011 02:39:00 +0000 Our DNS cluster is lagging behind for an unknown reason. This has been escalated to the admins who handle the DNS cluster, to find out why zone transfers are no longer going through.

We hope this will be resolved within the next 24hrs. In the meantime, new accounts/zones will get the occasional DNS error.

]]>
<![CDATA[Full recovery]]> https://pulsedmedia.com/clients/index.php/announcements/132 https://pulsedmedia.com/clients/index.php/announcements/132 Mon, 28 Nov 2011 01:20:00 +0000 Full recovery has in practice been achieved now. Only 2 servers remain to be restored, and both are more or less halfway there. Both of these servers were problem cases.

The helpdesk backlog has mostly been dealt with as well. The past week generated more tickets & replies than an average month does. That was A LOT of tickets to handle. Despite that, average response times for the month remain at their usual level, though down from the exemplary, record-level response times prior to this week. We will add several new support metrics that recent events have shown to be vital, such as the median of the highest and lowest 5% of times. Only a few tickets from the current backlog remain to be solved.

A slight backlog on orders exists, but that too is generally less than 48hrs.

So I am very happy to say that, for all practical purposes, recovery is complete in the big picture, and we are getting back to our regular schedule.

There will be slightly more to handle over the next 2 weeks as well, as we go through all nodes again to verify them, and there will certainly be some issues to resolve from this massive operation on the nodes. So support and provisioning schedules are slightly prolonged, but not badly.

Our internal security practices will become even more paranoid, up from an already strict, slightly paranoid level. That means a lot of development, but it's better to be safe than sorry, and experience shows that even people you implicitly trust cannot be trusted at any level.

Despite all of this, I'm hoping we can still push out Christmas specials and new services during December, even though development on those has been delayed.

As always, I'm personally available on IRC should any questions arise outside regular support. Join us at #PulsedMedia on Freenode. Just remember IRC isn't really a realtime medium, but I check in multiple times per day, so feel free to hang around and chat with other users :)

Best Regards,
Aleksi
Pulsed Media

]]>
<![CDATA[rTorrent glitch "Bad link to rTorrent" in ruTorrent]]> https://pulsedmedia.com/clients/index.php/announcements/131 https://pulsedmedia.com/clients/index.php/announcements/131 Sat, 26 Nov 2011 20:49:00 +0000 During the past few days we have uncovered a new glitch in rTorrent, leaving ruTorrent unable to connect to rTorrent's SCGI interface despite rTorrent running.

So please click the "restart rTorrent" button before submitting a ticket, and wait a few minutes to see if it comes back online. The "restart rTorrent" button actually kills the rTorrent process, allowing the 2-level redundancies to start it again.

]]>
<![CDATA[Restoration unexpected negative effects and status]]> https://pulsedmedia.com/clients/index.php/announcements/130 https://pulsedmedia.com/clients/index.php/announcements/130 Thu, 24 Nov 2011 18:14:00 +0000 Unexpected negative effects

During restoration, many of the systems get updated to a newer kernel, or basically the whole OS gets updated to something newer than what we have tested.

This makes quota even more prone to breaking down on some of the restored nodes, breaking every time a user is added or removed and requiring constant quota repairs.

If you are experiencing quota issues this might be happening on your node too.

Next week we will release a PMSS update that recompiles quota from newer sources, which ought to fix this issue. Until then, quota will keep breaking down on some nodes. Do not worry if you have recurring quota breakdowns; you can, however, open a ticket notifying us that your node is one of those with recurring issues.

Restoration status

Only a little over a dozen nodes remain to be restored; some of these are already halfway there.

Some of these nodes are extremely tricky to restore, but we are still hoping for full recovery within the next 24hrs.

After complete restoration we will start going through every single node to verify it is up & operational. It's just a quick check, so we might not catch everything. If you are having issues with your node after Saturday, please do not hesitate to e-mail support@pulsedmedia.com so we can fix them.

It's been a very long uphill battle to restore all the affected nodes, but we are almost there.

]]>
<![CDATA[Restoration almost complete]]> https://pulsedmedia.com/clients/index.php/announcements/129 https://pulsedmedia.com/clients/index.php/announcements/129 Wed, 23 Nov 2011 19:37:00 +0000 Almost all nodes have been restored by now. Only a handful of trickier ones remain, where we are making every last effort to restore data. Some nodes still have issues due to a bug in one of the restoration scripts, but these will be attended to quite soon. For most nodes, the restoration went far smoother than expected. After this, we will again go through every single server to ensure they are production quality and production stable.

We've heard your feedback as well, and in general it has been very supportive in this time of crisis, even very positive at times.

Rest assured, this has been a very instructive crisis: just because someone is a very long-time friend whom you implicitly trust, you should not assume it is safe to employ him and give him any degree of access to servers. Our security precautions are actually somewhat stricter than the average IT company's, but be assured they shall be far stricter in the future.

 

Aleksi
Pulsed Media

]]>
<![CDATA[Bug in user directory recreation script]]> https://pulsedmedia.com/clients/index.php/announcements/128 https://pulsedmedia.com/clients/index.php/announcements/128 Wed, 23 Nov 2011 17:17:00 +0000 A bug in the user directory recreation script caused the web interface to not always work.

Thus, if you are getting a 403 Forbidden error, please contact support so we can fix it.

 

]]>
<![CDATA[Restoration progress]]> https://pulsedmedia.com/clients/index.php/announcements/127 https://pulsedmedia.com/clients/index.php/announcements/127 Wed, 23 Nov 2011 04:47:00 +0000 The servers with data loss have proven to be far more work than expected, especially as this was the case for most 2011 servers. The 2011 1Gbps servers are also located in DCs with worse support automation, which brings its own set of troubles in restoration, along with the fact that on almost all of the 2011 servers "rm -rf /" was executed.

However, we are at the very last stretch of restorations right now.

2009+ nodes: only a couple dozen systems remain to be restored.
2011 series: approximately halfway done.

]]>
<![CDATA[Approaching final stretch of restorations]]> https://pulsedmedia.com/clients/index.php/announcements/126 https://pulsedmedia.com/clients/index.php/announcements/126 Tue, 22 Nov 2011 11:20:00 +0000 The final stretch is approaching. We are currently mainly waiting on providers to see which of their support options let us work in batches more easily.

The only remaining nodes are those where data loss occurred. In most of those cases customer data is completely intact, and only OS files are missing.

On other nodes we are waiting for IPMI fixes and such.

With any luck this should be over by tonight, keeping maximum downtime to 36hrs for those on the "worst case scenario" servers. But do not count on that: there is still a lot of work to do, a lot of things to find out, and coordinating with multiple different DCs doesn't make things simpler.

Of course, a lot of double checking will still remain, as we need to not just double check but triple check that servers are in good production shape after restoration.

Really sorry about this whole ordeal. Be assured that in the future there will be great emphasis on never again giving this much control to any single employee.

]]>
<![CDATA[Helpdesk slightly overwhelmed. Restoration status]]> https://pulsedmedia.com/clients/index.php/announcements/125 https://pulsedmedia.com/clients/index.php/announcements/125 Tue, 22 Nov 2011 04:30:00 +0000 Due to recent events helpdesk is slightly overwhelmed right now.

Please do not e-mail about your seedbox service being down until Thursday, by which time we plan to have all seedboxes back in production, unless we have announced here that every single seedbox is up and running.

Also, if it's not urgent, it would be greatly appreciated if the question could wait a couple of days while we work through the current tasks at hand and get back to our regular schedule.

Status of restoration:
2009+ series ~50% done
2011 series ~10% done

 

]]>
<![CDATA[Restoration on servers with data loss]]> https://pulsedmedia.com/clients/index.php/announcements/124 https://pulsedmedia.com/clients/index.php/announcements/124 Mon, 21 Nov 2011 19:39:00 +0000 UPDATE: This does not concern dedicated servers, only shared seedbox servers. All dedicated servers are untouched; some were only briefly put into rescue mode by overzealous filtering in our automation.

On some servers "rm -rf /" was executed; fortunately, in most cases it didn't have time to get even partway through, so restoration without data loss is possible, and almost trivial in some cases.

Further, we have just outsourced some of that work to a reputable 3rd-party company to speed up the process. They are currently looking into how to do a direct in-place ext4 undelete in the rescue-mode environment we have. Let's hope they manage it.

The last option is to rsync new operating system files into place and then manually verify working condition.
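The idea behind that rsync option can be sketched with stand-in local paths (the directory names and file contents here are illustrative only, not the actual restoration procedure): copy fresh OS files into the damaged tree while excluding user data.

```shell
# Illustrative sketch: restore OS files from a clean reference tree into a
# damaged root, leaving /home (customer data) untouched via --exclude.
mkdir -p /tmp/reference/bin /tmp/damaged/home
echo "fresh OS binary"   > /tmp/reference/bin/ls
echo "customer torrents" > /tmp/damaged/home/data.txt
rsync -a --exclude=/home/ /tmp/reference/ /tmp/damaged/
ls /tmp/damaged/bin/             # OS file is back in place
cat /tmp/damaged/home/data.txt   # user data remains intact
```

The same pattern scales to a real root filesystem by also excluding pseudo-filesystems such as /proc and /sys, which is why manual verification afterwards is still essential.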

The current estimate is that actual customer data loss occurred on only a couple of servers, some of which were actual HDD hardware failures brought to light by the reboot. We are hoping to restore all data even on those couple of servers.

However, on these servers the restoration WILL take quite a bit of time, as it's quite an intensive process; thus they are at lower priority than those we simply have to verify are in perfect working condition.

The estimate for complete restoration of all nodes, for all types of seedbox services, regardless of which provider is used (some are harder to work with than others), remains at 3 days.

If you have to wait more than 24hrs for restoration, 3 days of extra service time will be given; if it takes the full 3 days, you will be given 5 days of extra service time. This is an extra 2 days on top of our normal SLA, readable at: http://wiki.pulsedmedia.com/index.php/Pulsed_Media_Seedbox_SLA_Policy

On top of that, we are now considering full weekly backups for even the cheapest of SB plans on our under-development storage cluster, if we can make that financially feasible.

]]>
<![CDATA[Services restoration under way, most quickly restored]]> https://pulsedmedia.com/clients/index.php/announcements/123 https://pulsedmedia.com/clients/index.php/announcements/123 Mon, 21 Nov 2011 17:49:00 +0000 A lot of servers have already been restored.

All dedicated servers should be up and running. All VPSes, if not already running, should be running within the next hour. No dedicated server was ever in danger, and VPS nodes are safe as well.

Most seedbox servers were quickly restored, but some will require a lot more work, and data loss is likely. For those nodes we expect a turnaround of a few days, and all users will be fully compensated as per our SLA policy at http://wiki.pulsedmedia.com/index.php/Pulsed_Media_Seedbox_SLA_Policy. In this case we will give +2 days extra automatically, as this was our own error for not being paranoid enough about ex-employees.

Please note that this happened because of a disgruntled ex-employee. All his access was revoked late last week, but he apparently had a private copy of a server password database covering the bulk of seedbox servers. His online access to said database had already been revoked.

As far as we can tell from logs and from crawling through data on the servers, the only action taken was changing passwords on the bulk of servers, and on a few it has been noted that "rm -rf /" was executed. This ex-employee has been seen bragging about this in several forums.

Lesson learned: in the future, no employee shall be granted full server access unless a special case warrants it, and only in a way that allows access to be revoked quickly. Personally, I implicitly trusted this person and have known him for roughly 11 years, so I fully expected that we could part ways gracefully on a professional level.

Dedicated server passwords are in a separate database, so he did not have access to them. The same goes for VPS nodes.

If your dedicated server is down for some reason, please do not hesitate to open a ticket. All VPS instances should be up & running within the hour, if not operational already. Dedicated & VPS servers were brought into rescue mode only because of overly greedy filtering in the automated rescue-mode change.

Several electrical and hard drive failures were detected during this, and the DC is already working on them, if they are not fixed already.

Sorry for all the inconvenience caused,
Aleksi
Pulsed Media

]]>
<![CDATA[Most servers currently under live rescue mode]]> https://pulsedmedia.com/clients/index.php/announcements/122 https://pulsedmedia.com/clients/index.php/announcements/122 Mon, 21 Nov 2011 15:19:00 +0000 Most servers are currently in live rescue mode for review, due to the actions of a disgruntled ex-employee.

You will be emailed if anything curious is found on your server, and we will bring nodes back online one by one.

]]>
<![CDATA[Extra bandwidth options for PDS series]]> https://pulsedmedia.com/clients/index.php/announcements/121 https://pulsedmedia.com/clients/index.php/announcements/121 Fri, 14 Oct 2011 12:29:00 +0000 Extra bandwidth options for PDS series

Extra bandwidth options have been added for the PDS series and other French servers.

You can upgrade via the client portal, and choose at order time whether you want extra traffic.

For a 100Mbps unmetered node, PDS-2G costs 66.90€ per month, or 61.28€ per month when billed annually.

]]>
<![CDATA[Next PMSS version]]> https://pulsedmedia.com/clients/index.php/announcements/120 https://pulsedmedia.com/clients/index.php/announcements/120 Fri, 16 Sept 2011 13:35:00 +0000 Next PMSS version

We have now progressed to testing on fresh Debian 6, and the only thing that remains to be tested is Debian 5 upgrades.

This is a major upgrade on the backend, introducing new features for better user isolation and giving us better control over per-user resources.

This should stop people who heavily abuse disk I/O in their tracks, by simply not giving them I/O time unless no one else is requesting it. By default, scheduling is done per process, which doesn't work well: I/O latency runs sky-high even though the offending processes should have equal or lesser priority than others. Add several such processes and the server might become unresponsive even to SSH. It is a rare issue, but a major one nevertheless.

With this new version we are isolating users into their own I/O domains rather than scheduling per process. We are also able to control I/O weight on a per-user basis, meaning we can give higher priority to those with bigger packages directly, not just indirectly.
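One common way to build per-user I/O domains on Linux of this era is cgroup (v1) blkio weights. The following is purely an illustration of that general technique, not PMSS's actual implementation; the mount path, username, and weight value are all assumptions:

```shell
# Illustrative cgroup v1 sketch (requires root and a mounted blkio controller):
# one blkio cgroup per user, with a weight proportional to the user's package
# size (valid range 100-1000; higher weight = larger share of disk time).
USERNAME=alice        # hypothetical user
WEIGHT=500            # hypothetical weight for this package tier
mkdir -p "/sys/fs/cgroup/blkio/${USERNAME}"
echo "${WEIGHT}" > "/sys/fs/cgroup/blkio/${USERNAME}/blkio.weight"
# Move all of the user's existing processes into their I/O domain:
for pid in $(pgrep -u "${USERNAME}"); do
    echo "${pid}" > "/sys/fs/cgroup/blkio/${USERNAME}/tasks"
done
```

Because the weight applies to the whole cgroup, every process a user spawns competes within that user's single share, rather than each process claiming its own slice as per-process schemes like ionice do.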

Overall, this is going to be a stability release; there are no significant user-facing changes at all.

This is just a first step on a roadmap of major improvements on the PMSS system.

]]>
<![CDATA[2009+ Stock Status]]> https://pulsedmedia.com/clients/index.php/announcements/119 https://pulsedmedia.com/clients/index.php/announcements/119 Wed, 14 Sept 2011 23:52:00 +0000

2009+ Stock Status update

Due to people upgrading their old servers to newer models with traffic limits for personal use, we have some available stock again on the 2009+ series! So much, in fact, that we decided to drop the monthly price of the 2009+ Small by 1€.

Current standing is:

2009+ Large: 5
2009+ Medium: 16
2009+ Small: 29
2009+ Starter: 9

]]>
<![CDATA[Stock status]]> https://pulsedmedia.com/clients/index.php/announcements/118 https://pulsedmedia.com/clients/index.php/announcements/118 Tue, 06 Sept 2011 11:01:00 +0000 Stock Status

The stock status is good again for the 2009+ series! Since the release of PDS-2G, a lot of people have been switching to it from the type of servers used for the 2009+ series, which vacated quite a few machines. Most are not set up yet, so the stock status does not reflect that, but they can be set up within a 24hr period if we run out :)

For the 2011 series the situation is not much better; a few cancellations vacated a few slots, but that's it. Let's hope we can acquire new servers by the end of the month, as the provider has been hinting that availability should return by mid-month.

You can see the stock status on the service page or all services page :)

]]>
<![CDATA[PMSS2 Progress, features being tested]]> https://pulsedmedia.com/clients/index.php/announcements/117 https://pulsedmedia.com/clients/index.php/announcements/117 Wed, 31 Aug 2011 09:30:00 +0000 PMSS2 Progress

We are nearing testing phase for some key new features this week.

We are now testing a jailed shell environment, meaning each user has their own private space and can even compile custom programs for themselves. It's complete separation from other users on the same node.

Secondly, we are testing per-user I/O control, basically a scheduler on a per-user basis which ensures that every user gets their fair share of the I/O time on the underlying storage.
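For context, one common way to implement this kind of per-user I/O fair sharing on Linux is the blkio cgroup controller together with the CFQ scheduler; a hypothetical sketch (the cgroup name and weight value are illustrative, not necessarily what PMSS2 uses):

```
# Create a cgroup for the user and assign a relative I/O weight (range 100-1000)
mkdir /sys/fs/cgroup/blkio/user_johndoe
echo 500 > /sys/fs/cgroup/blkio/user_johndoe/blkio.weight

# Move one of the user's processes into the cgroup so the weight applies
echo 12345 > /sys/fs/cgroup/blkio/user_johndoe/tasks
```

Processes in higher-weight cgroups get proportionally more disk time when the device is contended, which matches the "fair share" behavior described above.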

Thirdly, we are testing per-user bandwidth monitoring, giving us the ability later on to offer traffic-limited accounts.

Testing of the remote API will proceed during next week; it should work without problems, but you never know.

We have also semi-completed the continuous integration patch code for migrating these features to existing PMSS nodes. The Lighttpd to Apache swapover is going to be delayed.

A new configurable option during user setup will contain the I/O weight, but traffic and rate limiting options will be bypassed for now.

]]>
<![CDATA[PMSS2 integrates to PMSS? We released a PMSS update]]> https://pulsedmedia.com/clients/index.php/announcements/116 https://pulsedmedia.com/clients/index.php/announcements/116 Sat, 27 Aug 2011 07:34:00 +0000 PMSS New release 25th of Aug

We released an update which addresses several issues and adds several new features; check out the full changelog.

Most important changes are:

  • Basic work for remote API done
  • Some redundancy timers increased slightly to make certain some actions have completed
  • Stops trying to install vnstat if it's installed already
  • Lighttpd idempotent configurator finished and taken into production
  • terminateUser is feature complete with lighttpd reconfig now and easier automation support
  • Installer simplified, restructured. Easier & faster installs.
  • Browsing to server URL now results in redirect to pulsedmedia.com
PMSS2 Integrates to PMSS?
We have begun work to integrate PMSS2 features into PMSS, meaning there will be no new major release; it's all part of continuous integration. Some features will probably not work on Debian 5 based systems; we will see if we can compile these from source along with a custom kernel so we can roll these features onto Debian 5 nodes as well.

We are now nearing the testing stage for some really cool features, but those features are behind the scenes, so no visual updates just yet. Tim has been hard at work making these features and is now integrating them into the PMSS system.

We are making PMSS more advanced than ever before, and it will begin to support resellers far better. If you have feedback about PMSS, don't hesitate to open a ticket and let us know about it! :)

]]>
<![CDATA[Country, VAT, Chargebacks]]> https://pulsedmedia.com/clients/index.php/announcements/115 https://pulsedmedia.com/clients/index.php/announcements/115 Thu, 25 Aug 2011 10:20:00 +0000 Choosing your Country, and VAT

There has been a surge in "VAT cheats" again; we just corrected several accounts.

Choosing another country than your real country is not guaranteed to help you much, as we do check these. Not every account gets combed through in fine detail, but suffice it to say that our "tax cheater" ratio is very minimal.

If we notice this, we add a correction fee of 3€ plus the VAT, mark *all invoices* unpaid, roll back the service due date accordingly, and send reminders of all invoices which remain open. We also add that correction fee to every invoice. Generally this is just 1 invoice, as we usually catch these straight away.

The harm it does!

It harms everyone, it harms us, it harms you, it harms other customers. How does it harm?

We have to comb through accounts, which raises costs.
You will incur extra charges, and potentially account and service termination effective immediately without recourse in the worst cases (those who do it over and over).
We have to raise prices due to the added workload, so everyone pays more.

If we do not do spot checks and do our best to stop this behavior, you are exposing us to liability for tax fraud and tax evasion charges.

The alternative is to include VAT in the price and charge VAT from everyone; try explaining to your Australian or US friend how, because you wanted to skim 2€, all your outside-EU friends now have to pay a few euros more. You wouldn't want that, now would you? ;) Fortunately, that's the ultimate last resort, though some of our competitors have resorted to it.

We do not want to do things like that. All we ask is that people respect their own country's tax laws; the VAT region countries have made a mutual agreement that residents of the region should be charged VAT.

CC Chargebacks

Needless to say, it means immediate, swift account and service termination. Don't ever do it.

Want a refund? Read our refunds policy! It actually offers way more than 95% of providers in our niche do (plus our SLA too!), and conforms to strict Finnish consumer protection laws, which are among the best for consumers in the world.

We already have quite good policies covering this; you should read them upfront!

Erroneous extra payments are also automatically and immediately added as credit, which you may use as you please in the future. Even if you double-pay one invoice, you are not double-paying in the long run!

So, please, please show us the same respect and courtesy as we show you; don't go behind our back and make a CC chargeback or dispute. That's an expensive process, and it actually hurts your credit score. Yes, that is right: they write down every single chargeback claim, and that is a negative mark when you are applying for that mortgage, car loan or credit limit raise!

So, check out our refunds policy, and if it's a match, just e-mail billing@pulsedmedia.com for your refund :)

]]>
<![CDATA[PDS-2G: Windows 2008 Web for 4.95€]]> https://pulsedmedia.com/clients/index.php/announcements/114 https://pulsedmedia.com/clients/index.php/announcements/114 Mon, 22 Aug 2011 05:30:00 +0000 PDS-2G: Windows 2008 Web for 4.95€

Windows 2008 web is now available for PDS-2G nodes at just 4.95€ per month!

If you want to change the OS on an existing node, contact support. For new servers the choice is presented on the order page.

]]>
<![CDATA[Occasional paypal error fixed]]> https://pulsedmedia.com/clients/index.php/announcements/113 https://pulsedmedia.com/clients/index.php/announcements/113 Fri, 19 Aug 2011 11:47:00 +0000 Occasional paypal error fixed

For a few days after we upgraded our billing system to the latest version, PayPal payments returned, for some people, only an error indicating a problem on our side.

PayPal had made changes to their payment processing, so that whenever a locality was passed and it was unrecognized by PayPal, it would throw this error.

WHMCS staff were kind enough to provide a patch for this, removing the locality data which caused the error.

PayPal payment glitches due to this should be gone now. Contact support if errors persist, or if you notice any other errors.

]]>
<![CDATA[Billing system update]]> https://pulsedmedia.com/clients/index.php/announcements/112 https://pulsedmedia.com/clients/index.php/announcements/112 Wed, 17 Aug 2011 02:09:00 +0000 Billing system update

We have upgraded our billing system to the latest and greatest, and included some new automation in this update.

The upgraded billing system has a big list of small enhancements and fixes, and some new developer features we are going to be utilizing in the near future (expect to see our site get an update!).

At the same time we added PDS server automation; a reboot option is coming to the service details page for your dedicated server!
If you do not see a reboot button, or it does not work within a couple of days, please contact support.

Report any problems or glitches you see with the upgrade, and we'll take a look into it!

Thanks,
Aleksi

]]>
<![CDATA[2009+ Almost out of stock!]]> https://pulsedmedia.com/clients/index.php/announcements/111 https://pulsedmedia.com/clients/index.php/announcements/111 Thu, 11 Aug 2011 04:05:00 +0000 2009+ Almost out of stock!

2009+ seedboxes are almost out of stock, and no new servers are in sight for another 1-2 months! A few still remain available, however, so act fast if you want to grab one!

Current availability is:

  • 2009+ Starter: 4
  • 2009+ Small: 9
  • 2009+ Medium: 6
  • 2009+ Large: 3
  • 2009+ X2 Small: 4
  • 2009+ X2 Medium: 2
  • 2009+ X2 Large: 2

]]>
<![CDATA[Great progress in July]]> https://pulsedmedia.com/clients/index.php/announcements/110 https://pulsedmedia.com/clients/index.php/announcements/110 Mon, 01 Aug 2011 23:39:00 +0000 Great progress in July

We've made great progress in July. While July is generally the slowest month of the year for businesses, for us it was a busy one!

New services were released: PDS series dedicated servers starting at 19.95€ a month, and dedicated managed seedboxes based on those servers, and they proved to be a great success.

Most importantly, support response times have been good in July, reaching the levels of last summer.

The average first response time was close to the best ever, as was the median. Closure time was also one of the best ever. Last summer we were slightly faster, but back then we were also a much smaller company; we have grown by 520% since August 2010! Yet we've been able to keep support times close to the same as back then, which is staggeringly good. Last month we received 715 tickets and made 878 replies, so most tickets are solved immediately, with a tickets-to-replies ratio of 1:1.23. Some tickets do take 10+ replies, but those are usually about very rare questions, presales, customized requests etc.

We expect to get even faster at ticketing as we grow; the current 23 tickets per day average does not yet justify full-time support personnel. At a level of 400+ tickets per day we could have true 24/7 support staff in 3 shifts. That would leave about 3 minutes per ticket, given an efficient work time of 6.5 hours out of an 8-hour shift. At a level of 90 tickets per day there already has to be one full-time person handling support, preferably already at just 60 tickets, leaving roughly a 6-minute-per-ticket average. Currently there are two of us handling tickets every day, a couple of hours each.
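As a rough sanity check of the staffing arithmetic above (a quick sketch; the 6.5 efficient hours per 8-hour shift is the figure stated above):

```python
def minutes_per_ticket(tickets_per_day: float, staff_shifts: int,
                       efficient_hours: float = 6.5) -> float:
    """Average minutes available per ticket when each shift works
    `efficient_hours` effectively and all shifts share the day's tickets."""
    available_minutes = staff_shifts * efficient_hours * 60
    return available_minutes / tickets_per_day

print(round(minutes_per_ticket(400, 3), 1))  # -> 2.9, i.e. about 3 minutes
print(round(minutes_per_ticket(60, 1), 1))   # -> 6.5 minutes per ticket
```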

We've also made great strides in enhancing quality of service to lower the need for ticketing: we are now receiving under half the tickets per active service, on average, compared to a year ago! Still there is a lot to work on; we are far from our goal of 0.08 tickets per active service per month. That is an immensely hard goal to achieve and will require immense amounts of R&D, especially as support becomes easier and easier to reach.

Many big companies achieve a far lower ratio than that, but they do it by making support as hard to reach as possible; you might wander their site for an hour to find a contact form, only to realize it returns an automated message suggesting KB articles, with tiny print for reaching an actual person. But not us! We want to make support as easy to reach as possible, and work on quality of service to lower the need for support until we achieve a very good ratio.

Making support easy to reach keeps us on top of things, letting us know about usability, stability or performance issues as fast as possible from the user's perspective, which allows us to make better judgements about the most important development tasks.

If you didn't already know, we also offer a Service Level Agreement (SLA) for our seedboxes! No one else does this for seedboxes, but we do!

 

]]>
<![CDATA[2009+ Stock Status update]]> https://pulsedmedia.com/clients/index.php/announcements/109 https://pulsedmedia.com/clients/index.php/announcements/109 Mon, 11 Jul 2011 16:00:00 +0000 2009+ Stock Status update

We have again managed to make some more available :)

Current availability is mediocre; for anyone wanting one, we should have a slot available for the next 30-40 days.

Current standing is:

2009+ Large: 6
2009+ Medium: 20
2009+ Small: 35
2009+ Starter: 16

So we decided to give a small promo again :) Any payment term (monthly or longer), a permanent 10% off the list price :) Limited time only! Limited uses! This coupon will be valid only for the next 2 weeks, until the end of the 25th of July, with uses limited so that not everything gets sold out :)
Use coupon code: UPDATE-10P

Let's hope we can get more servers of this type to keep the prices low and the value extremely high! :) If you didn't know already, even 2009+ Starter transfer can reach terabytes of upstream a month (we've seen ~2Tb up a month on a 2009+ Starter, and in most cases above 1Tb).

2009+ is historically such extreme value that it's to the detriment of our other offers; many swap back to 2009+ Small or 2009+ Medium from larger packages, or in some cases from dedicated servers, due to the extreme value offered.

]]>
<![CDATA[Introducing Managed Dediseedbox! 2Gigs Ram, 1Tb HDD, DEDICATED 100Mbps connection!]]> https://pulsedmedia.com/clients/index.php/announcements/108 https://pulsedmedia.com/clients/index.php/announcements/108 Wed, 06 Jul 2011 02:16:00 +0000 Introducing Managed Dediseedbox!

Managed Dediseedbox for that dedicated performance, without dedicated server price!

2Gb RAM
1Tb HDD
100Mbps Dedicated Connection
5Tb External traffic + Unlimited, Unmetered Internal
Pulsed Media Software Stack - FULLY MANAGED
Root access: Upon request via ticket

This is excellent if you want to share your seedbox with your friends, need dedicated performance, or are looking to resell our services conveniently! You can create as many users as you want, and you will have a custom Pulsed Media hostname etc.!

Price 29.95€/Month 19.95€ for the first month!

Use coupon code: MANAGED-DEDI-INTRO

Coupon valid until Monday, 11th of July.

GET YOURS NOW! :P

 

Details:

We will set up your requested servername.mds.pulsedmedia.com address along with a number of user hostnames (default 5, user0 to user4; custom and more upon request).

You can request us to make server updates and to create, suspend, unsuspend or terminate users up to a sensible limit, or you can request root access to do this yourself (special terms regarding management; ask support for details).

No administrative ticket fees, no fuss, for basic tasks: Create/Delete/Suspend/Unsuspend/Reconfigure (resources) user, package/PMSS update, creating new user hostnames, setting custom reverse, reboot server, reinstall server. We will handle it all for you so you just get a working seedbox, just like our 2009+, 2009+ X2 and 2011 Semidedi/Shared offers!

Resellers can concentrate on selling. Friends can conveniently share a box.

Custom domains are supported. Root access upon request.

 

]]>
<![CDATA[2009+ Stock Status]]> https://pulsedmedia.com/clients/index.php/announcements/107 https://pulsedmedia.com/clients/index.php/announcements/107 Wed, 29 Jun 2011 14:45:00 +0000 2009+ Stock Status

Some slots became available, availability status as of today:

2009+ Large: 2
2009+ Medium: 16
2009+ Small: 10
2009+ Starter: 14
2009+ X2 Large: 1
2009+ X2 Medium: 0
2009+ X2 Small: 2

We should be getting a few new servers by mid-July, upping 2009+ availability by a few dozen. So far we've managed to muster enough servers to maintain availability; however, we expect this to change. You should get your slot immediately to make sure you benefit from the extreme value of the 2009+ services before slots run out completely.

]]>
<![CDATA[Seedbox Service Level Agreement!]]> https://pulsedmedia.com/clients/index.php/announcements/106 https://pulsedmedia.com/clients/index.php/announcements/106 Mon, 13 Jun 2011 13:06:00 +0000 Seedbox Service Level Agreement!

I am very proud to announce that Pulsed Media is the first seedbox provider to have an SLA for their seedbox services! It covers hardware, network and support, including major service degradations.

So it's an overall SLA, which gives you the assurance of a certain level of quality that we seek to maintain, and compensation otherwise!

Read it in full at: http://wiki.pulsedmedia.com/index.php/Pulsed_Media_Seedbox_SLA_Policy

-Aleksi

]]>
<![CDATA[PMSS New Release, inc. GUI updates]]> https://pulsedmedia.com/clients/index.php/announcements/105 https://pulsedmedia.com/clients/index.php/announcements/105 Mon, 30 May 2011 05:42:00 +0000 PMSS New Release

The 5th release this month! So let's recap all the important bits, or you can check out the wiki for the detailed changelog.

 

  • Lighttpd automatic redundancy added: No more "cannot connect to server" pages in your browser!
  • Quota meter on welcome page and quota information with over quota warning
  • GUI updater checks for quota (no more blanking of GUI after update if over burst limit)
  • Automated Suspend, Unsuspend and Terminate scripts for 3rd party usage
  • Server update scripts enhanced: Fully dynamic updates are now supported.
  • Basic modules support, and architecture founded for advanced modules support
  • Default FTP connection limits added
  • Local user database collects information now (for future usage)
  • Server maintenance (root) crons are now automatically setup and configurable
  • Quota fixing now outputs useful information
  • Security enhancements
  • Rudimentary local idempotent lighttpd reconfiguration
  • Big list of bug fixes
Today we added the quota meter to the welcome page, fixed bugs in the HDDQuota ruTorrent plugin (not part of the automatic update yet), enhanced the GUI updating code and added the testing version of the local idempotent lighttpd configurator.

We intend to continue this high pace of updates and enhancements with strong communication. Our focus will be enhancing mass management features and better supporting 3rd party requirements, including reseller needs.

 

]]>
<![CDATA[The fate of 100Mbps offerings currently]]> https://pulsedmedia.com/clients/index.php/announcements/104 https://pulsedmedia.com/clients/index.php/announcements/104 Fri, 20 May 2011 03:00:00 +0000 The fate of 100Mbps offerings currently

Our supplier didn't settle for just the 25% minimum price increase; to get unmetered bandwidth, they actually increased the price by multiple times.

This means that to offer unlimited stock of 100Mbps unmetered servers and 2009+, we would need to raise prices by 3x or so, which makes no sense whatsoever.

This was a totally unexpected move on their behalf, and using their servers makes no sense anymore once our vacant stock runs out.

We will keep our current pricing as long as we can, though the dedi offering has suffered an ill fate: we had to increase our price by ~100€, and it is unclear whether we will be able to get all types of 100Mbps unmetered servers into stock anymore (price unchanged until then).

So, 2009+ X2 has only a few small lots available, and no more are coming in the foreseeable future. 2009+ has about 120 slots available and, again, no more are coming in the foreseeable future.

2011 continues with limited but good availability, so a move to 1GigE as our main offering is inevitable during the upcoming months.

After the 2009+ stock has been sold out, along with 2009+ X2, we will increase the price somewhat to make room for those who really want it. We are also migrating servers, doing consolidation, and trying 3rd-party acquisitions to see if we can get our hands on some more servers.

However, we currently expect fully-sold-out status by the end of July even after all the consolidation, acquisition and migration attempts, after which it is possible that mainly only the 2011 series and VPS will be in good availability.

After we have exhausted all our options to get more stock, new orders will see increased pricing for 2009+ and 2009+ X2, at first by approximately 20%. Old customers will get to keep their grandfathered pricing.

Update 21/05/2011 11:28GMT:
Slots are going far faster right now than expected; if this pace continues we will run out of stock in just 2 weeks, unless we find new servers to take into use.

Update 21/05/2011 15:26GMT:
We have been forced to already increase the pricing of Medium & Large, as we now have to use older server models as well, and these plans would not even break even with annual payments using them, especially with upgrades.

]]>
<![CDATA[Continuation of 2009+]]> https://pulsedmedia.com/clients/index.php/announcements/103 https://pulsedmedia.com/clients/index.php/announcements/103 Wed, 18 May 2011 13:03:00 +0000 Continuation of 2009+

As technology evolves and networks are built, offerings evolve, but sometimes things also are ahead of their time.

This is the case for the current evolution of our 2009+ services, which offer extreme amounts of traffic per euro. Unfortunately, that also means our provider is going to increase the price of future servers by 25%, along with other price increases. This means that not only will the 2009+ price increase dramatically when this happens, but so will the prices of our 100Mbps unmetered servers.

We are trying to buffer this by acquiring vacant servers for future growth, but eventually we will run out of servers and be forced to increase our pricing as well to compensate for the difference. We will, however, try to make this a gradual change, and an opportunity to better balance the cost difference between plans. This makes no difference for existing customers, who get to keep their pricing; it applies only to new orders of these plans. The price of 100Mbps unmetered servers will rise by 10 to 20€ across the board. Prices of 2009+ seedboxes will likely increase by 1 to 10€ across the board. So if you want to lock in lower rates for yourself, now is the time to order.

This also means we are considering a new line of services, targeted at the highest possible quality. Your feedback will be highly appreciated on this subject, so if you have something to weigh in, please do not hesitate to contact us.

The proposal is to use higher-end servers with 16G RAM and RAID10. This would make the service even more responsive than 2009+, but also introduce upstream bandwidth limits based on plan, from 10Mbps upstream to 100Mbps. Due to the higher cost, plans would range from 4 to 24 users per server. This would mean approximately 1.25Tb to 7.5Tb of traffic per user, on average. However, you would gain faster archiving and other data manipulation, such as re-encoding your videos, along with redundant storage, so data loss due to failing hardware becomes really unlikely. With this kind of server, we might even be able to include a fully fledged, fully featured remote desktop as part of the service package. Pricing would be something like 9.95€ for 80Gb/~1.25Tb transfer, to 29.95€ for 500Gb/~7.5Tb transfer.

That would exchange some traffic for reliability, stability and general performance. Without redundant storage we could double the quotas, though likely with slightly decreased overall performance vs. 2009+, and with no increase in reliability or stability.

So, please give your feedback and tell us what you would like to see.

]]>
<![CDATA[HTTP Crashes and Multithread gets]]> https://pulsedmedia.com/clients/index.php/announcements/102 https://pulsedmedia.com/clients/index.php/announcements/102 Tue, 17 May 2011 12:50:00 +0000 HTTP Crashes and Multithread gets

The cause of the HTTP crashes has now been identified reliably enough.

Time and time again when the logs are inspected, the reason found is that people are using multi-threaded GETs via PHP, i.e. downloading large files with multiple threads via the Ajax filemanager.

In the next update we are looking to implement a server-wide per-IP connection limit, setting the PHP connection limits just above it. Unfortunately, this will also mean that FTP connections will be limited lower than now; the alternative is to modify the Ajax filemanager to use direct GETs instead of PHP-handled GETs.

Every user has direct HTTP access to their data under the URI /data; so if your username is johndoe and your server is foo, the URL is johndoe.foo.pulsedmedia.com/data.

I suggest using that instead of the Ajax filemanager download, thus bypassing the PHP layer for large data GETs.
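For reference, a server-wide per-IP connection cap of the kind described above can be expressed in lighttpd with mod_evasive; a minimal sketch, assuming a lighttpd 1.4-era setup (the limit value is illustrative, not our production setting):

```
# lighttpd.conf: cap simultaneous connections per client IP
server.modules += ( "mod_evasive" )
evasive.max-conns-per-ip = 10
```

Connections beyond the cap are refused, which stops a multi-threaded downloader from monopolizing the PHP/FastCGI backends.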

Normally PHP processes are automatically restarted by Lighttpd if they crash, so there is some kind of bug in the FastCGI layer preventing this from happening under persistent multiple large GETs. What stuns me, however, is that the users doing this don't get alarmed when their connections keep constantly dropping. I would at least be curious about what is going on if it happened to me constantly.

We will introduce code changes, or per-IP connection limits, in an upcoming update to fix this problem. For users who open too many connections, it will mean being unable to access even the UI for a moment.

 

]]>
<![CDATA[New PMSS version released and under production testing]]> https://pulsedmedia.com/clients/index.php/announcements/101 https://pulsedmedia.com/clients/index.php/announcements/101 Sun, 15 May 2011 13:39:00 +0000 New PMSS version released

The new version introduces suspend, unsuspend and lighttpd status checking & restarting functionality, among a few other new features and several bug fixes. This version has now entered public production testing.

Please note that the new features are not fully tested in production yet, so double check and if there are problems, contact support.

Full changelog & update instructions can be viewed at: http://wiki.pulsedmedia.com/index.php/PM_Software_Stack

]]>
<![CDATA[2011 back in stock]]> https://pulsedmedia.com/clients/index.php/announcements/100 https://pulsedmedia.com/clients/index.php/announcements/100 Thu, 12 May 2011 12:28:00 +0000 2011 stock replenished

New servers have arrived and are currently being installed, so 2011 is back in stock and may be ordered. If ordered today, setup might be delayed until the 14th, but we should be back to a well-under-24-hour schedule by the weekend, at least for as long as stock lasts.

]]>
<![CDATA[Hardware upgrades]]> https://pulsedmedia.com/clients/index.php/announcements/99 https://pulsedmedia.com/clients/index.php/announcements/99 Mon, 09 May 2011 14:58:00 +0000 Hardware upgrades

We are doing a lot of hardware upgrades this month for the 2009+ series; we are targeting roughly 20% of our servers for upgrades to newer, higher-performance nodes. This will ensure higher performance for all users along with a more stable service.

If you've been with us long, keep an eye on your email; you might get moved to a new server.

 

]]>
<![CDATA[Shared webhosting services introduced]]> https://pulsedmedia.com/clients/index.php/announcements/98 https://pulsedmedia.com/clients/index.php/announcements/98 Tue, 26 Apr 2011 16:41:00 +0000 Shared webhosting services introduced

We have decided to leverage our knowledge of extremely high-value servers with high-performance configurations for webhosting as well, and are thus introducing a new range of shared webhosting.

Our webhosting offers the industry-standard cPanel control panel for ease of management, and we have targeted the plans to balance resources and performance. No outrageous claims of unlimited storage or BW, just a pure and simple balance between resources and performance. You won't find silly marketing claims here. :)

Servers utilize RAID10 for a level of redundancy and increased performance, quad-core CPUs and a minimum of 8Gb of RAM, all of which combined should offer quite a high level of performance for an average website's needs.

You can find the offers at http://pulsedmedia.com/webhosting.php

And as is our style, some introductory discounts are available! Coupons valid until end of April.

First 3 months - 75% Discount - Limited uses!
Use coupon code: QUARTERLY-INTRO-WEBHOSTING

FIRST YEAR 10% OFF - Limited uses!
Use coupon code: ANNUAL-INTRO-WEBHOSTING

PERMANENT 5% DISCOUNT - ANY CYCLE - Limited uses!
Use coupon code: ALWAYS-CHEAPER-WEBHOSTING

 

]]>
<![CDATA[DNS issues again: Propagation]]> https://pulsedmedia.com/clients/index.php/announcements/97 https://pulsedmedia.com/clients/index.php/announcements/97 Mon, 18 Apr 2011 11:03:00 +0000 DNS Propagation

Despite the tools we use to verify DNS propagation showing us that the update has propagated properly, it in fact has not.

At least OpenDNS and Google DNS are showing correct records, and our nameservers are up & running, but some ISPs have opted to use the old nameservers, which is causing issues, as these old servers are unstable and at this moment down, pending Leaseweb repairing the hardware.

Some ISPs cache the nameserver entries for a long period of time; some reflect changes in near real time. The temporary solution is to use OpenDNS or Google Public DNS. You may also try flushing your DNS cache, but this is likely to have no effect.
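To see whether your resolver is serving stale records, you can compare its answer against a public resolver; for example (the hostname here is illustrative):

```
# Ask your system's default resolver
dig +short server1.pulsedmedia.com

# Ask Google Public DNS directly, bypassing any stale ISP cache
dig @8.8.8.8 +short server1.pulsedmedia.com

# Flush the local DNS cache on Windows
ipconfig /flushdns
```

If the two dig answers differ, your ISP's resolver is still holding the old records.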

We are getting reports of DNS issues, while at the same time we cannot really validate these issues ourselves until it is too late.

For the future, we have arranged clustered, fully redundant DNS. This change should take effect by the end of this week, as we are at the same time doing a domain transfer to get this DNS solution.

]]>
<![CDATA[Experimental 2011 servers]]> https://pulsedmedia.com/clients/index.php/announcements/96 https://pulsedmedia.com/clients/index.php/announcements/96 Wed, 13 Apr 2011 11:21:00 +0000 Experimental 2011 servers

We received the 12-HDD "monster servers", as we like to call them, and we are trying to get them to perform.

Storage performance is peculiar: it is extremely low for this kind of server. Sequential read reaches close to the level of 4 cheap, small HDDs; random access is barely better than a single high-performance HDD.

The provider is trying to solve this, but as a backup they have reserved several smaller servers for us in case the performance of these 12-HDD servers cannot be brought to the wanted level.

These are HP servers with hardware RAID. We've had similar experiences with HP servers in the past (abysmal storage performance), but a 6-HDD server outperformed this 12-HDD server multiple times over, and random access was actually decent on those. We expected this higher-end model to have no such weak spots. So, let's hope they simply have some kind of configuration issue!

]]>
<![CDATA[DNS issues resolved]]> https://pulsedmedia.com/clients/index.php/announcements/95 https://pulsedmedia.com/clients/index.php/announcements/95 Tue, 12 Apr 2011 11:35:00 +0000 DNS Issues resolved

This was a multi-tier problem. Bad timing meant that the new nameservers were not resolving for some people, and for those they did resolve for, it was no good, because the Debian 6.0 default named config disallows local zone queries from outside hosts.

We are really sorry for this! Everything should be back to normal quite soon.

]]>
<![CDATA[Website downtime - Restoration]]> https://pulsedmedia.com/clients/index.php/announcements/94 https://pulsedmedia.com/clients/index.php/announcements/94 Tue, 12 Apr 2011 03:03:00 +0000 Website downtime and subsequent restoration

The server hosting our site suffered a critical failure last night, causing downtime.

We have restored our website and client area, among other services. We are currently running in a bit of a "stitch & patch" fashion while DNS propagation is being waited out, etc.

The client area might have some glitches, so please report them by opening a new ticket. We are in contact with WHMCS support to solve these, but so far they have been unable to give any answer other than "reinstall", which would naturally mean losing all the customization we have done, for potentially no gain.

We are working to ensure downtimes in future will be minimized and hopefully automatically resolved (HA setup).

]]>
<![CDATA[USD Plummeting is affecting badly]]> https://pulsedmedia.com/clients/index.php/announcements/93 https://pulsedmedia.com/clients/index.php/announcements/93 Sun, 10 Apr 2011 16:52:00 +0000 USD Value plummeting is affecting badly

Due to multicurrency support being an afterthought in WHMCS, and existing service prices not being tied to our main currency, some people are getting services for over 30% less than expected.

In worst cases this means that service is rendered significantly under cost.

At minimum, this causes profit margins to drop significantly: a 1% drop in price affects profit by far more than 1%, depending on the margin. It also means that we cannot lower prices in general, because some customers would end up paying for other customers' services, which is not right!
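To illustrate that leverage effect with hypothetical numbers (a sketch, not our actual cost figures):

```python
def profit_change_pct(price: float, cost: float, price_drop_pct: float) -> float:
    """Percentage change in profit caused by dropping the price."""
    old_profit = price - cost
    new_profit = price * (1 - price_drop_pct / 100) - cost
    return (new_profit - old_profit) / old_profit * 100

# A 10 EUR service with 9 EUR of costs (10% margin):
# a mere 1% price drop wipes out 10% of the profit.
print(round(profit_change_pct(10.0, 9.0, 1.0), 1))  # -> -10.0
```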

Unfortunately, we have to take action on this, and update service prices for some people paying using USD. This has been a pre-existing condition now for almost an year, but we have so far REFUSED to take action on it, but now the differences can be far too great.

For the most part change will be inflicted only on upcoming invoice, which is not generated yet. On more extreme cases we may change already existing invoice to reflect the USD value fluctuation if due date is atleast 4 days away. On the most extreme cases we may ask customer to pay some of the difference, to allow us to even breakeven for their service. If that extra payment is asked, it is not mandatory and completely up to the customer to choose. Retroactive price increase is not in any fashion ethical, thus it is optional.

If you do not pay in USD, but you are using EUR or AUD this does not affect you at all in any fashion. AUD value has been quite stable over the year with only few % change. EUR is our main currency.

So, to put it simply, if you are paying in USD:

 - Your next invoice might be for a different sum, and if you have a subscription it may need updating
 - If you have an open invoice due in 4 or more days, it might be changed and a reminder sent about it.
 - If you renewed within the past month and the difference was on the extreme side, you might be asked to make a voluntary extra payment.

We regret having to take this action, but for the greater good we must.

EDIT: Only accounts with a significant difference were changed; small changes (a few %) were not put into effect. If you think your rate should be rechecked, please open a ticket and we will check it. A very small percentage of users even had their prices lowered, due to a general price decrease or having ordered when the USD peaked low last summer.

Sorry for the inconvenience caused!
 -Aleksi

]]>
<![CDATA[Experimental 2011 server and 2011 CZ servers]]> https://pulsedmedia.com/clients/index.php/announcements/92 https://pulsedmedia.com/clients/index.php/announcements/92 Sat, 09 Apr 2011 22:16:00 +0000 Experimental 2011 servers

During the next week we will start to roll out 2011 on 12-HDD, HW RAID10 servers on a trial basis.

Using these servers allows us to provide redundant storage and very high HDD performance, and if full provisioning holds up at high performance we will cut users per HDD in half, which will allow us to lower prices.

Czech Republic servers

We will be dropping these servers and using the above-mentioned experimental servers instead. The total cost of operating the CZ servers has ultimately been too high, in the form of downtime. Network performance was slightly better on average, but far worse at peaks.

The servers themselves were slightly faster, but that could not make up for the negatives in other areas.

 

]]>
<![CDATA[Refunds policy. Late payments, account terminations, resetups and eCheques.]]> https://pulsedmedia.com/clients/index.php/announcements/91 https://pulsedmedia.com/clients/index.php/announcements/91 Thu, 07 Apr 2011 13:17:00 +0000 Refunds policy

Our refunds policy can be found at our wiki.

Late payments, Paypal eCheques

A late payment will result in account termination shortly after going overdue. We allow a couple of days overdue for eCheque clearance. An eCheque is a PayPal form of transaction where the funds are drawn from the payer's bank account. This kind of transaction happens when there are insufficient funds in the PayPal account, and generally causes a couple of business days' delay before the payment clears.

Until the payment clears it will not be counted, nor marked in our accounting. It is exactly as if there were no payment until it clears. We do not receive the funds until the eCheque payment clears.

If your account gets terminated before the payment has been received (or cleared), we will re-setup your account within a few days; just open a ticket requesting a re-setup. The days when the service is unavailable will not be compensated, so make certain you make your payments on time.

During termination all data is always removed, and you are not guaranteed to get back onto the same server on re-setup.

 

I hope this clears up some questions people have been having about these topics :)

Best Regards,
 Aleksi

]]>
<![CDATA[Services 2009+ Starter,Communal, all of 2011 available again!]]> https://pulsedmedia.com/clients/index.php/announcements/90 https://pulsedmedia.com/clients/index.php/announcements/90 Sat, 02 Apr 2011 06:53:00 +0000 Service availability

As many of you have probably noticed, we have been largely out of stock for the past month. This has been fixed now, and all previously unavailable services have been made available again.

Current availability is:
2009+ Starter: 16
2009+ Communal: 5
2011 Large: 10
2011 Medium: 10
2011 Small: 5

We will try to do a better job of keeping availability higher.

]]>
<![CDATA[New Promotions!]]> https://pulsedmedia.com/clients/index.php/announcements/89 https://pulsedmedia.com/clients/index.php/announcements/89 Tue, 29 Mar 2011 13:22:00 +0000 New Promotions!

Some new and sweet promotions are available now! Check out our specials page!

]]>
<![CDATA[Reaching support]]> https://pulsedmedia.com/clients/index.php/announcements/88 https://pulsedmedia.com/clients/index.php/announcements/88 Sat, 19 Mar 2011 12:23:00 +0000 Reaching support

Support is reachable via support@pulsedmedia.com or the client portal, not 3rd party venues. More and more people are contacting us via other venues, even posting negative reviews if we do not reply via a 3rd party service (e.g. a forum).

We cannot fix a problem or give instructions without a ticket being made, and support should not be expected when not using a real support venue. IRC is not meant as a support channel; you might be able to receive support there, but it is not a real support venue.

We have grown fast, and this has sometimes caused delays in support, but we are constantly enhancing our services and support, lately for example in the form of the Wiki.

Also, when making a ticket, a full and comprehensive problem report is needed; "this is broken" is not sufficient for deriving a solution without a lot of checking, and checking every possible cause takes time - otherwise "blanket" solutions need to be used, disrupting the service in other ways. If the problem is that you receive a blank page after logging in, a proper description would be, for example: "After I log in to the GUI using a browser I receive a blank page, and this started 2 hours ago". A comprehensive problem description helps us solve the problem much faster.

]]>
<![CDATA[WIKI]]> https://pulsedmedia.com/clients/index.php/announcements/87 https://pulsedmedia.com/clients/index.php/announcements/87 Fri, 11 Mar 2011 12:45:00 +0000 Pulsed Media Wiki

A wiki has been set up at http://wiki.pulsedmedia.com for guides, general help etc.

New content will be written to the wiki frequently and in more detail. The wiki already contains material not available elsewhere, such as the PM Software Stack guides. Please feel free to update articles, or add new ones.

]]>
<![CDATA[New 2011 made available]]> https://pulsedmedia.com/clients/index.php/announcements/86 https://pulsedmedia.com/clients/index.php/announcements/86 Thu, 03 Mar 2011 12:47:00 +0000 New 2011 made available

A total of 18 accounts for 2011 have just been made available. We are looking into increasing availability further during this week and next.

]]>
<![CDATA[New 1Gbps servers coming, from Czech Republic]]> https://pulsedmedia.com/clients/index.php/announcements/85 https://pulsedmedia.com/clients/index.php/announcements/85 Sun, 27 Feb 2011 21:39:00 +0000 New 1Gbps servers coming

Due to the constant unavailability of new servers from Leaseweb, we have resorted to a backup plan and made the first purchases of Czech Republic based servers.

We expect to get back to decent availability during the upcoming week.

]]>
<![CDATA[Lighttpd crashes and download managers]]> https://pulsedmedia.com/clients/index.php/announcements/84 https://pulsedmedia.com/clients/index.php/announcements/84 Sun, 27 Feb 2011 15:03:00 +0000 Lighttpd crashes and download managers

One common cause of the lighttpd crashes is download managers, and individuals who use a ridiculous number of threads. At the moment this is hard to catch in the act, but it reliably tends to cause a lighttpd crash.

So keep your download manager's HTTP threads at something sensible, like 3 concurrent threads maximum.

]]>
<![CDATA[Frequent Lighttpd crashes]]> https://pulsedmedia.com/clients/index.php/announcements/83 https://pulsedmedia.com/clients/index.php/announcements/83 Mon, 21 Feb 2011 12:50:00 +0000 Frequent Lighttpd crashes

It used to be that lighttpd (which serves the web pages for the GUI) did not crash - at all. Maybe we were lucky, but before December you could almost count on one hand the times lighttpd had crashed across all servers, in total, over time.

Nowadays, for an unknown reason, it seems to be crashing very frequently in comparison. rTorrent used to be the cause of downtime, constantly crashing; now it's lighttpd.

We are going to solve this permanently for all servers sooner or later, but for the time being simply send a ticket when it happens.

]]>
<![CDATA[2009+ US location will be discontinued]]> https://pulsedmedia.com/clients/index.php/announcements/82 https://pulsedmedia.com/clients/index.php/announcements/82 Thu, 17 Feb 2011 14:18:00 +0000 The 2009+ US location will be discontinued due to the low performance attained from the servers in this setting.

The servers will be re-employed in a different setting, as they are plenty powerful, just not suitable for 2009+ US.

 

]]>
<![CDATA[PM Software Stack Debian Squeeze Status: Unstable]]> https://pulsedmedia.com/clients/index.php/announcements/81 https://pulsedmedia.com/clients/index.php/announcements/81 Tue, 15 Feb 2011 01:50:00 +0000 PM Software Stack Debian Squeeze Status: Unstable

After some installation alterations, we have confirmed today that the PM Software Stack for the most part works on Debian Squeeze 64-bit; some portions, like quota, do not compile, but otherwise it seems to be working in our testing environment.

Please note that this is still completely unsupported, and not recommended for production environments.

Your mileage may vary when using our software stack with Debian Squeeze.

 

]]>
<![CDATA[Pulsed Media Software stack update]]> https://pulsedmedia.com/clients/index.php/announcements/80 https://pulsedmedia.com/clients/index.php/announcements/80 Tue, 15 Feb 2011 00:45:00 +0000 Software stack update

The Pulsed Media software package has been updated.

Some of these updates apply automatically to earlier installations, but changes requiring elevated privileges (not per-user changes) will take effect only upon a manual update.

Updates are:

 - New master gui base code
 - Less tabs
 - Updated welcome page
 - ajaXplorer to replace phpXplorer
 - Numerous little fixes on the backend
 - Runtime configuration for the current server
 - Framework on-server user db (goal towards local idempotency)
 - Framework for elevated privilege upgrades

Capabilities for the master GUI which already exist on the backend are updated automatically. For a fully updated system, please reinstall the server and run install.sh, or look at the package differences. Only new system packages are installed automatically, so via that route you can manually update to the newest versions.

The first steps to simplify the codebase have been taken in this release.

New server installation instructions, for Debian Lenny 64-bit:

Logged in as root:
wget http://pulsedmedia.com/remote/install.sh
bash install.sh

]]>
<![CDATA[Delivery delays]]> https://pulsedmedia.com/clients/index.php/announcements/79 https://pulsedmedia.com/clients/index.php/announcements/79 Fri, 28 Jan 2011 19:50:00 +0000 Delivery delays

None of the DCs we use were able to supply us with hardware for the weekend. From one of the DCs we have been waiting for new hardware for 2 weeks now!

This means that 2009+ and 2011 deliveries will be minimal to none during the weekend. A 100% catch-up is expected by Wednesday. 2009+ X2 and VPS deliveries are normal during the weekend.

All pending orders will be compensated for the wait.

We are really sorry about this force majeure!

 

]]>
<![CDATA[VPS deliveries slightly delayed]]> https://pulsedmedia.com/clients/index.php/announcements/78 https://pulsedmedia.com/clients/index.php/announcements/78 Wed, 26 Jan 2011 07:28:00 +0000 VPS Deliveries delayed until end of week

We are waiting on some provisioning from the DC for new VPS deliveries, due to the volume of orders. We should be back to the regular <12 hr setup time for VPS by the end of the week.

]]>
<![CDATA[Remember to submit tickets early and check your quota!]]> https://pulsedmedia.com/clients/index.php/announcements/77 https://pulsedmedia.com/clients/index.php/announcements/77 Tue, 25 Jan 2011 06:50:00 +0000 Submit a ticket early!

Recently there have been several cases of complaints about non-working service lasting up to a month!

In all of these cases no ticket was submitted! If there is no ticket, we cannot fix the problem for you, nor can we confirm that an error exists. Submitting a ticket for any error is crucially important: only a limited scope of problems is visible in our monitoring, and per-user issues in particular are what we do not easily see.

Check your quotas

rTorrent not running/starting? Getting empty ruTorrent? ruTorrent saying bad link to rTorrent?

Check your quota in that case! See the bottom left corner of ruTorrent, or the info tab for more specific quota information. When you go over your quota, rTorrent won't start.

]]>
<![CDATA[Contact information]]> https://pulsedmedia.com/clients/index.php/announcements/76 https://pulsedmedia.com/clients/index.php/announcements/76 Sat, 22 Jan 2011 17:27:00 +0000 Contact information

This is just a simple reminder that the contact information you give us has to be real and identifiable. By Finnish law we are required to have real contact information and to be able to identify customers with certainty.

So, please make certain the details on file are correct.

Thanks.

]]>
<![CDATA[Upgrade/Downgrade, server move data transfers]]> https://pulsedmedia.com/clients/index.php/announcements/75 https://pulsedmedia.com/clients/index.php/announcements/75 Wed, 19 Jan 2011 05:53:00 +0000 Data transfers between servers

Because moving data between servers is administratively quite a burden (active accounts are needed on both servers during the period, etc.), we have decided to change our standard procedure.

From now on, data transfer is NOT automatically done between servers on an upgrade/downgrade or server change. Instead, all data transfer procedures will be charged as an administrative ticket of 3€.

Upgrades/downgrades within the Small - Large plans often do not require a server transfer, however. Upgrades for extra HDD quota/RAM usually do not require a server transfer either.

Data will be transferred free of charge when the reason is that we need to load balance servers, or when we are upgrading you to a better server for administrative reasons.

]]>
<![CDATA[January Madness!]]> https://pulsedmedia.com/clients/index.php/announcements/74 https://pulsedmedia.com/clients/index.php/announcements/74 Wed, 12 Jan 2011 02:45:00 +0000 January Madness Promotion!

You can get a seedbox for just 4€ per month! whaaaat?

Yeap! You read it right! 2009+ Starter on annual subscription for just 4€ per month! That is immense!

70GB HDD, 250MB rTorrent RAM!

Check out more details and then make an order!

Use coupon code: january-madness   for the discount!

]]>
<![CDATA[Happy new year!]]> https://pulsedmedia.com/clients/index.php/announcements/73 https://pulsedmedia.com/clients/index.php/announcements/73 Sat, 01 Jan 2011 18:18:00 +0000 Happy new year!

Happy and magnificent new year 2011 to everyone! We here at Pulsed Media are really excited about what year 2011 brings!

We hope your year is starting happily and excitingly as well!

To celebrate the starting year we are offering a nice little promotion!

3€ off any seedbox on monthly or quarterly payment, permanently! Valid until the 15th of Jan / maximum of 50 uses. The coupon code is: newyear2011

]]>
<![CDATA[Single server Seedbox management]]> https://pulsedmedia.com/clients/index.php/announcements/72 https://pulsedmedia.com/clients/index.php/announcements/72 Mon, 27 Dec 2010 03:36:00 +0000 Single server seedbox management!

We have decided to release the package for managing a single seedbox server - our internal tool for setting up users and servers.

This is for a single server only, and contains some known bugs. It includes the master GUI and everything!

The installation script installs:

 

  • All necessary debian packages
  • Custom "quota" package (compiled from source)
  • Custom rTorrent/libTorrent version, compiled from source (changed so that it actually compiles, from SVN)
  • Management scripts to /scripts
  • Configuration files to /etc/seedbox
  • SKEL to /etc/skel for Master GUI + ruTorrent etc.
  • Custom plugins for ruTorrent

 

It is exactly the same package we use internally to deploy new servers, and the management scripts are the same ones we use daily. But this package is for a single server only (managing one server at a time).

Known bugs include at least the following:

  • Should not install the "quota" package from source under OpenVZ
  • userTransfer script arguments are wrong (change $args[4] to $args[3] in the param list)
  • No documentation whatsoever (internal tool)
  • The current package is "untested", meaning changes were made which SHOULD work but have not been tested in the package
  • Not all apt-get questions are bypassed (proftpd)
  • Not everything is 100% automatic (quota setup; the munin config hostname has to be typed manually, etc.)
  • Does not install the central management package
  • No idempotency on the server side (i.e. configuration cannot be regenerated and removal cannot be automated easily, as user resource limit data is not saved locally)
  • Does not automatically set up the root crontab
  • Some of the rTorrent operational scripts should be removed/refactored (scripts making sure it stays running)
  • Does not detect installation failures (extremely rare in our usage, though)
  • No automatic account removal tools included
  • No DNS setup integration
  • No versioning; upon updates you need to know whether a package needs a manual upgrade
  • The addUser script takes the password as a parameter, a potential security risk
  • Works only with a DNS naming scheme of *USERNAME*.*SERVER*.*DOMAIN_NAME*
  • All users need an FQDN
Use at your own risk; no guarantees or warranties given! Licensed under Creative Commons BY-SA 3.0.


Pulsed Media Seedbox Management by Pulsed Media is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.
Based on a work at PulsedMedia.com.

There are other bugs too, but if you use this, just remember it was meant for our internal use only, on a very specific range of hardware and software. Tested only under Debian Lenny 64-bit on bare hardware, and under OpenVZ and KVM.

Installation & Usage

Installation is really very easy.

wget http://PulsedMedia.com/remote/install.sh
bash install.sh
answer the questions / set the config

After installation, set up a user:
cd /scripts
./addUser.php USERNAME USERPASSWORD RTORRENT_RAM_LIMIT QUOTA_LIMIT

Change user resource limits:
cd /scripts/util
./userConfig.php USERNAME RTORRENT_RAM_LIMIT QUOTA_LIMIT

Check that all rTorrent instances are running:
/scripts/checkInstances.php

Set up a custom rTorrent template:
wget http://pulsedmedia.com/remote/config/rtorrentTemplate.txt -O /etc/seedbox/config/templates.rtorrentrc
vim /etc/seedbox/config/templates.rtorrentrc

That is really it :)

Remember to set /etc/hostname correctly, as it is used for the lighttpd virtual hosts.
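As an illustration of why the hostname matters, a per-user lighttpd virtual host conditional looks roughly like this (hypothetical names only; the real stack generates its config from /etc/hostname and the USERNAME.SERVER.DOMAIN naming scheme, not necessarily in this exact form):

```
# Sketch: serve one user's GUI when their FQDN is requested.
# "alice.server1.example.com" and the path are made-up examples.
$HTTP["host"] == "alice.server1.example.com" {
    server.document-root = "/home/alice/www"
}
```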

Questions, Bug Reports

This does not include free support, but you may send us general questions about the package or bug reports. If you do not have a client portal account, you may e-mail sales@pulsedmedia.com with your questions.

There are absolutely no warranties for this package, and we do not provide end-user support. However, good ideas to enhance the package are more than welcome, and will be considered.

]]>
<![CDATA[DNS issues resolved]]> https://pulsedmedia.com/clients/index.php/announcements/71 https://pulsedmedia.com/clients/index.php/announcements/71 Fri, 17 Dec 2010 11:17:00 +0000 DNS issues have been resolved

Because the .FI domain registry authorities refuse to put up glue records for .FI domains (claiming they do not exist), we have a setup with multiple levels of nameservers. Most of this is hosted at really expensive nameserving prices, and it creates a pyramid with lots of useless parts that can fail.

This time a domain expired without renewal notices, thanks to a registrar whose system tends to be buggy. This has been resolved now, and DNS should refresh within several hours and get back to a working condition.

I have again contacted the .FI domain registry, pushing to finally get the glue records inserted after years of requests. Ficora, the domain authority for .FI, has always been a total pain to work with, being overly bureaucratic. For example, every single domain has its own login details for their system; on top of that there are separate ISP login details, and each login has a normal password plus a separate authentication sheet mailed out separately. Most people tend to lose those.

In any case, the periodic DNS resolution failures should resolve within the next 24 hrs.

Sorry for the inconvenience. Steps are being taken to ensure this does not reoccur, which will ultimately mean dropping every attempt to serve DNS from .FI domains if necessary.

P.S. If you have the technical know-how to change nameservers, you can use 95.211.1.20, which is the Pulsed Media primary nameserver. Again, we expect this to be resolved within 24 hours, so if it isn't urgent you can just wait and not make any changes.
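If you do change resolvers, on a typical Linux box that is a one-line edit (a temporary workaround only; revert once normal resolution returns):

```
# /etc/resolv.conf - temporary workaround: query the Pulsed Media
# primary nameserver directly while stale caches expire.
nameserver 95.211.1.20
```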

]]>
<![CDATA[The Humble Bundle 2 is here! :)]]> https://pulsedmedia.com/clients/index.php/announcements/70 https://pulsedmedia.com/clients/index.php/announcements/70 Wed, 15 Dec 2010 11:53:00 +0000 The Humble Bundle 2 is here!

The Humble Bundle is a collection of indie games, pay-what-you-want, plus options to donate to the EFF and Child's Play. The bundle contains the games Braid, Cortex Command, Machinarium, Osmos and Revenge of the Titans.

You pay what you want, and can donate to the EFF & Child's Play as you go! Now tell me, how cool is that?

Watch the video, it's quite cool as well! :)

We do not normally post 3rd party news, but here at Pulsed Media we think the indie game developers and charities involved deserve our utmost respect. Besides, who could resist actually innovative, DRM free games? ;)

]]>
<![CDATA[Some 2011 HW arrived]]> https://pulsedmedia.com/clients/index.php/announcements/69 https://pulsedmedia.com/clients/index.php/announcements/69 Fri, 10 Dec 2010 16:22:00 +0000 Some 2011 HW Arrived

Some of the servers arrived today, which means we can set up some of the orders that have already rolled in and continue with the CSB plan migrations.

We expect new servers to arrive by Wednesday.

]]>
<![CDATA[2011, EG09 BestOf: Deliveries delayed. CSB Plan migrations]]> https://pulsedmedia.com/clients/index.php/announcements/68 https://pulsedmedia.com/clients/index.php/announcements/68 Thu, 09 Dec 2010 10:44:00 +0000 2011, EG09 BestOf: Deliveries delayed

Our upstream provider is having supply woes. They are badly late with new hardware, which means that 2011 and EG09 BestOf deliveries are delayed!

2011 sales have been magnificent and even more hardware is needed, so we are now looking beyond the current portfolio of providers to get servers. One option is our US-based servers, where we could have redundancy.

Because the new hardware is not arriving on time, we cannot finish the CSB plan migration to free up the EG09s, so their delivery to new owners is delayed as well. Sucks, I know.

Possibility of US Based 2011

If you are interested in the 2011 service in the US, please open a ticket and we will weigh the options.

The upsides of the US servers would be redundant storage and fewer users per HDD. The downsides: the network generally maxes out at around 40-50M/s, and single-thread network performance is not that great.

CSB Plan migrations

We will be moving users between the EG09s to free up some of the servers. This is a temporary move to consolidate some of the servers and get them freed up quicker.

]]>
<![CDATA[C-500Gs on the cheap!]]> https://pulsedmedia.com/clients/index.php/announcements/67 https://pulsedmedia.com/clients/index.php/announcements/67 Tue, 07 Dec 2010 02:07:00 +0000 Same thing as with the EG-09 BestOfs!

6 have been made available right now; as we renew our hardware to newer models, more might become available.

Owner transfer is 25€, and the dedi service 35-40€ depending on the billing cycle.

 

These are:

Celeron D215/220
2GB RAM
500GB HDD
3TB at 100Mbps, then 10Mbps

Owner transfer order here!

As a dedi service!

]]>
<![CDATA[New US servers entering limited production!]]> https://pulsedmedia.com/clients/index.php/announcements/66 https://pulsedmedia.com/clients/index.php/announcements/66 Mon, 06 Dec 2010 15:25:00 +0000 New US servers entering limited production!

New US servers are going to enter limited production during this week. After weeks of trying to get it all to work, I think we have these stable now! :)

The problem lay in the fact that these are bigger servers with 1Gbps uplinks: we have to divide them using virtualization, but full virtualization is too heavy, so we rely on lightweight container virtualization, i.e. OpenVZ.

They gave us extra IPs from another subnet, which meant that OpenVZ did not work: out of the box, OpenVZ does not support using IPs from a separate subnet. In the end it is an easy fix - just change the vz.conf setting NEIGHBOUR_DEVS from detect to all - but finding that solution and pinpointing it took a lot of time, and tons of blame games between a management company and the server provider.
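For anyone hitting the same wall, the change amounts to one line in /etc/vz/vz.conf (option name as we recall it from the stock OpenVZ config; double-check the spelling against your own vz.conf):

```
# /etc/vz/vz.conf - answer ARP for container IPs on all host
# interfaces, not only those whose subnet the host detects.
NEIGHBOUR_DEVS=all
```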

Then we changed from HyperVM to Proxmox, as HyperVM is too much for these servers; we simply don't need all those features. What I wasn't told - and hardly anyone knows, for that matter - is that Proxmox's OpenVZ second-level quotas work really bizarrely. You have to take the second-level quotas somewhat at face value and trust that the files are actually there.

The thing is that for some reason you cannot see the aquota files, so be damned certain you don't compile quota from source, and that you have backed up the symbolic links: if those links disappear, there is no chance to restore them unless you know the full paths to the files. This wouldn't be a problem if you could see the damn files :O
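A hedged illustration of the kind of backup meant here (all paths are made up for the demo; the real aquota files live wherever your containers keep them): record each symlink and its full target path to a file stored off the container, so the links can be recreated if they vanish.

```shell
# Demo sandbox standing in for the real layout (paths are examples).
mkdir -p /tmp/quota-demo/real /tmp/quota-demo/ct
echo data > /tmp/quota-demo/real/aquota.user
ln -sf /tmp/quota-demo/real/aquota.user /tmp/quota-demo/ct/aquota.user

# Record every aquota symlink and the full path it points to.
find /tmp/quota-demo/ct -name 'aquota.*' -type l \
    -printf '%p -> %l\n' > /tmp/quota-demo/backup.txt
cat /tmp/quota-demo/backup.txt
```

With that file kept somewhere safe, the links can be recreated with ln -sf even after they disappear.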

Another open question is whether the quota grace periods will work; we shall see.

Anyway, they work, and are fast and stable in our testing, so we are entering them into limited production.

This means there is a set capacity - and no more. We will not immediately purchase new servers; we will run with these and see how it goes before committing to more. We are very hopeful though, and it looks like these will perform better than the French servers! :)

Want to move to US server?

Just open a ticket, and we'll transfer you right over! :)

]]>
<![CDATA[EG-09 BestOfs on the cheap]]> https://pulsedmedia.com/clients/index.php/announcements/65 https://pulsedmedia.com/clients/index.php/announcements/65 Fri, 03 Dec 2010 14:44:00 +0000 We are moving away from our OVH EG-09 BestOf servers. Currently 8 are up for grabs on the cheap! :)

These include a full "OVH ownership transfer", meaning we set the administrative, billing etc. contact details to you and hand over full access to the OVH Manager.

These servers may or may not have days remaining, so you will need to be ready to renew them immediately upon transfer.

More might come up a bit later.

Price is 25€ + VAT 23% if applicable. Get your EG-09 BestOf now!

Alternatively, if you cannot purchase from OVH normally, we have tremendously lowered the dedicated server pricing for these. The price per month is 90€ to 95€ depending on the billing cycle (up to biannual).

Get your EG-09 BestOf as dedicated service now! :)

 

Oh, why, you ask?

Because the 1Gbps CSB offerings are being moved to far better Leaseweb servers, with enough BW to be practically unmetered :) So these servers have become useless to us.

]]>
<![CDATA[Testing new servers in US & NL.]]> https://pulsedmedia.com/clients/index.php/announcements/64 https://pulsedmedia.com/clients/index.php/announcements/64 Fri, 26 Nov 2010 13:35:00 +0000 Testing new servers in US & NL.

Some of you know that we have been testing new servers in the US & NL lately, in order to replace the CSB 1Gbps plans and make them an official part of PM offerings under PM branding and software.

We have gotten far in the testing, and dug really deep. The results are curious in some regards, to say the least.

NL will be provisioned as the 1Gbps replacement, now with Leaseweb servers but in the future also with other providers. They are capable of sustaining good speeds, and now that we have debugged the hardware (note: do not buy into HP's servers - they ultimately suck, at double to triple the price of the "cheapo weak" options, while still using the same "cheapo weak" hardware for the most part) we can start phasing them in.

In NL the servers come with a 100TB in+out traffic limit; you know Leaseweb is a bit like that, they like to charge for inbound traffic too. The 2nd provider we will be using in NL at some point in the future does not charge for inbound traffic :) That provider does cost quite a bit more upfront though (3x the price tag on day 0, while the monthly cost remains the same).

These servers are big, so we will be provisioning a lot more accounts per server, but the difference in performance should not materialize, as the HDDs are far more capable, with double the IO operations per second compared to the current CSB 1Gbps servers. With the better network and traffic allowance these servers will do a lot more traffic per user than before - we expect it to double! In fact, these servers should for all practical purposes be unmetered; we do not expect the full allowance to be used.

Of course, production will reveal the real capability of these servers, but those are our assumptions, and if they do not serve users well we will of course go back to the current servers or do something else to bump up the quality.

We will also revamp the 1Gbps services to lower the number of users per server in the future, by offering only somewhat larger plans.

 

US Servers

The US 1Gbps servers do not work as 1Gbps at all. We managed only to achieve ~40M/s sustained speed, therefore no 1Gbps unmetered plans in US. These are truly unmetered servers, but the network speed also shows this with low sustained and burst speeds.

Our assumption is that users, meaning you, would not be satisfied with that low peaks on 1Gbps.

But it's not all bad, despite the intended 100Mbps servers not being available anymore at US, we will be using these to deliver 2009+ 100Mbps services, with actually a bit more than 100Mbps allocated per current server replacement of resources. We have to provision multiple servers worth on these servers via virtualization, but the end result should still perform as expected, and you should receive same performance level as currently, with the difference that peaks are higher.

As with the NL servers, if these do not serve you well, we will abandon these servers and go back solely to our current french servers.

As you probably know the US 2009+ plans came available and then abruptly became unavailable as the provider led us to believe the offer was not a one time deal but something we could get a lot of, so we were unable to acquire new servers.

These servers, being bigger with more bandwidth and larger, faster HDDs, should work just nicely for US 100Mbps services, and we can begin rolling out US 2009+ subscriptions again during next week.

 

To celebrate this

We will soon be updating the specials page with some really neat offerings! ;) Expect the page to be updated by next Wednesday.

Not only that but we are planning a budget VPS based on the new US servers as well :)

]]>
<![CDATA[US Server availability limited]]> https://pulsedmedia.com/clients/index.php/announcements/63 https://pulsedmedia.com/clients/index.php/announcements/63 Fri, 19 Nov 2010 05:59:00 +0000 2009+ US Server availability

Availability of rTorrent seedboxes on 100Mbps in the US is very limited right now. We hope to get more stock within a couple of weeks, but until then they are completely full and no new servers are available from the provider.

 

 

]]>
<![CDATA[France <> Netherlands route outages]]> https://pulsedmedia.com/clients/index.php/announcements/62 https://pulsedmedia.com/clients/index.php/announcements/62 Tue, 16 Nov 2010 17:16:00 +0000 France <> Netherlands route outages

There are some issues with the routes between the France and Netherlands servers right now, which cause problems with the master GUI.

We are hopeful these will resolve within a matter of hours, and over the upcoming weeks we are taking steps to ensure this problem does not arise again.

]]>
<![CDATA[US servers back into production this weekend]]> https://pulsedmedia.com/clients/index.php/announcements/61 https://pulsedmedia.com/clients/index.php/announcements/61 Fri, 05 Nov 2010 07:10:00 +0000 US servers back into production this weekend

The initial launch of the US servers was a disappointment, as several users attested. With just a quarter of capacity provisioned, the IOWAIT was simply too high.

We believe this has now been fixed; the cause was software RAID level 5's bad performance. We changed to RAID10, which means there is a 50% redundancy level on the initial servers. This will not be the norm; not all US servers will have RAID10, but the first servers had double-sized HDDs, which makes it possible.
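For illustration only, a 4-disk software RAID10 array of the kind described might be created with mdadm roughly like this. The device names and disk count are hypothetical, and the sketch only prints the command by default rather than touching any disks:

```shell
#!/bin/sh
# Sketch: creating a software RAID10 array with mdadm, as opposed to the
# RAID5 layout that performed badly under seedbox IO load.
# Device names are hypothetical; DRY_RUN=1 prints instead of executing,
# because mdadm --create is destructive.
DRY_RUN=${DRY_RUN:-1}
DISKS="/dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2"

cmd="mdadm --create /dev/md0 --level=10 --raid-devices=4 $DISKS"
if [ "$DRY_RUN" = 1 ]; then
    echo "$cmd"
else
    $cmd
fi
```

RAID10 halves usable capacity compared to a plain stripe, which is why the double-sized HDDs mentioned above matter: they make the mirrored layout affordable.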

If this isn't a good solution, then we think we can only offer higher-priced US offerings in the future, with fewer users per HDD.

However, in our testing even RAID5 performed quite nicely, with IOWAIT of 5-7% at 8M/s downstream and 6M/s upstream. We also had prolonged periods of 11M/s downstream and 10+M/s upstream.

The 8/6M result was achieved using "hard to seed" torrents. Network stability isn't quite as good as we are used to in the EU, but the network problems have resolved themselves within minutes every single time.

Due to cost constraints, nothing better can really be done in the US. The cost point and end-user price are so low that getting 100Mbps unmetered for much cheaper is practically impossible! The US servers also come with the added benefit that no low-priority transit or network shaping is used to offer 100Mbps unmetered; it's plain, unaltered bandwidth. Our French servers use QoS to prioritize different types of traffic (thus often lower FTP transfer speeds) along with premium transit link capacity, but these have no QoS or prioritization at all. Plain raw capacity.

During this weekend we are bringing them back into production, and those wishing to move may do so by opening a support ticket.

We are doing a lot of load balancing these days. Over the upcoming 2 weeks we are abandoning some weak providers and weak servers, consolidating to a smaller set of server types and an overall lower count of individual servers with better hardware, in order to enhance the service level and simplify management.

How to get onto a US server?

Open a support ticket by e-mailing to support@pulsedmedia.com or by logging into client portal.

Other transfers

We are also doing other transfers; if you fall into this category, you will be notified via a new support ticket. There are a few servers from which you will be transferred to US servers by default, but you can opt out. (Users on those servers have already been notified.)

]]>
<![CDATA[Blank Master GUI?]]> https://pulsedmedia.com/clients/index.php/announcements/60 https://pulsedmedia.com/clients/index.php/announcements/60 Tue, 02 Nov 2010 13:28:00 +0000 Blank Master GUI?

If your master GUI is blank, you are over quota and the automatic update therefore failed. Clear some files and contact support to get it restored. Future versions will include a better quota display and better error handling when over quota.
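If you have shell access, a quick way to see what is eating your quota before clearing files is to list the largest items under your home directory. A generic sketch (paths are illustrative, nothing service-specific):

```shell
#!/bin/sh
# Sketch: list the largest items under a directory, biggest first,
# to find what is consuming disk quota. Purely illustrative.
biggest() {
    # du -sk: size in KiB per item; sort -rn: largest first; top 5
    du -sk "$1"/* 2>/dev/null | sort -rn | head -n 5
}

# demo against a scratch directory
d="$(mktemp -d)"
echo "some data" > "$d/example.txt"
biggest "$d"
rm -rf "$d"
```

In practice you would run `biggest "$HOME"` and delete or download the largest finished items first.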

]]>
<![CDATA[Master GUI performance fix]]> https://pulsedmedia.com/clients/index.php/announcements/59 https://pulsedmedia.com/clients/index.php/announcements/59 Sun, 31 Oct 2010 19:29:00 +0000 Master GUI performance fix

During the last update some bad code slipped into the release. This has now been fixed; the fixes are partly rolled out instantly and partly within the next couple of hours.

Sorry for the hassle and the usability inconvenience. You should notice immediately lower CPU usage when using the master GUI.

]]>
<![CDATA[Master GUI update, backend update!]]> https://pulsedmedia.com/clients/index.php/announcements/58 https://pulsedmedia.com/clients/index.php/announcements/58 Mon, 25 Oct 2010 22:45:00 +0000 Master GUI has been updated again!

Today a new master GUI rolled out with many background enhancements and a new look. Many of the updates were backend changes to ease management, but the biggest and most visible are the new look and a more welcoming initial screen.

We also worked to make the GUI even faster than before! It's already probably the fastest in the industry, but by "playing" to the capabilities of browsers and how they function, we should have reduced the initial load time significantly.

We aim to continue this trend, and our next major milestone is making ruTorrent load faster; as you know, it's quite slow to load due to the sheer number of files it consists of.

This update has been rolled out automatically for all users; just refresh your browser window to get the new look!

 

Backend updates

Managing a mass of servers is no trivial task in itself, and we've been working to make it easier for us. Today we rolled out new backend management utilities which should make it easier for us to work efficiently, meaning we can get more done in a given amount of time.

]]>
<![CDATA[Managing a lot of servers]]> https://pulsedmedia.com/clients/index.php/announcements/57 https://pulsedmedia.com/clients/index.php/announcements/57 Fri, 22 Oct 2010 05:20:00 +0000 Managing a lot of servers

It's less trivial than one would think. They are just plain servers, right?!

While that is true, they are plain servers, as you keep growing, configurations start to slightly differ: package versions, backend software versions, settings.

And when software with no backward compatibility, such as rTorrent, is introduced, it makes things that much harder.

So to easily maintain a lot of servers in a streamlined fashion, we've built quite a few scripts ourselves, ranging from on-server management to management node scripts, monitoring, and automating the software stack.

One of the questions is: how do you maintain 40+ SSH sessions you need infrequently, but still frequently enough that it's annoying to close and restart them? How do you manage passwords, or use keyless logins? How do you distribute login certificates? How do you distribute software?
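As an illustrative sketch of the keyless-login part (not Pulsed Media's actual tooling; hostnames and paths are placeholders), one management public key can be distributed to an inventory of servers with ssh-copy-id. DRY_RUN=1 only prints the commands:

```shell
#!/bin/sh
# Sketch: push one management public key to every server in a plain-text
# inventory, so future logins are keyless. Hostnames are hypothetical.
# DRY_RUN=1 prints the commands instead of executing them.
DRY_RUN=${DRY_RUN:-1}
KEY="$HOME/.ssh/id_rsa.pub"
SERVERS="server1.example.com
server2.example.com
server3.example.com"

for host in $SERVERS; do
    cmd="ssh-copy-id -i $KEY root@$host"
    if [ "$DRY_RUN" = 1 ]; then
        echo "$cmd"
    else
        $cmd
    fi
done
```

With keys in place, tools like screen/tmux or a persistent SSH ControlMaster socket can keep those dozens of sessions cheap to reopen.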

Small things become non-trivial when you have enough mass.

Things like slight changes to the FTP config. It's no fun to log in manually to, say, 60 servers.
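A minimal sketch of that kind of mass change (not our actual scripts; the hostnames, config path and service name are made up for illustration): push one config file to every server in a list and reload the daemon, instead of logging in to each by hand.

```shell
#!/bin/sh
# Sketch: roll one updated FTP config out to a server list and reload the
# daemon. Hostnames, path and service name are hypothetical.
# DRY_RUN=1 prints the commands instead of executing them.
DRY_RUN=${DRY_RUN:-1}
CONF="proftpd.conf"
SERVERS="node01 node02 node03"

run() {
    if [ "$DRY_RUN" = 1 ]; then echo "$*"; else "$@"; fi
}

for host in $SERVERS; do
    run scp "$CONF" "root@$host:/etc/proftpd/proftpd.conf"
    run ssh "root@$host" /etc/init.d/proftpd reload
done
```

The same loop shape covers account creation, package upgrades, or any other change that must land identically on every box.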

Or when you get 40 orders in a single day, how to set up all those accounts is another question altogether!

However, being proactive about it and making progress to streamline processes every single day yields results.

Just today, we solved several problems before users on the servers even noticed, simply because the tools we've built to manage servers let us notice things you wouldn't ordinarily see.

Another benefit is in performance characteristics: we know how things change, because with this mass of servers the network metrics actually become meaningful. With a single server, you make a change hoping for better performance, but you simply cannot know the end result, because it's just one server and because of the nature of the internet: torrents don't transfer at a stable speed, they bounce up and down. The only way to see an effect is when it shows up cumulatively across tens of servers.

Another thing we did to make things faster was to build our own shell environment, making it quick to connect to servers and run a few commands. It's nothing fancy, just some autocompletions built into the shell environment, some visual changes, some aliases etc. to speed things up.

One of the key things is task abstraction and separation into smaller pieces. The smaller the pieces, the easier they are to maintain.

It's also important to test. For this we operate lots of local VMs with separate versions or software stacks, to see how they interoperate and how things work.

So at times when you wonder why setup is taking so long, or why we cannot support your special request, it's simply because we have a large mass to manage and need streamlined operations. We cannot maintain N*N different configurations; it would be unmaintainable, and we would suffocate under the administrative overhead, which would mean hiring additional support people. So when you request something trivial-sounding like "Can I have public.MYUSERNAME.pulsedmedia.com so I can share files?", making it happen in fact requires us to add that support for every user, on every server, in an autonomous way.

Otherwise we'd end up with yet another custom configuration, which is hard to maintain. Never mind how to keep track of the custom configurations so we don't accidentally wipe them out.

Therefore, we can only implement most requested features. Not every single feature, or custom configuration.

So next time you think "why can't they support feature XYZ?", it's in fact because we need to make sure it has no negative impact, and we need to give it to everyone, on every single server.

On new servers we operate a newer, customized rTorrent, and this has been a HUGE headache as of late; in fact, it has caused most of the setup errors. Why, you may ask? The config is not 100% the same on each server. We didn't envision a situation where rTorrent configuration backwards compatibility would be broken, so we are currently supporting 2 different versions of rTorrent. It's almost as bad as supporting 2 entirely different torrent clients, and sometimes we simply forget which is which.

That's why we are constantly developing in the background and making things better! You may think "oh, nothing has happened in a while", while in the backend we might have changed how we operate things altogether.

We changed our management style quite radically just a month ago; it's not a visible change, but it yielded immense time savings. We are now working to streamline this further to save even more time.

]]>
<![CDATA[Affiliate program: Higher signup bonus & shorter delay]]> https://pulsedmedia.com/clients/index.php/announcements/56 https://pulsedmedia.com/clients/index.php/announcements/56 Tue, 19 Oct 2010 16:11:00 +0000 Affiliate program: Higher signup bonus & shorter delay

The signup bonus for the affiliate program has been raised to 10€, and the delay for commission payment has been lowered from 40 days to 15 days.

To take advantage of this now, log in to the client portal and go to the "Affiliate" tab. Post your affiliate link on your blog or website, give it to friends etc.!

]]>
<![CDATA[Master GUI updates]]> https://pulsedmedia.com/clients/index.php/announcements/54 https://pulsedmedia.com/clients/index.php/announcements/54 Wed, 13 Oct 2010 16:14:00 +0000 Master GUI updates

For some users we have now rolled out an updated master GUI, which concentrates on ruTorrent enhancements and stability.

The most visible updates are in the plugins: irrelevant plugins which only caused confusion have been removed, and a new quota-based plugin has been introduced.

Under testing is a method to ensure minimal rTorrent downtime: fully automated restarts which inspect processes and decide from there whether to restart or not. Feedback is appreciated.

We will continue to slowly distribute the update, and we are already hard at work for next major release.

]]>
<![CDATA[2009+ Starter: 70Gb HDD! Addon pricing reduced!]]> https://pulsedmedia.com/clients/index.php/announcements/53 https://pulsedmedia.com/clients/index.php/announcements/53 Mon, 11 Oct 2010 21:49:00 +0000 2009+ Starter: Now with 70Gb HDD!

2009+ Starter got a MAJOR boost today, almost doubling the HDD quota! New orders of 2009+ Starter will now come with 70Gb of HDD!

Not only that, but the entry price for 2009+ Starter was lowered as well! It now starts from 5.83€ per month for annual subscriptions!

Addon pricing reduced! :D

Addon pricing for 2009+ SBs was reduced majorly too: just ~1.66€/mo per 50Gb of extra HDD, and ~1.58€/mo per 250Mb of extra RAM limit for rTorrent when paid annually!

Old 2009+ Starter to New?

Old accounts will not be upgraded automatically. If you want to upgrade to the new 2009+ Starter, contact support and we will move you when new servers come online. However, there's a caveat: we cannot transfer your data from the old account to the new one, and the new ones have an rTorrent speed limit of 20Mbps to ensure fair-share quality for all users.

All new orders will be the new 2009+ Starter.

]]>
<![CDATA[Slightly behind 2009+ setups]]> https://pulsedmedia.com/clients/index.php/announcements/50 https://pulsedmedia.com/clients/index.php/announcements/50 Mon, 04 Oct 2010 11:18:00 +0000 Slightly behind 2009+ setups

Due to an influx of orders we are slightly behind on 2009+ service setups. We should be back within the 24-hour setup schedule by Thursday.

Sorry for the inconvenience.

]]>
<![CDATA[Premium VPS US in production and available!]]> https://pulsedmedia.com/clients/index.php/announcements/49 https://pulsedmedia.com/clients/index.php/announcements/49 Fri, 01 Oct 2010 19:45:00 +0000 Premium Virtual Private Servers are now available!

The US-based VPS is now ready and in production. The first deliveries have been made, and so far it's been excellent!

Performance and reliability are very good; even the most minimal config we offer beats more expensive dedicated options in terms of performance!

The servers use mirrored storage for redundancy and are located in a higher-quality DC in Miami, Florida.

The starting price is JUST 9.95€ per month, with a 25% first-term discount with coupon vpsintro !

For 9.95€ you get 512Mb of RAM, an 80Gb HDD, a 100Mbps link and 500Gb of transfer, with 1 CPU.

Configurable options are available, and the same discount applies to them as well. You may opt for 4 CPUs, 5½Gb RAM and 700Gb of storage with 11Tb of transfer if you want to (VPS Large, fully upgraded).

So do check them out now at http://pulsedmedia.com/vps.php !

]]>
<![CDATA[VPS changes]]> https://pulsedmedia.com/clients/index.php/announcements/48 https://pulsedmedia.com/clients/index.php/announcements/48 Fri, 24 Sept 2010 11:20:00 +0000 VPS changes

Due to provider woes, the VPS offerings are being changed. Preorders will be delivered as promised, with the promised specs.

We were originally about to go with 100Tb/Midphase, with a custom server and deal. But when it was time to order, replies from Midphase stopped altogether. After weeks of trying to order, and finally resorting to contacting a 100Tb manager directly via a mutual friend, we received the reply "We cannot do that", and shortly after the sales rep also replied that they cannot do it unless we opt for 700$+ servers (with 12 HDDs).

Well, the bandwidth-to-cost ratio would not have worked, and those are far too expensive as first VPS nodes. 100Tb/Midphase essentially tried to sell us something different than promised. If I hadn't been constantly particular about link speed & bandwidth, they would have sold us half the link speed and bandwidth promised at that price, WITH a premium over their public offering.

Needless to say, we aren't going to use 100Tb. Lying is not a way to win customers, especially customers like us who would likely get quite a few nodes over time.

Instead, we opted for a private DC located in Miami. The good news is: dual L5320 CPUs and 16Gb of RAM per node. 100Tb offered 4Gb of RAM and Xeon 3520 CPUs, so there is roughly 4 times the CPU power and 4 times the RAM, along with larger HDDs (2Tb). The downside is nowhere near the bandwidth amounts of 100Tb.

So the VPS offering is going to be a premium service, meaning a mirrored, redundant disk array, targeted at regular VPS usage such as webhosting. Otherwise it will be the same, but with less bandwidth and more CPU & RAM. Plans will be changed shortly to reflect this.

Availability is VERY limited as well; nodes are added sparsely and rarely, as we buy part of the hardware outright and it's a limited-availability offering.

The preorder users are very lucky, as we are essentially giving them these VPSs way under cost. We expect the first VPS node to come online and be ready for production by Wednesday.

]]>
<![CDATA[Configurable options]]> https://pulsedmedia.com/clients/index.php/announcements/47 https://pulsedmedia.com/clients/index.php/announcements/47 Thu, 23 Sept 2010 17:54:00 +0000 Configurable options

Configurable options for the 2009+ series are now available. You can upgrade by up to 1500Mb of extra RAM and up to 500Gb of extra HDD quota.

The price is lower for longer payment cycles. This makes a great configurable plan if you need more RAM or HDD than the default offering provides.

RAM also affects the minimum/maximum peers and upload slots for rTorrent instances. More RAM, more upload slots & connected peers.

]]>
<![CDATA[Growth pains.]]> https://pulsedmedia.com/clients/index.php/announcements/46 https://pulsedmedia.com/clients/index.php/announcements/46 Sun, 19 Sept 2010 01:57:00 +0000 Growth pains

We've all heard the ol' saying of "What doesn't kill you, makes you stronger", and that's probably doubly true for business growth pains.

During the past couple of weeks people have probably noticed the fluctuating setup and support ticket response times. This is because we are overwhelmed with work. Yes, really. And it's all related to the immense growth; even the CSB merger has not contributed much, as it's been progressing steadily in the background and has been a marginal amount of work, with Josh continuing to help with the merger and the early management of CSB merged into PM.

But the overwhelming part comes from the immense natural growth we've been experiencing. In fact, we even had to cut our marketing expenditure to slow growth down. We were literally getting too many orders in, all too fast.

At the fastest, the server count has been growing 15% per week over the past 2 months. Yes, 15% per week. Thankfully, I (Aleksi) was prepared for this early on with 95% automated setup scripts.

But it's all good, and very positive. The growth pains help us grow and manage things more efficiently. It's the little things, like how to manage 40+ concurrent SSH sessions on a daily basis, how much spare capacity is required for weekend sales, and what the monthly sales peak dates are. Then there are the larger things, like "there seems to be a performance issue, but we cannot properly see it". Solution: more detailed monitoring. All PM semidedi/shared servers are now monitored in quite some detail.

It has also been forcing us to solve some seemingly small but increasingly annoying little things that occur more often with scale, such as the occasional failure of WHMCS to extend service due dates. There was no common thread to that one, no error messages to look at. It was a combination of multiple bugs and WHMCS's lack of proper error handling (which is worrying for a long-lived, widely adopted billing application).

The important areas are highlighted by the growth, patterns emerge to be recognized, and further business & service development goals become clearer due to recurring patterns. It becomes visible just how immature the whole seedbox business in fact is, how early in its lifespan, and where the efficiencies can be found.

Pulsed Media has never been a slow-growth or "miniature business" like so many beginning hosting businesses are. We've been a very strong growth company from day 0, despite the shockingly, absurdly, insanely poor, fraudster provider we tried to use initially. They caused massive damages; it's hard to calculate how much exactly, but the latest estimates are in the vicinity of 17,500€ in direct damages alone (not counting refunds and cancellations). But we came out of that stronger. Despite every expectation, we finally managed to clear the backlog of preorders and transfer from the miserable 2010 plan to the 2009+ plan. A lot of that had to be done by getting outside funding myself. After all, the 2010 provider got approximately 90% of our funds during the launch, which we never got back. Hell, they haven't even provided invoices for all the payments made to them. On top of that, we had to refund a lot of money due to inability to deliver. We lost roughly 60% of the customers who preordered. Any other startup would probably have folded in such a case.

But we came through, we delivered upon our promises, and then some. We keep constantly enhancing our services and the word spreads. Now many of those who asked for a refund, or left due to the initial provider woes, have returned as well.

Every problem presented is an opportunity.

We will keep strongly enhancing our services in the future, working very hard to deliver the best possible service at the lowest possible cost. Every week we go through studies, experiments or business development ideas to make our services better than ever. Many of the things we do in the background are not visible to the end user - but rest assured: we are constantly making enhancements.

Customer feedback is one of the most important things for us, so please, if you have feedback, do not hesitate to come to IRC and talk about it, or send an e-mail to support@pulsedmedia.com.

Best Regards,
 Aleksi

]]>
<![CDATA[Scheduled maintenance window]]> https://pulsedmedia.com/clients/index.php/announcements/45 https://pulsedmedia.com/clients/index.php/announcements/45 Thu, 16 Sept 2010 16:05:00 +0000 Scheduled maintenance window

The scheduled maintenance window for seedboxes is from now on 8:00-15:00 GMT on weekdays. Maintenance is rare, but if you encounter downtime during this window, check our network status page to see whether your particular server is undergoing scheduled maintenance.

If there's scheduled maintenance going on, it will be reported on the network status page of client portal.

]]>
<![CDATA[Warning of suspension/termination notice e-mails]]> https://pulsedmedia.com/clients/index.php/announcements/44 https://pulsedmedia.com/clients/index.php/announcements/44 Mon, 13 Sept 2010 17:33:00 +0000 Warning of suspension/termination notice e-mails

It's well known that WHMCS every now and then fails to extend the service due date on payment, which eventually causes a needless "service suspended" e-mail.

However, no services will actually be suspended until we have looked through the records, and if there are mistakes we will fix them manually.

As of late this has been far more frequent than before, up to the point where we have to fix 5+ accounts PER DAY. We have been trying to find a resolution to this and a number of other WHMCS bugs, but most of our bug reports simply get deleted by WHMCS staff.

We are really sorry for the inconvenience and hassle this causes, and we are working to find the correct solutions.

]]>
<![CDATA[rtorrent settings]]> https://pulsedmedia.com/clients/index.php/announcements/43 https://pulsedmedia.com/clients/index.php/announcements/43 Mon, 06 Sept 2010 14:33:00 +0000 rtorrent settings

There have lately been a few complaints about poor performance on particular servers, and we have found the cause: rTorrent settings. Some users have decided to abuse our good faith, and that of their fellow seedbox users, by attempting to use more than their fair share, causing complete performance degradation on the server in question.

There is a solid reason why we have chosen the settings we have, and opted to write-protect the .rtorrent.rc file from users. There are still some settings you can change, but for the most part you are not allowed to change settings at all.

Why is that, you may ask? Because we've put effort into making the service good for everyone, not just for a single abuser wanting to grab 100% of server resources, or for those who think they can write a better rTorrent configuration.

Approximately 100% of the time a user changes rTorrent's performance-related settings, performance is killed for everyone else on the server, ruining the service for all.

Want to change the rTorrent configuration? Get a dedicated server. A semidedicated service is shared with other people, and users simply have to respect other users' right to the server resources as well.

Write-protection is not going anywhere. It's for everyone's protection, including yours, so that everyone gets their fair share.

If you want more performance, you can upgrade to a bigger plan; these do not just offer face-value enhancements but actually higher bandwidth as well. Generally speaking, a Medium offers 2-3x the transfer of a Small, and a Large offers 4-6x the transfer of a 2009+ Small. Of course it also depends on how you use the service; for some users there's no benefit in upgrading, as there aren't enough leechers on their torrents to use the extra available bandwidth.

New servers have already had their .rtorrent.rc files write-protected for the past few weeks.
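As a minimal sketch of what write-protecting a config file can look like (purely illustrative; the actual mechanism on the servers may differ), the file can be owned by root and made read-only for everyone:

```shell
#!/bin/sh
# Sketch: make a user's .rtorrent.rc readable but not writable, so the
# performance-critical settings cannot be edited. Illustrative only;
# a real deployment would loop over every user's home directory.
protect_rc() {
    rc="$1"
    chown root:root "$rc" 2>/dev/null || true  # needs root; ignored in this demo
    chmod 444 "$rc"                            # read-only for owner, group, world
}

# demo against a scratch file
rc="$(mktemp)"
protect_rc "$rc"
stat -c '%a' "$rc"   # prints 444
```

rTorrent can still read the file at startup; the user simply cannot save changes to it.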

]]>
<![CDATA[US VPS Offerings launch!]]> https://pulsedmedia.com/clients/index.php/announcements/42 https://pulsedmedia.com/clients/index.php/announcements/42 Fri, 27 Aug 2010 18:43:00 +0000 US VPS Offerings launch!

The much anticipated US series of offerings is about to launch with VPS plans!

These plans use the HyperVM control panel and OpenVZ virtualization. Management of the servers is done by a 3rd-party company specializing in this type of setup; they will be monitoring the servers 24/7/365, load balancing them proactively etc.!

These servers come with 2Gbps of bandwidth, of which each VPS is limited to a smaller chunk, so that everyone gets good speed even if every VPS bursts at the same time! They also feature incredibly high traffic limits, WAY higher than you can expect from similarly priced offerings in the market.

All this is operated in SoftLayer datacenters on the incredible Savvis network!

All servers feature fast Quad Core CPUs or better and at the very least 4 HDDs!

Check them out now, and reserve your slot immediately!

]]>
<![CDATA[White label reselling and turnkey dedicated seedbox option]]> https://pulsedmedia.com/clients/index.php/announcements/41 https://pulsedmedia.com/clients/index.php/announcements/41 Fri, 27 Aug 2010 13:46:00 +0000 White label reselling

White label reselling means that there is absolutely no branding of ours involved in the service you offer to your customers; it is the most turnkey solution to running your own seedbox business imaginable.

Our white label reselling is made possible by our "Dedicated server turnkey seedbox" option, which comes at a price of 19.95€ and can be added to any of our dedicated servers. This "Pulsed Media Turnkey Seedbox" is also useful for anyone wanting to share their dedicated server with friends, or who just wants a dedicated seedbox with minimum effort. Changing the default UI texts is also free of charge, tabs can be customized etc. You can add new PHP applications and more.

Creating and deleting accounts is free, and DNS hosting for your choice of domain is included. Alternatively you can host DNS yourself. We will help you from start to finish to get things running.

The "Turnkey seedbox" service includes our standard master GUI familiar from our semidedicated offerings; you get normal user access, and we handle all the server maintenance and management needed. In fact, root access is not even available with this service; all tasks go through our support. This is to ensure the level of quality and our knowledge of how things are arranged, which is what allows us to offer this service.

Regular maintenance tasks are simply redirected to our support, and we handle it.

On top of this, we offer what are called administrative tickets, covering everything beyond regular support and maintenance. For 3€ per ticket/issue we will add customized software, compile software, etc. for you, so you never need to worry about getting things working again. Basic maintenance administrative tickets are free with the turnkey seedbox support service; for other dedicated clients they cost 3€.

If you are interested, look for the option in your dedicated server options list, or if you have more questions do not hesitate to contact sales@pulsedmedia.com

]]>
<![CDATA[Main website & DNS downtime]]> https://pulsedmedia.com/clients/index.php/announcements/40 https://pulsedmedia.com/clients/index.php/announcements/40 Fri, 27 Aug 2010 12:19:00 +0000 Main website & DNS downtime

There was a short downtime on the main website & DNS today; services were restored to normal around 10:15AM GMT. For some people DNS might be affected for a longer time, due to ISP caches not obeying our set DNS TTL times.
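For the curious, you can see how long your resolver will keep a cached answer by looking at the TTL column of a dig answer. A sketch, assuming standard dig output; the domain and canned answer line below are illustrative:

```shell
#!/bin/sh
# Sketch: extract the remaining TTL from a dig answer line. In dig's
# "+noall +answer" output the second column is the TTL in seconds; a
# resolver still reporting a large TTL long after a zone change is
# caching beyond the published value.
parse_ttl() {
    awk 'NR==1 {print $2}'
}

# demo on a canned answer line; a live check would be:
#   dig +noall +answer pulsedmedia.com A | parse_ttl
echo "pulsedmedia.com.  3600  IN  A  192.0.2.1" | parse_ttl   # prints 3600
```

When the printed TTL reaches 0 the resolver must re-query, so that number is roughly how much longer you would see the stale address.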

]]>
<![CDATA[2009+ Starter plan orders fixed - Now available in stock]]> https://pulsedmedia.com/clients/index.php/announcements/39 https://pulsedmedia.com/clients/index.php/announcements/39 Tue, 24 Aug 2010 16:11:00 +0000 2009+ Starter plan orders fixed

By accident we "ran out of stock" in the billing system for 2009+ Starter. This disallowed new orders for 2009+ Starter even though we have plenty of room on those servers. This has now been fixed and set to represent the real available count.

We hope to remove stock control for 2009+ Starter altogether in the coming months, once demand is deemed significant enough to warrant constant excess capacity by a significant margin.

]]>
<![CDATA[Only 2 remaining for dedicated limited time offer!]]> https://pulsedmedia.com/clients/index.php/announcements/37 https://pulsedmedia.com/clients/index.php/announcements/37 Mon, 16 Aug 2010 12:24:00 +0000 2 servers still available without setup!

Only two more orders will be taken for the 100Mbps unmetered dedicated Mini server without a setup fee! So if you are considering getting one, now is the time, before we introduce a setup fee for the Mini server as well!

After the next 2 orders the setup fee will be 49.95€ for the Mini server.

]]>
<![CDATA[100Mbps Dedicated Servers]]> https://pulsedmedia.com/clients/index.php/announcements/36 https://pulsedmedia.com/clients/index.php/announcements/36 Sat, 14 Aug 2010 12:06:00 +0000 100Mbps unmetered dedicated servers available!

We have released our line of 100Mbps unmetered dedicated servers for public ordering! As an introduction, the Mini server comes without setup fees, an immediate 64.95€ savings! So get yours now while there are no setup fees on this server!

All servers are Core2Duo with 2Gb or 4Gb DDR2 and S-ATA II HDDs, located in France. No more overage fees!

]]>
<![CDATA[2010 servers shutdown - immediate]]> https://pulsedmedia.com/clients/index.php/announcements/35 https://pulsedmedia.com/clients/index.php/announcements/35 Mon, 09 Aug 2010 14:28:00 +0000 2010 Servers shutdown immediate

The 2010 servers will be shut down within this week, so if you have an account on one of them and still have something you want to copy, you ought to do so now.

2010 servers will be reinstalled and kept offline from Wednesday to Friday onwards. This will conclude the 2010 service range.

]]>
<![CDATA[2010 credits/transfers, and master gui update]]> https://pulsedmedia.com/clients/index.php/announcements/34 https://pulsedmedia.com/clients/index.php/announcements/34 Wed, 04 Aug 2010 21:49:00 +0000 2010 Credits & Transfers

If you have missed your credit on the 2010 transfer, please open a ticket and the credit will be straightened out immediately.

According to all of our records, no users should remain on 2010 as of now, and the servers will be shut down during the next 2 weeks. Only 2 servers will remain in production as 2009+ series, with the perk of offering peaks well beyond 100Mbps. Averages are comparable to those of regular 2009+ series servers.

As usual, if there are any questions or feedback, do not hesitate to open a ticket. I am very hopeful this 2010 series ordeal will finally be over soon and everything handled.

We still have a vast credit with this provider, but there's no real use for it beyond a couple of development servers and those 2 production servers. The sad part is that this provider refuses to reply to any of our communication attempts on any issue, so these 2 last production servers will also be phased out during the coming spring; they are paid up until then with the credits.

Master GUI update

Effective immediately, the proxy used in the master GUI is located on the server you are using, not on a central site. This applies to all users, on all versions of our master GUI.

New accounts get the new master GUI, and existing users are slowly being upgraded to it, with automatic rTorrent restarts, an rTorrent restart button, on-demand tab loading and small usability tweaks. So not all of the features have reached everyone yet. If you want to expedite this for yourself, please open a ticket, as updates are currently being done manually and on an as-needed basis.

Best Regards,
 Aleksi

]]>
<![CDATA[New IRC channel & network]]> https://pulsedmedia.com/clients/index.php/announcements/33 https://pulsedmedia.com/clients/index.php/announcements/33 Wed, 04 Aug 2010 02:49:00 +0000 Pulsed Media new IRC channel & network

Previously we shared an IRC channel with CSB; we have now created a dedicated channel for Pulsed Media on the Freenode IRC network. The web interface has been updated to point there. So join us at irc://irc.freenode.net/#PulsedMedia

]]>
<![CDATA[IBAN wiretransfers accepted now]]> https://pulsedmedia.com/clients/index.php/announcements/32 https://pulsedmedia.com/clients/index.php/announcements/32 Sat, 24 Jul 2010 13:14:00 +0000 IBAN wire transfers are now being accepted

You can now choose IBAN wire transfer during checkout/invoice payment. These are checked once or twice a week, and this is a convenient way of making payments for EU residents and people located in countries where IBAN is utilized.

 

]]>
<![CDATA[2009+ X2 series now available within 48hrs]]> https://pulsedmedia.com/clients/index.php/announcements/31 https://pulsedmedia.com/clients/index.php/announcements/31 Thu, 22 Jul 2010 22:40:00 +0000 2009+ X2 Series, available within 48hrs

Setups for X2 series plans are now completed within 48 hours; this means the servers used for X2 are now in regular availability and setups will be quick. Now if ever is the time to test out the splendid new X2 series, with huge storage and half the users per HDD! That means quite quick speeds for you.

Check out at 2009+ X2 semidedicated seedboxes page!

]]>
<![CDATA[All new 2009+ Starter Plan!]]> https://pulsedmedia.com/clients/index.php/announcements/30 https://pulsedmedia.com/clients/index.php/announcements/30 Sat, 17 Jul 2010 15:26:00 +0000 The all new 2009+ Starter Plan!

The 2009+ Starter plan is a very cost-effective introduction to semidedicated rTorrent seedboxes! For just 5.95€ per month it's an excellent way to see how the rTorrent semidedicated service works, and what you can do with it!

2009+ Starter is based on exactly the same software and hardware as the other 2009+ plans, with only slight configuration changes.

Don't have a seedbox yet? Maybe it's time to check out the Starter plan now or order it directly!

]]>
<![CDATA[Software upgrade for semidedicated accounts]]> https://pulsedmedia.com/clients/index.php/announcements/29 https://pulsedmedia.com/clients/index.php/announcements/29 Mon, 05 Jul 2010 15:54:00 +0000 Software update

The software has been updated for new accounts on semidedicated rTorrent services. This update features ruTorrent 3.x, welcome screen enhancements, rTorrent settings fine-tuning for better overall performance and better performance scaling between account types, and backend script updates for better stability and easier management.

This update will be rolled out to existing customers over time, starting with those running the oldest versions (no master GUI), but we are looking to automate this process server-side, along with some central management features, before rolling it out 100%.

New accounts automatically get the latest version.

If you have feature suggestions to be added before rollout, do not hesitate to open a support ticket.

]]>
<![CDATA[Affiliate program]]> https://pulsedmedia.com/clients/index.php/announcements/28 https://pulsedmedia.com/clients/index.php/announcements/28 Sun, 20 Jun 2010 12:49:00 +0000 Affiliate program

As announced previously, we have an affiliate program. It is a great opportunity for anyone who has a blog or website, makes frequent forum postings, etc.; basically for anyone who regularly posts something on the internet with the ability to add links. It's especially great if you constantly reach the P2P audience through forums or blogs.

Let's say you are an active member of a largish P2P forum and make 150 postings with your affiliate link in your signature, and an average of 20 people click each signature link: that is already 3,000 interested eyeballs, and if you reach a conversion rate of just 5 per 1,000, that is already 15 affiliate signups earning a cool recurring 7.5% commission. Even for the entry-level 2009+ plan that would mean an annual return of (9.95 × 12 × 15) × 0.075, or about 134.32€ in your pocket, for practically no work at all.
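The arithmetic above is easy to check with a few lines of Python. This is a minimal sketch; the function name is made up for illustration, and all the figures (clicks per posting, conversion rate, commission) are the example's assumptions, not guarantees:

```python
# Back-of-the-envelope affiliate earnings estimate from the example above.
def affiliate_annual_return(postings, clicks_per_post, conversions_per_1000,
                            monthly_price, commission):
    """Estimate annual affiliate earnings for recurring monthly plans."""
    visitors = postings * clicks_per_post               # total interested eyeballs
    signups = visitors * conversions_per_1000 / 1000    # converted signups
    annual_revenue = monthly_price * 12 * signups       # revenue those signups generate
    return annual_revenue * commission                  # the affiliate's cut

earnings = affiliate_annual_return(150, 20, 5, 9.95, 0.075)
print(f"{earnings:.2f} EUR")  # ≈ 134.32 EUR, the figure quoted above
```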

Signup to the affiliate program and start posting links!

]]>
<![CDATA[VAT Increase, WHMCS billing bugs]]> https://pulsedmedia.com/clients/index.php/announcements/27 https://pulsedmedia.com/clients/index.php/announcements/27 Fri, 18 Jun 2010 10:42:00 +0000 VAT Increase

The Finnish government has decided to increase VAT by one percentage point starting the 1st of July, 2010. On this date VAT will increase from 22% to 23%. This affects only residents within the EU and Finnish companies.
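The effect on gross prices is a simple multiplication. A minimal sketch, using the 9.95€ plan price purely as an illustration and assuming prices are quoted net of VAT (actual invoicing may differ):

```python
# Illustrative VAT arithmetic for the 22% -> 23% change.
def gross(net_price, vat_rate):
    """Gross price after applying VAT at the given rate."""
    return net_price * (1 + vat_rate)

net = 9.95  # hypothetical net monthly price, for illustration only
old = gross(net, 0.22)
new = gross(net, 0.23)
print(f"at 22%: {old:.2f} EUR, at 23%: {new:.2f} EUR")
```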

WHMCS billing bugs

Rather ironically, given that WHMCS bills itself as "The complete billing & support system", it is riddled with billing bugs. This time under fire is service extension after invoice payment. Sometimes (not always, but in a large enough percentage of cases to warrant this notice) WHMCS fails to extend the due date for the service you've paid for, despite the invoice being correctly marked as paid.

In such cases, do not hesitate to email billing@pulsedmedia.com and we will promptly fix it. We are attempting to find the root cause of the issue.

]]>
<![CDATA[2009+ X2]]> https://pulsedmedia.com/clients/index.php/announcements/26 https://pulsedmedia.com/clients/index.php/announcements/26 Thu, 03 Jun 2010 06:49:00 +0000 2009+ X2

We are very excited to release the 2009+ X2 series today! This is an extremely high-value storage offering, and an excellent extension of the entry-level 2009+ series. What could you do with 2760 gigabytes of storage?

Apart from the insane amount of storage, 2009+ X2 is the same as 2009+, with the familiar intuitive and easy-to-use GUI, and the other 2009+ perks!

See the offerings at http://www.pulsedmedia.com/2009plus-x2.php and order now at http://pulsedmedia.com/clients/cart.php !

Small offering: 690Gb of Storage, 250Mb Ram, 100Mbps Unmetered starting from 29.95€ per month! Order now from http://pulsedmedia.com/clients/cart.php?a=add&pid=22

Medium offering: 1380Gb of Storage, 500Mb Ram, 100Mbps Unmetered starting from 49.95€ per month! Order now from http://pulsedmedia.com/clients/cart.php?a=add&pid=23

Large offering: 2760Gb of Storage, 1000Mb Ram, 100Mbps Unmetered starting from just 87.50€ per month! Order now from http://pulsedmedia.com/clients/cart.php?a=add&pid=24

]]>
<![CDATA[Caught up]]> https://pulsedmedia.com/clients/index.php/announcements/25 https://pulsedmedia.com/clients/index.php/announcements/25 Sat, 29 May 2010 21:52:00 +0000 Backlog caught up

We have now caught up with the backlog; tomorrow the last accounts will be provisioned from the order backlog which has been haunting us since the start. The servers for the last few accounts are being tested until tomorrow afternoon, and after that there will be no order backlog remaining.

This means we can finally concentrate on the last few 2010 to 2009+ transfers and clear up the refund queue as well.

We should be completely clear of all provisioning and order-related backlogs by mid-June, and setups within several days will follow right after that, along with trial accounts. We will hold off on new service releases a little longer to reserve funds & resources for possible new order peaks.

Infrastructure development

We have been developing our DNS cluster lately, and once this is finished only several steps remain before we get to work on automated new account setups, server-to-server account moves, suspensions & deletions etc., and central server management. This development in the background infrastructure will allow us to be more flexible than ever before, and enables possible new service offerings.

In conclusion, we are doing better than ever!

]]>
<![CDATA[automatic renewals (subscriptions), backlog etc.]]> https://pulsedmedia.com/clients/index.php/announcements/24 https://pulsedmedia.com/clients/index.php/announcements/24 Mon, 24 May 2010 11:53:00 +0000 Automatic renewals

Many of you set up a subscription while we had setup times of 3+ weeks, and these subscriptions are renewing before the next invoice is even created. Unfortunately, in such a case WHMCS is not able to connect the payment and invoice correctly, nor does it issue credit. This means your invoice will be marked unpaid, and over time become overdue, when it is not.

In such a case, just e-mail to billing@pulsedmedia.com or open a support ticket via client portal, and we will promptly fix it for you.

Backlog

We estimate that only a handful of servers are still needed to clear the backlog and finish the 2010 to 2009+ transfers, so we are very optimistic about delivering the full backlog by the 2nd week of June.

Extreme Storage series

Our very high storage series of servers will be available for public order the moment we reach 100% delivery on current orders. These offer very high value for bulk storage. Setup times are estimated to be under 2 weeks, and most likely within a couple of days.

]]>
<![CDATA[Clearing the queue]]> https://pulsedmedia.com/clients/index.php/announcements/23 https://pulsedmedia.com/clients/index.php/announcements/23 Thu, 13 May 2010 13:22:00 +0000 Clearing the queue

As most of you know, we've had a huge setup backlog since day 1, due to an overwhelming amount of orders. We are steadily nearing the end of this backlog; as of this moment we are about 85% delivered.

After the next batch of servers we should be able to clear the remaining backlog, and be able to promise setups usually within 48 hours. There are still a few orders on the backlog, but the next batch of servers should clear this up.

We expect, with quite high certainty, to finally clear the backlog by early next month. We are very optimistic this will happen.

]]>
<![CDATA[2010 and 2009+ news]]> https://pulsedmedia.com/clients/index.php/announcements/22 https://pulsedmedia.com/clients/index.php/announcements/22 Tue, 27 Apr 2010 05:33:00 +0000 Hi,

 It's been a while since the last announcement about the 2010 process and the transfer over to the 2009+ series.

During all this time we've been hard at work behind the scenes, rolling out the 2009+ series, setting up an overwhelming number of new accounts, etc. As it currently stands, about 85% of shared plans ordered have either a 2010 or 2009+ account, so we are almost there! Approximately 20% are still using 2010, however, so a lot of new OVH servers are needed.

2010 unshaped

2010 was unshaped a long time ago, and we actually forgot to announce it. You may use the 2010 account provided to you, if you have not been using it.

As this news comes so late, we will be giving all shared account users a full month and 7 days of extra credit when they transition to 2009+ plans.

2009+ is good and here to stay

Users have for the most part been really happy with the very quickly rolled out 2009+ series. OVH does have some US transit issues, and some US users are having speed issues. For this we are coming out with a US FTP proxy: a server has already been acquired, and only the automation remains before we can provide the US FTP proxies.

2010 provider fixing their network

We have noticed that the 2010 provider has been hard at work fixing their network. We can no longer detect a significant amount of packet loss during network load, and all servers are now able to connect to each other.

This has drawn us to the conclusion that we might be able to use the 2010 provider's servers for part of the remaining 2009+ rollout, but only if you, as the end user, approve it. 1 server is in test usage right now, and we are eagerly awaiting feedback on whether 2010 gigabit is acceptable as a 2009+ series 100Mbps server, and whether our bandwidth consumption habits are acceptable to the 2010 provider. We still have a lot of credit bound up with the 2010 provider, so this would greatly ease delivery of the remainder and get us back to setup times of several days.

Part time working, other behind the scenes activity

As some of you know, I, Aleksi, have been working mostly full time at Pulsed Media this month. From last week onwards I'm working part time here, as I'm now working full time as a contract web developer for several companies. My personal transition from salary earner to contractor means more financial means for Pulsed Media's growth, as income earned as a contractor is company income, meaning all Pulsed Media costs are tax deductible against it as well.

This means the budget for managing Pulsed Media's growth is going to increase dramatically, and we are better able to handle the financial burden of fast growth in the future. As you know, OVH charges quite a bit for server setups, and that's quite a burden during the first term of shared plans.

I am hoping for a very bright future for Pulsed Media, and as we grow we look forward to constantly enhancing the service level with better tools, a more intuitive GUI, etc. One of these plans is the rollout of a US-based service portfolio in the coming months, including VPS hosting, very large shared seedboxes, remote desktop offerings etc., on our own VPS server cluster, consisting only of very powerful servers located in the FDC Chicago datacenter. These are really expensive, high-quality servers, and the setup alone runs at well over 500 euros per server. We hope to acquire at least 5 of these servers by fall and have them filled with accounts. Each server comes with 300Mbps dedicated bandwidth, meaning approximately 95Tb of outbound transfer per server. These would be high-end plans, with 24x7x365 proactive monitoring, constant proactive load balancing, etc.
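The per-server transfer figure follows from simple unit conversion. A quick sketch, assuming full saturation around the clock, a 30-day month, and decimal (SI) units; the actual usable transfer will be somewhat lower, which is in the same ballpark as the ~95Tb figure above:

```python
# Rough conversion of a dedicated port speed into monthly transfer volume.
def monthly_transfer_tb(mbps, days=30):
    """Theoretical maximum transfer in TB for a link saturated 24/7."""
    bytes_per_second = mbps * 1_000_000 / 8   # megabits/s -> bytes/s
    return bytes_per_second * 86_400 * days / 1e12

print(f"{monthly_transfer_tb(300):.1f} TB")  # ~97 TB theoretical maximum
```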

Best Regards,
 Aleksi

]]>
<![CDATA[Catching up fast]]> https://pulsedmedia.com/clients/index.php/announcements/20 https://pulsedmedia.com/clients/index.php/announcements/20 Sat, 03 Apr 2010 13:59:00 +0000 Catching up fast!

As many of you know, we had a huge backlog, but we've been catching up really fast and are now at roughly 60% completion on 2009+ transfers & setups. We still need something like 10 more servers to finish completely, but we are quite close to reaching 100%!

Fast tracking Pulsed Media

A LOT has been happening behind the scenes during the past couple of weeks. Really A LOT!

One of the most important changes is that Aleksi (I) is now working almost full time on Pulsed Media, and that helps a lot! There will be constant backend enhancements, service structure polishing, etc.

We are still quite overwhelmed with tasks due to the backlog, but it's now easing fast, and we are catching up.

2010 Provider

They are unwilling to refund us, even partially, which is practically fraud under EU law. We are still hoping for a resolution, but we might need to take legal action. We've yet to see a single dime back from the 2010 provider, not even for the undelivered servers.

]]>
<![CDATA[2010 conversion to 2009+ speeding up. 2010 provider promised at least a partial refund]]> https://pulsedmedia.com/clients/index.php/announcements/18 https://pulsedmedia.com/clients/index.php/announcements/18 Mon, 22 Mar 2010 01:42:00 +0000 2010 conversion to 2009+ is speeding up

It was a sluggish start, but it is speeding up nicely now! We have 5 servers scheduled for next week, and 3-8 for the week after. However, as we are waiting for refunds from the 2010 provider, we have to fund the initial acquisitions by means other than Pulsed Media's revenue. OVH has setup fees etc., and the initial cost of rolling out 2009+ instead of 2010 is WAY more expensive, in fact more than the initial revenue from the 2010 series was.

Some of the servers we received are in fact unstable. These have been escalated to OVH to be taken care of. Those affected will get some days of credit, due to the server being down most of the time. Only 1 server has been affected by this.

Full delivery, a complete catch-up on the backlog, is expected by the end of April. I know, I know what you are thinking. But do not worry, we are going by order dates etc., so if you've been waiting long, you will be served sooner.

Simply cannot wait? You can temporarily swap to a 2009 plan; there are some slices available for you to choose from. Still not fast enough? Open a ticket requesting a refund and get back to us when the time is right for you. All refund requests will be honored, but please allow some time. Opening a dispute on PayPal will only slow our progress in clearing the full backlog.

2010 provider promised at least a partial refund

We have yet to receive official word on this, but it seems likely they are refusing to refund 100%, at least initially, and if a 100% refund is not reached we may have to take legal action. Let's hope it doesn't go that route and 100% refunds are obtained.

Their refund policy is within 30 days (as almost any European business has); we will see when the refunds actually happen. This means the refund has to arrive before the end of April, so we can say with high certainty that the complete backlog will be met by then, whether the refund is full or at first covers only the few servers still undelivered.

In ending

We've been really busy behind the scenes, and we share your pain, maybe even more so. We might literally have 50 tickets a day to handle over here, and that is after our normal day jobs. Yes, we still go to our day jobs. So it might get a bit hectic at times, and you might feel like you have to wait forever for a response to your ticket, but rest assured it is anything but forgotten. We are working as fast as possible with what we have, and doing the very best we can for you.

Oh but that's not all

We are nearing completion of our US-based services planning and hope to have everything planned out nicely for a launch soon. However, it's secondary to the 2010 to 2009+ conversions. These are some great plans, with great services, redundant disk arrays, etc. So we hope to have put together a very good service portfolio range when they arrive.

Oh, I almost forgot: we've been working on a new "master GUI", so to speak. It will be rolled out soon to all the seedboxes, and beta tester feedback has been really good. We will see if we can get ruTorrent updated in that same update run as well. It's kinda cool and informative. But you'll see for yourself soon enough :)

]]>
<![CDATA[Logo competition]]> https://pulsedmedia.com/clients/index.php/announcements/17 https://pulsedmedia.com/clients/index.php/announcements/17 Sat, 20 Mar 2010 19:25:00 +0000 Logo Competition

As you have surely noticed by now, we utterly lack a logo. It's simply the text "Pulsed Media" in Arial right now. So we are introducing a cool logo competition!

The winner will receive 2 months of 2009+ Large shared seedbox service! How cool is that, a 69.90€ value!

Our vision for the logo is cool futuristic text sloping slightly upwards to the left, with a line running just under it and expanding rings around the line, representing a pulse. What is your vision? Show us!

Format requirements
We require a 1-color vector version and a stylized raster graphics version on a canvas of at least 500 by 500 pixels, preferably 1000 by 500 pixels, with the canvas filled as much as possible. The aspect ratio should be around 1.6 (the golden ratio), with other lines at or close to the golden ratio as well.

Styles preferred
New media, what else? ;)
If you want examples, join us at IRC or watch this space for collection.

Valid until
1st of July, or as soon as a winner is found. If no solid candidate is found, the deadline will be pushed back or a professional graphic designer hired.

Who are eligible?
Everyone except Pulsed Media/NuCode staff and their families.

Submissions
Open a ticket at sales@pulsedmedia.com with URLs to the images, or include the URLs in the ticket submission.

Rights to the submission
We reserve the right to show your submission publicly, and to anyone interested, in order to rank the best or hold a public vote for the winner.

Rights to the final product
We will hold all rights to the final product after choosing the winner.

Deliverables
Well-known, well-supported formats are a must.
* Vector version, with real vector graphics (this is to be used for letterheads etc.)
* Rasterized version, supplied in Adobe Photoshop format and as a high-resolution PNG or JPEG.
Basically, something we can use in both print and online media.

 

]]>
<![CDATA[2009+, 2010 service provider. What is going to happen?]]> https://pulsedmedia.com/clients/index.php/announcements/16 https://pulsedmedia.com/clients/index.php/announcements/16 Wed, 17 Mar 2010 22:32:00 +0000 2010 Provider

The 2010 service provider basically screwed us all over really badly with the 2010 service line: 20-31% packet loss within the switch itself, slow deliveries, failing hardware, and a non-performing network. The highest peak ever was just 300Mbps; it seemed capped to 300Mbps peaks, it seemed that artificial, despite our best efforts. And you call THAT 1Gbps connectivity? Maybe the port was, but the network sure as hell wasn't!

We have sent them a refund claim via registered mail. They have stopped responding to any and all support tickets as well, capped servers to 20Mbps, etc. We even had to debug ourselves how to make the hardware stable; they simply wouldn't fix it. Issues with the 2010 service provider have been continuous: first the hardware was unstable, then the routers were bad, then Cogent was bad, and finally we traced the problem to the switch our servers are connected to. Despite opening tickets etc., this switch wouldn't be replaced, nor would we even get a notice. Most of the servers are still undelivered, despite being ordered ages ago.

In my 10+ years of experience in the hosting industry, I've never encountered such bad service. Even with VAServ the service actually worked until they had a major security break-in, and they wouldn't give me a new one; they even tried to bill me for a new one. Not even Leaseweb has given customer service as bad as the 2010 service provider's, and we all know that Leaseweb's customer service can be quite bad at times! (Though some there truly excel in customer service; big applause still to Charlie Teuwen over at Leaseweb for providing true excellence in customer service.)

2009+ Plans

2009+ has been introduced, as you may well have noticed. After the 2010 service provider, OVH really feels like premium hosting. We know some certainties about OVH: the network and the hardware really will perform at the stated levels. You all probably know OVH very well already, a provider of servers with very high value per euro.

2009+ plans are based on OVH, and are unmetered, uncapped and unlimited servers, with only 8 users per server. They are truly amazing value as well: just 9.95€ per month for 115Gb HDD, 250Mb RAM and 100Mbps. That's damn sweet, isn't it?

What is happening with 2010, 2009+ now?

We are transitioning all 2010 users to 2009+ automatically. You may request a change to a 2009 plan as well if you want to. You will get the full paid time, despite 2009+ being more expensive than 2010.

Furthermore, plan downsizing is allowed. If you have 2010 Large you may change to 2009+ Medium and get 1 extra month; 2010 Medium to 2009+ Small gives 1 extra month as well. If you had a longer-than-quarterly cycle, more extra months might apply. For monthly payments these changes are not possible.

2009 services have Gbit offerings; if you want to change to a 2009 Gbit offering, open a support ticket.

Delivery times, etc?

Due to the immense growth spurt we've had and the 2010 ordeal, we are very badly behind schedule. The current best-case scenario for full delivery of all pending orders and 2010 users is 2 weeks for 100%, but we are expecting more like 3-4 weeks. This is partly because the initial rollout of 2009+ costs WAY WAY more than the 2010 revenue was. So even if our money weren't tied up with the 2010 service provider, delivery would be somewhat slow; a lot of funds being tied up with the 2010 service provider slows deliveries even further. Rolling out a single server is at least a 3-hour job as well, with all the testing etc. involved.

We are getting there, but it's a massive amount of work! And rest assured, we are doing the very best we can at a moment like this. We have other things on our plate right now as well, like the rumoured FDC-based premium services portfolio. Yes, that is true. We are soon going to roll out some very interesting offers, with 24x7x365 proactive monitoring, redundant disk arrays, clustering, etc.

Best Regards,
  Aleksi

]]>
<![CDATA[2010 Shaped! Plan-B in action.]]> https://pulsedmedia.com/clients/index.php/announcements/15 https://pulsedmedia.com/clients/index.php/announcements/15 Sun, 14 Mar 2010 01:14:00 +0000 The switch to which most of our 2010 servers are connected is bad, and is giving absolutely weird routing issues, constant packet loss of over 11%, and bad speeds.

Today we contacted our provider about this issue for the Nth time, and their answer was to shape each of our servers to 20Mbps!

We never once saw the advertised 1Gbps, and only peaked at 300Mbps for a maximum of 30 minutes per day in total, if even that.

All this has left us with no choice but to close 2010 sales and issue a 100% refund request for all servers. All users will be migrated to 2009+ plans, and this should be a rather swift operation once the 2010 ordeal is wrapped up. 2009+ is based on OVH and was partly designed to replace the faulty 2010.

The swaps will be direct, the first-term price will be the same, and you will get slightly more HDD for the plan. All 2010 users get high priority on the transfer, and those "lucky enough" to have 2010 may utilize it as long as they can, i.e. until the servers are shut down, even if you already have a 2009+ plan.

Despite the advertised peaks on 2009+ being smaller by an order of magnitude (100Mbps vs 1Gbps), we expect 2009+ to be a couple of orders of magnitude better service for the long term.

During my 10 years of experience in the hosting industry I have never, and I do mean never, encountered such bad customer service and such blatant false advertising. In other words, nothing at the 2010 provider seems to work.

We are very sorry it came to this, and very sorry for all the people waiting for 2010 in excitement. We truly share the grief with you. There have been very long and hard days solving all these issues, lots of stress, etc., but looking onwards I'm confident we are back on the right track with 2009+ and the yet-to-be-released US-based Pro series, which has guaranteed bandwidth and 24x7x365 proactive monitoring.

In the future we are definitely targeting higher-reliability, higher-quality providers, despite that costing a little bit more; nothing is worth losing our reputation over.

In the short term we are paying a lot extra to deliver the services to you, so this 2010 to 2009+ transition is actually happening for us on a break-even basis, but that is money well spent to preserve our reputation.

Our testing procedure has never failed us in the past; we screened this provider quite thoroughly with multiple servers, but as can be seen, when it was time for serious business they failed us in EPIC proportions.

Sorry for the inconvenience,
 Aleksi Ursin
 Pulsed Media

]]>
<![CDATA[2010 transferable to 2009+]]> https://pulsedmedia.com/clients/index.php/announcements/14 https://pulsedmedia.com/clients/index.php/announcements/14 Sat, 13 Mar 2010 18:32:00 +0000 Those who ordered 2010 on or before the 13th of March 2010 may change their plan to 2009+ free of charge. The first term will be the same price, and the service will be delivered a lot faster.

2009+ is hosted at OVH, which is a known and solid provider. It also features a lot fewer users per server and slightly more HDD per user, but is limited to a 100Mbps port, which is unmetered and will not be shaped down to 10Mbps.

Open a support ticket if you wish for a change.

Best Regards,
 Pulsed Media

]]>
<![CDATA[New hardware delivered]]> https://pulsedmedia.com/clients/index.php/announcements/13 https://pulsedmedia.com/clients/index.php/announcements/13 Fri, 12 Mar 2010 18:59:00 +0000 We got a couple of new servers today, and deliveries on them will start during this weekend after testing is finished. Deliveries are still slow, but at least they are trickling in.

Best Regards,
  Pulsed Media

]]>
<![CDATA[Update 4 on bandwidth: Routers switched]]> https://pulsedmedia.com/clients/index.php/announcements/12 https://pulsedmedia.com/clients/index.php/announcements/12 Wed, 10 Mar 2010 22:07:00 +0000 The routers have been switched, and servers are getting A LOT more transfer; we are seeing quite decent rates now.

The Cogent-to-Orange peering issue still exists, however. Switching routers also introduced a new problem: the servers can't communicate with each other, and thus local peering is not working.

We are starting deliveries again.

]]>
<![CDATA[Update 3 on bandwidth]]> https://pulsedmedia.com/clients/index.php/announcements/11 https://pulsedmedia.com/clients/index.php/announcements/11 Wed, 10 Mar 2010 00:14:00 +0000 The undersized routers will be switched over tomorrow morning when average usage is lower.

We have also been discussing with our supplier the means to solve this once and for all, and to ensure quality of service in the future. Discussions will continue tomorrow.

Best Regards,
  Pulsed Media

]]>
<![CDATA[Update 2 on bandwidth]]> https://pulsedmedia.com/clients/index.php/announcements/10 https://pulsedmedia.com/clients/index.php/announcements/10 Tue, 09 Mar 2010 20:17:00 +0000 Our provider is currently installing 2 new switches: the DC MDF (main distribution frame) switch and the switch for the Class C network our servers are on. This should help quite a bit.

There's currently no ETA from Cogent, despite the peering issue having been escalated to them a long time ago. The peering issue appeared when total bandwidth to the DC was increased.

We hope Cogent solves the peering issues swiftly.

Thank you for your understanding,
 Pulsed Media

]]>
<![CDATA[Update on bandwidth issues]]> https://pulsedmedia.com/clients/index.php/announcements/9 https://pulsedmedia.com/clients/index.php/announcements/9 Tue, 09 Mar 2010 14:05:00 +0000 Valued Customers,

We appreciate your taking the time to read this update on the current bandwidth slowness problems.

Since they started roughly 24 hours ago, we have raised 4 tickets with our provider, 1 every 6 hours. They have since replied that "the problem was due to a switch lacking the connectivity required and it has been replaced with a much better one". However, a second problem has since arisen with THEIR upstream provider, Cogent. Our provider has issued tickets with Cogent as well to resolve the slowness, which is being caused by slow peering.

We do not have any ETA from Cogent nor our provider, but we will keep you notified of any developments. We apologize for the inconvenience of it happening at the start of your service, however please be assured it is our aim to provide a top level service.

We understand you paid for a service which you aren't getting and as such we will give you 3 extra days to make up for the lost time.

Thank you for your understanding,
PulsedMedia

]]>
<![CDATA[New servers stable, setups begun]]> https://pulsedmedia.com/clients/index.php/announcements/8 https://pulsedmedia.com/clients/index.php/announcements/8 Sun, 07 Mar 2010 13:53:00 +0000 Account setups on this batch of servers have begun, and the servers are now stable. We expect to set up as much as 35% of the current order queue today and tomorrow alone.

]]>
<![CDATA[New servers under testing]]> https://pulsedmedia.com/clients/index.php/announcements/7 https://pulsedmedia.com/clients/index.php/announcements/7 Sun, 07 Mar 2010 01:08:00 +0000 The new servers are currently under testing. There have been some instability issues which have to be resolved before we can set up users on these servers. We are doing everything we can to ensure a high quality of service.

All servers have been able to push very nice network performance (the highest 5+ minute peak we've seen on a single rTorrent instance has been 600Mbps aggregate, in testing alone) and very nice HDD performance of over 400MB/s read. Some of the servers have exceptionally powerful CPUs as well, to ensure a high level of service quality.

Best Regards,
 Aleksi

]]>
<![CDATA[Sales open again :: New servers in queue]]> https://pulsedmedia.com/clients/index.php/announcements/6 https://pulsedmedia.com/clients/index.php/announcements/6 Thu, 04 Mar 2010 21:00:00 +0000 Orders will be opened again today, as new servers have been put on order. Orders made today are expected to be delivered around the 17th to 24th of March, so still the same 2-3 weeks as before.

Unlike before, this time we are not limiting orders. We fully expect to catch up to demand, with near-instant setups, by mid-to-late April.

Best Regards,
  Aleksi

]]>
<![CDATA[Servers arriving]]> https://pulsedmedia.com/clients/index.php/announcements/5 https://pulsedmedia.com/clients/index.php/announcements/5 Thu, 04 Mar 2010 20:26:00 +0000 Hi,

 We have begun to receive the new batch of servers, with the first ones' details arriving to us. This means that deliveries on this new batch will begin by tomorrow.

Best Regards,
  Aleksi

]]>
<![CDATA[Billing maintenance downtime]]> https://pulsedmedia.com/clients/index.php/announcements/4 https://pulsedmedia.com/clients/index.php/announcements/4 Tue, 02 Mar 2010 22:57:00 +0000 A maintenance downtime of 1 hour 30 minutes is expected on the billing system, starting either Friday 19:00 GMT or Saturday 12:00 GMT. During this window we will be transferring the site onto a grid hosting platform to ensure 24x7x365 availability.

During this period you will not be able to access the client portal. For a short while after the maintenance window you may still see a maintenance message if your ISP does not obey the Time-to-live (TTL) values set for DNS propagation, as per standards.

Phone support, if required, will be available at +358509303614 even during this period.

]]>
<![CDATA[OpenVPN Connections available!]]> https://pulsedmedia.com/clients/index.php/announcements/3 https://pulsedmedia.com/clients/index.php/announcements/3 Sun, 28 Feb 2010 04:11:00 +0000 There's a nice new addition to our already impressive services portfolio!

Now you can get an unlimited OpenVPN tunnel from us for secure and encrypted internet usage. Yup, that's right. Unlimited.

Not only that, but for as low as just 4.92€ per month :O Not bad, huh?

So what are you waiting for? Order an OpenVPN account already!

 

Q: Is my surfing safe with the OpenVPN?
A: Yes, traffic is encrypted from your computer to our server before heading out to the wild open web.

Q: What information do you store about my surfing?
A: Absolutely nothing. We don't even use a DNS cache; the only records on the system are the certificates created.

Q: Can I connect from my Windows/Mac?
A: Yes! From any platform where OpenVPN is supported, which includes Windows and Mac OS X.

Q: Is it hard to setup?
A: No! With our instructions (Windows only) it should be a breeze. For Mac OS X and Linux there are plenty of tutorials available.

Q: How fast is it?
A: Due to encryption there is some overhead, and at least on Windows the TUN interface's maximum speed is 10Mbps. We were able to reach 7-8Mbps download speeds in our own testing. Peculiarly, upload speeds sometimes peaked way higher than the other end's connection (ADSL 24/1) should allow, up to 3Mbps; we suspect this has something to do with compression. Usually we reached 0.5-0.7Mbps upstream as well. So it is definitely not as fast as a direct connection, and the route likely goes very far away, but it offers total anonymity from your ISP and avoids all throttling and traffic shaping, which in general doesn't target encrypted traffic.
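
For the curious, a typical OpenVPN client config looks roughly like this. This is only a generic sketch: the server address, port, and certificate filenames below are placeholders, so use the values and files from your welcome email instead.

```
client
dev tun                      # TUN interface, as mentioned above
proto udp
remote vpn.example.com 1194  # placeholder; use the server address we provide
resolv-retry infinite
nobind
persist-key
persist-tun
ca ca.crt                    # certificate files we generate for your account
cert client.crt
key client.key
comp-lzo                     # compression (likely related to the upload speeds above)
verb 3
```

Save it as a .ovpn file next to your certificates and load it in your OpenVPN client.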

]]>
<![CDATA[Affiliate program AND no setup fees on 2010 Small!]]> https://pulsedmedia.com/clients/index.php/announcements/2 https://pulsedmedia.com/clients/index.php/announcements/2 Wed, 24 Feb 2010 02:02:00 +0000 Affiliate program

We have started an affiliate program so you can earn and get some spare cash in your hands. It's very simple: you tell your friends, or put a link on your website, and you get cash in return.

We are paying you 7.5% of RECURRING revenue. That means that for whatever your referral buys from us, you will get 7.5% forever, even from renewals! Tell me, how sweet is that?

Terms for withdrawal: a 40-day holding period (CC clearance etc., you know, the boring stuff) and a 50 euro minimum.

Applying: Simply log in to the client portal and choose the affiliate tab to apply & activate the affiliate system.

More good news: No more 2010 Small setup fees!

Yup, not even for monthly plans. Don't believe me? Check for yourself at http://pulsedmedia.com/clients/cart.php?gid=2

 

]]>
<![CDATA[New batch of servers ordered]]> https://pulsedmedia.com/clients/index.php/announcements/1 https://pulsedmedia.com/clients/index.php/announcements/1 Tue, 23 Feb 2010 04:43:00 +0000 Dear clients,

 We have just ordered a new batch of servers; in fact, we ordered too many and have a few spots remaining in this batch.
This batch of servers is expected to arrive by late next week, and we will start provisioning new accounts then.

 We have extra space for 10-30 more accounts on this batch. The order page will reflect this soon, with more emphasis on large accounts.

Best Regards,
  Aleksi

]]>