Finding unpatched “features” in distro packages

I generally expect baseline distro packages to be “old” by some measure. Even the more forward-thinking distros tend to (mis)equate age with stability. I’ve heard the expression “bug-for-bug compatible” when dealing with newer code on older systems.

Something about the devil you know vs the devil you don’t.

OK. In this case, CMake. A good development tool, gaining popularity over autotools and other build systems.

The base SIOS image is on Debian 8.x (x=6 at last viewing). The CMake version is 3.0.2 plus some patches.

Remember, age = stability, über alles.

So I encountered a bug in CMake, in the FindOpenSSL module. This was in building Julia. Doing some quick sleuthing, I found this patch (for a later version) of CMake. Looking at the source, it would apply cleanly without edits, so I gave it a try (on a dev machine with our ephemeral SIOS boot, no issue if I nuke it by accident … a reboot fixes everything).

Restarted the make and it ran correctly to completion.

So I started looking at CMake more closely. The distro has 3.0.2 plus patches. The patch was for 3.1.2. Out of curiosity … how old is this rev, and are we badly out of date? Looking at the git repo:

Version 3.1.2, which fixes this, was released 20 months ago. 3.0.2 plus patches is more than 2 years old. 3.6.2 is the latest stable.

Ugh. I’ll live with the patch for now, but we might need to update CMake on our units to avoid this in the future.

Viewed 108 times by 71 viewers

On expectations

This has happened multiple times over the last few months. Just variations on the theme as it were, so I’ll talk about the theme.

The day job builds some of the fastest systems for storage and analytics on the market. We pride ourselves on being able to make things go very … very fast. If it’s slow, IMO, it’s a bug.

So we often get people contacting us with their requirements. These requirements are often very hard for our competitors, and fairly simple for us to address.

We’ll get inquiries like this:

We'd like 250TB of storage, replicated, and we need to sustain 10GB/s writes, and 10GB/s reads.    Can you do this?

I made up those numbers, but they are around the same order of magnitude in many cases, and the first digits are also quite similar.

We know what is possible. We know what homebrew/self built systems behave as. We know the ins and outs of making this work.

So we start with a spec, work up a few config/design variants to address this, and offer a spectrum to the person who contacted us.

A quick segue here. Very high performance, very high efficiency is hard. You can’t simply slap components together and hope it will work. As you quickly discover, it doesn’t. Moreover, it is worth noting that most people read spec sheets and presume … really … presume … that they are going to get the maximum performance of the device … all the time, under all conditions. Many people don’t quite have a mental model of the connection between the IO/computing/network load patterns and the perceived performance.

And also, as part of this segue, they don’t really … have a clue as to how much high-performance implementations will cost. They look at a consumer-grade SSD rated for 80GB of writes per day, do a quick bit of math in their heads, and come out with a number they think will work.
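To see why the back-of-the-envelope math fails, consider write endurance alone. A quick sketch using the numbers already mentioned here (10GB/s of sustained writes against a consumer drive rated for 80GB of writes per day; the function name is mine, purely for illustration):

```python
# Back-of-the-envelope endurance math for sustained writes.
# Numbers from the text: 10 GB/s sustained writes, and a consumer
# SSD rated for 80 GB of writes per day.

SECONDS_PER_DAY = 86_400

def drives_needed_for_endurance(write_rate_gb_s: float,
                                rated_gb_per_day: float) -> int:
    """How many drives are needed so that no drive exceeds its daily
    write-endurance rating (ignoring performance entirely)."""
    daily_writes_gb = write_rate_gb_s * SECONDS_PER_DAY
    # Round up: fractional drives don't exist.
    return -(-int(daily_writes_gb) // int(rated_gb_per_day))

daily = 10 * SECONDS_PER_DAY                    # 864,000 GB/day written
drives = drives_needed_for_endurance(10, 80)    # 10,800 drives
print(f"{daily:,} GB written per day -> {drives:,} drives "
      "just to stay inside the endurance rating")
```

Over ten thousand drives before you even think about performance, controllers, or replication. The spec-sheet price rarely survives contact with the requirement.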

And then they come to us.

Back to the expectations discussion.

So people arrive with a number in mind: what they think their 10GB/s read/write system should cost. And they tell you.

Since we design and build these things, we actually have a pretty good idea of the actual costs involved. The costs … our cost for materials that can actually meet the requirements when assembled into a system … are often significantly larger than their perceived cost.

It’s … almost … depressing.

Not that we are going to lose their business. We run a fairly tight ship, and are very aggressive on our pricing. We like repeat customers … this is how we live and grow. But …

But …

in these instances, we would have to subsidize 3/4 or more of the unit for them.

What makes this sadder is that these are often very well funded startups or large companies doing this. In the past it has been universities and research labs.

I do canvass the market fairly regularly, to see if I am missing something, and to see if someone magically came out with a 10-drive-write-per-day SSD at under $0.05/GB that sustains 500MB/s–1GB/s and 100k IOPS on 12G SAS.

There seems to be a disconnect between what people believe they’d like to pay, and what it actually costs (even in raw materials). I know, prices are not really firm. A market is made when a buyer and a seller agree upon a price, and the price may not necessarily reflect portions of the cost of the item.

Performance is a valuable feature. More-so than ever in the past. Being able to design and build high sustained performance systems, and deliver appliances that provide this high performance is a valuable service.

I am not quite sure what to think about this disconnect between reality and people’s expectations. I’m respectful and open with the people making the inquiry. I help them understand where they should be looking for their budget. But we can’t afford to pay people to take our solutions.

A few years ago, a potential partner had come to us with an opportunity at a national lab for a sizable system. We looked at the specs, and then the budget.

The lab wanted the highest end kit, of course. You know that. Their requirements specifically called out what you could or could not do.

Then came the budget. When we looked at it … the pricing was below the lowest-end raw disks on the market (dense consumer-grade drives) that we could get in bulk. Speaking on the side with some of the OEMs, they completely blanched at providing low-margin consumer-grade units at these prices, never mind the high-margin, highest-end units.

Someone did eventually “win” this business. But these wins are Pyrrhic. Enough of them and they will go out of business. They had a layoff sometime after this was delivered; who knows if there was a connection. The end user is happy because they got a fresh new system at high spec, for a price well under the market rate … well under the actual part costs in the system. The vendor isn’t happy, as they not only lost money on the deal, but thanks to the language around these deals, they can’t do any real marketing, so the win is … well … of low value/quality.

We read the spec, and did a no-bid. We can’t afford “wins” like that.

I dunno. This stuff bugs me.

Real performance will cost some money, and you likely need to have a range of performance concepts in mind to compare to your budget.


Excellent article on mistakes made for infrastructure … cloud jail is about right

The article is here, at First Round Capital. It goes to a point I’ve made many, many times to customers going the cloud route exclusively, rather than the internal-infrastructure or hybrid route. Basically, the economics simply don’t work.

We’ve used a set of models based upon observed customer use cases, and demonstrated this to many folks (customers, VCs, etc.). Many are unimpressed until they actually live the life themselves, have the bills to pay, and then really … really grok what is going on.

A good quote:

As an example, five years ago, a company doing video encoding and streaming came to Freedman with a $300,000/mo. and rising bill in their hand, which was driving negative margin: the faster they grew, the faster they’d lose money. He helped them move 500TB and 10 gigabits/sec of streaming from their public cloud provider to their own infrastructure, and in the process brought their bill to under $100,000/mo., including staff that knew how to handle their physical infrastructure and routers. Today, they spend $250,000/mo. for infrastructure and bandwidth and estimate that their Amazon bill would be well over $1,000,000/mo.

“You want to go into infrastructure with your eyes open, knowing that cloud isn’t always cheaper or more performant,” says Freedman. “Just like you have (or should have) a disaster recovery plan or a security contingency plan, know what you’ll do if and when you get to a scale where you can’t run everything in the cloud for cost or performance reasons. Know how you might run at least some of your own infrastructure, and hire early team members who have some familiarity and experience with the options for doing so.”

By this, he doesn’t mean buying a building and installing chillers and racks. He means leasing colocation space in existing facilities run by someone else, and buying or leasing servers and routers. That’s still going to be more cost effective at scale for the non-bursting and especially monotonically increasing workloads that are found in many startup infrastructures.

In house infrastructure tends to have a very different scale up/out costing model than cloud, especially if you start out with very efficient, performant, and dense appliances. Colos are everywhere, so the physical plant infrastructure portion is easy (relatively). The “hard” part is getting the right bits in there, and the team to manage them. Happily providers (like the day job) can handle all of this, as managed service engagement.

Again, a fantastic read. The author also notes you shouldn’t adopt “hipster” tools. I used to call these things fads. The advice is “keep it simple”. And understand the failure modes. Some new setups have very strange failure modes (I am looking at you, systemd), with side effects often far from the root cause, and impacts often far from the specific problem.

All software … ALL software … has bugs. It’s in how you work around them that matters. If you adhere to the mantra of “software is eating the world”, then you are also saying, maybe not quite so loudly, that “bugs are eating my data, services, networks, …”. The better you understand these bugs (keep ’em simple), the more likely it is you will be able to manage them.

You can’t eliminate all bugs. You can manage their impacts. However, if you don’t have control over your infrastructure, or your software stack (black box, closed source, remote as-a-service), then when bugs attack, you are at the complete mercy of others to solve this problem. You have tied your business into theirs.

Here’s a simple version of this that impacts us at the day job. Gmail, the pay-per-seat “supported” version (note the scare quotes around the word supported), loses mail sent to us. We have had customers yell at us over their inability to get a response back, when we never saw their email. There is obviously something wrong in the mail chain, and for some customers, it took a while to figure out where the problem was. But first, we had to route around Gmail, and have them send to/from our servers in house. The same servers I wanted to decommission, as I now had “Mail-as-a-Service”.

So the only way to address the completely opaque bugs was … to pull the opaque (e.g. *-as-a-service) systems OUT of the loop.

We have not (yet) pulled our mail operation back in house. We will though. It is on the agenda for the next year. I spent maybe an hour/month previously diagnosing mail problems. Now I have no idea if emails are reaching us. If customers sending us requests are getting fed up with our lack of visible response, and going to competitors.

That is the risk of a hipster tool, an opaque tool. A tool you can’t debug/diagnose on your own.

Again, a very good read.


The joy of IE and URLs, or how to fix ridiculous parsing errors on the part of some “helpers”

Short version: the day job sent some marketing out. The URLs were pretty clear-cut. Tested well. But some clients seem to have mis-parsed the URL. Like with a trailing “)”. For some reason. That I don’t quite grok.

I tried a few ways of fixing it. Yes, I know, because I fixed it, I baked it into the spec. /sigh

First was a regex rewrite rule. Turns out the rewrite didn’t quite work the way it was intended, and it killed the requests. The regex works fine (we tested). The web server just did strange things.

OK, let’s try a location block. Craft the same basic thing as the rewrite, but place it before the main server block.

# fix the trailing ")" ... yes ... really ... IE I am looking at you
 location ~ /(.*)\)$ {
        return 301 $scheme://$host/$1;
 }

restart the webserver, test …

and it works.

Not fun, and now the trailing characters are encapsulated in the web spec. But at least those who are fundamentally challenged in their choice of browser can no longer have said browser muck up the situation … unless they don’t process redirects/moved …
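The fix boils down to the capture group in that location block: match everything before a single trailing “)” on the path, and redirect to it. A minimal sketch of the same pattern in Python (only the path-matching logic is mirrored here; nginx's variable handling is not modeled):

```python
import re

# Same pattern as the nginx location block: capture everything
# before a trailing ")" on the request path.
TRAILING_PAREN = re.compile(r"^/(.*)\)$")

def fix_path(path: str) -> str:
    """Return the path with a trailing ')' stripped, or unchanged."""
    m = TRAILING_PAREN.match(path)
    return "/" + m.group(1) if m else path

print(fix_path("/products/page)"))   # -> /products/page
print(fix_path("/products/page"))    # -> /products/page (untouched)
```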


I don’t agree with everything he wrote about systemd, but he isn’t wrong on a fair amount of it

Systemd has taken the Linux world by storm, replacing 20-ish-year-old init-style processing with a more legitimate control plane: a centralized resource to handle this control. There are many things to like within it, such as the granularity of control. But there are any number of things that are badly broken by default. Actually, some of these things are specifically geared towards desktop users (which isn’t a bad thing if you are a desktop Linux user, as I am). But if you are building servers and images, you get a serious case of Whiskey Tango Foxtrot dealing with all the … er … features. Especially the ones you didn’t need, know you don’t need, and really want to avoid seeing live and in the field. Ever again.

My biggest beefs with systemd have been the lack of observability into specific inner workings during startup and shutdown, a seeming inability to control some of systemd’s more insane leanings (a start event/a shutdown event …), and its journaling infrastructure. The last item is still a sore point for us, as we find it very hard to correctly/properly control logging for a system that should be running disklessly, when we see the logger daemon ignoring the limits we imposed on it in the relevant config files, and filling up the ramdisk. Yeah, not so much fun.

The startup/shutdown hang timeouts are also very annoying, and despite the fact that systemd provides a good control plane for some of this, these delays (which are strictly and completely unneeded) cause a very poor UX for folks like me who value efficiency. I do not want systemd trying to automatically start all my network services, and hanging the whole system while it is waiting for the network to autoconfigure. I really … truly do not want that. I’ve been looking at how to change this so that it does the startup sanely, but the control plane seems … incomplete … at best here.

The shutdown hangs are due in large part to complexities around the sequencing of things that systemd does, and thinks it knows, and ignores. The things it ignores are what cause the problems, as the dependency graph of shutdowns seems to not know how to deal correctly with things like parallel file systems, object stores, etc. We’ve been working on improving this, and with judicious use of the watchdogs, and some recrafting of various unit files, we have it saner. But it’s still not perfect.

And don’t get me started on the Intel MPSS bit with systemd.

My point is simple. Systemd tries to do too much, and it messes up, IMO, because of this. I’d like it to be a simple control plane. That’s it. Handle start/stop of daemons. Handle that level of its own logging.

I don’t want it to be DNS, and logger, and login, and …

Because when it is all that, things break. Badly.

Our systems are not vulnerable to this bug. And yes, he should have followed responsible disclosure protocol rather than posting a blog entry.

The net of why this bug exists is an assert function. Assert is never, and I repeat, NEVER, something you should use in critical system software. Nor is BUG_ON.

When the revolution comes, the coders who write using BUG_ON and assert will be the first against the wall.

Crashing a core service because you get input you don’t like IS NOT A VALID MECHANISM OF ERROR HANDLING. I’ll argue it’s somewhat worse than throwing an exception for every “problem”, as compared to handling the problem gracefully and locally. Exceptions should only ever be thrown for serious things. So add the folks who built exception throwing as the “right” pattern to handle what amounts to a branch control point in code, to the folks first against the wall.

Divide by zero? Yeah, throw an exception (the processor will). Access memory outside of your allocated segments? Throw an error (OS will). Input value for n to some routine is not 1 or greater? If you answer “I must throw an exception, by using an assert here if this is not true”, then you need to rethink your design. Very … very … carefully.

This bug that the writer alluded to is a simple case of passing a function call a value of zero where it expects a 1 or greater. And rather than gracefully returning a no-op (which would make perfect sense in this context) ….

THERE IS A )_(*&)(*&^*&^%&% assert(n > 0); in the code.

Seriously? WTF? In a core control plane? AYFKM?
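The contrast is easy to sketch. The function names below are hypothetical, purely for illustration, and this is not systemd’s actual code; the point is the pattern: a zero-count input is a perfectly good no-op, not a reason to abort the process.

```python
# Two ways to handle an empty input in a core service.
# Hypothetical names for illustration; not systemd's actual code.

def process_records_fragile(records):
    # The assert pattern: a zero-length input kills the whole process
    # (in C, a failed assert() calls abort()). Anyone who can influence
    # the input now has a denial-of-service lever.
    assert len(records) > 0
    return [r.upper() for r in records]

def process_records_graceful(records):
    # The graceful pattern: zero records means zero work. Return the
    # no-op result and keep the service running.
    if not records:
        return []
    return [r.upper() for r in records]

print(process_records_graceful([]))      # -> [] ... service keeps running
print(process_records_graceful(["ok"]))  # -> ['OK']
```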

The level of fail for others is profound. A simple user, with no elevated privileges whatsoever, can trivially run a DoS attack against a system.

For us, because we are using a slightly dated version of systemd, we just see some daemons stop and restart, but no significant loss in functionality, like others see.

I’ve been non-committal about the whole systemd bit for a while; I have/had hope for it. I am not against it now, but I will be looking for more ways to actively constrain what it can do. We already automatically defang some of its more annoying “features”. Now I am going to spend more time looking at how to turn off some of the functionality that I do not want it to handle itself.

I have no problem with it as a control plane. I have lots of problems with it as the Borg.


How’s this for a nice deskside system … one of our Cadence boxen

For a partner. They made a request for something we’ve not built in a while … it had been end-of-lifed.

One of our old Pegasus units. A portable deskside supercomputer. In this case, a deskside franken-computer … built out of the spare parts from other units in our lab.

It started out as a 24-core monster, but we had a power supply burn out and take the motherboard with it. So we switched to a newer motherboard and CPU, but it’s 16 cores now.

Then the memory. They wanted as much as we could put in. Well, 256GB should work (we hope).

And, of course, fast disk. As fast as possible. When I was done, this little deskside unit was cranking out 6GB/s writes and 10GB/s reads. So, yeah, fast. No NVMe (remember, spare parts, and I didn’t have any spare NVMe around … not quite true, one was in the unit that burnt out, and I wasn’t sure if it burned as well).

Then the graphics. A nice Nvidia card. OK, I bought this one, because we didn’t have any modern ones in stock.

And of course, the fans. Gotta keep this beast cool. So we got large silent cpu coolers, and large high CFM at low RPM fans. It is hard to hear the unit, even when you are next to it.

Our SIOS OS, with a nice desktop interface. Our SIOS Analytics Toolchain with all manner of analytical tool goodness.

As I used to call it, it is a muscular desktop.

Way way (way) back when I started at SGI, my manager got me an awesome desktop unit … an R8000 based workstation. Everyone else had R4000 or R3000 based units. I had this floating point monster on my desk. And I used it. I ran lots of my thesis calcs there. It was easily 20 times faster than the old Sun boxes I had access to in the physics department. It was my original muscular desktop.

This one runs circles around that one. Really quickly. I remember my old MD code used to take 1 hour wall clock per time step. Week long runs were common for me. On the R8000, it would be 1 minute per time step (I had tuned the code a bit by then). On units about 10 years ago (AMD Opterons) I was down to 10 seconds or so per time step.

I’ve not done a modern comparison … I really should …


Build me a big data analysis room

This was the request that showed up on our doorstep. A room. Not a system. But a room.

Visions of the Star Trek TNG bridge came to mind. Then the old SGI power wall … 7 meters wide by 2 meters high, driven by an awesomely powerful Onyx system (now underpowered compared to a good Nvidia card).

Of course, the budget wouldn’t allow any of these, but it was still a cool request.

Hopefully the room concept/design we put together will fly.


A good read on realities behind cloud computing

In this article on the venerable Next Platform site, Addison Snell makes a case against some of the presumed truths of cloud computing. One of the points he makes is specifically something we run into all the time with customers, and yet this particular untruth isn’t really being reported the way our customers look at it.

Sure, you are paying for the unused capacity. This is how utility models work. Tenancy is the most important measure to the business providing the systems. The more virtual machines they can cram on a single system, the better for them.

But … but …

This paying for vacancy/unused cycles isn’t really the expensive part.

The part that is expensive is getting your data out, or having significant volumes of data reside there for a long time. It’s designed to be expensive, and to capture data. This is a rent-seeking model … generally held to be a non-productive use of assets. It exists to generate time-extended monetization of assets. Like license fees for software you require to run your business.

We’ve worked through analyses for a number of customers based upon their use cases. Compared a few different cloud vendors with accurate usage models taken from their existing day to day work. One of the things we discovered rapidly, for a bursting big data analytics effort, with a sizeable on site storage (a few hundred TB, pulling back 10% of the data per month), was that the cloud models, using specifically the most aggressive pricing models available, were more expensive (on a monthly basis) … often significantly … than the fully burdened cost (power/cooling, space/building, staff, network, …) of hosting an equivalent (and often far better/faster/more productive) system in house.
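A toy version of the kind of model we run, with made-up numbers purely for illustration (the real analyses use the customer’s measured usage and actual vendor pricing; every rate and cost below is an assumption):

```python
# Toy monthly-cost comparison: cloud vs. fully burdened in-house.
# All rates and costs here are illustrative assumptions, not quotes.

def cloud_monthly(stored_tb, egress_tb, node_hours,
                  storage_per_tb=50.0,   # $/TB-month, performance storage
                  egress_per_tb=90.0,    # $/TB pulled back out
                  node_hour=1.50):       # $/instance-hour for analytics
    """Cloud bill: storage at rest, data egress, and burst compute."""
    return (stored_tb * storage_per_tb
            + egress_tb * egress_per_tb
            + node_hours * node_hour)

def inhouse_monthly(capex, amortize_months=36,
                    power_cooling=2_000, space=1_500,
                    staff=8_000, network=1_000):
    """Fully burdened in-house cost: amortized hardware plus
    power/cooling, space/building, staff, and network."""
    return capex / amortize_months + power_cooling + space + staff + network

stored = 300                   # a few hundred TB on site
egress = stored * 0.10         # pulling back 10% of the data per month
node_hours = 20 * 730          # 20 analytics nodes running all month

print(f"cloud:    ${cloud_monthly(stored, egress, node_hours):,.0f}/mo")
print(f"in-house: ${inhouse_monthly(capex=250_000):,.0f}/mo")
```

Even with generous assumptions for the cloud side, the egress and always-on compute terms dominate, which is what keeps pushing these analyses toward in-house for this workload shape.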

The major difference is that one of these is a capital expense (capex) and one is an operational expense (opex), and they come from different areas of the budget.

For occasional bursts, without a great deal of onsite data storage, and data return, clouds are great. This isn’t traditionally the HPC use case though. Nor is it the analytical services use case.

The article is an interesting read, and the other points are also quite good. But as noted, the vacancy cost is important; it is not the only cost involved, nor even the dominant one.


Running conditioning on 4x Forte #HPC #NVMe #storage units

This is our conditioning pass to get the units to stable state for block allocations. We run a number of fill passes over the units. Each pass takes around 42 minutes for the denser units, 21 minutes for the less dense ones. After a few passes, we hit a nice equilibrium, and performance is more deterministic, and less likely to drop as block allocations gradually fill the unit.

We run the conditioning over the complete device, one conditioning process per storage device, with multiple iterations of the passes. After 2 hours or so, and 3 passes, they are pretty stable and deterministic.
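The shape of a conditioning pass is simple: sequentially fill the device, repeat, and watch throughput settle. A minimal sketch (the file path is an assumption for illustration; on real units this targets the raw block device, one process per device, and tools like fio are the usual way to do it in production):

```python
import os
import time

def condition_pass(path, size_bytes, block_size=1 << 20):
    """One fill pass: sequentially write `size_bytes` to `path`
    and return the observed throughput in MB/s."""
    buf = os.urandom(block_size)
    written = 0
    start = time.monotonic()
    with open(path, "wb") as dev:
        while written < size_bytes:
            dev.write(buf)
            written += block_size
        dev.flush()
        os.fsync(dev.fileno())  # make sure the data actually landed
    elapsed = time.monotonic() - start
    return (written / 1e6) / elapsed

# Several passes; throughput stabilizes once block allocations settle.
# "/tmp/fake_device" stands in for the real block device here.
for i in range(3):
    mb_s = condition_pass("/tmp/fake_device", 64 * (1 << 20))
    print(f"pass {i + 1}: {mb_s:,.0f} MB/s")
```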

It’s always fun to watch the system IO bandwidth during these passes. Each system is rocking 18-21 GB/s right now. About 90% idle on CPUs. Banging interrupts/context switches hard, but the systems are responsive.

Actually, while this is going on, we usually do our OS installation if the unit has drives for this.

I like parallelism like this …
