ownCloud vulnerability with a maximum 10 severity rating comes under “mass exploitation”

el_oscuro

Ars Tribunus Militum
1,970
Subscriptor++
I recently got a notice from a life insurance broker I had never heard of that they had lost all of my info in the MOVEit vulnerability. I wouldn't be surprised if someone lost it in this one too. Since I am in every other breach, my credit reports have been frozen for years.

I have been doing code review and pen testing for years and the amount of dumbshittery I see is insane. This XKCD is really right: The entire IT industry sucks at what it does.
[xkcd: Voting Software]
And given the advent of the 737 Max and MCAS, it is especially ironic.
 
Upvote
114 (119 / -5)

mgc8

Ars Centurion
332
Subscriptor++
I have been doing code review and pen testing for years and the amount of dumbshittery I see is insane. This XKCD is really right: The entire IT industry sucks at what it does.
[xkcd: Voting Software]
And given the advent of the 737 Max and MCAS, it is especially ironic.
Even more so, given that the tragic MCAS issue was a software engineering problem -- granted, it had been implemented to hide a number of hardware design and logical problems that would never have been allowed to fly in a sane airplane but for regulatory capture, but still... it's software bugs all the way down.
 
Upvote
62 (68 / -6)

Handor

Ars Tribunus Militum
2,501
Lots of fail here, but even without rookie mistakes, dependency vulnerabilities are the sort of thing we have not solved as an industry, and my shop (while operating on a different stack and fairly security-conscious) definitely needs some improvement here. Dependency scans (at the code level) and other automated tooling would make for a big step up. Even with those, however, the supply chain (third-party libraries and development tools) is probably one of the biggest areas of concern in this industry... after basic security code hygiene, and privileged user account compromises, and...

Well, yeah, basically the collective "we" have a lot of work to do.

(As an aside, I think I've seen this article image at least once or twice before, but never noticed the 4s in the 'binary'. To be clear, ownCloud's use of a base-5 computer was not implicated in this vulnerability.)
 
Last edited:
Upvote
18 (22 / -4)
This has to be a pretty strange set of circumstances that produced this:

  • That library is ostensibly Microsoft's official library for the Microsoft Graph API. There's no published v0.2.0 or v0.3.0 (skips from v0.1.1 to v1.0.0 in a couple of months) but if it did exist, it would be over six years old.
  • Was the test added by OwnCloud in a fork of the library that they didn't maintain for six years or did they ship a six year old pre-stable lib?
  • They managed to expose the tests to the web (possibly the whole /vendor directory is web accessible - which would be full of plenty of other "not intended for public consumption" stuff)
  • They managed to expose the tests to the web unauthenticated
  • They had a test which dumped the environment

They're all such basic flaws, but they're all required. Plus you have to have creds in the environment, which is why containerised deployments are the most likely to be affected.
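
For anyone wondering how a stray test file turns into an information leak: the core of it is just a script that calls phpinfo(), which prints the full PHP configuration, environment variables included. A minimal sketch (hypothetical, not the actual shipped file):

Code:
<?php
// Hypothetical sketch of a phpinfo-style test endpoint (not the real
// GetPhpInfo.php). If something like this is reachable under a web-exposed
// /vendor directory, a single unauthenticated GET dumps the PHP configuration
// and every environment variable of the process, which in container
// deployments often includes the admin password and database credentials.
phpinfo();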
 
Upvote
90 (90 / 0)

Forthisyoumusthangthemonkey

Seniorius Lurkius
15
Subscriptor++
I assume that instances protected by Cloudflare Tunnels or some other zero trust proxy would prevent these attacks? Or is that wishful thinking?
My cursory reading of the article and the CVE suggests that yes, you would be protected if ownCloud was not facing the internet, e.g., accessible only to clients/users through a VPN and not exposed directly to the internet.

Unsure why you're being heavily downvoted. It was a fair question to ask.
 
Upvote
46 (46 / 0)

adamsc

Ars Praefectus
3,337
Subscriptor++
I assume that instances protected by Cloudflare Tunnels or some other zero trust proxy would prevent these attacks? Or is that wishful thinking?

You probably are protected, but not because of the tunnel; rather, because you're using their WAF. The appeal of services like that is that they have a 24x7 security team who can ship new rules faster than the average person updates their apps. Here's a recent example of what that looks like:


In this case, no updates were needed, because three decades of PHP breaches caused by leaving that function accessible mean they already block paths with the function name in them (amusingly, I also get errors from the forum software if I mention PHP's info function by name). I just confirmed that the check isn't case-sensitive, so the path in the advisory should be blocked.
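
In PHP terms, the check is roughly this kind of thing (a sketch of what such a rule does, not Cloudflare's actual implementation):

Code:
<?php
// Rough sketch of the kind of rule a WAF applies: reject any request whose
// path contains "phpinfo", matched case-insensitively.
$uri = $_SERVER['REQUEST_URI'] ?? '/';
if (stripos($uri, 'phpinfo') !== false) {
    http_response_code(403);
    exit('Forbidden');
}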

One other note: if you do use a CDN, you really want to follow their documentation about locking down your origin servers so someone can't hit them directly. I've seen at least one case where someone incorrectly believed that they were protected because they didn't realize that attackers just scan the entire internet (or use tools like Shodan, which already have) rather than guessing which IPs you use. That's the problem which tunnels solve by allowing you not to accept inbound TCP connections at all.

That or other zero trust tools could also help by restricting attacks to people who’ve already authenticated, which in many cases is enough since you probably don’t have an actively compromised user in your organization probing for it right now. That’s not true protection but it’s a great way to buy time, especially if you don’t have a skilled 24x7 ops team watching everything.
 
Upvote
49 (49 / 0)
I recently got a notice from a life insurance broker I had never heard of that they had lost all of my info in the MOVEit vulnerability. I wouldn't be surprised if someone lost it in this one too. Since I am in every other breach, my credit reports have been frozen for years.

I have been doing code review and pen testing for years and the amount of dumbshittery I see is insane. This XKCD is really right: The entire IT industry sucks at what it does.
[xkcd: Voting Software]
And given the advent of the 737 Max and MCAS, it is especially ironic.
This xkcd also seems appropriate.
 
Upvote
26 (28 / -2)

WereCatf

Ars Tribunus Militum
2,087
"I will simply operate my own cloud" has always been a deeply stupid meme that severely underestimates the effort and attention required to run hardware and software safely. Umpteenth example.
What? Do you think that commercially run cloud operators have barrels of magic pixie dust or something to sprinkle on their systems to keep them safe and secure?

The difference between a hobbyist running their own cloud and a commercial operation doing it is scale, not ability. A commercial operation isn't doing anything that a hobbyist couldn't do, and when you're just running it for a handful of people, no, it does not require much in the way of effort or attention. All of that is irrelevant, anyway, when the vulnerability was in the upstream software and not caused by anyone running the software to provide the service!
 
Upvote
-9 (20 / -29)

Arstotzka

Ars Praetorian
535
Subscriptor++
Even more so, given that the tragic MCAS issue was a software engineering problem -- granted, it had been implemented to hide a number of hardware design and logical problems that would never have been allowed to fly in a sane airplane but for regulatory capture, but still... it's software bugs all the way down.
I apologize, but this Disney-vilification is oversimplifying the issues at play with the 737MAX, which bothers me enough that I am going to write a comment on the internet.

The crashes of the 737MAX touch on a multitude of problems: software engineering is one, but also bureaucracy, politics, and training. Especially in modern aviation, there is no single root cause, and these crashes were no exception. (For those unaware: two brand-new, fully loaded 737MAX airframes crashed within about five months of each other, killing all aboard.) The purpose of the Maneuvering Characteristics Augmentation System, or MCAS, was to make the 737MAX airframes behave like earlier-model 737s, which meant that no additional training was required for pilots to transfer from the 737 Next Generation airframes. Airlines wanted this because it reduced their costs, and Boeing wanted it because it made use of a sort of network effect: airlines already flying 737s, with the massive pool of pilots trained on them, would be less likely to purchase an Airbus A321neo and incur the training costs.

However, as has been widely reported, MCAS had a flaw. If the Angle of Attack (AoA) values were incorrect, it could attempt to compensate and trim the aircraft nose-down. Under normal circumstances, this did exactly what it should have: made the 737MAX fly like the older 737NGs. But if the AoA sensor data was wrong, the system would never see the aircraft as flying level, and MCAS would continue to input more and more nose-down pitch. This is something the FAA should have caught but didn't, because it has been so gutted by politicians that manufacturers self-certify many things which the FAA is too understaffed to even check. MCAS both pitched down too much and too many times, and that is the software flaw. It is also the easy Disney villain of this story, and that's why everyone points at it, but just waving our hands and going "oh, it's software!" is not doing justice to the more than 300 dead people. Let's try to do better by them, because even though MCAS had this fault, other factors should have prevented these crashes. Let's talk about politics, bureaucracy, and training.

Most aircraft systems are doubly-redundant, if not triply- or quadruply-redundant. Indeed, the 737MAX has two AoA sensors, which independently feed the displays used by the pilots. So why did MCAS use just the AoA data from the captain's side, so a single failed sensor could cause these unending nose-down inputs? In short: that's how it was ordered. And here is where we need to discuss Lion Air.

Lion Air was chosen by Boeing to be the launch partner for the 737MAX. They ordered a ton of brand-new 737MAX airframes, and their 2018 launch was going to be spectacular. This was a huge windfall for Boeing, especially after the merger with McDonnell Douglas, wherein the MBA zombies from the latter company essentially ate the former, turning Boeing from an engineering company into a short-sighted profit-seeking entity. It's a sad sight to see, but that discussion is an even further digression from this digression, so we'll skip it for now. Anyway.

Lion Air had also been barred from flying in the EU up until mid-2016. In fact, when Lion Air agreed to purchase the 737MAX, they weren't allowed to fly in the EU! Some travel advice: never fly on an air carrier banned from operating in the EU. So Boeing had sold its brand-new aircraft and was going to launch it with an airline that was banned from operating in the EU market. Only a couple of years passed between the EU lifting the ban on Lion Air and the 737MAX crashes. When Lion Air ordered these airframes, they opted not to include an AoA disagreement warning light. There was internal discussion at Boeing... but eventually, the customer is always right, even if they shouldn't be, and they sold them the aircraft they wanted. Later, as Boeing test pilots were training Lion Air pilots, they wrote memos documenting numerous failures of those pilots to handle situations they considered safety-critical.

If your flying scares a test pilot, it's not a good sign. But nonetheless, Boeing's inertia was in play, and damnit, they were going to sell a lot of airplanes. So the test pilots' reports were stuffed into a folder, the sales continued, and the launch partnership progressed. Even if MCAS had its faults, the failure mode it presents as is something called "trim runaway". It is exactly what it sounds like: the trim is running away on its own, driving the aircraft nose-low or nose-high, and the pilots have to push on the yoke to correct it. This can take a huge amount of force. Somewhat recently, an acquaintance-of-a-friend had a trim runaway situation in his airplane on short final, about 400 feet above the ground when coming in for a landing. (It is too small an airplane to have a flight data recorder, but my impression was he forgot to aviate before navigating or communicating. He made a radio call, but that didn't leave enough time to respond to the emergency.) He crashed and died just short of the runway. Trim runaway is a common enough problem that pilots are trained for it and must respond quickly. The Lion Air pilots, per Boeing's own test pilots' documentation, were not good pilots. There is serious evidence they cheated on simulators, knowing how to take the test, as it were, instead of knowing why the answers are correct. Thus, when presented with a trim runaway situation in the real world, they kept trying to re-engage the autopilot instead of hitting the trim cutout switches and hand-flying the airplane.

Ultimately, pointing at MCAS and saying it is a software problem is wrong. Likewise, pointing at the pilots and saying they failed is also wrong. This was a very complex problem, and it required poor-quality pilots, bad software, corporate ineptitude, dodgy companies selling pencil-whipped AoA sensors, and more to manifest itself. And, sadly, it did: twice in a very short period of time.

And, yes, software sucks. The discipline of software engineering could learn a lot from its real-world counterparts (aviation, despite its flaws, included), and the world would be a better place for it. Sorry if this is a weird tangent; I don't have an ownCloud instance to patch. But looking at the aviation industry's response to these situations is somewhat hopeful, even if each lesson learnt is tragically written in blood. Software engineering can't even manage to learn from coding mistakes that resulted in death (Therac-25) or, worse for capitalists, the loss of money.
 
Upvote
154 (158 / -4)
I apologize, but this Disney-vilification is oversimplifying the issues at play with the 737MAX, which bothers me enough that I am going to write a comment on the internet.
This is off-topic, but I'll respond briefly. I think a lot of what you're saying is true, but I disagree with assigning any blame to pilot error. What I've read (see below especially) indicates the pilots were responding as they were trained.

Anyone interested in the 737max debacle should have a look at this article.

Edit for clarification
 
Last edited:
Upvote
41 (43 / -2)
I apologize, but this Disney-vilification is oversimplifying the issues at play with the 737MAX, which bothers me enough that I am going to write a comment on the internet.
I would like to subscribe to your newsletter. Seriously.
 
Upvote
26 (26 / 0)

uid1

Ars Centurion
204
Subscriptor
I apologize, but this Disney-vilification is oversimplifying the issues at play with the 737MAX, which bothers me enough that I am going to write a comment on the internet.
Thank you - that was one of the most insightful comments I've read on Ars, or anywhere for that matter, in quite a while.

I appreciate you taking the time to share it.
 
Upvote
29 (29 / 0)

uid1

Ars Centurion
204
Subscriptor
these environment variables may include sensitive data such as the ownCloud admin password, mail server credentials, and license key
Am I missing something, or is that a massive WTF?
This hits containerized deployments especially hard, and in those environments all the infrastructure is spun up from scratch.

In order for a new SQL DB to have a user / table / credentials / privileges created, the data are stored in environment variables, usually set in a Dockerfile or compose file.

Then, the env vars are also plugged into a config file for the web app so it can connect to the DB.

This is fairly standard, but running production with phpinfo enabled, aka "tell me everything about the running environment," is poor form, to say the least.
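
As a sketch of that pattern (key and variable names here are illustrative, not ownCloud's actual ones), the app-side config typically just reads the container environment:

Code:
<?php
// Illustrative config fragment: DB credentials pulled from the container's
// environment. Anything readable via getenv() also appears in phpinfo()
// output, which is why an exposed phpinfo dump leaks credentials.
return [
    'dbhost'     => getenv('MYSQL_HOST') ?: 'db',
    'dbname'     => getenv('MYSQL_DATABASE') ?: 'owncloud',
    'dbuser'     => getenv('MYSQL_USER') ?: 'owncloud',
    'dbpassword' => getenv('MYSQL_PASSWORD') ?: '',
];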
 
Upvote
20 (21 / -1)

accantant

Wise, Aged Ars Veteran
194
Subscriptor++
This has to be a pretty strange set of circumstances that produced this:

  • That library is ostensibly Microsoft's official library for the Microsoft Graph API. There's no published v0.2.0 or v0.3.0 (skips from v0.1.1 to v1.0.0 in a couple of months) but if it did exist, it would be over six years old.
  • Was the test added by OwnCloud in a fork of the library that they didn't maintain for six years or did they ship a six year old pre-stable lib?
  • They managed to expose the tests to the web (possibly the whole /vendor directory is web accessible - which would be full of plenty of other "not intended for public consumption" stuff)
  • They managed to expose the tests to the web unauthenticated
  • They had a test which dumped the environment

They're all such basic flaws, but they're all required. Plus you have to have creds in the environment, which is why containerised deployments are the most likely to be affected.

The version number refers to that of their own "graphapi" integration, not Microsoft's library: https://marketplace.owncloud.com/apps/graphapi

The file in question is still in the latest version of the Microsoft library's repository: https://github.com/microsoftgraph/m...4ef3ee741c230f851cc60025/tests/GetPhpInfo.php

Unfortunately, the owncloud/graphapi repository returns a 404 now, so I can't be sure about the exact mechanics. Considering the pull request is listed as "#48 - fix: remove vendor test folder", I assume they cloned (or submoduled) the library's repository wholesale. Though that still doesn't explain how or why the file would be included/exposed in the actual release.
 
Upvote
21 (21 / 0)

uid1

Ars Centurion
204
Subscriptor
This is off-topic, but I'll respond briefly. I think a lot of what you're saying is true, but I disagree with assigning any blame to pilot error. What I've read (see below especially) indicates the pilots were responding as they were trained.

Anyone interested in the 737max debacle should have a look at this article.

Edit for clarification
That was a fascinating and well-written article and quite enjoyable reading.

But I felt the author (a 40-year software dev and 30-year pilot) assigned blame to the software team a bit unfairly, unless there's some documentation that they ignored the perils they were creating.

The people who wrote the code for the original MCAS system were obviously terribly far out of their league and did not know it.

[Citation Needed]



It is astounding that no one who wrote the MCAS software for the 737 Max seems even to have raised the possibility of using multiple inputs

I haven't seen any indication that they ignored it, as opposed to raising it and having their concerns overridden.

Those lines of code were no doubt created by people at the direction of managers.

In fact, the overall point of the article seems to agree with the systemic failure thesis:

In the 737 Max case, the rules were also followed. The rules said you couldn't have a large pitch-up on power change and that an employee of the manufacturer, a DER, could sign off on whatever you came up with to prevent a pitch change on power change. The rules didn't say that the DER couldn't take the business considerations into the decision-making process. And 346 people are dead.


Presumably MCAS was written by developers with experience similar to those who built the other flight management systems, so they were not ignorant of the importance of redundancy.

The biggest change was the new management structure.
 
Upvote
20 (20 / 0)

byrtolet

Smack-Fu Master, in training
19
I've always been skeptical about the cloud. It can definitely make sense to use a cloud if you're a company in a growing phase.

But a personal cloud? This asks for problems.
The worst part is that on mobile devices it is easier to use cloud services and apps than to connect to a PC. So instead of making connection and transfer easier without cloud services, some people choose to use cloud services to make communication work.
 
Last edited:
Upvote
-18 (4 / -22)
So was the vulnerability created intentionally so it could be exploited? Or was it a flaw in the code that was discovered…
Never attribute to malice that which is adequately explained by stupidity.

The sheer amount of shitty code out there compared to the low number of people theoretically capable of maintaining that code makes the assumption of malicious insertion of vulnerabilities unnecessary. Especially for a target with as few installations as in this case.
 
Upvote
22 (22 / 0)

Ed1024

Ars Scholae Palatinae
642
Subscriptor++
I apologize, but this Disney-vilification is oversimplifying the issues at play with the 737MAX, which bothers me enough that I am going to write a comment on the internet.
Off topic, so spoilered reply:
So why did MCAS use just the AoA data from the captain's side, so a single failed sensor could cause these unending nose-down inputs? In short: that's how it was ordered.
Looking deeper, why was an unsafe (in terms of redundancy) configuration available to order at all? You can't spec a 737 with only one engine or one hydraulic system to save money. I put that one squarely on the manufacturer, as the purchasing company's accountants can't be expected to evaluate safety cases; they just see extra $ on the list.
Even if MCAS had its faults, the failure mode it presents as is something called "trim runaway”... ...Trim runaway is a common-enough problem that pilots are trained for it and must respond quickly.
Yes, but the issue here is that the MCAS failure mode did NOT present as a trim runaway. The checklists in use at the time were quite specific in the definition and trigger: “continuous uncommanded stabiliser motion”. What actually happened was intermittent nose-down input which could be countered and/or overridden by the trim switches. Given the environment this problem was presented in, which was low-level just after takeoff with very distracting false tactile and aural warnings, it is not at all surprising that a trim runaway wasn’t diagnosed as the trim was still working from a pilot PoV!

The 737 trim system is almost constantly in motion during the takeoff phase in normal operation due to speed and configuration changes, either through pilot input or software intervention. Normal: trim moves a bit then stops; abnormal: trim moves a bit then stops. Because a software issue was causing this (stemming from a lack of input sanitisation and any sort of "fail safe" mode), and because a functional description of MCAS did not appear in any pilot-accessible documentation (STS, another software kludge, was known about), it would again have been very difficult to diagnose on the day without the knowledge that we have acquired in the years since the accidents. If you disconnected the trim every time it moved of its own accord, it would be taken out on every flight, just after leaving the ground...
Ultimately, pointing at MCAS and saying it is a software problem is wrong. Likewise, pointing at the pilots and saying they failed is also wrong. This was a very complex problem, and it required poor quality pilots, bad software, corporate ineptitude, dodgy companies selling pencil-whipped AoA sensors, and more to manifest itself. And, sadly it did— twice in a very short period of time.
But the software and the associated processes played such a large part in this debacle that I think it is justified to apply a lot of the blame there. We don't know if better pilots could have saved the day: the same was said at first about Sully's ditching in the Hudson, but the initial sim runs were flown by pilots who knew exactly what was going to happen and when. Reliability of sensors is not that much of an issue as long as you design the system to cope with an inevitable failure, which wasn't the case here.

At the end of the day, the safety culture at Boeing and the oversight from the FAA was so bad that it allowed incredibly defective, untested software to be live-installed that had authority over the most powerful flying control fitted to the aeroplane, derived from the unfiltered output of a single vulnerable sensor. Take that out of the equation and we wouldn’t be talking about the crashes now. This is my view having flown the 737 and worked in software (never was that good but what happened here fills me with incredulity...)
 
Upvote
26 (27 / -1)

fyo

Ars Tribunus Militum
1,576
This has to be a pretty strange set of circumstances that produced this:

  • That library is ostensibly Microsoft's official library for the Microsoft Graph API. There's no published v0.2.0 or v0.3.0 (skips from v0.1.1 to v1.0.0 in a couple of months) but if it did exist, it would be over six years old.
  • Was the test added by OwnCloud in a fork of the library that they didn't maintain for six years or did they ship a six year old pre-stable lib?
  • They managed to expose the tests to the web (possibly the whole /vendor directory is web accessible - which would be full of plenty of other "not intended for public consumption" stuff)
  • They managed to expose the tests to the web unauthenticated
  • They had a test which dumped the environment

They're all such basic flaws, but they're all required. Plus you have to have creds in the environment, which is why containerised deployments are the most likely to be affected.

Every production php server I've touched in the past 10 years has had phpinfo disabled.
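
For reference, that's usually a one-line php.ini change (disable_functions = phpinfo). A quick sketch to check a given host from PHP itself:

Code:
<?php
// Sketch: report whether phpinfo() has been disabled on this host via the
// disable_functions directive in php.ini.
$disabled = array_map('trim', explode(',', (string) ini_get('disable_functions')));
echo in_array('phpinfo', $disabled, true)
    ? "phpinfo() is disabled\n"
    : "phpinfo() is callable\n";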
 
Upvote
4 (4 / 0)