Is Automation Really Safer for Flying?

Most incidents happen because of human error, so automation is a good thing. To the 45% of you who voted that it's bad: remember that most incidents happen because of human error.

Automation does make flying safer: it reduces the workload on pilots. From the most primitive forms of automation, like the yaw damper, to modern CAT IIIb ILS equipment, it helps pilots make sound decisions. Of course, automation must not be relied upon blindly; incidents like US1549 and QF32 highlight the importance of good airmanship on the flight deck.

2 Likes

QF32 showed that you can have your sensors and wiring cut by debris from an uncontained engine failure. You can't rely on those things all the time.

Just a few beers and your life could be over. Can go both ways ;)

1 Like

This is what I think about the flight 1549 incident.

Let's just say that flight had been a drone flight, with no pilot on board, and the same thing happened: bird strike, you are going down. Now, I'm not sure how accurate the movie "Sully" is compared to the real thing, but in the movie they stated that it took the NTSB's simulator pilots tons of tries to land the plane safely. But that's a human decision.
And human decision-making time is essentially what caused the plane to end up in the Hudson. If you eliminated those 30 or so seconds, the plane could have made it back to an airport.

So automation in planes would allow for near-instantaneous responses. There would be "if-then" situations: "if a bird strike happens, contact the nearest airport"; "if one engine goes out, assess blah blah blah".
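To make the "if-then" idea concrete, here is a toy sketch of a rule table in Python. It is purely hypothetical; the events, actions, and function names are invented for illustration and have nothing to do with real avionics logic:

```python
# Toy sketch of a rule table for the "if-then" idea above.
# Purely illustrative; not real avionics logic.

RULES = {
    "bird_strike": "contact the nearest airport",
    "single_engine_failure": "run the engine-out checklist and assess",
    "dual_engine_failure": "identify the nearest viable landing site",
}

def respond(event):
    """Look up a canned response for a known event."""
    action = RULES.get(event)
    if action is None:
        # The weakness of a pure rule table: an event nobody anticipated
        # has no entry, so something else has to decide.
        return "no rule found - hand control to a human / higher-level system"
    return action

if __name__ == "__main__":
    for e in ("bird_strike", "dual_engine_failure", "volcanic_ash"):
        print(f"{e}: {respond(e)}")
```

The fallback branch is the interesting part: a pure rule table has nothing to say about a situation nobody wrote a rule for, which is exactly the objection raised later in this thread.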

Get what I'm saying? A computer would do it PERFECTLY every single time, compared to the human getting it right one time out of 30. There is no error with automation.

Now, that said, it would have to be INCREDIBLY well developed, tested, and all that. To make my obligatory Tesla comment, I'll use Autopilot in a Tesla as an example.

Last year a Tesla using Autopilot crashed into a semi because the side of the semi blended in with the sky and the computer system didn't recognize it. This crash happened extremely early in testing (I want to say around the 30-million-mile mark). A plane using this, carrying hundreds of passengers, would have to be tested for BILLIONS OF MILES in every situation possible. But of course there will always be a situation that has never been seen before. That's why the "if-then" situations will come in handy: "IF a terrorist situation arises on a plane, then notify the authorities and land at the nearest airport"; "IF the toilet is clogged, let the attendants know and assess the situation".

TL;DR: Automation doesn't allow for human error and, given enough testing, is much safer at flying a plane than a human can be.

2 Likes

And maybe an early solution could be to test the planes with people on board: have the autopilot fly the entire flight, completely autonomously, but keep a pilot there as well, just in case. Then, as the software and hardware advance, slowly transition the pilots out.

1 Like

This is how most AI is trained. I work with a robot called Baxter and teach him fine motor control. After a few tries of missing a cup, let's say, Baxter will adapt and perfect the movement. It would take a lot more tries in an airplane, and a lot of safety testing, but you could eventually get there.
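For anyone curious what that adapt-after-missing loop looks like in code, here is a minimal sketch of the idea. It is not how Baxter is actually programmed; the target, learning rate, and update rule are all made up just to illustrate correcting a movement from its observed error:

```python
# Minimal sketch of trial-and-error correction. Not Baxter's real
# control code; just a toy loop that nudges an action toward a target.

TARGET = 10.0        # where the cup is (arbitrary units)
LEARNING_RATE = 0.7  # how aggressively each miss is corrected

def attempt(reach):
    """Return the signed miss distance for a given reach."""
    return TARGET - reach

reach = 0.0
for trial in range(1, 11):
    error = attempt(reach)
    print(f"trial {trial}: reach={reach:.2f}, missed by {error:.2f}")
    if abs(error) < 0.01:
        print("close enough - movement 'perfected'")
        break
    # Nudge the next attempt in the direction of the miss.
    reach += LEARNING_RATE * error
```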

1 Like

Another thing to think about: imagine a plane flying without a pilot, fully controlled by computers, and a hacker gets into the system. Now they can do whatever they want with that plane full of people. (Also, please don't get mad at me for sharing my thoughts; if you don't agree with me, tell me why so we can all become a bit smarter! :D)

I think automation can help pilots and take some of the workload off, but I don't think it makes flying safer by any means. I wrote a whole essay on this for my aviation theory class in college. At the end of the day, more often than not it's a real pilot who brings the plane down safely, not a computer. Human skill and human lives are more fragile to me than any computer.

Explain why that requirement has been there since the early 1950s, then. I've flown aircraft with no automation except a barometric height hold, and I was still required to demonstrate my manual handling skills twice a year. It has nothing to do with automation.

Automation helps allow long flights with minimum crew. Excessive, interlinked, multi-system, condensed-display automation has an amazing ability to confuse.

The two must be balanced. Massively interlinked systems have a wonderful way of failing in manners that the programmers and engineers don't understand. As long as the current and future generations of pilots understand the fundamental concept of 'it's an aeroplane', then they should be able to fly it!

Is it safer? Yes, when it works. Is it dangerous? Yes, when the failure of the base system is masked by the failures of multiple layered systems that depend on that failed base system. (Normal, Alternate, and Direct Law in the Frogbus, for example!)

Fly it like an aeroplane! (For those kiddies who've only ever flown Airbus: remember the pitch/power couple in Direct Law!)

As long as the entire system is programmed with every possible failure and outcome, every variable, and every alternate scenario, or with an AI so developed that it can think 'outside the box' as in the Hudson River example. Remember, humans build aeroplanes and systems. Human error will always be there.

An old instructor once told me that there is no malfunction in an aircraft that will kill you instantly. Take the time to assess, as you don't know what the secondary effects of the failure might be. A good example is the DC-10 engine separation that took the leading-edge slats off the wing. The pilots were confused, and I'm certain the 'systems' would have been too, as an engine failure doesn't normally have such an aerodynamic effect. It had never been seen before!

Some of the worst cock-ups I've seen in the simulator are where crews have 'rushed in' with rapid, albeit wrong, decisions. The scenarios deliberately take into account failures on linked and non-linked systems to give the crews something to work through.

The Hudson River is a classic case: a high-density populated area, an immediate and unexpected double engine failure, limited options, and an experienced crew. Mistakes were made, that is obvious, but we all have 20/20 (perfect) hindsight. The crew did a remarkable job! Where would automation have put the aircraft down? It was going down! Would it have chosen the river? What are the risks versus the rewards? How would the programmer/AI resolve the situation? The flight profile was risky, to say the least. A beautiful bit of dead-stick flying in an aircraft with low-slung engines. Not flipping or yawing it into the water was masterful, and entirely manual.

So, while decision-making in computers remains binary, I will never fully trust automation. Rubbish in = rubbish out! Correct, pilot error is involved in all accidents in some way or another, but system failures and errors are involved in most of them as well!

1 Like

No. It would have made the perfect decision to go and land at the airport, and made all the perfect adjustments. The NTSB proved it was possible.

If something in this industry is automated, everything needs to be. You cannot have automated planes and human air traffic controllers.

@Thomas_Galvin Just because they proved it in the simulator doesn't mean you can rely 100% on simulators. Plus, there are many similar events where the pilot has felt unsure about the plane and aborted the takeoff. A computer cannot check the maintenance systems; it cannot check if a screw is loose. And they WILL find bugs; whose lives will be lost first?

Another aspect is cyber security. If there is a way to control a plane from outside the cockpit, it WILL be hacked at some point. You can hack anything. If there is a way in, it's going to be used.

I don't think automated planes will happen. I hope they never do.

2 Likes

It would be safer if they were programmed not to interfere with each other.

CPU Core 1: command PULL UP.
CPU Core 2: command PUSH THE YOKE (due to loss of airspeed and possibility of stalling).
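As a rough illustration of how conflicting commands from redundant channels might be reconciled, here is a toy voting sketch in Python. It is not real flight-control logic; the commands, fallback string, and majority rule are all assumptions made for the example:

```python
# Toy sketch of arbitrating conflicting commands from redundant channels.
# Purely illustrative; real flight-control voting is far more involved.

from collections import Counter

def arbitrate(commands):
    """Pick the majority command; flag a disagreement if there is none."""
    counts = Counter(commands)
    command, votes = counts.most_common(1)[0]
    if votes <= len(commands) // 2:
        # No clear majority: the channels are fighting each other,
        # which is exactly the situation described above.
        return "DISAGREE - degrade to a safe fallback and alert the crew"
    return command

print(arbitrate(["PULL UP", "PULL UP", "PUSH"]))  # majority wins: PULL UP
print(arbitrate(["PULL UP", "PUSH"]))             # split vote: DISAGREE
```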

But if the computer didn't fly the plane, the story would have been different.

One broken sensor, plus a broken 'sensor not operational, switch to the backup sensor' alarm, and there's no way to escape.

There is no such thing as the 'perfect' decision. A 'perfect decision' based on events not changing from the moment the computer made it? Plus, the scenario was presented with the incident expected, planned, and timed. Not very realistic.

Anything can be proven in ‘hindsight’. Almost every accident/incident could have been prevented with perfect decisions in perfect hindsight. Unfortunately, in the real world, it doesn’t happen.

Have a look at drone incidents and accidents to see how flawed the system is. Those are incidents and accidents where a human is monitoring, not necessarily directly involved.

Have a look at the processing and manpower behind the Jetstream automated flights in controlled airspace. Dispatchers wave goodbye to a manned flight and all but forget about it. Where’s the automation advantage?

I've had Airbus automation drop out when the aircraft rolled unexpectedly to 60 degrees AOB in a squall: the autopilot disconnected because it was outside its design limits. It would have been an interesting approach if my colleague and I hadn't been there.

In a perfect world, with perfect machines and perfect decisions then perhaps. Unfortunately we do not live or operate in such a world.

1 Like

I think automation is a great thing. What I don't agree with is all the talk about replacing pilots with automation.

Automation is cool, but there should be a human overseeing it in case of a failure.

TECHNOLOGY ALONE = UNSAFE (may fail)

TECHNOLOGY + HUMAN = SAFER FLYING (technology helps the pilots)

HUMAN ALONE = UNSAFE (human error)

2 Likes

Yes, that’s how I think as well.

1 Like