It is well known that about 50 years ago the first real computer programs crawled out of the oceans... Even with all the improvements and knowledge our computer-based civilization has accumulated since then, users have been mistreated so badly that they have come to expect software to contain flaws and, eventually, to crash.

There have been huge efforts to turn software development into an engineering discipline. Specifications, formal methods, design patterns and other techniques have been thrown at developers to make them do things right, but nothing has really changed. A computer-controlled radiation therapy machine (the infamous Therac-25) overdosed and killed three patients between 1985 and 1987 when it was operated with input sequences its developers hadn't anticipated. And when one version of probably the most widely used operating system was released, a parade of patches followed the very same day. These are just two of countless legendary disasters.

Is it that different from other disciplines? Doesn't an architect also deal with abstract calculations in which flaws could be lurking? Yet buildings don't collapse nearly as often as computer programs do. Architects have standard processes for constructing houses and standard components to use in each possible situation. You can't say the same for the software industry, and that's one of the catches: there are loads and loads of libraries that depend on each other, and they might be neither well documented nor bug-free. Constantly changing requirements have also sunk plenty of waterfall-based projects, although this is something that agile methodologies seem to have mitigated.

On top of those problems lies what I believe is the root of all evil: developers constantly face logical challenges where the way they implement a solution makes a big difference efficiency-wise. These little decisions accumulate in the final solution, but tracing a specific effect back to its cause can be quite hard. We developers always end up relying on common sense, or on the so-called best practices, which is another way of saying there is no standard way. Once a developer starts writing code to fulfil a particular specification, knock on wood or pray to your deity of choice, because he or she is painting on a blank canvas, ready to be filled with their multitasking thoughts.

Unfortunately, there is no easy way to make software work as reliably as a mechanical component. However, I believe that Behaviour-Driven Development (BDD) is a good step towards writing more reliable software. In this approach, one of the techniques associated with agile methodologies, you first write tests that state how your application should behave in a specific context; then you write as little code as possible to make those tests pass. The whole approach focuses the developer's mind, avoiding the erratic thinking that will most likely end up producing something undesired.
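To make this concrete, here is a minimal sketch of that cycle in plain Python. The shopping-basket rules, function names and discount figures are invented purely for illustration: the behaviour is specified first as Given/When/Then scenarios, and only then is the smallest implementation written that makes them pass.

```python
# Step 1: specify the behaviour first, as Given/When/Then scenarios.
# (The basket rules below are an invented example, not a real API.)

def test_no_discount_at_or_below_threshold():
    # Given a basket whose subtotal does not exceed the threshold
    prices = [40, 60]
    # When the total is computed
    total = basket_total(prices)
    # Then no discount is applied
    assert total == 100

def test_discount_above_threshold():
    # Given a basket whose subtotal exceeds the threshold
    prices = [80, 40]
    # When the total is computed
    total = basket_total(prices)
    # Then a 10% discount is applied (120 becomes 108)
    assert abs(total - 108) < 1e-9

# Step 2: write just enough code to make the scenarios pass.

def basket_total(prices, threshold=100, discount=0.10):
    """Sum the prices, applying the discount when the subtotal exceeds the threshold."""
    subtotal = sum(prices)
    if subtotal > threshold:
        return subtotal * (1 - discount)
    return subtotal

# Run the scenarios: with the implementation above, both pass.
test_no_discount_at_or_below_threshold()
test_discount_above_threshold()
```

In practice a BDD tool such as Cucumber, behave or RSpec would run scenarios written in a dedicated Given/When/Then language, but the discipline is the same: no production code is written until a failing specification demands it.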

Admittedly, you will probably never be able to write enough tests to guarantee that your application is never exposed to the unexpected in a sea of interconnected systems, but most bugs are due to a lack of thinking, and therefore a lack of testing.

In conclusion, and as part of the lessons learnt, it is true that in some critical areas, e.g. aeronautical engineering, software systems are exhaustively verified (components are systematically exposed to inputs and checked for the right outputs). But this does not happen in most commercial web-style applications, where there is a trade-off between speed of development and reliability. It is probably not worthwhile spending months on formal verification for your brand new Web 2.0 site. But we need to aim to do things better, and we will have got there the day automated testing is as common as air.