Everywhere I look, it seems that “critical” systems are under attack. Earlier this year, people were debating whether planes could be hacked. We’ve talked about whether smart grids can be hacked, too. Just a week or so ago, LOT Polish Airlines had its operations nearly grounded by a distributed denial-of-service (DDoS) attack.
In many cases, these critical systems turn out to have been built on off-the-shelf open-source software. Almost a decade ago, I argued that open-source software was safer. While that has turned out to be mostly true, more recent incidents like Heartbleed and Shellshock have shown that open-source software has problems of its own.
Non-technical people may ask: “Why did nobody spot these problems earlier? Are software developers just too lazy? Did they forget how to build secure applications?” In essence, they are asking the software community: how did we screw up so badly?
Developing secure code is hard under the best of circumstances, and unfortunately, for many developers it has not been a priority. It is one thing if a game or a browser turns out to be insecure, bad as that can be. It is quite another if a SCADA device in a power plant fails, or if a medical device is hacked and a patient is hurt.
As smart devices become more and more prevalent and are used in critical situations, software developers will have to accept that they now bear a greater responsibility to keep their products safe. Perhaps regulators in the relevant industries will need to put new rules covering software security in place. Given how serious the consequences of bad software can be, this is not as crazy as it sounds.
Just as importantly, we need to decide what actually needs protecting and what actually needs to be online. For example, people keep saying that smart meters are safer and will help the power grid. That may be true, but what are the consequences? Who controls these devices? Who has access to their data?
If truly critical devices are going to be put online, they need to be properly secured. Their software must be developed using best practices and hardened to resist exploits. Black-box testing must also be used to vet these critical systems against known vulnerabilities and attacks.
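To make the black-box idea concrete, here is a minimal sketch of what such a test can look like: random, attacker-style input is thrown at an interface with no knowledge of its internals, and the only requirement is that the system rejects bad input cleanly rather than crashing. The `parse_command` function below is purely hypothetical, a stand-in for whatever command interface a networked device might expose; it is not drawn from any real product.

```python
import random


def parse_command(raw: bytes) -> dict:
    """Hypothetical device command parser: accepts 'NAME:VALUE' in ASCII,
    rejects anything malformed by raising ValueError."""
    try:
        text = raw.decode("ascii")
    except UnicodeDecodeError:
        raise ValueError("non-ASCII input")
    if ":" not in text:
        raise ValueError("missing separator")
    name, _, value = text.partition(":")
    if not name.isalpha() or len(value) > 64:
        raise ValueError("bad name or oversized value")
    return {"name": name, "value": value}


def fuzz(parser, iterations: int = 10_000, seed: int = 0) -> int:
    """Black-box check: feed random byte blobs to the parser.
    The parser may reject input (ValueError) but must never fail with
    any other exception -- anything else propagates as a test failure.
    Returns the number of inputs that were cleanly rejected."""
    rng = random.Random(seed)  # fixed seed for reproducible runs
    rejected = 0
    for _ in range(iterations):
        length = rng.randint(0, 128)
        blob = bytes(rng.randrange(256) for _ in range(length))
        try:
            parser(blob)
        except ValueError:
            rejected += 1
    return rejected


if __name__ == "__main__":
    rejected = fuzz(parse_command)
    print(f"{rejected} of 10000 random inputs were cleanly rejected")
```

Real-world black-box testing goes far beyond this sketch, using coverage-guided fuzzers and curated corpora of known attack payloads, but the contract is the same: from the outside, malformed input must produce a controlled rejection, never an uncontrolled failure.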
More and more critical systems will be connected in the near future. The software industry must behave responsibly to ensure that we do not repeat the security mistakes of the past – this time with far more serious consequences for society at large.