
    Author Archive - Morton Swimmer (Senior Threat Researcher)

    We continue our look into the state of cryptography in 2014; Part 1 was posted earlier this week.

    Is Hardware Security Any Better?

    We closed the first post by asking: is hardware any more trustworthy? One would think that it is… but it’s not. Recently, chip vendors have been incorporating cryptography into their CPUs or chipsets. Usually, this is an implementation of a “standard” cipher (like AES) or a pseudorandom number generator (PRNG).

    Despite all the revelations from Edward Snowden about the NSA subverting various cryptographic algorithms (in particular, the Dual_EC_DRBG PRNG that NIST published in 2006), we think that AES does not have any backdoors or exploitable flaws. However, what if AES were compromised? Now you have encryption hardware that can't be used. If the raw implementation turns out to be faulty, it can't be fixed, but some libraries will use it anyway because it's there.

    Whether a given algorithm is flawed or badly implemented is beside the point, however. If it's baked into hardware, it can't be fixed or disabled after the fact. Intel, AMD, and ARM (and probably others) implemented the entire AES algorithm as a discrete instruction. Wouldn't it have been wiser to implement common cryptographic primitives that could be used to build any algorithm of choice, with each primitive being thoroughly tested? Food for thought.
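
    As an aside, before relying on such hardware support, software can at least check whether the CPU advertises it. A minimal, Linux-only sketch (the /proc/cpuinfo parsing and the function name are our own illustration, not any particular library's API); note this only tells you the instruction exists, not that it is trustworthy:

```python
def cpu_has_aes_instructions():
    """Report whether /proc/cpuinfo advertises the AES instruction set.

    Works on Linux for x86 ("flags" line) and ARM ("Features" line);
    on other platforms, or on error, we conservatively report False.
    """
    try:
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("flags") or line.startswith("Features"):
                    return "aes" in line.split()
    except OSError:
        pass
    return False

print("Hardware AES instructions advertised:", cpu_has_aes_instructions())
```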

    This isn’t a theoretical problem anymore. FreeBSD, OpenBSD’s cousin project, has decided that the pseudo-random number generators in Intel and VIA chipsets cannot be trusted, and is no longer using their output without mixing in other entropy sources. Whether their doubts are well-founded isn’t clear, but this should be a wake-up call to scrutinize hardware more thoroughly.
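
    The “augmentation” idea boils down to never handing hardware RNG output directly to consumers, but folding it into a pool together with other sources, so that one biased or backdoored source cannot control the result. A rough sketch of that idea in Python (the function, the choice of sources, and the hash-based extraction are all illustrative, not FreeBSD’s actual design):

```python
import hashlib
import os
import time

def mixed_random_bytes(n, hardware_source=None):
    """Return n bytes derived by hashing several entropy sources together.

    Even if one source (e.g. an untrusted hardware RNG) is biased or
    backdoored, the output is no weaker than the strongest remaining source.
    """
    pool = hashlib.sha256()
    pool.update(os.urandom(32))                                # OS entropy pool
    pool.update(time.perf_counter_ns().to_bytes(8, "little"))  # timing jitter
    if hardware_source is not None:
        pool.update(hardware_source(32))                       # untrusted HW RNG
    # Expand the pool digest to the requested length with a counter.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(pool.digest() + counter.to_bytes(4, "little")).digest()
        counter += 1
    return out[:n]
```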

    Another way that hardware gives you deceptive cryptographic security is in key and algorithm secrecy. The GSM consortium thought it could keep the A5/1 and A5/2 algorithms secret by implementing them in licensed chips. (A5/1 and A5/2 are the algorithms used to encrypt voice traffic in GSM cellular networks.)

    In 1994, a researcher showed that reversing chips wasn’t all that difficult. In 2003, the algorithms were found to be faulty (and note that A5/2 was, by design, weaker than A5/1.)

    Meanwhile, hackers have made it a hobby to extract keys from firmware that was not designed to be read from outside and have made chip reversing a sport. It’s pretty ugly out there.

    Black Swans

    If you consider intelligence agencies to be your adversary, then even the rumors that don’t involve weakening cryptography will worry you. The fact that they collect immense amounts of data, seem to have unheard-of levels of computational power, and employ much of the world’s cryptographic brain power is, frankly, scary. There is a good chance that, given enough reason to do so, they can break much of the cryptography in common use today. Here, the best defense is to keep evolving; keep getting better at it.

    Even if you are not worried about (or are just resigned to) threats from such well-funded adversaries, there is another worry: quantum computing, should it ever grow out of the lab, is capable of performing all of the computation needed to break most cryptography at once. This is not an incremental change; this is the Black Swan of computer security. Luckily, the current crop of adiabatic quantum computers that can actually be bought are both very expensive and not proper general-purpose quantum computers. So, these are not a threat. Yet.

    But – won’t quantum cryptography come to the rescue? Perhaps. It’s more limited in scope than traditional cryptography, so it won’t replace everything, but it might add a few new features. It’s also closer to being real: in the early 2000s, researchers in Switzerland set up a fiber optic link between Geneva and Lausanne and carried out successful quantum key distribution (QKD). There are still many problems with the technology, but it is getting closer to practical reality.

    This shouldn’t distract us from smaller Black Swans that can appear any time in the form of faults in commonly used algorithms or their implementations. Such events will happen – when you least expect them.


    We rely on cryptography for our privacy and integrity in nearly everything we do nowadays. There is no boundary between the Internet and our everyday lives any more, and the only thing keeping us safe is often some cryptographic protocol or algorithm. However, we need to stop thinking of cryptography as a product that you drop in and – presto! – you’re secure. It’s a process that doesn’t stop. We constantly need to be vigilant and be prepared to replace outdated, insecure algorithms and protocols at a moment’s notice. This may entail migrating data from one cipher to another and updating a large number of keys.
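
    One everyday instance of such a migration is upgrading stored password hashes. A sketch of the common verify-then-rehash pattern using Python’s standard library (the record format, algorithm names, and work factors here are illustrative assumptions, not a prescription): whenever a user successfully logs in, a record still using a legacy algorithm is transparently rehashed under the current policy.

```python
import hashlib
import hmac
import os

CURRENT = ("sha256", 600_000)  # target PRF and iteration count (illustrative)

def hash_password(password, algo="sha256", iters=600_000, salt=None):
    """Derive a salted PBKDF2 hash; returns (algo, iters, salt, digest)."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac(algo, password.encode(), salt, iters)
    return (algo, iters, salt, digest)

def verify_and_upgrade(password, stored):
    """Check a password; if it matches a legacy record, rehash it.

    Returns (ok, possibly_updated_record).
    """
    algo, iters, salt, digest = stored
    candidate = hashlib.pbkdf2_hmac(algo, password.encode(), salt, iters)
    if not hmac.compare_digest(candidate, digest):
        return False, stored
    if (algo, iters) != CURRENT:         # legacy record: migrate on the fly
        stored = hash_password(password, *CURRENT)
    return True, stored
```

    The same shape applies to migrating encrypted data between ciphers: decrypt-verify under the old parameters, re-encrypt under the new, and keep the parameters stored with each record so you always know what it was protected with.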

    It’s not easy and mistakes will be made, but perhaps the most positive aspect of Heartbleed is that most of the prominent websites were able to update within a very short time. For many of us, our Web experience was insecure for only a relatively short period of time.

    However, many sites and services in the long tail of the Internet are still vulnerable. This shows that not everyone has good security practices in place. Worse yet, the devices that surround us often incorporate the same sorts of technologies. Who is keeping the Internet of Everything updated? Is there even a process for it? Will hackers be able to turn my home against me by abusing some vulnerability in one of my devices just because the vendor doesn’t care or doesn’t know how to fix it?

    Most cryptography is sound in principle, and at this moment. However, the threats to it evolve, and cryptography itself evolves in response. The users of cryptography must continuously adapt as well.

    Posted in Bad Sites | Comments Off on The State of Cryptography in 2014, Part 2: Hardware, Black Swans, and What To Do Now

    It seems like cryptography has been taking a knock recently. That perception is not entirely accurate: cryptography is always under attack, and for that reason it constantly evolves. The attacks are bad news, but the good news is that they remind us cryptography needs constant attention. Threats to cryptography can be very disruptive, as we most recently saw with Heartbleed, and more distantly with ‘issues’ in various algorithms like RC4, MD5, SHA1 and Dual_EC_DRBG (none of which should be used any more, by the way).

    Should we trust cryptography? No! Yes! Maybe!

    Cryptography is both an art and a science. A lot of the math involved in creating a good cryptographic algorithm or protocol is solid, but also subtle. There are perhaps only a few dozen individuals in the world who are immersed enough in information theory, statistics, and algorithms to produce strong algorithms. Rare as they are, these people may not be the right choice for cryptanalysis – the flip-side of designing a cipher: breaking it. You need both these disciplines – cryptography and cryptanalysis – to produce a secure cryptographic algorithm.

    There is no deterministic way (yet) of creating a good cipher or other cryptographic algorithm, so creativity is also required. Useful algorithms like one-way hashes (MD5, for example) were created from initial informed hunches followed by rigorous analysis. Every time a candidate is produced, it is attacked until an attack succeeds, and the results are cycled back into the design. In the end, an algorithm is produced that is believed to be strong enough, by current standards of creativity and ingenuity.

    Despite that, these algorithms are eventually found to be broken. Consider MD5 and SHA1 back in 2004, when researchers found that they were demonstrably not collision resistant (a necessary prerequisite for using a hash algorithm for certification), after earlier researchers had found flaws that, at the time, were not proven to be exploitable.
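
    To get a feel for why collision resistance matters and how it scales with digest size, consider a birthday search against an MD5 digest truncated to 3 bytes: with only 2^24 possible values, a collision turns up after a few thousand attempts. (The real 2004 attacks on full MD5 were analytic shortcuts, not brute force, but the scaling intuition is the same.) A small self-contained demonstration:

```python
import hashlib

def truncated_md5(data, nbytes=3):
    """First nbytes of an MD5 digest -- a deliberately weak toy hash."""
    return hashlib.md5(data).digest()[:nbytes]

def find_collision(nbytes=3):
    """Birthday search: expected ~sqrt(2^(8*nbytes)) attempts."""
    seen = {}
    i = 0
    while True:
        msg = b"msg-%d" % i
        tag = truncated_md5(msg, nbytes)
        if tag in seen:
            return seen[tag], msg      # two distinct messages, same tag
        seen[tag] = msg
        i += 1

a, b = find_collision()
print("collision found:", a, b)
```

    Doubling the digest length squares the work of this generic attack, which is why truncated or legacy-short hashes are so fragile once any additional weakness is found.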

    The 2004 study led to further attacks against the SSL/TLS protocol that secures our web browsing experience. Yet, even today, MD5 is still being used in SSL/TLS certificates – based on the data from the Crossbear SSL project. Even though it is believed that some of the vulnerabilities of MD5 are mitigated in this use case, attacks only get better, so using MD5 is still not recommended. More worryingly, it is not clear that practical, provably secure cryptographic hash functions can exist that fulfill the necessary criteria for such a function. Some still use MD5 for other purposes, and that is fine – so long as you know what you’re doing.
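
    One practical mitigation is hash agility: never hardcode the digest algorithm, and store its name alongside the digest so that records stay self-describing and deprecated algorithms can be rejected or upgraded later. A small illustrative sketch (the tag format and the deprecation policy are our own assumptions):

```python
import hashlib

DEPRECATED = ("md5", "sha1")   # illustrative policy list

def tagged_digest(data, algo="sha256"):
    """Prefix the hex digest with its algorithm name, e.g. 'sha256:ab12...'."""
    return algo + ":" + hashlib.new(algo, data).hexdigest()

def verify_tagged(data, tag):
    """Recompute and compare; refuse algorithms on the deprecation list."""
    algo, _, expected = tag.partition(":")
    if algo in DEPRECATED:
        raise ValueError("digest algorithm %r is deprecated" % algo)
    return hashlib.new(algo, data).hexdigest() == expected
```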

    That, in fact, is what I hope you’ll take away from this post: cryptography evolves, and users have to keep up with developments in the state of the art, directly or indirectly. Microsoft, for example, has already announced that it will reject SHA1-based certificates from 2016 onwards. If Microsoft is your vendor of choice, this is good news: it means that, at least in this, they have your back. Some vendors are good at security, some less so. What you choose to use may affect your security.

    Fragility of cryptography

    Because the math behind cryptography is rather subtle and specialized, getting to a secure implementation of any given algorithm is hard. If there ever was a case for a “programmer’s driver’s license”, cryptography is it. So many exploitable bugs have been found in implementations of cryptography that my advice has always been to use a trusted open source implementation whenever possible, or to contract the programming out to actual cryptographers who know the math. Victor Shoup, a leading cryptographer, maintains his own C++ library because he felt uncomfortable using a library not built with security in mind.
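
    A classic example of the kind of bug that has nothing to do with the math: comparing secret values with an ordinary `==` can leak, through timing, where the first mismatching byte is. Python’s standard library ships a vetted constant-time comparison that a hand-rolled implementation would likely get wrong (the key and function names here are illustrative):

```python
import hashlib
import hmac

SECRET_KEY = b"demo-key"   # illustrative only -- use a real random key

def sign(message):
    """HMAC-SHA256 authentication tag for a message."""
    return hmac.new(SECRET_KEY, message, hashlib.sha256).digest()

def verify(message, tag):
    # hmac.compare_digest runs in time independent of where the first
    # mismatching byte occurs, unlike `==` on bytes, defeating the
    # byte-by-byte timing attack against naive MAC verification.
    return hmac.compare_digest(sign(message), tag)
```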

    And then Heartbleed happened. OpenSSL is an open source implementation of the SSL/TLS protocol and associated algorithms; it should have been (and was) a beacon of how cryptography ought to be implemented. However, bugs happen, and the real tragedy of Heartbleed was not that the bug (admittedly, a rather stupid one) was introduced, but that it went unnoticed for more than two years. This was not supposed to happen in an open source project, where changes are public and can be immediately peer reviewed.

    As a knee-jerk reaction, the OpenBSD community signaled no confidence in the stewardship of OpenSSL and forked it, creating LibreSSL. Given that their first act was to remove a lot of the legacy platform code (oh no, you can’t run LibreSSL on VMS anymore), it is rapidly becoming an OpenBSD-only library. All of us lose if this approach is taken more often, as you reach the stage where the two code forks are so different that sharing security patches becomes very difficult. Postfix was created as a reaction to the bloat of Sendmail, but ended up nearly as bloated and only a little less buggy. Will this be the fate of LibreSSL?

    It should be pointed out that Heartbleed had nothing to do with the actual cryptography in OpenSSL. It did have serious security consequences, though, and demonstrates that security problems can arise in nearly any piece of software.

    In Part 2: Is hardware-based cryptography any safer? (Short answer: it isn’t.) Black swans, and takeaways.

    Posted in Bad Sites | Comments Off on The State of Cryptography in 2014, Part 1: On Fragility and Heartbleed


    © Copyright 2013 Trend Micro Inc. All rights reserved. Legal Notice