
    Author Archive - Ben April (Threat Researcher)




    The past few weeks have not been good for Bitcoin. Mt. Gox shut down withdrawals due to concerns over transaction malleability. The same flaw was reportedly used to loot more than 4,000 BTC (worth more than 2.7 million US dollars) from the Silk Road 2.0 Deep Web marketplace. These stories, together with others that have shaken the confidence of the Bitcoin community, have pushed the value of Bitcoin down to just over 600 US dollars, a significant plunge from its peak of more than 1,200 US dollars.

    So, what is transaction malleability, and why has it caused such a big problem for Bitcoin?

    Imagine I’m going to send you a check numbered 700 for $25. If you know that I track my finances only by check number, you can carry out the equivalent of a transaction malleability attack against me.

    When you receive the check from me, you change the number by, say, adding a “1” to the end, making the number 7001. (For the purpose of this discussion, let’s suppose you can do this without the bank noticing.) By itself, changing the check’s number has no impact on the value of this transaction. You then wait a while and tell me that you never got the check.

    If I now look at my bank account, I see check 7001 but not 700. If I send a new check, you win: you have cashed two checks from me. Even if I don’t send out a new check, you still have the original check, which can still be cashed.

    This is the same idea behind the transaction malleability vulnerability as exploited in Bitcoin.

    Let’s say you have two parties, Alice and Evan. Alice intends to pay Evan some Bitcoin; Evan wants to scam Alice out of it. When Alice submits the transaction to the Bitcoin network, it is identified by a hash value. One of the elements covered by that hash is the ScriptSig field.

    This signature field proves that this transaction is authorized to spend the Bitcoin from the source transaction. The signatures are encoded in a format known as DER, a fairly simple type-length-data construct. One possible type is “0”, or null: no data. If an attacker adds one or more “0” bytes to the DER encoding, it remains a valid signature for that transaction, but the transaction will now be known by a different hash value.
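As a rough illustration, consider a toy double-SHA256 transaction id (this is not Bitcoin's real serialization, and the signature bytes below are made up): padding the signature's encoding changes the id without changing what the signature authorizes.

```python
import hashlib

def tx_hash(scriptsig: bytes, payload: bytes) -> str:
    # Toy stand-in for a transaction id: a double SHA-256 over bytes that
    # include the signature, as Bitcoin's txid does.
    return hashlib.sha256(hashlib.sha256(scriptsig + payload).digest()).hexdigest()

payload = b"pay 1 BTC from Alice to Evan"    # stand-in transaction body
sig = b"\x30\x06\x02\x01\x0a\x02\x01\x0b"    # toy DER: SEQUENCE of two INTEGERs

original = tx_hash(sig, payload)
padded = tx_hash(sig + b"\x00", payload)     # pad the encoding with a null byte

# Same logical signature to a lenient parser, different transaction id.
print(original != padded)  # True
```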

    Evan would watch the Bitcoin network for his transaction (from Alice to Evan). When he finds it, he takes a copy and edits it so that the hash value changes. He then submits many copies of the new “transaction” to the network. If he is lucky, his version may beat the legitimate transaction into a block on the blockchain. If Evan is successful, Alice’s original transaction will be rejected because its inputs will have already been spent.

    If Alice is only using transaction hashes to keep track of payments, she will see that the transaction failed, and perhaps an automated system will re-issue it. You can see where this is going. If Evan fails, he still gets the original Bitcoin; if he succeeds, he gets twice the money.

    As scary as this sounds, it is no match for a well-designed payment tracking system. This is a flaw in how exchanges and merchants keep track of payments rather than anything in Bitcoin itself. The hash value should not be used as the sole tracking identifier by Bitcoin users, as it is prone to modification by outside parties. That said, changes are already being implemented to ensure that a signature can only have one valid encoding.

    Always wait for confirmation on outgoing and incoming transactions before considering them paid in full to avoid “double payments” like those that occurred here. Confirm transactions by source address as well as transaction hash.
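A minimal sketch of that advice in Python (the addresses and amounts here are made up): key expected payments on source, destination, and amount rather than on the mutable hash, and never auto-reissue on a duplicate claim.

```python
from dataclasses import dataclass

@dataclass
class Payment:
    source: str
    dest: str
    amount: int          # in satoshis
    confirmed: bool = False

class Ledger:
    """Track expected payments by (source, dest, amount), not by tx hash,
    so a mutated hash cannot make a real payment look like it never arrived."""
    def __init__(self):
        self.expected = []

    def expect(self, source, dest, amount):
        self.expected.append(Payment(source, dest, amount))

    def confirm(self, source, dest, amount):
        for p in self.expected:
            if (p.source, p.dest, p.amount) == (source, dest, amount) and not p.confirmed:
                p.confirmed = True
                return True
        return False     # unknown or already-settled payment: never auto re-issue

ledger = Ledger()
ledger.expect("alice-addr", "evan-addr", 100_000)
first = ledger.confirm("alice-addr", "evan-addr", 100_000)   # True: settles once
second = ledger.confirm("alice-addr", "evan-addr", 100_000)  # False: replay ignored
```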

     



    At the risk of sounding repetitious, there is yet another basic Internet protocol seeing increased use in distributed denial-of-service (DDoS) attacks. This time it is NTP, the Network Time Protocol. It’s not nearly as well known as DNS or HTTP, but it is just as important: NTP synchronizes the time across networked devices. Without it, we’re back to the days when setting the time on your computer had to be done manually. A solution to these attacks has been known for more than a decade, but it has unfortunately not seen widespread adoption.

    The main function of NTP is to distribute time from high-precision sources, such as GPS or cesium clocks, to compatible devices. I’m sorry to have to tell you this, but the clock inside your computer sucks. Quartz-crystal clocks usually have an error rate of about 1 ppm (part per million), the loss or gain of one microsecond every second. That translates to a few seconds of clock drift per month, which doesn’t sound all that bad.

    Unfortunately, we don’t know which way the clock is drifting at any given time, and slight temperature changes can really throw quartz crystals for a loop. Moreover, in a single second of drift, a 1 GHz processor (slow by modern standards) undergoes a billion clock cycles. With clustered and distributed systems, having a good idea of what time it is becomes critical.
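A quick back-of-the-envelope check of the 1 ppm figure (which, strictly, accumulates to roughly 2.6 seconds per month):

```python
# Back-of-the-envelope drift for a 1 ppm quartz oscillator.
PPM = 1e-6
SECONDS_PER_DAY = 86_400
DAYS_PER_MONTH = 30

drift_per_day = PPM * SECONDS_PER_DAY               # ~86 ms of error per day
drift_per_month = drift_per_day * DAYS_PER_MONTH
print(f"{drift_per_month:.2f} s/month")             # ~2.59 seconds

# From a 1 GHz CPU's point of view, a day's worth of that error spans
# tens of millions of clock cycles.
cycles_adrift = 1_000_000_000 * drift_per_day
print(f"{cycles_adrift:,.0f} cycles/day of uncertainty")
```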

    NTP peers exchange UDP packets to compare notes about what time they think it is. A well-configured client will look to three or more peers with better time accuracy than its own. When a peer further from a reference clock decides its time is no longer accurate, it makes a tiny correction to its clock rate. This allows the system time to change slowly, so that running software isn’t disrupted. It’s a straightforward solution to an important problem.
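The exchange is small enough to sketch at the packet level. The code below builds a client request for SNTP (the simplified profile of NTP) and decodes a transmit timestamp from a hand-made response rather than a live server:

```python
import struct

NTP_UNIX_DELTA = 2_208_988_800  # seconds between 1900-01-01 (NTP epoch) and 1970-01-01

def build_sntp_request() -> bytes:
    # First byte packs LI=0, VN=4, Mode=3 (client); the rest is zeroed.
    return bytes([0b00_100_011]) + bytes(47)

def parse_transmit_time(packet: bytes) -> float:
    # The server's transmit timestamp occupies bytes 40-47:
    # 32 bits of seconds plus 32 bits of binary fraction.
    seconds, fraction = struct.unpack("!II", packet[40:48])
    return seconds - NTP_UNIX_DELTA + fraction / 2**32

req = build_sntp_request()
print(len(req))  # 48

# Parse a hand-made "response" whose transmit time is the Unix epoch exactly.
fake = bytes(40) + struct.pack("!II", NTP_UNIX_DELTA, 0)
print(parse_transmit_time(fake))  # 0.0
```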

    Unfortunately, miscreants are using this critical service to launch DDoS attacks. NTP servers are generally public facing, and will often accept connections from anyone. A monlist command, sent via UDP, asks an NTP server to reply with a list of the peers it has recently had contact with.

    This is useful for troubleshooting, but it’s a perfect tool for attackers. Send a small packet with the source address forged to your target and the server will happily send your target a nice blob of data. The busier the server, the more the attack is amplified.
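The amplification factor depends on the server's state; the sizes below are illustrative assumptions, not measurements, but they show why the attack is attractive:

```python
# Illustrative monlist amplification math; real sizes vary with server state
# and transport overhead, so treat these numbers as assumptions.
request_bytes = 234                  # one monlist query on the wire
response_packets = 100               # a busy server can send many response packets
bytes_per_packet = 482               # assumed payload size per response packet

response_bytes = response_packets * bytes_per_packet
amplification = response_bytes / request_bytes
print(f"~{amplification:.0f}x amplification")
```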

    IT administrators can do some things to avoid becoming an inadvertent accomplice for these attacks. (One thing that will not solve this problem is IPv6, as NTP also functions over it.) First, disable unused services. You would be shocked to know how many systems out there still offer chargen. If a computer is not acting as an NTP server, it does not need to be running the NTP server software. This goes for other unused services and protocols as well.

    Second, consider your configuration of the services you do run. It turns out that, at least in the versions of NTP I have here, the monlist command is enabled by default. Unfortunately, finding out the risks of every option takes time and research on the part of users. A better solution is for applications to provide sensible and secure defaults.

    Third, and most importantly, IP spoofing should be detected and blocked at the network edge. Implementation of BCP-38 often sounds impossible, but it is really not that bad. I know of global backbones that had it in use ten years ago. The key is to focus only on the network edge (this will not scale if done in the core).

    The simplified version is to configure your edge routing devices to only allow incoming packets from an interface IF a reply to that packet could reasonably be routed to that interface. Not only does this prevent NTP, chargen and DNS spoofing attacks from using your network assets to attack others, it prevents all IP spoofing that would cross your network.
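A toy model of that rule (the routing table here is hypothetical): accept a packet on an edge interface only if the route back to its source address uses that same interface, in the spirit of strict reverse-path forwarding.

```python
import ipaddress

ROUTES = {  # prefix -> interface (hypothetical edge routing table)
    "203.0.113.0/24": "eth0",   # customer LAN
    "0.0.0.0/0":      "eth1",   # upstream
}

def route_for(src: str) -> str:
    # Longest-prefix match, as a real forwarding table would do.
    best = max(
        (ipaddress.ip_network(p) for p in ROUTES
         if ipaddress.ip_address(src) in ipaddress.ip_network(p)),
        key=lambda n: n.prefixlen,
    )
    return ROUTES[str(best)]

def accept(src: str, arrived_on: str) -> bool:
    # Drop the packet unless a reply to it would leave via the same interface.
    return route_for(src) == arrived_on

print(accept("203.0.113.7", "eth0"))   # True: source routes back out eth0
print(accept("198.51.100.9", "eth0"))  # False: spoofed source, drop it
```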

    Protocol whack-a-mole is getting old. BCP-38 is to Internet security what hand-washing is to medicine. As more networks become compliant, your security teams can divert their resources from DDoS attacks and start to focus on problems that we don’t yet have good solutions for.

    BCP-38 was published in May 2000; we have known how to remove most current DDoS activity for well over a decade! While it is free ‘as in beer’, it does require technical resources to implement. Consider the savings, however, when you can be confident that your network is not participating in a spoofed DDoS attack.

    Unfortunately, implementation of BCP-38 only prevents your assets from being used in an attack. It does not prevent you from being attacked. Spread the word and encourage others to practice good Internet hygiene as well, and perhaps these spoofing attacks can be minimized in the long run.

    Update as of 01:00 PM PST, February 27, 2014:

    We have released new Deep Security rules that provide protection against this vulnerability, namely:

    • 1005907 – NTP Server Unrestricted Query Reflected Denial Of Service Vulnerability
    • 1005910 – Identified ntpd ‘monlist’ Query Reflected Denial Of Service Attack
     
    Posted in Botnets


    Dec 19, 9:55 am (UTC-7) | by Ben April

    This article, recently published in the Journal of Communications, adds another log to the BadBIOS fire. It has been stated that devices in the BadBIOS case are communicating across an air-gap with commodity PC audio hardware. This paper clearly spells out one workable way to communicate in this way. Even if this doesn’t end up being related to BadBIOS, it has solid potential as a data exfiltration methodology.

    First, I want to clearly state that this is only a communication channel and NOT an infection vector as some articles may lead you to think. At this moment, there is no published technique for infecting a system across an air-gap with audio alone (though you should probably treat Siri with a little more respect from now on).

    This method works by adapting an old system for underwater communication to frequencies approaching the upper limit of human hearing. Human hearing is generally defined as covering the frequency range from 20 Hz to 20 kHz; for comparison, dogs can hear up into the 60 kHz range. Our ability to hear the higher frequencies deteriorates as we age. In this case, that natural degradation creates a gap between the design specifications of common computer hardware (i.e., 20 Hz to 20 kHz) and what most users can actually hear (let’s say up to around 17 kHz for this discussion).

    After some initial testing, we found that in our 20-person sample, about 2 could hear tones at 17.5 kHz. So far, our testing at 18.5 kHz has gone undetected. Any application with access to the sound hardware on a subject system can communicate this way (though Mac and Linux systems may require different approaches in software). All hardware used was off-the-shelf and unmodified. It is also worth noting that the 18.5 kHz tones were not transmitted over the telephone or the videoconferencing links we tested.
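For reference, generating a one-second 18.5 kHz test tone takes only stock Python (the half amplitude and file name are arbitrary choices):

```python
import math
import struct
import wave

SAMPLE_RATE = 44_100   # CD-quality: Nyquist limit is 22.05 kHz, so 18.5 kHz fits
FREQ = 18_500          # just above what most adults can hear
SECONDS = 1

# Half-amplitude 16-bit samples of a pure sine tone.
samples = [
    int(32767 * 0.5 * math.sin(2 * math.pi * FREQ * n / SAMPLE_RATE))
    for n in range(SAMPLE_RATE * SECONDS)
]

with wave.open("tone.wav", "wb") as w:
    w.setnchannels(1)            # mono
    w.setsampwidth(2)            # 16-bit PCM
    w.setframerate(SAMPLE_RATE)
    w.writeframes(struct.pack(f"<{len(samples)}h", *samples))

print(len(samples))  # 44100
```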

    Hack-a-day has a good demo of this approach working with GNU Radio, though it would be hard to miss GNU Radio running on a system where it doesn’t belong. For our next test, we wanted to look at range, reliability, and the possibility of using commodity software. We were able to use a simple Perl script to convert a text message into Morse code audio at a frequency of 18.5 kHz. In a non-prepared environment (e.g., fan noise, machine noise, music), we were easily able to decode the tones using simple spectrum-analyzer software at over 6 meters (20 feet).
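The script itself isn't reproduced here, but the idea can be sketched in a few lines of Python: map text to Morse, then map Morse symbols to a schedule of tone bursts (the 0.1-second unit is an arbitrary choice; each "on" segment would be an 18.5 kHz sine burst).

```python
MORSE = {
    "A": ".-",   "B": "-...", "C": "-.-.", "D": "-..",  "E": ".",
    "F": "..-.", "G": "--.",  "H": "....", "I": "..",   "J": ".---",
    "K": "-.-",  "L": ".-..", "M": "--",   "N": "-.",   "O": "---",
    "P": ".--.", "Q": "--.-", "R": ".-.",  "S": "...",  "T": "-",
    "U": "..-",  "V": "...-", "W": ".--",  "X": "-..-", "Y": "-.--",
    "Z": "--..", " ": "/",
}

def to_morse(text: str) -> str:
    return " ".join(MORSE[c] for c in text.upper() if c in MORSE)

def to_tone_schedule(morse: str, unit=0.1):
    """Return (duration_seconds, tone_on) pairs for a Morse string."""
    schedule = []
    for symbol in morse:
        if symbol == ".":
            schedule.append((unit, True))       # dit: one unit of tone
        elif symbol == "-":
            schedule.append((3 * unit, True))   # dah: three units of tone
        schedule.append((unit, False))          # gap after every symbol
    return schedule

print(to_morse("This is a test"))
```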


    Figure 1. The message is “This is a test”, with music playing at the same time from the same system

    I don’t think it’s time for a rack-mountable “cone of silence” just yet. However, I do believe that it’s time we consider ultrasonic (or at least, high frequency) data transmission with stock computer hardware as both possible and practical. At the moment, the best recommendation would be to physically remove audio hardware from any systems that are currently defended by an air-gap and have no need for audio capability.

     
    Posted in Malware



    Throughout all of 2013, there have been numerous revelations about how the NSA conducts mass surveillance on the Internet. These have sent the Internet Engineering community reeling. Protocols that have been in use for decades and based heavily on intrinsic trust have had that trust violated.

    This has caused the Internet standards community to take a look at the need for encryption. Specifically, it’s been discussed whether HTTP/2.0 – the latest version of the protocol that powers much of the Internet – should be encrypted by default. Overall, this is a positive trend, but there are some challenges that should be considered.

    First, encryption without pre-existing trust adds little value. Casual eavesdropping can be prevented, but it is ineffective against a sophisticated operator. Consider, for example, self-signed certificates (often found in small or local web applications). An attacker could easily impersonate the server with a key and certificate that they create and proxy your traffic unencrypted to the real web server, giving them access to read, modify, and inject traffic within your session.

    Second, certificate authorities are not always reliable or secure either. Various CAs, including Comodo, DigiNotar, GlobalSign, and StartCom, have all suffered some kind of security incident. One can argue (for a very long time) whether untrusted CAs or no encryption at all is “better”. DANE (specified in RFC 6698) allows service operators to publish keys and certificates within DNSSEC, which means that certificates can be verified without a CA being involved. Challenges like typo-squatting domains and compromised DNS infrastructure remain, but it is technically possible to establish publicly trusted encryption without the involvement of a CA. Whether it will be put into wide use is unclear.
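To make the DANE idea concrete: the RDATA of a TLSA record (RFC 6698) is just three parameters plus association data, typically a digest of the certificate. A sketch, with a stand-in byte string where a real deployment would hash the server's DER-encoded certificate:

```python
import hashlib

def tlsa_rdata(cert_der: bytes, usage=3, selector=0, matching=1) -> str:
    """RDATA for a TLSA record: usage 3 = DANE-EE (end-entity cert),
    selector 0 = full certificate, matching type 1 = SHA-256 digest."""
    digest = hashlib.sha256(cert_der).hexdigest()
    return f"{usage} {selector} {matching} {digest}"

fake_cert = b"-----toy certificate bytes-----"   # stand-in, not a real cert
print(tlsa_rdata(fake_cert))
```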

    Third, what percentage of the traffic needs to be encrypted? In the past, encryption was used sparingly due to the cost and the increased resources necessary. Banking-related pages and transactions were the most frequent cases where this was done. However, improvements by CDNs and gains in processing power have reduced the relative costs to the point where it is feasible to encrypt all traffic. Many sites are doing just that today.

    Finally, we have to look at the encryption primitives themselves. The security of some of these critical building blocks has been called into question. Some are worried that these algorithms have been weakened in such a way that government agencies can decrypt otherwise secure traffic. For example, it has been alleged that the Dual_EC_DRBG random number generator (RNG) was compromised by the NSA through the choice of insecure constants. While by no means a master key, any insight into the next number likely to come out of an RNG would give a cryptanalyst an enormous head start.

    Of course, some would say that wide-scale HTTP encryption is not necessary: “If I haven’t done anything wrong, I have nothing to hide.” Arguments like this fall rather flat in the face of bulk data collection by governments. Could simply making the same set of search-engine queries as a terrorist put you on a watch list?

    This may seem like hyperbole, but in the world of big data, small similarities often trigger associations that may or may not exist in reality. Did you sell your couch to the brother-in-law of a terrorist three years ago on Craigslist? Connections we would consider insignificant in person can take on new meaning as part of data correlation.

    We have come a long way from the early days of the World Wide Web, where everything was in plain text and images were a novelty. Now that so much of our lives exist online, it is increasingly important to have trustworthy infrastructure behind the services we use. Changes in the threat landscape mean that our infrastructure has to change too. HTTP/2.0 won’t solve all of the problems facing the Internet. However, it is a step in the right direction.

     
    Posted in Bad Sites



    The recent attacks on the New York Times, Twitter, and others, while DNS-related, were not the result of a weakness in the DNS at all. They resulted from weaknesses in domain registrar infrastructure. The DNS components related to this event performed exactly as they were designed and instructed to do.

    While it is true that the malicious instructions were unauthorized, they followed the proper channels. Evidence points to the Syrian Electronic Army, but the investigation is still ongoing.

    The breakdown came when the credentials of a reseller for Melbourne IT (a domain registrar) were compromised. (Please tell me the passwords weren’t stored as unsalted MD5 hashes.) From there, the attacker changed the name-server configuration for the victim domains to point to their own malicious infrastructure. The DNS took it from there: as caches expired, the new “bogus” data began replacing the good data in DNS replies.

    So what could the victims have done differently to prevent or reduce the impact of this attack? Let’s look at the basics:

    • DNSSEC. DNSSEC could have been helpful IF the stolen credentials were not sufficient to allow the attacker to change the DNSSEC configuration of the subject domain. Otherwise the attacker could have replaced or removed the DNSSEC configuration and carried on with the attack.
    • Domain locking. Again, this would have prevented the change IF the stolen credentials were not capable of disabling the lock feature. The fact that twitter.com was untouched while some of Twitter’s utility domains were affected indicates that this might be the case. It should also be noted that, depending on the registrar, the domain-locking service may come at an additional fee.
    • Domain monitoring. Always a good practice for high-profile domain names. Configuration elements like name servers tend to change slowly, so any change can be used to trigger an alert. There are a number of commercial monitoring services to choose from, or you could consider using a small shell script. (I have no information one way or the other as to whether any of the companies involved had monitoring like this in place.) It could not have prevented this scenario, but it would have shortened the time to repair.
    • Carefully selecting vendors based on their security posture. This one is HARD to implement right now. Vendors with a weak posture will not advertise that fact. Vendors with a strong posture will likely practice good operational security and be reluctant to share the details of their infrastructure, lest attackers get a roadmap of these defenses.
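The monitoring idea above is the easiest to sketch: compare the currently published NS set against a known-good baseline and alert on any change. The names below are hypothetical, and the lookup function is injected so the check works offline; in production it might wrap a real DNS query such as `dig +short NS <domain>`.

```python
# Hypothetical known-good name servers for the monitored domain.
EXPECTED_NS = {"ns1.example-dns.com", "ns2.example-dns.com"}

def ns_changed(domain: str, lookup) -> bool:
    # Order does not matter; any difference in the set should raise an alert.
    return set(lookup(domain)) != EXPECTED_NS

good_lookup = lambda d: ["ns2.example-dns.com", "ns1.example-dns.com"]
evil_lookup = lambda d: ["ns1.attacker.example"]

print(ns_changed("example.com", good_lookup))  # False: all quiet
print(ns_changed("example.com", evil_lookup))  # True: raise an alert
```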

    One thing we can all learn from these attacks is that dedicated attackers have no problems searching our contacts, connections, vendors and clients for the weakest link in order to gain entry.

     
    Posted in Social


     

    © Copyright 2013 Trend Micro Inc. All rights reserved. Legal Notice