    Author Archive - David Sancho (Senior Threat Researcher)




    Threats that can evade detection are among the most dangerous kinds we face today. We see this characteristic in the most challenging security issues, such as targeted attacks and zero-day exploits. Being able to stay hidden can determine the success of an attack, making it something attackers continuously strive for. In this series of blog posts, we will take a look at one of the techniques cybercriminals use to evade detection and analysis.

    The Greek word steganos means hidden, and malware loves to hide stuff sneakily. For the bad guys, this is a marriage made in heaven. This is the first of a series of blog posts on steganography and malware. We will explore what steganography is, and how it applies to malicious software today.

    Of course, you can use steganography in real life. An example is putting secret messages in strange places (the image below shows a fun one), but here we’ll be talking about data files and specifically how these can be used and abused by malicious attackers.

    Figure 1. Then-California Governor Arnold Schwarzenegger’s reply to the California Assembly explaining his veto after a legislator insulted him during a speech. The secret message appears when taking the first letter of each line in the main text.

    Not all of the methods I discuss below are traditionally considered steganography. However, I view the differences as purely semantic. For the purposes of this blog series, we will consider “steganography” to be anything attackers do to hide data in an unexpected channel.

    Whatever the specific technique, hiding data in an unexpected channel has the same goal: to fool security researchers into overlooking a seemingly innocuous channel, protocol, or container where data exchange is not expected (or at least not the kind of data the stego-attacker sends). On to the examples.

    ZeusVM: hiding malware configuration inside JPG images

    A particular variant of the ZeuS malware downloaded its settings as a pretty landscape. Yes, a real image. The end of the image contained extraneous data that, when properly decrypted, became a configuration data file. For all intents and purposes, the downloaded file is a real image, so any security device capturing the communication would only see a beautiful sunset.

    Figure 2. Picture with hidden message downloaded by ZeusVM

    This particular variant was discussed in a March 2014 blog post titled Sunsets and Cats Can Be Hazardous to Your Online Bank Account.
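
    As a rough sketch of the technique, the snippet below shows how a defender could check a downloaded JPEG for extra bytes appended after the end of the actual image. This is an illustration only, assuming the payload is simply tacked on after the end-of-image marker; it is not ZeusVM’s own code, and the function name is made up for the example.

    import sys

    # A well-formed JPEG ends with the two-byte end-of-image (EOI) marker 0xFF 0xD9.
    # Bytes that appear after that marker are not part of the picture and are suspicious.
    JPEG_EOI = b"\xff\xd9"

    def appended_data(path):
        with open(path, "rb") as f:
            data = f.read()
        # Assumes a single, plain JPEG (no embedded thumbnail carrying its own EOI marker).
        eoi = data.find(JPEG_EOI)
        if eoi == -1:
            return None            # not a JPEG, or truncated
        return data[eoi + 2:]      # whatever trails the image itself

    if __name__ == "__main__":
        extra = appended_data(sys.argv[1])
        if extra:
            print("%d extra bytes found after the JPEG end-of-image marker" % len(extra))

    In ZeusVM’s case, those trailing bytes were the encrypted configuration; image viewers (and most security devices) simply ignore them.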

    VAWTRAK hides configuration file in a remote favicon image file

    This insidious banking Trojan has recently been observed hiding its settings in a website’s icon file. This favicon.ico image is the one browsers display to the left of the URL in the address bar. Almost every website has a favicon.ico, so security software seeing such a request would never think twice about its validity. On top of this, VAWTRAK’s hosting sites are located on the Tor network, which makes them difficult to take down or sinkhole – but that’s a story for another day.

    VAWTRAK’s image hides the message with a technique called LSB (Least Significant Bit). It consists of altering the colors of the image ever so slightly in order to encode bits of information. For instance, say a given pixel has its color encoded as 0,0,0. This is a complete lack of color (i.e., pure black). If the encoded color is changed to 0,0,1, the pixel now carries one bit of information and becomes a very slightly grayer black – a change undetectable to the human eye.

    The modified bits encode the hidden message, and anyone who knows that a message is present in the image can retrieve it by performing the reverse operation. Everyone else would simply enjoy the beautiful sunset – or whatever the image happens to show us.
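
    To make the idea more concrete, here is a minimal sketch of LSB embedding and extraction over a flat list of color values. It is purely illustrative: a real implementation would read and write pixels through an image library, and VAWTRAK’s own encoding details differ; the function names and bit layout are assumptions made for this example.

    def lsb_embed(pixels, message):
        # Hide the message bytes, one bit at a time, in the lowest bit of each color value.
        bits = []
        for byte in message:
            bits.extend((byte >> i) & 1 for i in range(7, -1, -1))
        if len(bits) > len(pixels):
            raise ValueError("image too small for this message")
        out = list(pixels)
        for i, bit in enumerate(bits):
            out[i] = (out[i] & 0xFE) | bit   # clear the LSB, then set it to the message bit
        return out

    def lsb_extract(pixels, length):
        # Recover `length` bytes hidden with lsb_embed.
        out = bytearray()
        for i in range(length):
            byte = 0
            for bit in pixels[i * 8:(i + 1) * 8]:
                byte = (byte << 1) | (bit & 1)
            out.append(byte)
        return bytes(out)

    # A pure black value (0) that receives a '1' message bit becomes 1 – a change no eye can see.
    cover = [0] * 64                    # 64 color values, all black
    stego = lsb_embed(cover, b"hi")     # 2 bytes = 16 bits hidden
    assert lsb_extract(stego, 2) == b"hi"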

    You can read a full analysis of the steganographic capabilities of VAWTRAK on the SecurityAffairs site. A more complete description of VAWTRAK is also available in the Threat Encyclopedia.

    FakeReg hides malware settings in the app icon

    Websites are not the only sources of icons with hidden data. In at least one malicious Android app (which we detect as ANDROIDOS_SMSREG.A), the main icon – i.e., the one seen on the phone’s screen – actually contains the encoded information.

    Figure 3. Screenshot of Android icon with the hidden info. The icon has been pixelated due to its pornographic nature.

    The Spreitzenbarth forensics blog contains a detailed analysis of this particular threat.

    VBKlip hides data within the HTTP protocol

    The last example I’ll use today is not steganography through image files, but through a network protocol. The VBKlip banking Trojan (a threat very specific to Poland) monitors the infected machine, looking for Polish bank account numbers that have been entered on it.

    Once it finds a legitimate account, it replaces the 26-digit number with a different one in order to redirect payments. This new account belongs to a money mule and is received from the C&C server in a very unusual way. The malware initiates a nonsensical HTTP request to the C&C server that looks similar to this:

    GET g4x6a9k2u.txt HTTP/1.1

    This is a request for a dummy text file nobody cares about. The HTTP response back from the C&C has the meat (note that some of the other HTTP headers have been redacted for brevity):

    HTTP/1.1 400 Site Not Installed
    Server: .V06 Apache
    Content-Type: text/html
    MDEwMTAyMDIwMjAyMDMwMzAzMDMwMzAzMDM=

    The base64 string in the HTTP headers decodes to a bank account number such as 0101-02020202-03030303030303. (I would like to thank Lukasz Siewierski from the Polish NASK for his explanation of this example.)
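
    As a quick illustration, the sketch below pulls the hidden value out of such a response. The heuristic – treat any colon-less header line that base64-decodes cleanly to digits as the payload – is an assumption made for this example, not VBKlip’s actual parsing logic.

    import base64
    import binascii

    response_headers = [
        "Server: .V06 Apache",
        "Content-Type: text/html",
        "MDEwMTAyMDIwMjAyMDMwMzAzMDMwMzAzMDM=",
    ]

    def hidden_account(headers):
        for line in headers:
            if ":" in line:
                continue                   # looks like a normal "Name: value" header
            try:
                decoded = base64.b64decode(line, validate=True)
            except binascii.Error:
                continue
            if decoded.isdigit():
                return decoded.decode()    # the 26-digit Polish bank account number
        return None

    print(hidden_account(response_headers))   # 01010202020203030303030303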

    The Polish victim sees a different bank account number and sends money to a money mule instead of the real recipient. This is not strictly steganography, but it fits my definition above: VBKlip misuses an unexpected information channel to sneak in configuration data where a defender would never think to look.

    We have (very briefly) covered what steganography is and what malware uses it for. We can already get an idea of whether it’s useful to malicious attackers – hint: it is. Our next post will cover another kind of information that malware conceals: binary executable data. Until then, be safe…

     
    Posted in Malware, Mobile



    Yahoo recently rolled out a new way for users to access their services without entering a password. Their new system uses a cellphone to authenticate the user. Instead of entering a password, the user receives a verification code via text message on their phone. (The user would have provided their phone number to Yahoo when setting this option up.) Once the user receives this code, they enter it on the Yahoo login page and voilà!, they’re logged in.

    So what’s wrong with this? Is this a wonderful advance in the field of authentication?

    The intentions are good. It is encouraging that online services are putting thought and effort into making their users’ lives easier and more secure. However, a cold analysis of this method finds that it’s not a particularly secure solution.

    Fundamentally, it’s still a single-factor method of authentication, which means it does not offer any additional security by itself. If the single factor is compromised, you still lose control of your account. Just as losing or forgetting your password locks you out of your email, losing or forgetting your phone does the same.

    However, that’s something we already knew. The real question is: is this method more secure than ordinary passwords?

    What this method means is that an attacker who controls the user’s phone has complete access to their Yahoo account as well. However, some people (like me) misplace phones more often than they forget passwords. In that case, someone who has my phone could also access my Yahoo account, provided the phone contains some indication of what my Yahoo user name is. (While email can be easily accessed from an app, other Yahoo services may be best used via a browser.)

    More than that, text messages themselves can be intercepted by mobile malware. Sometimes this is done for blackmail and extortion; other times, to steal online banking transaction codes or to hide premium-service subscriptions. It’s not out of the question that a Yahoo user’s account credentials could be stolen in this manner.

    This authentication process can also be used (and abused) for other purposes. For example, if your phone is misplaced but nearby and is tied to a Yahoo account this way, requesting a login code sends it a text message, turning it into an impromptu phone locator. By the same mechanism, the authentication process can be used to spam or annoy someone who has tied their phone to their Yahoo account.

    Overall, is it more convenient to use this method? Sure. Is it more secure? Not at all. If you’re going to use it, make sure you never leave your phone anywhere – or at least leave it behind less often than you forget your password. And do use whatever security features your phone offers (screen lock, etc.) to keep it as secure as possible in case it is lost.

    If you do want to secure your Yahoo account, there’s a better way to do so. Yahoo has supported two-step verification since 2011. This also uses a code sent to the phone to log the user in, but this is in addition to the user’s existing password.

     



    Last July, the US Department of Homeland Security warned of a new kind of criminal attack: “Google dorking“. This refers to using Google’s special search operators to find sensitive information that its crawler has already indexed. Let’s look closely and see what this is.

    Google finds things online using a program that accesses web sites: the Google web crawler, called the Googlebot. When the Googlebot examines the web and finds “secret” data, it adds it to Google’s database just like any other kind of information. If it’s publicly accessible, it must be fine, right?

    Now suppose your company’s HR representative left a spreadsheet with confidential employee data online. Since it’s open for everyone to access, the crawler sees and indexes it. From then on, even though it might have been hard to find before, a simple – or not-so-simple – Google search will point any attacker to it. Google never stored the actual data (unless it was cached); it just made it easier to find.

    This kind of “attack” has been around for as long as search engines have. Whole books have been devoted for years to the subject of “Google dorking”, which is more commonly known as “Google hacking”, and even the NSA has a 643-page manual that describes in detail how to use Google’s search operators to find information.

    The warning – as ridiculous as it might seem – has some merit. Yes, finding information that has been carelessly left out in the open is not strictly criminal: at the end of the day, it was out there for the Googlebot to find. Google can’t be blamed for finding what has been left public; it’s the job of web admins to know what is and isn’t sitting on their servers, wide open for the world to see.

    It’s not just confidential documents that are open to the public, either. As we noted as far back as 2013, industrial control systems can be found via Google searches. Even more worryingly, embedded web servers (such as those used in web cameras) are found online all the time with the Shodan search engine. This latter threat was first documented in 2011, which means that IT administrators have had three years to shut down these servers, but it’s still a problem to this day.

    In short: this problem has been around for a while, but given that it’s still around, an official warning from the DHS is a useful reminder to web admins everywhere: perform “Google dorking” against your own servers frequently, looking for things that shouldn’t be there. If you don’t, somebody else will, and their intentions might not be so pure. Point well taken, thanks DHS!
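
    For instance, an administrator could periodically run queries along these lines against their own domain (the domain, file types, and keywords are placeholders, purely for illustration):

    site:example.com filetype:xls OR filetype:csv "confidential"
    site:example.com intitle:"index of" backup
    site:example.com inurl:admin intitle:login

    Anything these queries turn up is already visible to anyone else willing to type the same thing into Google.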

     

     
    Posted in Targeted Attacks | Comments Off on “Google Dorking” – Waking Up Web Admins Everywhere



    In the second post of this series, we discussed the first two types of attacks involving wearables. We will now proceed to the third type of attack, which can be considered the most damaging of the three.

    High User Risk, Low Feasibility Attacks

    These attacks are considered the most dangerous, but also the least likely to happen. If an attacker manages to compromise the hardware or network protocol of a wearable device, they would not only have access to the raw data on ‘IN’ devices but also the ability to display arbitrary content on ‘OUT’ devices.

    These scenarios range from personal data theft to manipulating what a camera device shows its wearer. Such attacks might affect the wearer adversely and might even stop them from performing their daily routines. They can also have a major impact when these devices are used in a professional setting: a simple denial-of-service (DoS) attack could prevent a doctor from operating on a patient or a law enforcement agent from acquiring the input data needed to catch criminals.

    Given that the single most-used protocol on these devices is Bluetooth, a quick explanation is helpful. Bluetooth is a short-range wireless protocol similar to Wi-Fi in its uses, but with a big difference: whereas Wi-Fi is built around an “access point” philosophy, Bluetooth works as end-to-end communication. Two devices need to be paired before they can “talk” to each other via Bluetooth; during this pairing process, they exchange an encryption key that is later used to secure the communication between them. Another difference from Wi-Fi is that Bluetooth minimizes radio interference by hopping from one frequency band to another in a pre-established sequence.

    This type of setup has two main effects on hacking via Bluetooth. One, an attacker needs to acquire the encryption key in use by eavesdropping on the two devices the first time they pair; any later than that, and the communication is just noise to the intruder. Two, a DoS attack needs to broadcast noise across the whole range of frequencies the protocol hops over in order to have an impact. This is not impossible, but it takes more effort than attacking most other radio protocols.


     
    Posted in Internet of Things | Comments Off on The Security Implications of Wearables, Part 3



    In the previous post, we talked about the definition and categories of wearables. We will now focus our attention on possible attacks against such devices.

    The possibility of attack varies widely, depending on the broad category we are focusing on, and the probability of attack rises or falls depending on where the attack can take place. Conversely, the possibility of physical damage becomes much more remote as you go farther from the physical device. As the attack moves farther away from the device, the focus shifts toward stealing the data.

    Low User Risk, High Feasibility Attacks

    These attacks are the easiest to pull off, but they have the most limited impact on the user. In this scenario, the attacker compromises the cloud provider and is able to access the data stored there.


    Figure 1. Hackers accessing the cloud provider to get the data


     
    Posted in Internet of Things | Comments Off on The Security Implications of Wearables, Part 2


     
