
    Author Archive - David Sancho (Senior Threat Researcher)

    In our earlier post on steganography, I discussed how it is now being used by malware attackers to hide configuration data. In this post, let’s discuss another facet of this topic: how actual malware code is hidden in similar ways.

    Security analysts will probably throw their hands up in the air and say, “We’ve had code hiding within code for years now; that’s not steganography!” That’s not what I’m talking about. I will talk about how steganography is used with seemingly innocuous data files that actually hide binary code. If the difference sounds small to you, I don’t blame you. Hopefully these examples will make things clearer:

    DarkComet dropper carries a bitmap… with data inside

    This particular variant surfaced in November 2014 and is relatively simple. It’s a .NET executable with a BMP image file in the resource section. While the image has perfectly valid headers, it doesn’t really show anything meaningful. To human eyes it looks like random pixels in all manner of fancy colors. Perhaps it was a mistake on the coder’s part? No. Actually, the pixels are encrypted code, not designed to be looked at. When the file is run, DarkComet takes the image, decrypts the code, and runs it.
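The trick can be sketched in a few lines. Everything below is illustrative: the XOR cipher, the key, and the stub headers are my own stand-ins, since the post does not name the variant's actual algorithm; a real BMP would also need a valid info header to render.

```python
def embed_in_bmp(payload: bytes, key: int = 0x5A) -> bytes:
    """Wrap a XOR-'encrypted' payload in a minimal BMP-like container."""
    encrypted = bytes(b ^ key for b in payload)
    # 14-byte file header: magic, total file size, reserved, pixel-data offset
    file_header = (b"BM"
                   + (54 + len(encrypted)).to_bytes(4, "little")
                   + b"\x00" * 4
                   + (54).to_bytes(4, "little"))
    info_header = b"\x00" * 40  # stub standing in for a real BITMAPINFOHEADER
    return file_header + info_header + encrypted

def extract_from_bmp(bmp: bytes, key: int = 0x5A) -> bytes:
    """Reverse operation: skip to the pixel-data offset and decrypt."""
    offset = int.from_bytes(bmp[10:14], "little")  # offset field of the header
    return bytes(b ^ key for b in bmp[offset:])
```

The headers are perfectly valid, so a cursory check sees an image; only the decryption step reveals the "pixels" were never meant to be looked at.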

    Figure 1. Contents of the bitmap image file

    What’s the point of keeping code hidden this way? That is a fair question. As long as there is decryption code included in the malicious package – and it has to be there – antivirus software still has something to detect. How is this helping the attacker evade detection?

    This scenario makes more sense if you think about what’s hidden: an off-the-shelf keylogger. Antivirus software is capable of detecting it even as it is being created, so the cybercriminal is trying to keep something as conspicuous as a pistol with a neon-lit bullseye under the radar. Is this something that can be accomplished with regular packing/encryption? Sure, but whoever was responsible thought this would be more inconspicuous. Alternately, he may have just learned about steganography and wanted to try something new.

    Hiding C&C commands in DNS traffic

    It’s not just code that can be hidden – so can command and control (C&C) communication channels. C&C communication mainly happens through HTTP, and it’s not too difficult to spot. Sometimes attackers will come up with some small innovation that makes researchers like myself respect the technique behind it, if not the motive. A well thought-out steganography technique is usually that kind of thing. Let me show you the following example.

    The Morto Trojan uses a very shrewd way of concealing its C&C traffic. Instead of using HTTP, it uses simple DNS requests looking for non-existent domain names. The queried DNS server (which is the actual C&C server) responds by providing the commands inside the response.

    Figure 2. DNS records

    The text being exchanged is further obfuscated by a simple Base64 encoding. This tries to prevent automated systems from spotting the contents straight away, although it does provide clues, since the payload is longer than it needs to be – and therefore, suspicious. An earlier blog post has already discussed Morto’s details.
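A toy version of the Morto-style channel looks like the snippet below. The command string and the padding handling are my own illustration; the real record format is not reproduced in this post.

```python
import base64

def decode_dns_command(record_text: str) -> str:
    """Base64-decode a command smuggled inside a DNS response record."""
    # DNS-based channels often strip the '=' padding; re-add it before decoding
    padded = record_text + "=" * (-len(record_text) % 4)
    return base64.b64decode(padded).decode("ascii")

# The bot would query a non-existent name and read something like this back:
fake_record = base64.b64encode(b"DOWNLOAD http://example.test/up.exe").decode()
```

As noted above, the extra length of the Base64 payload is itself a clue: a "domain name" or record that is longer than it needs to be is suspicious.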

    JPG images on websites used by TDSS/Alureon for C&C communication

    The now-defunct TDSS botnet also used an unusual method of C&C communication: requesting JPG images that were hosted on popular blogging sites. These images contained C&C commands that controlled the botnet. These files were very difficult to block for two reasons: they were hosted on legitimate, well-known sites, and the files themselves were still valid image files. For all intents and purposes, the Trojan was downloading real images from a blog. Let’s look at a collage of three of those images:

    Figure 3. Images with commands overlaid on top (via Microsoft Technet)

    When decoded and decrypted, the images contained the botnet commands shown overlaid on the image above.

    ShadyRAT hides C&C communication inside HTML code

    This notorious data-stealing spying Trojan also used blogging platforms as a C&C channel, except that the commands are encrypted and encoded into HTML comments, interspersed with what appears to be legitimate content. This makes the traffic look like it comes from a real user visiting a blog with a regular web browser. In fact, the page is not displayed at all on the infected system; the Trojan just decodes the information within the comments and is able to understand the commands the attacker is sending. On a cursory look at the actual blog, a visitor would never spot any of this, since the comments are never displayed in the browser either.
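A minimal sketch of this idea: pull every HTML comment out of a page and keep only the ones that cleanly decode as Base64. The page contents, the encoding, and the command format are all invented for illustration; ShadyRAT's real scheme also involved encryption, which is omitted here.

```python
import base64
import re

def extract_comment_commands(html: str) -> list:
    """Return decoded commands hidden in the page's HTML comments."""
    commands = []
    for comment in re.findall(r"<!--(.*?)-->", html, re.DOTALL):
        try:
            raw = base64.b64decode(comment.strip(), validate=True)
            commands.append(raw.decode("ascii"))
        except (ValueError, UnicodeDecodeError):
            pass  # an ordinary comment, not part of the channel
    return commands

# A stand-in for the "legitimate" blog page the Trojan fetches:
page = ("<html><body><p>My holiday photos!</p><!-- "
        + base64.b64encode(b"UPLOAD c:\\docs").decode()
        + " --></body></html>")
```

To a traffic observer, this is just a browser fetching a blog page; the commands never render, because comments are invisible to visitors.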

    This is a perfect vehicle for these attackers, who are trying to stay undetected for as long as possible. ShadyRAT was the first major targeted attack that was spotted in the wild, and this technique was possibly a contributing factor. The network traffic looks perfectly tame to any traffic observer or security device.

    On top of this, ShadyRAT was also able to decrypt and decode C&C commands hidden within JPG files using the LSB technique as seen in the first entry of this series. A shady one indeed.


    So far, I’ve discussed steganography being used to conceal binary code as well as C&C traffic. This is not so much to stop analysts from understanding the information being hidden; it is more to stop researchers from seeing that the information is there at all.

    In my next post, I’ll take out my crystal ball (or tarot deck) and see what the future may bring in this field. Until then, stay safe.

    Posted in Malware |

    Threats that can evade detection are among the most dangerous kind we’re facing today. We see these characteristics in the most challenging security issues like targeted attacks and zero-day exploits. Being able to stay hidden can determine the success of an attack, making it something that attackers continuously want to achieve. In this series of blog posts, we will take a look at one of the techniques used by cybercriminals to evade detection and analysis.

    The Greek word steganos means hidden, and malware loves to hide stuff sneakily. For the bad guys, this is a marriage made in heaven. This is the first of a series of blog posts on steganography and malware. We will explore what steganography is, and how it applies to malicious software today.

    Of course, you can use steganography in real life. An example is putting secret messages in strange places (the image below shows a fun one), but here we’ll be talking about data files and specifically how these can be used and abused by malicious attackers.

    Figure 1. Then-California Governor Arnold Schwarzenegger’s reply to the California Assembly explaining his veto after a legislator insulted him during a speech. The secret message appears when taking the first letter of each line in the main text.
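The acrostic in the figure is about the simplest steganography there is, and a decoder fits in one line. The sample letter below is invented; it only stands in for the real veto message.

```python
def decode_acrostic(text: str) -> str:
    """Recover a message hidden as the first letter of each line."""
    return "".join(line.lstrip()[0] for line in text.splitlines() if line.strip())

# An invented letter, not the Governor's actual text:
letter = ("Sincere thanks for your note.\n"
          "Usually I agree with the Assembly.\n"
          "Perhaps not this time.")
```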

    Not all of the methods I discuss below are traditionally considered as steganography. However, I view the differences as purely semantic. For purposes of this blog series, we will consider “steganography” to be anything that attackers do to hide data in an unexpected channel.

    All of these hiding techniques have exactly the same goal: fooling security researchers into overlooking an innocuous channel, protocol, or container where data exchange is not expected (or at least not the kind of data the stego-attacker sends). On to the examples.

    ZeusVM: hiding malware configuration inside JPG images

    A particular variant of ZeuS malware downloaded its settings as a pretty landscape. Yes, a real image. The end of the image contained extraneous data that, when properly decrypted, would become a configuration data file. For all intents and purposes, the image file downloaded is a real image so any security device capturing the communication would only see a beautiful sunset.

    Figure 2. Picture with hidden message downloaded by ZeusVM

    This particular variant was discussed in a March 2014 blog post titled Sunsets and Cats Can Be Hazardous to Your Online Bank Account.
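The container trick is easy to sketch: image viewers stop rendering at the JPEG end-of-image marker (FF D9), so anything appended after it goes unnoticed. The payload, the XOR key, and the fake image bytes below are illustrative; ZeusVM's real encryption differs.

```python
JPEG_EOI = b"\xff\xd9"  # JPEG end-of-image marker

def extract_trailing_config(jpg: bytes, key: int = 0x37) -> bytes:
    """Return decrypted bytes appended after the last end-of-image marker."""
    end = jpg.rfind(JPEG_EOI)
    if end == -1:
        raise ValueError("not a JPEG: no EOI marker found")
    tail = jpg[end + 2:]
    return bytes(b ^ key for b in tail)

# A minimal stand-in for the downloaded picture (SOI marker, data, EOI,
# then the XOR-obfuscated configuration):
fake_jpg = (b"\xff\xd8<compressed image data>" + JPEG_EOI
            + bytes(b ^ 0x37 for b in b"url=https://cc.example/cfg"))
```

Any device inspecting the download sees a file that opens as a normal picture; the extraneous bytes after the marker are simply ignored by viewers.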

    VAWTRAK hides configuration file in a remote favicon image file

    This insidious banking Trojan has recently been observed hiding its settings in the icon file of a website. This favicon.ico image is the one displayed by browsers at the left-hand side of the URL. Almost every website has a favicon.ico image, so security software seeing such a request would never think twice about its validity. On top of this, VAWTRAK’s hosting sites are located on the Tor network. This makes them difficult to take down or sinkhole, but that’s a story for another day.

    VAWTRAK’s image hides the message with a technique called LSB (for Least Significant Bits). It consists of altering the colors of the image ever so slightly in order to encode bits of information. For instance, say a given pixel has its color encoded as 0,0,0. This is complete lack of color (i.e., pure black). If the encoded color is changed to 0,0,1 then the pixel would contain one bit of information and become a slightly grayer black (which is undetectable by human eyes).

    Any modified bits can encode the hidden message and anyone with the knowledge that there is a message within the image could retrieve it by performing the reverse operation. Others would simply enjoy the beautiful sunset – or whatever the image happens to show us.
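A toy version of the LSB technique just described: each pixel value donates its least significant bit to the hidden message. Plain lists of grayscale values stand in for a real image here; VAWTRAK's actual encoding details are not reproduced.

```python
def lsb_embed(pixels: list, message: bytes) -> list:
    """Overwrite the low bit of each pixel value with one message bit."""
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    assert len(bits) <= len(pixels), "image too small for message"
    return [(p & ~1) | b for p, b in zip(pixels, bits)] + pixels[len(bits):]

def lsb_extract(pixels: list, length: int) -> bytes:
    """Collect the low bits back into bytes (length = message size in bytes)."""
    bits = [p & 1 for p in pixels[:length * 8]]
    return bytes(sum(bit << (7 - i) for i, bit in enumerate(bits[n:n + 8]))
                 for n in range(0, len(bits), 8))
```

Note that every pixel changes by at most one color step, which is exactly why the result is undetectable by human eyes.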

    You can read a full analysis of the steganographic capabilities of VAWTRAK on the SecurityAffairs site. A more complete description of VAWTRAK is also available in the Threat Encyclopedia.

    FakeReg hides malware settings in the app icon

    Websites are not the only sources of icons with hidden data. With at least one malicious Android app (which we detect as ANDROIDOS_SMSREG.A), the main icon (i.e., the one seen on the phone’s screen) actually contains the encoded info.

    Figure 3. Screenshot of Android icon with the hidden info. The icon has been pixelated due to its pornographic nature.

    The spreitzenbarch forensics blog contains a detailed analysis of this particular threat.

    VBKlip hides data within the HTTP protocol

    The last example I’ll use today is not steganography through image files, but through network protocols. The VBKlip banking Trojan (a threat very specific to Poland) monitors the infected machine looking for Polish bank account numbers that have been entered into the machine.

    Once it finds a legitimate account, it replaces the 26-digit number with a different one in order to redirect payments. This new account belongs to a money mule and is received from the C&C in a very unusual way: the Trojan initiates a nonsensical HTTP connection to the C&C server which looks similar to this:

    GET g4x6a9k2u.txt HTTP/1.1

    This is a request for a dummy text file nobody cares about. The HTTP response back from the C&C has the meat (note that some of the other HTTP headers have been redacted for brevity):

    HTTP/1.1 400 Site Not Installed
    Server: .V06 Apache
    Content-Type: text/html

    The base64 string in the HTTP headers decodes to a bank account number such as 0101-02020202-03030303030303. (I would like to thank Lukasz Siewierski from the Polish NASK for his explanation of this example.)
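A sketch of what the bot's parsing might look like. Since the post redacts the header that actually carries the account number, the `X-Data` header name below is made up, as is the heuristic of accepting any value that decodes to digits and dashes.

```python
import base64

def account_from_headers(raw_headers: str) -> str:
    """Find the first header value that Base64-decodes to a digits/dashes string."""
    for line in raw_headers.splitlines():
        if ":" not in line:
            continue
        value = line.split(":", 1)[1].strip()
        try:
            decoded = base64.b64decode(value, validate=True).decode("ascii")
        except (ValueError, UnicodeDecodeError):
            continue  # an ordinary header, not the smuggled payload
        if decoded.replace("-", "").isdigit():
            return decoded
    raise ValueError("no smuggled account number found")

# A stand-in for the C&C response (header name is hypothetical):
response = ("HTTP/1.1 400 Site Not Installed\r\n"
            "Server: .V06 Apache\r\n"
            "X-Data: "
            + base64.b64encode(b"0101-02020202-03030303030303").decode()
            + "\r\nContent-Type: text/html\r\n")
```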

    The Polish victim sees a different bank account and sends money to a money mule, instead of to the real recipient. VBKlip abuses an unexpected channel to sneak in some configuration data. This is not strictly steganography, but it fits my definition above: it misuses an unexpected information channel to carry data in a place where a defender would never think to look.

    We have (very briefly) covered what steganography is and what malware uses it for. We can get an idea of whether or not it’s something useful for malicious attackers – hint: it is. Our next post will cover another kind of information that malware conceals: binary executable data. Until then, be safe…

    Posted in Malware, Mobile |

    Yahoo recently rolled out a new way for users to access their services without entering a password. Their new system uses a cellphone to authenticate the user. Instead of entering a password, the user receives a verification code via text message on their phone. (The user would have provided their phone number to Yahoo when setting this option up.) Once the user receives this code, they enter it on the Yahoo login page and voilà!, they’re logged in.

    So what’s wrong with this? Is this a wonderful advance in the field of authentication?

    The intentions are good. It is encouraging that online services are putting thought and effort into making their users’ lives easier and more secure. However, a cold analysis of this method finds that it’s not a particularly secure solution.

    Fundamentally, it’s still a single-factor method of authentication. This means that it does not offer any additional security by itself. If the single factor is compromised, you still lose control of your account. In the same way that losing or forgetting your password prevents you from accessing your email, losing or forgetting your phone has the same effect.

    However, that’s something we already knew. The real question is: is this method more secure than ordinary passwords?

    What this method means is that an attacker who has control of the user’s phone has complete access to their Yahoo account as well. However, some people (like me) forget their phones more often than they forget passwords. In that case, someone who has my phone would also be able to access my Yahoo account if I left any indication on the phone of what my Yahoo user name is. (While email can be easily accessed from an app, other Yahoo services may be best used via a browser.)

    More than that, text messages themselves can also be intercepted by mobile malware. Sometimes it’s for blackmail and extortion. Other times, it’s to steal online banking transaction codes. Other times, it’s to hide premium subscriptions. It’s not out of the question that a Yahoo user’s account credentials could be stolen in this manner.

    This authentication process can also be abused for other purposes. For example, if your phone can’t be found but is nearby and is tied to a Yahoo account this way, it can be used as an impromptu phone locator. Similarly, the authentication process can be used to spam or annoy someone who has tied their phone to their Yahoo account.

    Overall, is it more convenient to use this method? Sure. Is it more secure? Not at all. If you’re going to do this at all, make sure you never forget your phone anywhere – or at least leave it behind less often than you forget your password. Don’t forget to use any security features your phone has (such as screen lock, etcetera) to keep it as secure as possible if it is lost.

    If you do want to secure your Yahoo account, there’s a better way to do so. Yahoo has supported two-step verification since 2011. This also uses a code sent to the phone to log the user in, but this is in addition to the user’s existing password.


    Last July, the US Department of Homeland Security warned of a new kind of criminal attack: “Google dorking”. This refers to using special search operators to ask Google for things its crawler has found. Let’s look closely and see what this is.

    Google finds things online using a program that accesses web sites: the Google web crawler, called the Googlebot. When the Googlebot examines the web and finds “secret” data, it adds it to Google’s database just like any other kind of information. If it’s publicly accessible, it must be fine, right?

    Now suppose your company’s HR representative left a spreadsheet with confidential employee data online. Since it’s open for everyone to access, the crawler sees and indexes it. From then on, even though it might have been hard to find before, a simple – or not so simple – Google search will point any attacker to it. Google never stored the actual data (unless it was cached); it just made it easier to find.

    This kind of “attack” has been around for as long as search engines have been around. There are whole books devoted to the subject of “Google dorking”, which is more commonly known as “Google hacking”.  Books have been published about it for years, and even the NSA has a 643-page manual that describes in detail how to use Google’s search operators to find information.

    The warning – as ridiculous as it might seem – has some merit. Yes, finding information that has been carelessly left out in the open is not strictly criminal: at the end of the day, it was out there for Googlebot to find. Google can’t be blamed for finding what has been left public; it’s the job of web admins to know what is and isn’t on their servers wide open for the world to see.

    It’s not just confidential documents that are open to the public, either. As we noted as far back as 2013, industrial control systems can be found via Google searches. Even more worryingly, embedded web servers (such as those used in web cameras) are found online all the time with the Shodan search engine. This latter threat was first documented in 2011, which means that IT administrators have had three years to shut down these servers, but it’s still a problem to this day.

    In short: this problem has been around for a while, but given that it’s still around, an official warning from the DHS is a useful reminder to web admins everywhere: perform “Google dorking” against your own servers frequently, looking for things that shouldn’t be there. If you don’t, somebody else will – and their intentions might not be so pure. Point well taken; thanks, DHS!


    Posted in Targeted Attacks | Comments Off on “Google Dorking” – Waking Up Web Admins Everywhere

    In the second post of this series, we discussed the first two types of attacks involving wearables. We will now proceed to the third type of attack, which can be considered the most damaging of the three.

    High User Risk, Low Feasibility Attacks

    These attacks are considered the most dangerous, but they are also considered the least likely to happen. If an attacker manages to successfully compromise the hardware or network protocol of a wearable device, they would not only have access to the raw data from ‘IN’ devices, but also the ability to display arbitrary content on ‘OUT’ devices.

    These scenarios range from personal data theft to mangling the reality of a camera device. These attacks might affect the wearer adversely and might even stop them from performing their daily routines. These attacks can also have a major impact if these devices are used in a professional setting: a simple Denial-of-Service (DoS) attack could prevent a doctor from operating on a patient or prevent a law enforcement agent from acquiring input data to catch criminals.

    Given that the single most-used protocol among these devices is Bluetooth, a quick explanation would be helpful. Bluetooth is a short-range wireless protocol similar to Wi-Fi in its uses, but with a big difference: whereas Wi-Fi is built around an “access point” philosophy, Bluetooth works as end-to-end communication. You need to pair two devices in order to make them “talk” to each other via Bluetooth. In this pairing process, the devices exchange an encryption key that will serve to establish communication between the two. Another difference from Wi-Fi is that Bluetooth tries to minimize radio interference by hopping from one band to another in a pre-established sequence.

    This type of set-up has two main effects on hacking via Bluetooth. One, an attacker needs to acquire the encryption key in use by listening in on the devices the first time they pair; any later than that, and the communication will be just noise to the intruder. Two, a DoS attack needs to broadcast noise across the wide range of frequencies used by the protocol in order to have an impact. This is not impossible, but such an attack involves a bigger effort than against most other radio protocols.


    Posted in Internet of Things | Comments Off on The Security Implications of Wearables, Part 3


    © Copyright 2013 Trend Micro Inc. All rights reserved. Legal Notice