
    Author Archive - David Sancho (Senior Threat Researcher)




    During one of our brainstorming sessions looking for interesting research projects, our group thought about how most mobile applications are, in essence, “browsers-in-a-box”.

    Let me explain. When you open your favorite app, chances are it tries to access certain web pages and display the results in a certain way. Not all mobile apps do this, just most of them. I'm not only talking about the Amazon or eBay apps (which obviously do behave like browsers that limit their queries to their specific servers) but also apps like Flipboard. I love Flipboard, but at its very core, it's just accessing Facebook and Twitter and displaying them in a pretty way (okay, very pretty).

    Could this make apps more vulnerable to something that regular browsers are already built to expect, namely users behaving unexpectedly? For instance, if an app performs canned requests to a single specific site, can someone take advantage of this behavior to subvert the app or the site?

    I set out to work on this and created an environment to sniff all traffic going to and from mobile applications, in both Android and iPhone/iPad environments. I tested lots of apps, checking their traffic and looking for anything that might be exploitable, and I did find things.
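    One way to build this kind of interception environment, as a rough sketch, is with an HTTP proxy such as mitmproxy (not named in the original write-up): point the test device's WiFi proxy settings at the machine running it and log every request and response an app makes. The hook names below are mitmproxy's addon API; everything else is illustrative.

    # sniff_addon.py - rough sketch of an app-traffic logger for mitmproxy.
    # Run with: mitmdump -s sniff_addon.py
    # Then set the phone's WiFi proxy to this machine's IP, port 8080.
    from mitmproxy import http

    def request(flow: http.HTTPFlow) -> None:
        # Log the outgoing request line and headers the app sends.
        print(f"[>] {flow.request.method} {flow.request.pretty_url}")
        for name, value in flow.request.headers.items():
            print(f"    {name}: {value}")

    def response(flow: http.HTTPFlow) -> None:
        # Log the status code and a slice of the body the server returns.
        body = flow.response.get_text(strict=False) or ""
        print(f"[<] {flow.response.status_code} {flow.request.pretty_url}")
        print(f"    {body[:200]}")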

    There is one particular resource management game, recommended by a friend, that is apparently very popular. It turns out this game rewarded players with a prize every weekend. They don't do that anymore, and it might be my fault, although I never contacted them directly. The way this worked is as follows: if you liked the game's manufacturer on Facebook, they would share with you a certain password that would unlock resources only during a particular weekend. They'd change the password every weekend and it would only work during those two days.

    The way this was implemented was a plain HTTP exchange: a 'here is the password' request and a 'here is 10 gold and 10 food for you, thank you' response. In other words, you could alter the state of the game from the outside by means of a simple, unencrypted HTTP request. Spoofing the server by pointing the game at a local server in the lab setup and getting back a fake response of 'here is 10,000 gold and 1,000 food' was, of course, trivial. If I ever see my friend again and show him my progress, he's going to be quite amazed.
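    As a rough sketch of what 'spoofing the server' amounts to, assume the game checks the weekly password with a plain HTTP GET and trusts whatever comes back. The path and JSON reward format below are invented for illustration; the only real point is that nothing authenticates the server, so a local stand-in answering on the game's hostname is enough.

    # fake_prize_server.py - hypothetical stand-in for the game's prize server.
    # Point the device's DNS (or a hosts-file entry) for the game's hostname
    # at the machine running this, then claim the weekend prize in the app.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class FakePrizeServer(BaseHTTPRequestHandler):
        def do_GET(self):
            # Whatever password the app submits, hand back an inflated reward.
            body = b'{"status": "ok", "gold": 10000, "food": 1000}'
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        # Port 80 because the game talks plain HTTP; binding it needs root/admin.
        HTTPServer(("0.0.0.0", 80), FakePrizeServer).serve_forever()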

    The bottom line of this mini-attack was clear: unprotected canned HTTP is not trustworthy.

    There were other problems in different applications, like a fitness application that sells exercise routines to be displayed on your Android device. The routines consist of exercise descriptions, images, trainer explanations with instructions, and the timing and order of each routine. All these materials are listed in an XML file with links to each image and sound file. Yes, you guessed it: the XML was downloaded as plain-text, unencrypted HTTP traffic. I could have tried to access the paid content by guessing the XML file names, but I didn't try.
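    To make the exposure concrete, here is a sketch of what anyone on the same network path could do with such a manifest. The URL and XML element names are invented, since the real app isn't identified; the point is only that fetching and walking the routine file requires no authentication or decryption at all.

    # routine_peek.py - illustrative only; URL and element names are made up.
    import urllib.request
    import xml.etree.ElementTree as ET

    MANIFEST_URL = "http://fitness-app.example.com/routines/week1.xml"  # hypothetical

    xml_data = urllib.request.urlopen(MANIFEST_URL).read()   # plain HTTP, no TLS
    routine = ET.fromstring(xml_data)

    for exercise in routine.findall(".//exercise"):
        # Each exercise entry links out to its image and audio instructions.
        print(exercise.get("name"),
              exercise.findtext("image"),
              exercise.findtext("audio"))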

    I saw some others, but my problem with the whole project came when I had to put my findings in writing. The problems I found were so heterogeneous that it was difficult to differentiate this project from any web application pentest. The clients are mobile applications, but I was really looking at somebody's insecure web code. At the end of the day, my starting axiom was all too true: all mobile apps are essentially web clients, therefore they are as insecure as a browser, and that's how you should treat them.

    A failed project, I hear you say? Perhaps, but it opened my eyes with respect to how we – and app developers – need to treat mobile applications as untrusted browsers-in-a-box.



     



    The Police Trojan has been targeting European users for about a year. It should come as no surprise that the latest incarnations of this obnoxious malware have started targeting the United States and Canada.

    In the latest batch of C&C servers we have analyzed, not only has the list of countries increased, but the targets are also more specific. For instance, Ukash vouchers are not available in the U.S., so the U.S. fake police notification, which spoofs the Computer Crime & Intellectual Property Section of the U.S. Department of Justice, only mentions PaySafeCard as the accepted payment method. The criminals also took the time to add plenty of logos of local supermarkets and chain stores where the cash vouchers are available.

    Beyond the facade of this criminal attack, we know there is a Russian-speaking gang which, as we theorized in our last paper, has a link to the new Gamarue worm making the rounds in recent months. We can now add another compelling link: the fake police domain worldinternetpolice.net announced by the Trojan has the same registrar as the confirmed Gamarue worm C&C server photoshopstudy10.in. The first time a researcher sees such a link, it might just be pure coincidence. The second and third times, the link starts to solidify.
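    For readers who want to reproduce this kind of registrar check themselves, a quick sketch using the python-whois package is below. WHOIS output varies by TLD and registrar, so treat a matching registrar field as a lead to corroborate, not as proof by itself.

    # registrar_check.py - quick WHOIS comparison sketch (pip install python-whois)
    import whois

    domains = ["worldinternetpolice.net", "photoshopstudy10.in"]
    for d in domains:
        record = whois.whois(d)
        # .registrar may be None or formatted differently depending on the TLD
        print(d, "->", record.registrar)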

    What is becoming crystal clear is that the same Eastern European criminal gangs who were behind the fake antivirus boom are now turning to the Police Trojan strategy. We believe this is a malware landscape change and not a single gang attacking in a novel way. We also found C&C consoles that suggest a high level of development and possible reselling of the server back-end software used to manage these attacks. Police Trojan attacks are here to stay – until they are done milking this cow and have to look for a fatter one, that is.

    You can read our full report on the Police Trojan in the Security Intelligence section of the Trend Micro website.

     



    In recent months, European Internet users have been plagued by so-called Police Trojans that lock their computers completely until they pay a fine of 100 euros. Yes, a fine: the malware makes its threats by posing as the police force of the victim's particular country and in the victim's language. This bullying strategy seems to be paying off, because there's no shortage of infections in the European countries affected by this Trojan.

    We’ve taken a deeper look into the inner workings of this Trojan as well as the network infrastructure that its owners are using to control and receive the payments. We found ties with different malware campaigns dating back to 2010, from Zeus and CARBERP to a fairly recent newcomer to the malware scene called the Gamarue worm.

    The same people peddling this Trojan are also heavily involved in other malware and are very invested in this business. For instance, we have found that they were affiliates of the DNSChanger Trojan program called Nelicash that Rove Digital sponsored for a few years. The main people behind Rove Digital were arrested on November 8, 2011 after a two-year investigation by the FBI, the NASA Office of Inspector General, and the Estonian police, in collaboration with Trend Micro and other industry partners. So we might have found an important clue as to who is behind the Police Trojan.

    These criminals are in it professionally and will continue to be, because of how much money they are able to make. This is a perfect example of a group that has found a way of extorting money out of unsuspecting Internet users. We have written an extensive report on the Trojan and the people behind it, which you can download to get the full picture of this criminal organization.


     



    Concerns about privacy on the Internet have always been out there, but news events of late seem to be bringing this problem more and more into the public eye.

    Earlier this month, Google began implementing its “new” privacy policy – despite opposition from many parties, including French and European Union regulators. The new privacy policy allows Google to consolidate what it knows about users across all of its services, something it had never done before. According to Google, this makes for a “simpler, more intuitive Google experience.”

    It's not just search engines coming under scrutiny for privacy problems. Early in February, the popular Path and Hipster apps were discovered to be uploading users' address books to their servers. Later on, it was discovered that both iOS and Android suffered from problems that allowed apps to access user photos even if the user had not granted that particular permission.

    So far, there really hasn’t been a good set of guidelines that companies holding our data could be held accountable to and asked to follow. Essentially, companies with access to our private data were left to their own devices when it came to treating that data – with predictable consequences to our privacy.

    In February, it was announced that many advertising networks and leading Internet companies such as AOL, Google, Microsoft, and Yahoo have agreed to implement the Do Not Track feature: essentially, it lets users signal to websites (and advertising networks) that they do not want to be tracked. This blocks certain practices used by advertisers, such as personalized advertising. (We discussed personalized advertising earlier in our ebook Be Privy to Online Privacy.)
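    Mechanically, Do Not Track is just an HTTP request header (DNT: 1) that the browser adds and that a cooperating site or ad network agrees to honor. A minimal sketch of the server side is below; Flask is used purely for illustration, and the essential part is the one-line header check.

    # dnt_demo.py - minimal sketch of honoring the Do Not Track header (Flask)
    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/")
    def index():
        if request.headers.get("DNT") == "1":
            # Visitor asked not to be tracked: skip tracking cookies and pixels.
            return "Hello (no tracking)"
        # Otherwise the site may set tracking cookies, personalize ads, etc.
        return "Hello (personalized content could go here)"

    if __name__ == "__main__":
        app.run()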

    This was in line with a White House blueprint for what it called a "Consumer Privacy Bill of Rights". The principles that the white paper lays out are all sound and, frankly, common sense: they give users' online data the same protections that it should have offline. Fundamentally, the US approach calls for Internet companies and industries to voluntarily adopt regulations, which are then enforced by regulatory agencies.

    Does this mean that users no longer have to worry about their privacy, and that advertisers and website owners will no longer abuse what they know about users? Sadly, that is far from being the case.

    The Do Not Track announcement was not about anything that could be immediately implemented. How Do Not Track will actually be implemented – and thus, whether it actually works – is not yet entirely clear. In short, it will take some time for Do Not Track to actually be something that users can turn on.

    What these steps do mean is that regulators are finally paying attention to privacy as an issue, and companies are realizing that they have to start paying some attention, instead of just issuing blanket statements that said nothing. European privacy regulators have already launched a probe into Google’s new privacy policy. As a result of a settlement with California authorities, app store operators like Apple and Google have agreed in principle to make app developers include privacy policies if their apps gather user information.

    User concern about tracking and personal privacy is very real. A Pew Research poll found that almost two-thirds of American search engine users disapproved of personalized search results. A similar number had negative views on targeted advertising. A separate study by the University of Queensland found similar attitudes among Australian users. Clearly, users have serious concerns about what kind of information is gathered about them, and how this information is being used.

    The debate over privacy in the digital age will, no doubt, continue. Different people will have different standards for what they consider an acceptable trade-off between convenience and privacy. Users should be free, however, to make that decision for themselves, and to have the information and tools to decide where their data will end up going.

     



    Last month, Google announced that it was making search more secure for its users: those already signed in to Google would have a more secure experience. This meant two things. First, search queries and results would now be sent via HTTPS. This protects the searches of users on unsecured Internet connections, such as most WiFi hotspots.

    The second part was far more interesting. According to our tests, Google no longer includes the search terms used to reach websites in the HTTP Referer header. Here's part of the URL that Google is now sending as the referring URL:

    Note that after the &q= portion, no search term is specified. By contrast, a standard search has a referring URL more like this:

    The repercussions are twofold. First, legitimate web sites won't be able to tell which of the search terms used to reach them are popular. As a result, their own search optimization efforts might be impeded. I know that as a web site owner, it's really useful to have those stats and to be able to tune your content so that it's more easily searchable. To get this information, you now have to sign up for Google's own analytics services, which may or may not be feasible for all websites.
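    A short sketch of the old trick this change breaks is below. Site owners used to pull the q= parameter out of the incoming Referer header to see what visitors had searched for; with signed-in HTTPS search, that parameter now arrives empty. The header values in the example are illustrative, not copies of the actual URLs shown above.

    # referer_terms.py - sketch of mining search terms from the Referer header
    from urllib.parse import urlparse, parse_qs

    def search_terms_from_referer(referer: str) -> str:
        query = parse_qs(urlparse(referer).query)
        return query.get("q", [""])[0]

    # Old-style referring URL (illustrative): the terms are visible.
    print(search_terms_from_referer(
        "http://www.google.com/search?q=mobile+security&hl=en"))  # -> "mobile security"

    # New-style referring URL from signed-in HTTPS search: q= is empty.
    print(search_terms_from_referer(
        "http://www.google.com/search?hl=en&q="))                 # -> ""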


     


     
