    Author Archive - Martin Roesler




    Last time, I talked about how attackers are at an advantage when it comes to targeted attacks, and how it is important that we accept that fact in order to deal with attacks properly. Here comes the hard part: knowing that attackers have a great level of control, what do we do now?

    Remember that even though we’ve come to accept that attackers have greater control, this does not mean we have none of our own. We do, and it is important to take note of that, because exercising that control is critical in dealing with targeted attacks.

    Control the Perimeter

    Of course, any form of control can only succeed if it is founded on an awareness of what we actually own. Acquiring a firm grasp of who and what gets access to the network, and at what level, may come at the expense of what most employees see as convenient, but given the dangers of targeted attacks, it is important to put security first.

    Part of identifying the network is also having a deep understanding of it, specifically the operations, processes, events, and behavior we consider normal. Knowledge of what is truly normal and what is not will help identify anomalies better and faster.
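    To make this concrete, here is a minimal, hypothetical sketch of what such a behavioral baseline could look like in practice. The hosts, destinations, and sample traffic below are illustrative assumptions, not data from any real network or product.

# Hypothetical sketch: flag outbound connections that fall outside an
# established baseline of "normal" destinations. The baseline and the
# sample traffic are made-up examples for illustration only.

# Destinations observed during a clean learning period (the baseline).
baseline_destinations = {
    "mail.example-corp.com",
    "crm.example-corp.com",
    "update.vendor.example.net",
}

# Outbound connections observed today: (internal host, destination domain).
observed_connections = [
    ("workstation-12", "crm.example-corp.com"),
    ("workstation-12", "files.unknown-host.example.org"),  # never seen before
    ("server-03", "update.vendor.example.net"),
]

def find_anomalies(connections, baseline):
    """Return connections whose destination is not part of the baseline."""
    return [(host, dest) for host, dest in connections if dest not in baseline]

for host, dest in find_anomalies(observed_connections, baseline_destinations):
    print(f"ANOMALY: {host} contacted unfamiliar destination {dest}")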

    Once the network is defined, it is critical to have a means to monitor the network, which means having visibility and control of everything that goes in and out of it. A good example of a technology that can help network administrators do this is DNS Response Policy Zone. DNS RPZ provides a scalable means to manage connections to and from the network. If complemented with a domain name blacklist, it would create a network environment that is significantly safer.
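    As a rough illustration of the policy idea behind DNS RPZ (this is not actual RPZ configuration, only a sketch of the decision a policy-aware resolver makes), the snippet below checks each query against a local blacklist before answering. The blacklisted domains and the stubbed upstream resolver are hypothetical.

# Sketch of the policy step DNS RPZ adds to resolution: consult a local
# blacklist and refuse to answer for listed domains and their subdomains.
# Domains below are hypothetical placeholders.

blacklisted_domains = {
    "malicious-site.example",
    "fakeav-payload.example",
}

def resolve_with_policy(query_name, upstream_resolve):
    """Apply the blacklist policy before handing back an answer.

    upstream_resolve stands in for whatever performs the real lookup;
    it is only assumed to return an IP address string.
    """
    labels = query_name.lower().rstrip(".").split(".")
    for i in range(len(labels)):
        candidate = ".".join(labels[i:])
        if candidate in blacklisted_domains:
            return None  # refuse / answer NXDOMAIN, per local policy
    return upstream_resolve(query_name)

# Example usage with a stubbed-out upstream resolver.
answer = resolve_with_policy("cdn.malicious-site.example",
                             upstream_resolve=lambda name: "203.0.113.10")
print("blocked" if answer is None else f"resolved to {answer}")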

    Deploy Inside-Out Protection

    Traditional defenses focus on hardening firewalls and keeping bad components out through blacklisting. Now, while this “outside-in” strategy would be effective for dealing with fairly straightforward attacks, it would be utterly unreliable against targeted attacks. Traditional defenses are made for attacks where the form and source are easily recognizable, which is not the case for targeted attacks.

    Figure 1. Traditional defense

    Posted in Targeted Attacks | Comments Off on Understanding Targeted Attacks: How Do We Defend Ourselves?



    Most of the things our industry has learned about targeted attacks were realized the hard way: through analysis of successful attacks. What we have learned so far reveals just how unfamiliar we are with the “battle ground” we are currently on, and how that unfamiliarity has kept the industry from understanding what is needed to deal with such attacks. But why is this so? Do the attackers really have the upper hand? The answer, unfortunately, is yes.

    Unfair Advantage

    To put it simply, attackers have a greater level of control and a wider range of resources. They get to decide on the very nature of the threat — how and when the attack will play out. They can employ the numerous tools available on the Internet, including legitimate services. More importantly, they can get intelligence on what they are up against – they can research the target and find information that makes infiltration easy and almost undetectable.

    And while attackers enjoy such flexibility, targets face multiple limitations that are already difficult to manage on their own. With the dawn of consumerization and the rise of mobile computing, it is already a struggle for companies to identify their own networks, let alone protect them. They can only do so within the limits of available strategies, whatever control they have over the network, and the awareness of their people.

    Posted in Targeted Attacks | Comments Off on Understanding Targeted Attacks: What Are We Really Up Against?



    Google recently removed websites under the .CO.CC second-level domain (SLD) from its search engine’s results. While this was meant to protect users, we do not think it is a good solution.

    Based on our research and monitoring of malicious domains and cybercrime activity, we know for a fact that all major cybercriminals have already moved from *.co.cc to other similarly abused SLDs like *.rr.nu or *.co.tv. This abuse of rogue SLDs is extensive and rapidly escalating. Cybercriminals routinely jump from one SLD to another in order to keep alive their FAKEAV distribution via blackhat search engine optimization (SEO) schemes, among other Web-based attacks.

    The following list of the number of malicious URLs we found on certain SLDs suggests why blocking *.co.cc domains is a short-term band-aid solution:

    In addition, if we chart the typical infection chain of most blackhat SEO attacks today, we see that the malicious SLDs are more often used for the second, third, or even fourth jumps or redirections. The doorway pages—those that are actually indexed by search engines—very rarely use *.co.cc. So, blocking these domains makes no sense.
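    To illustrate why filtering only the indexed doorway page misses most of the chain, here is a hypothetical sketch that walks a made-up redirection chain and reports which hops sit under an abused SLD. The domains and the blocked-suffix list are illustrative assumptions, not real attack infrastructure.

# Hypothetical sketch: walk a made-up blackhat SEO redirection chain and
# note which hops fall under an abused SLD. Blocking only the first
# (indexed) hop would miss the later hops where the abused SLDs appear.

blocked_suffixes = (".co.cc", ".rr.nu", ".co.tv")  # abused SLDs named above

# Illustrative chain: doorway page -> intermediate redirectors -> payload.
redirection_chain = [
    "seo-doorway.compromised-blog.example",   # indexed by the search engine
    "redir1.tracker.rr.nu",
    "redir2.clickfunnel.co.tv",
    "fakeav-download.payload.example",
]

def hops_on_blocked_slds(chain, suffixes):
    """Return (hop number, domain) pairs whose domain ends with a blocked SLD."""
    return [(i, d) for i, d in enumerate(chain, start=1) if d.endswith(suffixes)]

for hop, domain in hops_on_blocked_slds(redirection_chain, blocked_suffixes):
    print(f"hop {hop}: {domain} is on a blocked SLD")

# Hop 1 (the doorway page the search engine actually sees) is not on any
# of the blocked SLDs, which is why filtering search results by SLD alone
# offers little protection.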




    Yesterday, I read an article that reported how our counterparts at Sophos “slammed Microsoft” over its reported malware-blocking stats for the SmartScreen® Application Reputation feature built into Internet Explorer (IE) 7, 8, and 9.

    This issue was much too interesting for me not to follow up with my own thoughts.

    Having read the Microsoft blog article as well as the media reports, I was prompted to run a few checks.

    I took a look at Trend Micro’s own internal competitive benchmarking results. As you can see from the chart below, of those companies whose products we tested against, the security company closest to Trend Micro’s own blocking rate was, in fact, Kaspersky.

    In our test, IE9 achieved a less than 10 percent success rate for malicious URL blocking. So, while we cannot comment on the exact methodology used in Microsoft’s own tests, we have to agree with Sophos’ questioning of the rather surprising results Microsoft published.

    Figure 1. Internal competitive benchmarking results

    Note: Internal benchmarking results (Figure 1) updated to include an additional company (May 25, 9:07 PM UTC-7).
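    For context on how such figures are produced, here is a simplified, hypothetical sketch of the arithmetic behind a malicious-URL blocking rate: each product is fed the same set of live malicious URLs, and the share it blocks is reported. The numbers below are made up for illustration and are not the internal results discussed here.

# Simplified sketch of a malicious-URL blocking-rate calculation.
# The figures are invented and do not reproduce any vendor's results.

results = {
    # product: (URLs blocked, URLs tested)
    "Product A": (912, 1000),
    "Product B": (874, 1000),
    "Browser built-in filter": (96, 1000),
}

for product, (blocked, tested) in results.items():
    rate = 100.0 * blocked / tested
    print(f"{product}: {rate:.1f}% of {tested} malicious URLs blocked")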




    As 2010 comes to a close, here’s a list of the riskiest items we encountered in the past year:

    • Hardware: The riskiest hardware device used in 2010 was the reader for the German identification card. The cards themselves contain encoded private information such as fingerprints; unfortunately, that information can be stolen quite easily by using certain card readers.
    • Website software: The riskiest software used by websites in 2010 was the popular blogging platform WordPress. Tens of thousands of unpatched WordPress blogs were used by cybercriminals for various schemes, primarily as part of redirection chains that led to various malware attacks or other blackhat search engine optimization (SEO)-related schemes.
    • Protocol: The most dangerous protocol used in 2010 was Internet Relay Chat (IRC). Thirty percent of all botnets used IRC for communication between infected machines and their command-and-control (C&C) servers. Fortunately, blocking IRC traffic on the network reliably cuts these bots off from their C&C servers; a simple sketch of this idea appears after this list.
    • OS: The riskiest OS used was Apple’s Mac OS X. In November, Apple sent users a massive maintenance release that weighed in at no less than 644.48MB. The weighty upgrade included fixes for multiple security vulnerabilities accumulated since the previous update released in mid-June. Apple’s penchant for secrecy and its longer patch cycles also increased the risk for users.
    • Website: The most dangerous website in the world was Google. Its tremendous popularity led cybercriminals to target it specifically for blackhat SEO-related schemes, which, in turn, led users to significant malware threats, particularly FAKEAV. In addition, Google’s ad network was frequently victimized by malvertisements.
    • Social network: In another case where popularity led to danger, Facebook could be considered the most dangerous social networking site around. Everything from survey scams to KOOBFACE malware proliferation played out on the site, as cybercriminals went where the people were.
    • Top-level domain: The most dangerous top-level domain in the world was CO.CC, which allowed cybercriminals to register thousands of domains on the fly with very little in the way of verification. This, along with Russian ISPs that routinely refused to shut down malicious sites, made for a very dangerous combination.
    • File format: PDF was the riskiest file format in 2010, as Adobe Acrobat and Reader vulnerabilities routinely became part of exploit toolkits.
    • Runtime environment: The most dangerous runtime environment for users in 2010 was Internet Explorer (IE) with scripting enabled. Even today, most browser exploits specifically target IE. However, Java is quickly becoming a more prominent target and could become the prime target in 2011.
    • Infection channel: The most common infection channel was still the browser, as more than two-thirds of all infections used it as their infection vector. Older infection methods like flash disks and spammed messages were still around but were less prominent than before.
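    As a rough sketch of the IRC-blocking point above, the following flags outbound connections to the standard IRC port range. The connection records and hosts are made up, and real bots may use non-standard ports, so production controls typically inspect the protocol itself rather than relying on ports alone.

# Hypothetical sketch: flag (or drop) outbound connections to standard IRC
# ports, the simplest form of the "block IRC" advice above. The sample
# connection log is invented for illustration.

IRC_PORTS = set(range(6660, 6670)) | {6697, 7000}  # common IRC / IRC-over-TLS ports

# (source host, destination IP, destination port)
outbound_connections = [
    ("workstation-07", "198.51.100.23", 443),
    ("workstation-07", "203.0.113.99", 6667),   # classic IRC C&C port
    ("server-02", "192.0.2.45", 6697),          # IRC over TLS
]

def irc_suspects(connections, irc_ports=IRC_PORTS):
    """Return connections whose destination port looks like IRC."""
    return [c for c in connections if c[2] in irc_ports]

for host, dest_ip, port in irc_suspects(outbound_connections):
    print(f"BLOCK/ALERT: {host} -> {dest_ip}:{port} looks like IRC traffic")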
     


     
