TrendLabs Security Intelligence Blog

    Author Archive - Martin Roesler (Threat Researcher)

Google recently removed websites under the .CO.CC second-level domain (SLD) from its search engine’s results. While this was done to protect users, we do not think it is a good solution.

Based on our research and monitoring of malicious domains and cybercrime activity, we know for a fact that all major cybercriminals have already moved from * to other similarly abused SLDs like * or *. This abuse of rogue SLDs is excessive and rapidly escalating. Cybercriminals routinely jump from one SLD to another to keep alive their FAKEAV campaigns, which spread via blackhat search engine optimization (SEO) schemes, among other Web-based attacks.

    The following list of the number of malicious URLs we found on certain SLDs suggests why blocking * domains is a short-term band-aid solution:

In addition, if we chart the typical infection chain of most present-day blackhat SEO attacks, we see that the malicious SLDs are more often used for the second, third, or even fourth jumps or redirections. The doorway pages—those that are actually indexed by search engines—very rarely use *. So, blocking these makes no sense.
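To make the redirection-chain point concrete, here is a small Python sketch. The hop URLs are entirely made-up placeholders (not real attack infrastructure): the idea is that when you walk the chain hop by hop and extract the second-level domain at each step, the abused SLD typically shows up only in the later redirections, never on the indexed doorway page.

```python
from urllib.parse import urlparse

# Hypothetical redirect chain for a blackhat SEO attack. The doorway
# page indexed by the search engine sits on an ordinary domain; the
# later hops are where an abused SLD would typically appear.
CHAIN = [
    "http://recipes.example.com/doorway.html",  # hop 1: indexed doorway page
    "http://tracker.example.net/go?id=42",      # hop 2: traffic redirector
    "http://av-scan.example.org/scan.php",      # hop 3: FAKEAV landing page
]

def second_level_domain(url: str) -> str:
    """Return the last two labels of a URL's hostname (e.g. 'example.com')."""
    host = urlparse(url).hostname or ""
    return ".".join(host.split(".")[-2:])

def hop_slds(chain):
    """Pair each hop position with the SLD it uses."""
    return [(i + 1, second_level_domain(url)) for i, url in enumerate(chain)]

for hop, sld in hop_slds(CHAIN):
    print(hop, sld)
```

Blocking only the SLD seen by the search engine (hop 1) leaves hops 2 and 3 untouched, which is the crux of the argument above.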



Yesterday, I read an article that reported how our counterparts at Sophos “slammed Microsoft” over its reported malware blocking stats for the SmartScreen® Application Reputation feature built into Internet Explorer (IE) 7, 8, and 9.

This issue was much too interesting for me not to follow up with my own thoughts.

Having also read the Microsoft blog article as well as the media reports, I was prompted to run a few checks.

    I took a look at Trend Micro’s own internal competitive benchmarking results. As you can see from the chart below, of those companies whose products we tested against, the security company closest to Trend Micro’s own blocking rate was, in fact, Kaspersky.

    In our test, IE9 achieved a less than 10 percent success rate for malicious URL blocking. So, while we cannot comment on the exact methodology used in Microsoft’s own tests, we have to agree with Sophos’ questioning of the rather surprising results Microsoft published.
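For readers unfamiliar with how such a benchmark figure is derived, the sketch below shows the arithmetic. The numbers are invented for illustration only; they are not the actual results from Trend Micro’s internal test, nor Microsoft’s published statistics.

```python
# Illustrative only: how a malicious-URL blocking rate is computed in a
# benchmark of this kind. All figures below are made up, NOT actual
# vendor test results.
results = {
    "Product A": {"tested": 2000, "blocked": 1820},
    "Product B": {"tested": 2000, "blocked": 1714},
    "Browser X": {"tested": 2000, "blocked": 188},
}

def blocking_rate(blocked: int, tested: int) -> float:
    """Percentage of malicious URLs blocked at exposure time."""
    return 100.0 * blocked / tested

for name, r in results.items():
    print(f"{name}: {blocking_rate(r['blocked'], r['tested']):.1f}%")
```

The point of contention is exactly this ratio: which URL corpus is used, and what counts as “blocked,” can swing the result dramatically between tests.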


    Note: Internal benchmarking results (Figure 1) updated to include additional company (May 25, 9:07 PM UTC-7).



    As 2010 comes to a close, here’s a list of the riskiest items we encountered in the past year:

    • Hardware: The riskiest hardware device used in 2010 was the German identification card reader. These cards contain encoded private information such as fingerprints. Unfortunately, the information on them can be quite easily stolen by using certain card readers.
    • Website software: The riskiest software used by websites in 2010 was the popular blogging platform WordPress. Tens of thousands of unpatched WordPress blogs were used by cybercriminals for various schemes, primarily as part of redirection chains that led to various malware attacks or other blackhat search engine optimization (SEO)-related schemes.
• Protocol: The most dangerous protocol used in 2010 was Internet Relay Chat (IRC). Thirty percent of all botnets used IRC for communication between infected machines and their command-and-control (C&C) servers. Fortunately, blocking IRC use in networks reliably cuts off these botnets’ communications.
• OS: The riskiest OS used was Apple’s Mac OS X. In November, Apple sent users a massive maintenance release that weighed in at no less than 644.48MB. The weighty upgrade included fixes for the multiple security vulnerabilities found since the previous update released in mid-June. Apple’s penchant for secrecy and longer patch cycles also increased the risk for users.
    • Website: The most dangerous website in the world was Google. Its tremendous popularity led cybercriminals to target it specifically for blackhat SEO-related schemes, which in turn, led users to significant malware threats, particularly FAKEAV. In addition, Google’s ad network was also frequently victimized by malvertisements.
• Social network: In another case wherein popularity led to danger, Facebook could be considered the most dangerous social networking site around. Everything from survey scams to KOOBFACE malware proliferation played out on the site, as cybercriminals went where the people were, that is, Facebook.
    • Top-level domain: The most dangerous top-level domain in the world was CO.CC, which allowed cybercriminals to register thousands of domains on the fly with very little in the way of verification. This, along with Russian ISPs that routinely refused to shut down malicious sites, made for a very dangerous combination.
    • File format: PDF was the riskiest file format in 2010, as Adobe Acrobat and Reader vulnerabilities routinely became part of exploit toolkits.
    • Runtime environment: The most dangerous runtime environment for users in 2010 was Internet Explorer (IE) with scripting enabled. Even today, most browser exploits specifically target IE. However, Java is quickly becoming a more prominent target and could become the prime target in 2011.
    • Infection channel: The most common infection channel was still the browser, as more than two-thirds of all infections used this as infection vector. Previous infection methods like flash disks and spammed messages were still around but were less prominent than before.
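The IRC item above suggests a simple network control. The sketch below is a minimal, assumed illustration of that idea: flagging outbound flows to the conventional IRC port range in a parsed firewall or flow log. The flow tuples and addresses are made up, and real botnets can use non-standard ports, so this is a heuristic, not a complete defense.

```python
# Minimal sketch of the IRC-blocking idea: flag outbound connections to
# the conventional IRC ports (6660-6669 plus 7000) in parsed flow
# records. Non-standard ports will evade this simple check.
IRC_PORTS = set(range(6660, 6670)) | {7000}

def is_irc_candidate(dst_port: int) -> bool:
    """True if the destination port is a conventional IRC port."""
    return dst_port in IRC_PORTS

# (src_ip, dst_ip, dst_port) tuples; all addresses are fabricated.
flows = [
    ("10.0.0.5", "203.0.113.9", 6667),
    ("10.0.0.7", "198.51.100.2", 443),
    ("10.0.0.5", "203.0.113.9", 7000),
]

suspicious = [f for f in flows if is_irc_candidate(f[2])]
print(suspicious)
```

In practice, the same rule would be deployed at the firewall (deny outbound to those ports) rather than applied after the fact to logs.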

    Adobe released some major security updates for its products, particularly Adobe Reader and Acrobat, on all platforms (Win, Mac OS, Linux) and we strongly encourage our readers to install these updates. For details, the Adobe blog is worth reading as well.

    This update is in line with a recent zero-day attack that we also reported earlier this month.

Adobe’s PDF format has been a prime target for malware writers over the past several months, so we are delighted to see this response from Adobe. We strongly advise users to install these updates as soon as possible.

    Update as of July 3, 2010, 10:27 a.m. (UTC)

The recent Adobe patch included a fix for the oft-abused /Launch function that, when successfully exploited, can allow files embedded in .PDF files to be dropped and executed on systems. This feature has now been modified. While it was not totally disabled, Adobe has chosen to implement a blacklist that prevents .EXE files from running on a system by default. (System administrators can choose to re-enable this if they want to.) Details of the fix can be found here.

    However, reports indicate that the current fix does not completely solve the problem, as a proof-of-concept (PoC) code bypassing Adobe’s solution has been released. Adobe has acknowledged this but believes that the current solution still reduces risks of attack.
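As a rough illustration of what defenders look for, here is a deliberately naive heuristic that flags PDFs containing a /Launch action token. This is an assumed sketch, not Adobe’s fix or a production scanner: a real tool must parse PDF object streams, since the keyword can be compressed or obfuscated, which is precisely how bypasses like the reported PoC tend to work.

```python
# Crude heuristic: flag PDFs containing a literal /Launch action token.
# Real malicious PDFs can hide the keyword inside compressed or
# obfuscated object streams, so a plain byte search catches only the
# naive case.
def has_launch_action(pdf_bytes: bytes) -> bool:
    """True if the raw PDF bytes contain a /Launch token."""
    return b"/Launch" in pdf_bytes

# Fabricated minimal examples, not real-world samples.
benign = b"%PDF-1.4\n1 0 obj << /Type /Catalog >> endobj"
suspect = b"%PDF-1.4\n1 0 obj << /S /Launch /F (cmd.exe) >> endobj"

print(has_launch_action(benign), has_launch_action(suspect))
```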


Today, I was scanning through various industry blogs when I stumbled upon an entry from Kaspersky Labs. What was interesting was that, under the veil of improving testing quality, the blog openly admitted that the organization in question had been playing tricks on competing organizations just to position itself more favorably with the media.

The organization explained that it deliberately created clean files and added fake detections for them in order to “show” that other vendors copied its detections. This was a risky decision. Across the industry, research organizations share a level of trust and participate in sample-sharing programs in order to protect customers, which, for Trend Micro, is what always comes first. (I should add here that Trend Micro was not one of the companies affected, as we always QA our own detections and never rely on those of another vendor.)

Setting aside the organization’s cheap prank, we were very pleased that the other resounding message of the blog post was that it finally understood and supported what Trend Micro has been promoting for a long time now—the need to change testing methodologies to include real-world testing such as that delivered by NSS Labs.

    The need to change testing methodologies was also a primary reason for the foundation of the Anti-Malware Testing Standards Organization (AMTSO), which aims to come up with more realistic and useful benchmarks.

This story really shows just how influential the media is on the antivirus industry, in that even a respected vendor would manipulate detection rates just so it can position itself positively with the press rather than focus on its customers.

But another, more positive, lesson is that the path AMTSO is taking is the right one. Pure detection rates based on raw numbers or one-to-one comparisons are yesterday’s methods of verifying the value and performance of a security solution.

Customers need holistic reviews that give them real-world, scenario-based feedback on what different solutions can offer them instead of pure “I detect more than you” headlines. I am glad to see that testing organizations like NSS Labs, AV-Comparatives, and AV-Test have since understood these principles and started to apply them.



    © Copyright 2013 Trend Micro Inc. All rights reserved. Legal Notice