    TrendLabs Security Intelligence Blog

    Author Archive - Raimund Genes (Chief Technology Officer)




    It is an interesting time to be in IT security today. PRISM and Edward Snowden taught many lessons about how companies should secure their data. There’s been a lot of discussion about the surveillance aspect of this, but consider this whole affair from the side of the NSA.

    To the NSA, this was a data breach of unprecedented proportions. All indications are that Snowden was able to exfiltrate a significant amount of classified data; what has been published so far represents a relatively small portion of what he was able to access. Consider that Snowden technically wasn’t even an employee – he was a contractor. How did he do this? How could a contractor access this much information?

    Some companies may think: "if it can happen to a spy agency, there's nothing we can do. We should just give up and stop protecting our data." Others may say: "let's build a bigger wall around our data." Both approaches are wrong. Obviously, you have to protect your data; at the same time, enterprises cannot try to protect everything with the same rigor. A truly determined attacker can get in if he wants to.

    What an enterprise needs to focus on is what really needs to be protected. Which sets of data, if stolen, could ruin the business? Trade secrets? Customer data? This differs for each company: what is vital for one organization may be trivial for another, and each organization has to decide for itself. Examples of what a company might consider core data include trade secrets, research and development documents, and partner information. Losing any of these could mean millions of dollars in damage, not only in monetary terms but in trust and confidence as well.

    Once this core data has been identified, the next step is to defend it strongly. How? That depends on what the data is, how it is stored, and who needs to access it. Is it something that can be locked in a vault and kept offline for years on end, or is it something that needs to be accessed daily? For each organization, the challenges will be different, and so will the solutions.

    We must not forget one other component of security: end users. Difficult as it is, end users have to be educated not to fall for simple scams. For example: if an administrator asks for your username and password, check with someone else before handing them over; if you receive an email that sounds too good to be true, don't click on it.

    All in all, it's a combination of identifying what's most important, deploying the right technologies, and educating users. It is everybody's job, not just that of IT professionals, to ensure that the company's core data stays safe.

     

    For more details on various targeted attacks, as well as best practices for enterprises, you may visit our Threat Intelligence Resources on Targeted Attacks.

     
    Posted in CTO Insights, Targeted Attacks



    There is no doubt that mobile banking is going to become very significant in 2014, if it isn’t already. In the United States, a quarter of all people selecting a bank say mobile banking is a “must-have”. In parts of the developing world, mobile banking is even the dominant form of banking. There is no doubt anymore that mobile banking is a big part of the banking landscape – which means, of course, that it is bound to become a big part of the threat landscape as well.

    In the past, smartphones were generally used to help protect normal online banking transactions. Banks would send users a Transaction Authorization Number (TAN) via SMS, which they would have to enter on their PCs to verify that a transaction was valid. It's essentially a form of two-factor authentication that improves security by providing a second means of verification for users.
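    To make the TAN mechanism concrete, here is a minimal server-side sketch of the idea in Python. This is an illustration under stated assumptions, not any bank's actual implementation: the send_sms helper, the in-memory store of pending TANs, and the six-digit code format are all made up for the example.

    import secrets
    import time

    # Hypothetical helper; how the SMS actually gets delivered is out of scope here.
    def send_sms(phone_number: str, message: str) -> None:
        print(f"SMS to {phone_number}: {message}")

    # Pending TANs, keyed by transaction ID: (tan, expiry timestamp).
    _pending_tans: dict[str, tuple[str, float]] = {}

    def issue_tan(transaction_id: str, phone_number: str, ttl_seconds: int = 300) -> None:
        """Generate a one-time TAN for a transaction and send it out of band via SMS."""
        tan = f"{secrets.randbelow(1_000_000):06d}"  # random six-digit code
        _pending_tans[transaction_id] = (tan, time.time() + ttl_seconds)
        send_sms(phone_number, f"Your TAN for transaction {transaction_id} is {tan}")

    def verify_tan(transaction_id: str, submitted_tan: str) -> bool:
        """Approve the transaction only if the TAN matches and has not expired."""
        record = _pending_tans.pop(transaction_id, None)  # single use: removed on first attempt
        if record is None:
            return False
        tan, expires_at = record
        return time.time() <= expires_at and secrets.compare_digest(tan, submitted_tan)

    The security of the scheme rests on the code travelling over a separate channel (SMS) from the one used to submit the transaction, which is exactly the separation that is lost when the entire banking session happens on the phone itself.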

    However, in mobile banking, this second form of authentication is usually not present. This leaves users just as exposed to banking threats as they would be on a PC without a TAN in use: malware on the mobile device can act as a man-in-the-middle Trojan and steal information as easily as it could on any other platform. This is something we explicitly talked about in our predictions for 2014.

    So, what can you do to help protect yourself? I discuss that topic in the video below.

     
    Posted in CTO Insights, Mobile



    The past year has been an interesting one in the world of cyber security. Mobile malware has become a large-scale threat, government surveillance has users asking "does privacy still exist?", cybercrime continues to steal money from individuals and businesses, and new targets for hackers like AIS (the Automatic Identification System used by ships) and SCADA systems have been identified. 2013 was many things, but boring was not one of them.

    So, what do we have to look forward to in 2014 and beyond?

    We expect mobile malware to not just keep growing, but to indirectly affect other platforms and devices as well. What do we mean? Consider how we're using our smartphones not just for banking, but for authentication (using either apps or text messages). It's a logical step for cybercriminals to systematically go after these as well; 2014 will be about mobile banking threats. Two-factor authentication is not a cure-all: while it can improve IT security, it also introduces new attack vectors that have to be considered and made secure as well.

    Mobile was the “next big thing” a few years ago. What about today’s “next big thing”, the Internet of Everything? Attackers and cybercriminals always go where the money – and the users – are. In the absence of a “killer app” that will get most users to welcome it with open arms, the Internet of Everything is probably not going to see much in the way of threats – for now.

    What is going to see threats are older systems, specifically those running Windows XP. By the time Microsoft stops supporting Windows XP next year, more than twelve and a half years will have passed since it was released. In the world of technology, that is an eternity. Unfortunately, many businesses are still using Windows XP. Once the patches stop being released, they will have no protection from Microsoft against zero-day exploits. We just saw a new zero-day exploit that targets only Windows XP and Server 2003; there are certainly more that haven't been used or discovered yet.

    Working with other Trend Micro researchers and analysts, we've put together our look at the threat landscape for 2014 and beyond, titled Blurring Boundaries. There are many interesting developments going on today, but these come with their own risks that we must all be aware of.

     
    Posted in CTO Insights



    At the end of May, two Google security engineers announced Mountain View’s new policy regarding zero-day bugs and disclosure. They strongly suggested that information about zero-day exploits currently in the wild should be released no more than seven days after the vendor has been notified. Ideally, the notification or patch should come from the vendor, but they also indicated that researchers should release the details themselves if the vendor was not forthcoming.

    This is a pretty aggressive goal. Microsoft, for example, does not set public deadlines for critical patches: a balance needs to be found between quality and speed, and balancing the two is not easy. On one hand, the longer an actively exploited vulnerability goes unpatched, the longer malware writers can keep abusing it. On the other hand, a rushed patch could have negative side effects and could cripple the application or the entire system.

    Almost every security vendor has had false positives that result in negative side effects. We have lots of safeguards in place, such as checking against whitelists, but Murphy's law applies: our industry has crippled users' computers before. An operating system vendor needs to be even more careful in patching and must conduct proper quality assurance, because a patch will affect millions of computers.

    In my opinion, disclosure after 7 days is reasonable, but expecting a patch in that time frame is not. Let's see how Google itself will manage. Currently, there's an Android Trojan spreading which is able to hide itself from the Device Administrator list, rendering it invisible to security programs and cleanup attempts. This is possible because of a security flaw in Android. Will Google be able to fix it within 7 days? Let's see…

    The bigger debate is not just about how long it should take to report and fix vulnerabilities; it's about how vulnerabilities should be reported in the first place. If you believe a recent media report, the US government is now the biggest purchaser of malware. How do we ensure that the affected vendors are informed, and that these vulnerabilities are not used offensively, as Stuxnet was? How do we ensure that these same vulnerabilities don't end up in the hands of the underground, which would use them widely?

    What needs to take place is a broader discussion about how discovered vulnerabilities are handled. That discussion should include everyone involved in this field (developers, governments, and researchers) to determine how we deal with security vulnerabilities in the future.

     



    Last month, an article in Dark Reading by Robert Lemos asked if it was "Time To Dump Antivirus As Endpoint Protection?". It referenced a recent Google research paper that outlined their new reputation technology called CAMP (short for Content-Agnostic Malware Protection), which they claim protects against 98.6% of malware downloaded via their Chrome browser, as opposed to the 25% detected by the best-performing antivirus engine they tested.

    This may sound like magic. Whether you view it as white magic or black magic depends on whether you know that Google sends attributes of all the "unknown" files on your computer to their online service for analysis.

    To us, however, all this is old news. As early as 2008, we stated that standard detection technologies need to be combined with other methods like reputation services, whitelisting, and so on. We've invested heavily in the technology needed to detect malicious infrastructures and ecosystems.
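    To illustrate what such a reputation lookup amounts to in principle, here is a minimal sketch of a hash-based file-reputation query in Python. The endpoint URL, request format, and verdict values are assumptions made for the example; they are not Google's CAMP protocol or Trend Micro's actual API.

    import hashlib
    import json
    import urllib.request

    # Hypothetical endpoint; real reputation services use their own (non-public) protocols.
    REPUTATION_URL = "https://reputation.example.com/v1/file"

    def file_sha256(path: str) -> str:
        """Hash the file locally; only the digest, not the content, leaves the machine."""
        digest = hashlib.sha256()
        with open(path, "rb") as handle:
            for chunk in iter(lambda: handle.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def lookup_reputation(path: str) -> str:
        """Ask the cloud service for a verdict such as 'known-good', 'known-bad', or 'unknown'."""
        payload = json.dumps({"sha256": file_sha256(path)}).encode("utf-8")
        request = urllib.request.Request(
            REPUTATION_URL, data=payload, headers={"Content-Type": "application/json"}
        )
        with urllib.request.urlopen(request, timeout=5) as response:
            return json.load(response).get("verdict", "unknown")

    The point is that the verdict comes from a constantly updated cloud database rather than solely from signatures shipped to the endpoint, which is why this kind of lookup complements, rather than replaces, on-device protection.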

    Because we collect so much information, we're able to help law enforcement agencies around the globe put cybercriminals in jail. When Google talked about CAMP, they claimed that their system makes millions of reputation-based decisions every day, and that it identifies and blocks about 5 million malware downloads every month. Great job making the Internet a safer place!

    Then again, this is what the security industry has done for years already, so this is not something brand new. For example, Trend Micro blocks 250 million threats per day (files, websites, and spam), and our systems process more than 16 billion requests per day. These requests generate 6TB of data daily for analytics… that’s what I call Big Data.

    The article also talks about what some customers are doing in the face of the problems of traditional antivirus. These are:

    Abandon Antivirus

    The author himself states this is a bad idea, pointing to the recent Microsoft Security Intelligence Report which found that computers with no endpoint anti-malware protection were 5.5 times more likely to be infected. It's all well and good that, say, Google Chrome protects me from infections, but what about the driver CD I need to install my USB 3.0 PCI card? What about the USB stick I just got from a friend? What about the digital picture frame with malware on it? Let's not even talk about all the other entry vectors using vulnerabilities and so on. Endpoint protection is still necessary, and it remains a baseline for effective defense.

    Beef up the blacklist

    We've been saying this for years as well. A blacklist can be combined with reputation technology, advanced heuristics, communications monitoring to identify commands from malicious botnets, and all the other new tools we have up our sleeves. Users have to accept that a sufficiently determined and sophisticated targeted attack will be able to get in, but there are ways to detect these threats, particularly when they try to "phone home" to malicious servers.
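    As a concrete illustration of the "phone home" point, here is a minimal sketch of checking outbound destinations against a blacklist of known command-and-control domains. The domain names and the idea of scanning a DNS or proxy log are illustrative assumptions, not a description of any specific product.

    # Known command-and-control domains (placeholders for the example).
    KNOWN_C2_DOMAINS = {"evil-update-server.example", "c2.badbotnet.example"}

    def parent_domains(hostname: str):
        """Yield the hostname and each parent domain so that subdomains also match."""
        parts = hostname.lower().rstrip(".").split(".")
        for i in range(len(parts) - 1):
            yield ".".join(parts[i:])

    def is_suspicious_destination(hostname: str) -> bool:
        return any(domain in KNOWN_C2_DOMAINS for domain in parent_domains(hostname))

    # Example: destinations pulled from a DNS or proxy log.
    for destination in ["www.trendmicro.com", "cdn.c2.badbotnet.example"]:
        if is_suspicious_destination(destination):
            print(f"ALERT: outbound traffic to a known C&C domain: {destination}")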

    Use a whitelist

    Yes! Trend Micro has built up a whitelist of over 220 million known good files, and we use it as part of the reputation services within our products. It can also be used on critical endpoints to minimize the risk of running malware.
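    On a critical endpoint, the same idea can be enforced locally as default-deny execution control: a binary is allowed to run only if its hash appears in the known-good list. Here is a minimal sketch; the hash below is just a placeholder, not an entry from Trend Micro's whitelist.

    import hashlib

    # Placeholder entry; a real deployment would ship a curated known-good database.
    KNOWN_GOOD_SHA256 = {
        "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
    }

    def sha256_of(path: str) -> str:
        with open(path, "rb") as handle:
            return hashlib.sha256(handle.read()).hexdigest()

    def may_execute(path: str) -> bool:
        """Default deny: allow a binary to run only if it is on the whitelist."""
        return sha256_of(path) in KNOWN_GOOD_SHA256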

    Focus on isolation

    This makes sense for critical machines where you have the time and money to manage them differently. The users of these machines will have to learn that they can no longer execute code from all kinds of sources; they need to say goodbye to treating them as personal computers. I see the use cases here in hardening industrial control systems and Windows systems in production environments.

    I totally agree that it is easy to evade traditional antivirus. However, the security industry has known this for quite a while now and has worked hard to find new ways to protect against malware and cybercrime. Do we do a perfect job? No, there is no silver bullet. Our job is to protect our users as best as possible, and that's what we continue to do. So long live anti-malware; it is still needed.

    For users who want to know more about this issue, I came up with a video discussing how anti-malware products are still relevant and crucial in protecting users’ data amidst developments in reputation technology. 

     

     

    My website CTO Insights contains more discussions about pertinent security issues.

     
    Posted in Bad Sites, CTO Insights, Malware


     
