    Author Archive - David Sancho (Senior Threat Researcher)




    This is the second post in the “FuTuRology” project, a blog series where the Trend Micro Forward-Looking Threat Research (FTR) team predicts the future of popular technologies. Make sure to check the first entry of the series for a brief introduction on the project.

    Predicting the future sucks, mostly because we are never right. If what we say does not come to pass, we look bad. On the other hand, if what we say does happen, naysayers will claim it's because the attackers read the blog too and we gave them ideas—the self-fulfilling prophecy. We're damned if we do and damned if we don't. That is how it is for infosec fortune tellers.

    Today’s topic is very exciting: healthcare technology.

    Let me start by mentioning a technology we have at present: fitness wearables. These little devices are already in mass production and selling like hotcakes; just look at vendors like Fitbit or Jawbone®. At last February's Mobile World Congress in Barcelona, we tried out device after device in all kinds of flavors. Some vendors were already aiming at niche markets, like pets and children, but don't let me stray from the topic at hand. These wearables, when strapped onto our bodies, can count the steps we take and our heartbeats per minute, then estimate the calories we burn. Their companion mobile apps let us log weight changes and our food intake to better estimate our calorie balance and see how much we are over- or under-eating. All this information is uploaded to the vendor's cloud so they can show us pretty colored graphs. So far, so good.

    Let us look at the near future. Judging by some crowdfunded projects and rising public interest, something big is coming soon. I’m talking about devices that measure health parameters at will and upload data to the vendor’s cloud. What kinds of data? There’s body temperature, blood pressure, blood oxygen levels, heart rate, respiratory rate, electrocardiogram (EKG or ECG), and others like them. Once the data is uploaded, the server algorithmically analyzes whether those values are normal or a bit off based on your personal historical data. This technology promises to know whether you’re going to become sick before you actually do or even notice that you might. Awesome tech!
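
    To give a feel for what that kind of baseline check could look like, here is a minimal sketch in Python. This is purely illustrative, not any vendor's actual algorithm; the readings, field names, and thresholds are all made up.

        # Hypothetical sketch: flag a vital-sign reading that strays far from this
        # user's own historical baseline. Thresholds and data are illustrative only.
        from statistics import mean, stdev

        def is_anomalous(history, new_reading, z_threshold=2.5):
            """Return True if new_reading sits far outside the user's historical range."""
            if len(history) < 10:                    # not enough data for a baseline yet
                return False
            baseline_mean = mean(history)
            baseline_sd = stdev(history) or 1e-6     # avoid division by zero on flat data
            z_score = abs(new_reading - baseline_mean) / baseline_sd
            return z_score > z_threshold

        # A user whose resting heart rate normally sits around 60-65 bpm
        resting_hr = [62, 60, 64, 61, 63, 65, 62, 60, 61, 63]
        print(is_anomalous(resting_hr, 64))   # False: within this user's normal range
        print(is_anomalous(resting_hr, 95))   # True: worth a closer look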

    The similarities between fitness wearables and smart medical devices are pretty obvious. Will we see an amalgamation of both on the same device at some point in the future? It's hard to say, but it's probable. I'd venture to say it's even more likely that we'll see health data correlation at a scale never seen before. From the medical point of view, mining both data sets is huge. The fitness data gives us the actions, while the medical data gives us the effects. Does a higher intake of bananas trigger a metabolic effect that makes us sick two months later? Does this only happen in certain regions? Or perhaps only in populations of a certain age? Even simpler than that: when does a flu outbreak start around the world, and how does its impact change based on your activity level? How cool is this?
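
    As a toy illustration of the kind of action-versus-effect correlation such combined data sets could enable (the question and the numbers are invented for the example), one could ask whether daily step counts track with resting heart rate:

        # Hypothetical data: one week of step counts and resting heart rates for one user.
        daily_steps = [4000, 5200, 8100, 9500, 7200, 3000, 10000]
        resting_hr  = [72,   71,   68,   66,   69,   74,   65]

        def pearson(xs, ys):
            """Pearson correlation coefficient, standard library only."""
            n = len(xs)
            mx, my = sum(xs) / n, sum(ys) / n
            cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            sx = sum((x - mx) ** 2 for x in xs) ** 0.5
            sy = sum((y - my) ** 2 for y in ys) ** 0.5
            return cov / (sx * sy)

        # Strongly negative here: more activity goes with a lower resting heart rate.
        print(round(pearson(daily_steps, resting_hr), 2))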

    Once we have this technology implemented on a massive scale, we can use it for other things too. How about enabling remote doctor house calls, where the doctor can triage patients based on their current medical data? This could lower medical costs quite a bit. These virtual visits could also serve remote locations where a patient can't physically reach a doctor in time for a diagnosis. More interesting than that, outbreak specialists could virtually be anywhere in the world, taking body measurements remotely and studying the geographical impact of a spreading disease from their home laboratories.

    Affordable DNA sequencing is another healthcare technology that might have great impact on all of us in the future. When the cost of this technology gets low enough, we can all get tested to see what kinds of illnesses we are prone to. We may also prepare by making lifestyle and dietary changes to avoid them. This can be a big change in the healthcare field.

    I'll let you, the reader, create attack scenarios for each of those. Possible attacks won't always focus on stealing money or credentials from users (to enable the much-touted 'identity theft' or even extortion). The most likely kinds of threats we expect to see – at least initially – would be focused on privacy. If we start uploading medical data to the cloud, we are uploading very confidential information to places we do not control. Privacy is clearly a concern, as would be any denial-of-service attack on remote medical services or any tampering with medical data or diagnoses at any point along the way.
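
    One obvious building block against the tampering scenario, purely as an illustration and not a description of any existing product, would be to have the device authenticate each reading before upload, for example with an HMAC over the measurement:

        # Illustrative sketch: a device signs each reading with a shared secret so the
        # backend can detect tampering. Key provisioning, rotation, and replay
        # protection are deliberately out of scope here.
        import hashlib, hmac, json

        DEVICE_KEY = b"per-device-secret"   # hypothetical, provisioned at manufacture

        def sign_reading(reading):
            payload = json.dumps(reading, sort_keys=True).encode()
            return {"payload": reading,
                    "mac": hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()}

        def verify_reading(message):
            payload = json.dumps(message["payload"], sort_keys=True).encode()
            expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
            return hmac.compare_digest(expected, message["mac"])

        msg = sign_reading({"ts": "2015-08-01T10:00:00Z", "heart_rate": 62})
        print(verify_reading(msg))            # True
        msg["payload"]["heart_rate"] = 120    # someone tampers with the stored value
        print(verify_reading(msg))            # False: tampering detected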

    It's difficult to be more concrete than this without quickly plotting a science-fiction movie. Remember that, in any new technology with a lot of moving pieces, poking the engine with a stick will cause gears to fly. These technologies are still in the works and are not even remotely ready yet. At this point, these future healthcare scenarios are purely hypothetical, but the intellectual exercise of attacking castles in the air is always interesting, isn't it?

     
    Posted in Internet of Things |



    How do you think the threat landscape will evolve in the next two years? The next three?

    One of the most exciting aspects of belonging to a research group like the Trend Micro Forward-Looking Threat Research (FTR) team is practicing the intellectual exercise that is predicting the future. We can’t know what will happen but, with the data we have in hand, we can make educated guesses. We can predict the future. This intellectual exercise is what we refer to as the “FuTuRology” project.

    “FuTuRology” started as a thought exercise that tries to tackle the future of the healthcare industry and how it might evolve to be susceptible to attacks. We realize that this exercise can also be applied to other critical industries like transportation, science and technology, agriculture, and more.

    This is the first in a series of "FuTuRology" blog posts tackling the business of predicting threats in popular technologies. We will go deeper into the healthcare subject in our next entry, but today we will focus on the prediction business itself.

    Sound good? Let's go for it.

    Prediction and “Black Swans”

    To predict future attacks, we need to know how existing technologies are going to evolve. We know that the bad guys will go wherever the users are. In the Trend Micro security predictions for 2015 and beyond, for instance, we predicted that cybercriminals would uncover more mobile vulnerabilities based on their present interest in the platform. Halfway into the year, this has proven true with the emergence of the Samsung SwiftKey, Apache Cordova, and other vulnerabilities.

    We have lots of hints and clues as to what is happening in the information security world, but sometimes, "black swans" manage to surprise us. By definition, black swans are unpredictable events so groundbreaking that, when they happen, they cause an unexpected quake in our particular framing of the world and shake its very intellectual foundations. Black swans happen every now and again and make us wonder—uselessly—how we could have predicted them (hint: we couldn't have).

    Let me give you some examples. Take the zero-day vulnerability worm attacks that plagued the net in 2003: we had known about vulnerability exploitation for many years before then, but we were still surprised by Blaster in August 2003. How about the motivation shift from home-brewed academic viruses to professional crimeware back in 2001 and 2002? We could have predicted it; one could even argue that we should have. And yet, we were all caught by surprise by the new paradigm.

    A Closer Look at Motivations

    Speaking of motivations, it's a basic fact that defense follows attack. We need to look at attackers and their motivations in order to guess what their next steps will be. These motivations have been fairly stable since 2001 and 2002: it's mostly cybercriminals trying to make a quick buck off innocent Internet users. This is pretty obvious, but we need to look a bit further. In order to make money, cyber thieves have secondary goals, and those are what we have to look at when trying to predict the future of the threat landscape.

    • Abstract Assets 
      Cybercriminals mostly target user credentials, but they also aim at other resources they can get their hands on, such as processing power, bandwidth, and stored data. Treating those abstract assets as possible criminal targets allows us to consider new attack avenues we haven't even seen yet. How about specifically targeting a victim computer's processing power? We have seen this happen with Bitcoin mining, but there are more possibilities, from crowdsourced brute-force decryption services to massive-scale prime-number derivation or even selling computing power as a service. These are just examples, of course.
    • Hacktivism 
      All this applies to moneymaking attackers, but there are also hacktivists (seeking destruction for political reasons or ethical disagreements) and attacks between organizations (trying to gain a strategic or tactical advantage over their perceived enemies). In these two cases we have cyber-terrorism and cyber-war, as the media likes to call them.
    • Undefined 
      In addition to those, there are all sorts of motivations that aren't so clear-cut. We've seen journalists using hacking techniques to get their stories, politicians attacking their opponents' infrastructure, and ruling political parties polluting the Twitter feeds of their political opposition. In short, monitoring attackers' motivations is a prediction field of its own.

    The Future of InfoSec

    The current attitude is worrying in that hacking now seems to be viewed as fair game. The moment malware writing becomes mainstream and not just underground merchandise, we can expect a new wave of malware attacks, with all sorts of new adversaries playing this increasingly muddled game. At that point, predicting where the next threat might come from will become much more important.

    Is this exciting or what?

    With “black swans” and motivations in mind, we will keep looking inside our particular crystal ball to predict the future of technologies and how they will be attacked. For the next blog post, we will look into the future of the healthcare industry and how it might evolve to be susceptible to attacks.

     
    Posted in Internet of Things |



    End-of-life fun times are coming to infosec departments everywhere—again.

    Just over a year after Windows XP's end of life, we see another body in the OS graveyard: Windows Server 2003. After July 14th, servers running this venerable OS will no longer receive security updates. That will leave you out in the cold pretty soon, given the speed at which new vulnerabilities are being published lately.

    Who'd want to be in such a position? According to a survey conducted by Spiceworks, 37% of the companies surveyed hadn't migrated to a newer OS at all. Looking further into this statistic reveals a scarier side: of those companies, 8% had no plans to migrate and 1% did not know of any plans for migration. Of the rest, the largest group (48% of all companies) had only partially migrated; just 15% had fully migrated.

    Of the companies still looking to finish their transitions, 25% planned to do so at some point after July 14th; this includes a group that planned to complete their migration only "at some point." From a security standpoint, these are not the answers we were hoping for.

    The most common reason given for delaying migration was that "the system is still working or there's no immediate need to migrate." But organizations need to wake up to the fact that most attackers are not interested in their companies per se (although some of them probably are). Most cyber attackers go for the lowest-hanging fruit by means of least-effort mass attacks, and guess what? Your fruit will be hanging much lower if you are still running an ancient, out-of-support OS after July 14th. Our primer, Pulling the Plug on Windows Server 2003: Can You Still Manage Your Legacy Systems?, discusses the risks of discontinued support.

    Migrating from Windows Server 2003 can be more challenging than migrating from Windows XP because this time we're talking about servers: computers that cannot easily be rebooted or turned off (let alone be down for a significant amount of upgrade time). Old server-side applications might not have been tested on more modern operating systems, and upgrading might not be possible at all. However, we do recommend upgrading, or moving to a virtual environment, whenever possible.

    The survey also had some bright spots. A significant percentage of the planned migrations will go to virtualized environments, which are easier to defend. The added layer of the hypervisor (the host OS in the virtual environment) can act as a moat against threats.

    So do we encourage companies to accelerate their migrations to newer server OSs? Yes, definitely! Would you want to be exposed to the new vulnerability of the day? We bet one will come shortly after July 15th. If you are having trouble sticking to that timeline, sort those problems out as soon as possible. If the survey is any indication, most companies already have the licenses or cloud providers lined up; most say it's a matter of time or budget constraints. This is the classic security conundrum: more security, or more convenience, lower cost, and speed? Bear in mind that convenience, price, and speed can be quickly offset by a bad breach or attack. In my opinion, the decision is crystal clear.

    The Vulnerability Protection in our Smart Protection Suites can provide defense against exploits, although we still strongly recommend migration to minimize the attack surface. Companies worried about the costs and operational issues when it comes to emergency patches and system downtime can get immediate protection with the help of virtual patching technologies, such as the solutions found in Trend Micro Deep Security. Virtual patching minimizes exposure gaps, protecting users of Windows Server 2003 from exploits as they migrate to a newer platform.

    Read more about how virtual patching helps companies dodge compromise in our infographic, “Dodging a Compromise: A Peek at Exposure Gaps.”

     



    The past week has seen some interesting news in the world of online security. First, the US government announced that all websites maintained by federal agencies must be using HTTPS by the end of 2016. Second, the Wikimedia Foundation (best known for Wikipedia) announced that they, too, were rolling out mandatory HTTPS for their own sites as well, with full completion expected “within a couple of weeks”.

    With large organizations moving to full-time HTTPS, and browser vendors pushing for it as well (Mozilla has gone on record that eventually, new features will only be available to sites running HTTPS), is it time for smaller site owners to make the move as well? Should end users pressure their favorite sites to adopt HTTPS? Will it really help online security?

    The short answer is yes. Site owners should strongly consider enabling HTTPS for their sites. It’s been clear to many people that in the long run, meaningful adoption of HTTPS would increase the security of the Web. Web giants such as Google and Mozilla, plus international bodies like the Internet Engineering Task Force (IETF) and the World Wide Web Consortium (W3C) have all spoken out in favor of this. Sites that are being set up now should use HTTPS from the start. That’s the direction the web is going today, and a site that’s being set up now should be built with the safety and privacy of its users in mind. For existing sites, enabling HTTPS as soon as is practical is something that owners should strongly consider, especially if sensitive data is being handled.
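
    For site owners wondering where they stand, a rough check like the following can help; it uses only the Python standard library, the hostname is a placeholder, and the HSTS header test is my own addition for illustration. It shows whether plain HTTP gets redirected and whether a Strict-Transport-Security header is sent:

        # Sketch: does this site redirect HTTP to HTTPS, and does it send HSTS?
        import http.client

        HOST = "www.example.com"   # placeholder; replace with the site you operate

        plain = http.client.HTTPConnection(HOST, timeout=10)
        plain.request("GET", "/")
        resp = plain.getresponse()
        location = resp.getheader("Location", "")
        print("HTTP redirects to HTTPS:",
              resp.status in (301, 302, 307, 308) and location.startswith("https://"))

        # HTTPSConnection verifies the server certificate by default on recent Python versions.
        secure = http.client.HTTPSConnection(HOST, timeout=10)
        secure.request("GET", "/")
        hsts = secure.getresponse().getheader("Strict-Transport-Security")
        print("HSTS header present:", hsts is not None)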

    Another thing for website administrators to consider is that, in the long run, search engines may make HTTPS usage part of their ranking algorithms. Google is already doing just that. While it is not a particularly significant "signal" for now, that could change down the road. It's a good idea for site owners to get ahead of the curve and move their users to a more secure and private web.

    HTTPS: A Good Start

    It is important to note that while the mandate to adopt HTTPS is a very important step to improve online security, the effort should not stop there. HTTPS is definitely an improvement, but it is not perfect and is far from a cure-all when it comes to online security.

    Think of it as the equivalent of sending a letter in a secure container; a safe, for instance. An attacker could replace the entire container with their own very secure but fake safe, steal the letter after it's opened (steal the decrypted data from the user's own machine), or send the user a very secure safe with a bomb inside (direct the user to a malicious site through a secure channel). HTTPS deals with the issue of security "in transit," i.e., while the data is being sent across the Internet. It doesn't solve other problems with online security, and was never meant to.

    These developments to make HTTPS the norm on the Web are definitely a step in the right direction, and it is crucial for website owners to follow suit.

     

     
    Posted in Social |



    Steganography will only become more popular, especially among the more industrious malware groups out there. For an attacker, the ability to hide stuff in plain sight is like peanut butter on chocolate: it makes their favorite thing even better.

    In the first two entries of this series, we explored which steganographic techniques are used by attackers to keep malware from being detected, and how they are used to hide command-and-control (C&C) commands, as well as executable code. This time, we’ll discuss the impact of steganography use in the future.

    But first, let me call out several things that came out in the last couple of weeks:

    1. Banking Trojan uses Pinterest as a C&C communication channel – Again, this is not purely steganography, but this is an interesting case since it shows how cybercriminals will try to use common yet unexpected traffic as communication channels. In this case, the banker client is just reading comments in certain pins and decoding them to get the botmaster’s commands.
    2. Janicab Trojan uses YouTube comments as C&C communication channel – This one is self-explanatory. It’s so similar to the previous one that they might even belong to the same criminals, except that this doesn’t have any clear ties with South Korea. Who knows, perhaps steganography is in fashion in the Far East?
    3. Operation Tropic Trooper downloads executables embedded in image files – This is similar to what we talked about in the last two blog posts of this series, with the main difference being that steganography was used in a targeted attack against organizations in very specific geographical areas. This goes to show that all sorts of attackers are using the very techniques we've been discussing here, and the more they do so, the more popular these techniques get. You can get more information on Operation Tropic Trooper in our white paper; a small sketch of one cheap check for this kind of embedding follows below.
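
    As a tiny example of the kind of check a researcher might run against image files, here is a sketch that looks only for crudely appended payloads in PNGs; real embedding techniques can be far subtler, so treat this as an illustration rather than a detection tool.

        # Sketch: does a PNG carry extra bytes after its final IEND chunk? Appended
        # payloads are only one embedding trick, but they are cheap to test for.
        import sys

        PNG_MAGIC = b"\x89PNG\r\n\x1a\n"

        def trailing_data(path):
            data = open(path, "rb").read()
            if not data.startswith(PNG_MAGIC):
                return None                    # not a PNG, skip it
            pos = data.rfind(b"IEND")
            if pos == -1:
                return None                    # malformed file
            end = pos + 4 + 4                  # IEND chunk type plus its 4-byte CRC
            return data[end:] or None          # anything beyond this point is suspicious

        if __name__ == "__main__":
            extra = trailing_data(sys.argv[1])
            if extra:
                print("Suspicious: %d bytes found after IEND" % len(extra))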

    So when is steganography especially useful for attackers?

    The full impact of steganography is most felt in targeted attacks. By definition, these kinds of attacks aren't widespread; their specificity means it may take researchers some time to discover any new techniques they use. In a sense, we can think of steganography in a targeted attack as a zero-day vulnerability: the longer it stays undiscovered, the more useful it is to the attacker. In the case of an actual zero-day vulnerability, it's also more damaging to the victim.

    Until then, what can we researchers do to try and find new unknown steganographic attacks in the wild?

    The main weakness of steganography in malware is that the “client” software—the part that looks for the data that’s being concealed in unexpected places—is public. And with cybercriminals trying to spread their malware far and wide, it will eventually fall into the hands of good guys like me and other researchers. At that point, it is only a matter of whether or not the researcher knows where and how to look, decode, and decrypt the data. In essence, steganography only helps the malware stay undetected until security researchers get a sample.

    By their very design, these techniques hide things in unexpected places. Therefore, we researchers must look in data files and data streams for anything that's out of place. Double-checking for protocol messages that are longer than usual, looking for extraneous encoded data, or even applying machine-learning rules to network trace datasets can all be starting points.
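
    To make the "longer than usual" idea concrete, here is a deliberately simple sketch that flags responses whose body size is far above the norm for the same content type; the record format, the sample traffic, and the threshold are assumptions made for the example.

        # Sketch: flag records whose body length is well above the median for their
        # content type. Input is assumed to be pre-parsed (content_type, body_length).
        from collections import defaultdict
        from statistics import median

        def flag_outliers(records, factor=5):
            by_type = defaultdict(list)
            for ctype, length in records:
                by_type[ctype].append(length)
            baseline = {ctype: median(sizes) for ctype, sizes in by_type.items()}
            for ctype, length in records:
                if baseline[ctype] > 0 and length > factor * baseline[ctype]:
                    yield ctype, length

        traffic = [("image/png", 48000), ("image/png", 52000), ("image/png", 51000),
                   ("image/png", 390000),   # one image several times larger than its peers
                   ("text/html", 14000), ("text/html", 15500)]

        for ctype, length in flag_outliers(traffic):
            print("worth a second look:", ctype, length)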

    None of this is easy, and we may even receive the malware sample before we spot the hidden data stream. However, if we never try, we will never know. This is my bottom line: searching through regular data for irregularities and anomalies is the only way to beat the attackers. It may not always seem doable (or even worth our time), but it's certainly a good challenge to tackle. Improvements in available algorithms may help reduce the burden down the road.

    You can check the previous 2 parts of this blog series here:

     
    Posted in Malware |


     
