The public sector has always been a prime target for cyberattacks, which makes sense considering the volume and type of information that it routinely deals with. Over the years, governments have become increasingly aware of the risks associated with insider misuse, outdated software and new advanced persistent threats.
Breaches and shortfalls still happen, however. The U.S. Department of Veterans Affairs (VA), for example, recently failed its cybersecurity audit for the sixteenth consecutive year, although it reduced the number of vulnerabilities identified in 2013 (approximately 6,000) by 21 percent. The U.S. State Department also shut down its unclassified email system in mid-November 2014 after observing “activity of concern.”
Neither of these events is a watershed moment in government. All the same, they speak to the challenges that agencies continue to face in staying prepared for rapidly evolving risk environments. As KPMG forensic technology executive Stan Gallo noted in November 2014 remarks, the landscape has changed significantly in just the last six years, so much so that a “set and forget” approach is no longer viable.
From one-off exercises to continuous monitoring: Why governments must make the transition
Governments around the world are on high alert about cybersecurity. In Australia, for example, cybercrime costs almost $1 billion annually and the Australian Signals Directorate has reported that significant incidents rose 37 percent between 2012 and 2013. In the U.S., several Cabinet departments have been targeted over the last few years by cybercriminals using automated scanners and SQL injection.
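SQL injection attacks like the ones mentioned above succeed when attacker-controlled input is spliced directly into query text. As an illustrative sketch (not tied to any specific incident; the table, data and payload are invented for the demo), the following Python snippet contrasts a vulnerable query with a parameterized one using the standard-library sqlite3 module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name):
    # Vulnerable: the input is spliced into the SQL text, so a payload
    # like "' OR '1'='1" changes the query's meaning and dumps the table.
    query = "SELECT name, role FROM users WHERE name = '%s'" % name
    return conn.execute(query).fetchall()

def find_user_safe(name):
    # Parameterized: the driver binds the value separately, so the
    # payload is treated as literal data and matches nothing.
    return conn.execute(
        "SELECT name, role FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # returns every row
print(find_user_safe(payload))    # returns []
```

Automated scanners probe for exactly the first pattern, which is why parameterized queries are a standard baseline defense.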
What is the solution? Cybersecurity today takes a multi-pronged yet coherent strategy. Traditional tools such as antivirus software and spam filters must be supplemented as needed with network security tools that can discover and close vulnerabilities like the ones that world governments confront all the time.
In a 2013 white paper, “Continuous Monitoring in a Virtual Environment,” Trend Micro executives J.D. Sherry and Tom Kellermann posited that continuous monitoring was nothing less than the future of cybersecurity. More specifically, they outlined how it would enable government agencies to become more proactive in how they addressed threats, a crucial shift considering the pressure that these bodies are under from both the inside and outside.
“Rather than endorsing security models that drive us to construct additional defenses and filters that have an increasingly slim chance of stopping advanced threats, the focus within IT development and security must shift to emphasize more aggressive and proactive self-assessment,” wrote Sherry and Kellermann. “Continuous monitoring is the first step in addressing the use of intelligent metrics to empower greater cyber-situational awareness within our government agencies.”
In practice, continuous monitoring can provide substantial gains for governments needing to fend off attacks that may go after email or other core applications. Governments and large private sector enterprises, as identified in a Trend Micro TrendLabs report, are the most typical targets for APTs, so any tool that can connect the dots between unusual activity (e.g., what happened at the State Department) and a possible APT in the system is immensely beneficial.
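Connecting the dots in practice means correlating individually minor log events into one picture. The sketch below is hypothetical — the log records, event names, window size and threshold are all invented for illustration — but it shows the basic idea of flagging a host that accumulates several kinds of suspicious activity within a short time:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical parsed log records: (timestamp, host, event_type)
events = [
    (datetime(2014, 11, 16, 2, 0), "mail-gw-1", "failed_login"),
    (datetime(2014, 11, 16, 2, 5), "mail-gw-1", "new_admin_account"),
    (datetime(2014, 11, 16, 2, 9), "mail-gw-1", "large_outbound_transfer"),
    (datetime(2014, 11, 16, 9, 0), "file-srv-2", "failed_login"),
]

SUSPICIOUS = {"failed_login", "new_admin_account", "large_outbound_transfer"}
WINDOW = timedelta(minutes=15)

def correlate(events, threshold=3):
    """Flag hosts showing >= threshold distinct suspicious event types
    inside one sliding window -- a single event is likely noise, but
    several kinds clustered in time resemble an intrusion in progress."""
    flagged = set()
    by_host = defaultdict(list)
    for ts, host, kind in sorted(events):
        if kind in SUSPICIOUS:
            by_host[host].append((ts, kind))
    for host, recs in by_host.items():
        for i, (start, _) in enumerate(recs):
            kinds = {k for ts, k in recs[i:] if ts - start <= WINDOW}
            if len(kinds) >= threshold:
                flagged.add(host)
    return flagged

print(correlate(events))  # {'mail-gw-1'}
```

Real continuous-monitoring platforms apply the same principle at far larger scale, with richer event taxonomies and tuned thresholds.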
How APTs and continuous monitoring tools match up
APTs can take many forms, but they are united by a fundamental design to infiltrate networks without any fanfare – say, through a seemingly innocuous email that actually carries a malicious attachment someone downloads – and remain hidden for long periods of time. They are low and slow, sacrificing short-term splash for long-term surveillance.
Considering the significant capabilities of the nation-state actors behind breakthroughs such as the Stuxnet worm and possibly the Regin cyberespionage software, governments have to take the APT prospect seriously, especially since incidents at the State Department, U.S. Postal Service and the National Oceanic and Atmospheric Administration may have involved state-sponsored action. APTs are optimized to evolve quickly and stay out of sight, meaning that security teams can’t stand pat and wait for the APT to come out in the open. Continuous monitoring is the ideal way to stay vigilant and prepared.
The problem with APTs is essentially that they need to get past network security solutions only once to potentially cause substantial harm, whereas defenses must be rock-solid around the clock. It is the age-old offense/defense divide, but it doesn’t mean that shielding government networks from APTs is impossible – only that cybersecurity has to evolve for a new reality.
Even though it has struggled to pass its audits for some time now, the VA exemplifies the shift that is underway in how agencies keep tabs on and respond to threats. The body’s Continuous Readiness in Information Security Program is a promising step toward endpoint security that is not a one-off and instead evolves, through employee education, to keep pace with modern threats.
Moreover, the VA’s vision is realistic. It takes into account how tension can arise between securing core data and enabling an environment in which everyone can work efficiently without having to jump through numerous hoops just to complete basic tasks. Security shouldn’t be sacrificed for convenience, but knowing how to prioritize defenses and allocate limited resources is crucial.
To this end, continuous monitoring makes it possible to rank risk events and ultimately find the signal within the noise of data. Techniques such as deep packet inspection, log inspection and integrity monitoring also help agencies stay ahead of the APTs that continue to challenge them in cybersecurity. Agencies have options, but a sound strategy is required.
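As a rough illustration of the integrity-monitoring idea, the sketch below hashes a set of monitored files into a baseline and later reports anything changed or missing. It is a simplified assumption-laden demo (files small enough to hash whole, a temp file standing in for a real monitored config), not a production tool:

```python
import hashlib
import tempfile
from pathlib import Path

def snapshot(paths):
    """Record a SHA-256 baseline digest for each monitored file."""
    return {p: hashlib.sha256(Path(p).read_bytes()).hexdigest() for p in paths}

def check(baseline):
    """Re-hash each file and report anything changed or missing."""
    alerts = []
    for path, digest in baseline.items():
        f = Path(path)
        if not f.exists():
            alerts.append((path, "missing"))
        elif hashlib.sha256(f.read_bytes()).hexdigest() != digest:
            alerts.append((path, "modified"))
    return alerts

# Demo: a temp file stands in for a monitored configuration file.
tmp = Path(tempfile.mkdtemp()) / "app.conf"
tmp.write_text("listen = 443\n")
baseline = snapshot([str(tmp)])
print(check(baseline))             # [] -- nothing changed yet
tmp.write_text("listen = 8080\n")  # simulate tampering
print(check(baseline))             # reports the file as modified
```

Commercial integrity-monitoring tools extend this pattern with kernel hooks, permission and registry tracking, and alert routing, but the baseline-then-recheck loop is the core of the technique.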