Fighting Advanced Persistent Threat: Detection & Remediation

Francis Cianfrocca, a leading expert on Advanced Persistent Threats, continues his overview of the issues following his first article on the topic in the InfoSec Perception blog. What follows is Mr Cianfrocca’s work with minor edits from M. E. Kabay.

Advanced persistent threats (APTs) escalate privileges and operate through application accesses that appear entirely normal to network monitors: the network source addresses are expected, the protocol syntax is correct, and the user authentication and authorization levels are valid. Both detection and remediation of these attacks are critical business objectives; whether the drivers are regulatory or operational, data privacy and application security must be maintained while the flow of data continues without interruption.

Detection

To detect APT behaviour reliably as it happens, network traffic must be analysed at the stream level. The most effective approach is continuous inline analysis of all traffic on the network links that reach critical servers and information resources. This approach is equally suitable for enterprise networks and for industrial supervisory control and data acquisition (SCADA) networks.
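
To make the stream-level idea concrete, here is a minimal Python sketch that isolates the elements of a single HTTP request from an already-reassembled TCP stream. It assumes reassembly is handled upstream by a capture engine; the function and field names are illustrative, not taken from any particular product.

    # Minimal sketch of Layer-7 stream inspection. Assumes the TCP stream
    # has already been reassembled upstream; all names are illustrative.
    from urllib.parse import urlsplit, parse_qs

    def inspect_http_request(stream: bytes) -> dict:
        """Isolate the protocol elements of one HTTP request, especially
        the client-supplied inputs that attack payloads typically ride in."""
        head, _, body = stream.partition(b"\r\n\r\n")
        request_line, *header_lines = head.decode("latin-1").split("\r\n")
        method, target, version = request_line.split(" ", 2)
        headers = dict(h.split(": ", 1) for h in header_lines if ": " in h)
        return {
            "method": method,
            "version": version,
            "path": urlsplit(target).path,
            "query_params": parse_qs(urlsplit(target).query),  # client input
            "headers": headers,
            "body": body,  # also client input; parse per Content-Type
        }

    raw = b"GET /login?user=alice&next=/admin HTTP/1.1\r\nHost: app.example\r\n\r\n"
    print(inspect_http_request(raw))

A real inspector would handle pipelining, chunked encoding, and many protocols beyond HTTP; the point is that analysis operates on the reassembled stream, not on individual packets.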

The Bayshore Networks white paper “Advanced Persistent Threat: From Detection to Remediation” (available with simple registration) discusses three essential elements to consider for mounting an effective APT defence:

  1. Establish a pervasive network presence (sometimes called a “secure network fabric”), which requires that a protocol-inspection capability be present on all links in a complex application structure.
  2. Conduct deep protocol analysis, which requires a Layer-7 analysis of protocol streams, not just packet analysis. The stream inspectors must be able to isolate all elements of a data protocol, especially those containing data inputs from clients.
  3. Incorporate heuristic baselining. The application inspection system must construct a rich, multidimensional baseline of each application’s behavioural patterns and store it in a database that can be continuously extended. The database is then used to detect anomalous behaviour in real time; such anomalies are often indicative of APT attacks in progress (a minimal sketch of this element follows the list).
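
As a sketch of the third element, the class below maintains a running behavioural baseline per application and feature, using Welford’s online algorithm for mean and variance, and scores new observations against it. The feature names, warm-up count, and in-memory storage (rather than the database described above) are simplifying assumptions.

    # Minimal sketch of heuristic baselining. A production system would
    # persist these statistics in a database, as described above; the
    # feature names and warm-up threshold are illustrative assumptions.
    import math
    from collections import defaultdict

    class Baseline:
        """Running mean/variance per (application, feature), continuously
        added to via Welford's online algorithm."""
        def __init__(self):
            self.stats = defaultdict(lambda: [0, 0.0, 0.0])  # n, mean, M2

        def update(self, app, feature, value):
            s = self.stats[(app, feature)]
            s[0] += 1
            delta = value - s[1]
            s[1] += delta / s[0]
            s[2] += delta * (value - s[1])

        def zscore(self, app, feature, value):
            n, mean, m2 = self.stats[(app, feature)]
            if n < 30:                # not enough history to judge yet
                return 0.0
            std = math.sqrt(m2 / (n - 1)) or 1e-9
            return abs(value - mean) / std

    b = Baseline()
    for size in (200, 210, 190, 205) * 10:       # normal request body sizes
        b.update("billing-api", "body_bytes", size)
    print(b.zscore("billing-api", "body_bytes", 9000))  # large => anomalous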

The biggest operational challenge in baselining a large number of applications is the need for automation. Various tools can help, including certain open-source software [1] and enterprise applications [2], most of which ship with default rule sets and best practices. Some of them have potential problems, however, such as reporting false positives. Architectural considerations and requirements must also be examined closely: support for multiple protocols, dynamic routing, scalability, manageability, and cost-effectiveness.

Making It Work In Practice

Applications contain errors and security vulnerabilities that attackers can leverage to compromise other applications. The threat can be reduced by manually scanning and remediating problems at the source-code level in each application, but this is expensive and time-consuming at best. At worst, it is impossible, whether because the application source code is unavailable or because no technicians are available to do the work. Application scans and audits provide no overall security assurance unless all the applications are scanned and audited regularly.

The most effective defence against APT is to collect heuristic profile data on all applications, focusing alert-response activities on the applications that are most valuable or that carry the highest security sensitivity or regulatory exposure. The objective is to provide measurable improvements in the operational availability of applications by inhibiting the attacks that compromise their integrity.
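
One simple way to realize that focus, sketched below, is to weight each application’s anomaly scores by an operator-assigned criticality, so that alert-response effort lands on the most valuable or most regulated systems first. The application names and weights are hypothetical.

    # Illustrative sketch of risk-weighted alert triage; the application
    # names and criticality weights are hypothetical operator assignments.
    CRITICALITY = {"payments": 1.0, "hr-portal": 0.7, "wiki": 0.2}

    def triage(alerts):
        """alerts: iterable of (app, anomaly_score); highest risk first."""
        return sorted(alerts,
                      key=lambda a: a[1] * CRITICALITY.get(a[0], 0.5),
                      reverse=True)

    print(triage([("wiki", 8.0), ("payments", 3.5), ("hr-portal", 4.0)]))
    # -> payments ranks first despite the wiki's higher raw anomaly score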

Application behaviours exhibited by determined persistent attackers differ enough from expected behaviours to be detectable with a high degree of confidence by comparison against heuristic baselines. Detection confidence is particularly high for the behaviours associated with the footprinting and scanning phases of APT attacks. Executing this analysis in real time has proven to be generally non-disruptive to application performance.
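
Footprinting and scanning leave especially clear statistical fingerprints. The sketch below flags a client that touches an unusually large number of distinct URLs within a short window; the window length and threshold are illustrative and would in practice be derived from the heuristic baseline.

    # Sketch of footprint/scan detection. Window and threshold values are
    # illustrative; in practice they would come from the learned baseline.
    import time
    from collections import defaultdict, deque

    WINDOW_SECONDS = 60
    DISTINCT_PATH_LIMIT = 50      # ceiling observed for normal clients

    events = defaultdict(deque)   # client -> deque of (timestamp, path)

    def observe(client, path, now=None):
        """Return True if the client's distinct-path rate looks like a scan."""
        now = time.time() if now is None else now
        q = events[client]
        q.append((now, path))
        while q and now - q[0][0] > WINDOW_SECONDS:   # expire old events
            q.popleft()
        return len({p for _, p in q}) > DISTINCT_PATH_LIMIT

    # A prober touching 60 distinct URLs in 30 seconds trips the detector:
    print(any(observe("10.0.0.7", "/probe/%d" % i, now=i * 0.5)
              for i in range(60)))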

In a private cloud environment, for example, the range of possible vulnerability probes is multiplied by the topological proximity of applications, combined with the widespread use of common authentication and authorization platforms such as Microsoft’s Active Directory™ and CA’s SiteMinder™ (acquired with Netegrity). Proximity makes it easy for an attacker to reach many systems from a small number of compromised hosts, and common authentication means that stolen or hijacked privileges can often get the attacker into those systems.

Real-time detection of behavioural anomalies can readily be used to block or fuzz these behaviours, thus inhibiting or retarding attacks on applications. The practical limitations of this approach lie in fine-tuning the automatically collected behavioural metrics: keeping bad behaviours out of the learned baseline (reducing false negatives) and completing the learning of correct behaviours (reducing false positives).
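
The sketch below, which assumes an anomaly score like the z-score in the baselining sketch above, illustrates one guard against both failure modes: observations that already look hostile are never folded into the baseline, clean ones extend it, and ambiguous ones are held for review. The two thresholds are illustrative.

    # Sketch of guarded baseline learning, reusing the Baseline class from
    # the earlier sketch. The two thresholds are illustrative assumptions.
    LEARN_BELOW = 2.0    # clean enough to fold into the baseline
    ALERT_ABOVE = 4.0    # hostile enough to block/alert; never learn

    def handle(baseline, app, feature, value):
        score = baseline.zscore(app, feature, value)
        if score >= ALERT_ABOVE:
            return "alert"            # candidate APT behaviour in progress
        if score < LEARN_BELOW:
            baseline.update(app, feature, value)   # extend the normal model
            return "learned"
        return "quarantined"          # ambiguous: hold for analyst review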

Information-assurance policy itself must be heuristically based and easily extended by both manual and automatic processes. Traditional and non-traditional security methods, including packet filters, intrusion detection system (IDS) products, and next-generation firewalls, provide significant value in high-end networks. But they do not, and cannot, provide the full range of information-assurance features needed to address today’s security challenges.

ADDITIONAL NOTES:

[1] See, for example, KoreLogic’s FTimes and WebJob, and the CFEngine Community edition.

[2] E.g., Symantec Automation Suite, Tripwire Enterprise, and BMC’s offerings, to name a few.

* * *

Francis Cianfrocca is Founder and CEO of Bayshore Networks, LLC, which specializes in high-end IA products for a wide range of applications. Mr Cianfrocca is a noted expert in the fields of computer-language design, compiler implementation, network communications, and large-scale distributed application architectures. He has worked for a number of different companies, either directly or as a consultant, including Bank of New York, Gupta, McDonnell-Douglas and New York Life. A very strong advocate of open-source software development, he created several widely used open-source projects, including the Ruby Net/LDAP library and the EventMachine high-speed network-event management system. He is also a talented musician who attended the Eastman School of Music in the Music History department and studied for his Master’s Degree in Orchestral Conducting at the University of Michigan. Mr Cianfrocca is a member of the 2000 class of Henry Crown Fellows at the Aspen Institute.

* * *

Copyright © 2012 Francis Cianfrocca & M. E. Kabay. All rights reserved.

Permission is hereby granted to InfoSec Reviews to post this article on the InfoSec Perception Web site in accordance with the terms of the Agreement in force between InfoSec Reviews and M. E. Kabay.