Saturday, September 20, 2014

Getting Inside the Threat Actor’s OODA Loop to Stop Undetectable TTPs

In my last Science of Security post I shared information on the Science of Security discipline and the core themes being researched within it. I recommended in that post that organizations consider hiring cyber security scientists to help develop a strong, rigorous scientific foundation for cyber security while providing structure and organization to a broad-based body of knowledge in the domain.

In this post I’ll walk us through a 2009 experiment I conducted in the Science of Security core theme of Attack Analysis. While this experiment is a good 5 years old and cyber tradecraft has moved forward since then, it provides a good example of applying the scientific method to the hard cyber security problems that many organizations face when dealing with the escalating threats in cyberspace. As we progress through the different steps of the scientific method I’ll align each step in the example to other common processes familiar in both cyber and business operations such as the OODA Loop, PDCA cycle (ISO-9001), Intelligence cycle, and Intelligence Feedback loop to help foster greater understanding. I've also created a graphic to help visual learners see how the various cycles align.


What is the scientific method?

The scientific method is the process by which science is carried out. Science builds on previous knowledge, and this can lead to improvements and refinements over time. The process starts with a question. Scientists then formulate a general theory about what the answer is, conduct research to form a more specific hypothesis with predictions of what we think will happen, develop procedures to test the hypothesis, conduct an experiment to prove / disprove the hypothesis, measure the results (observations vs predictions), and complete the process by publishing the results to formally capture the knowledge.

Question – What’s the hard problem?

Most of the organizations I’ve worked with identify hard problems or challenges the organization is facing and where it would like help finding a solution. The question for our 2009 example was asked by a senior executive in response to “What keeps you up at night?”

How do I defend against threat actors who are conducting active cyber operations using TTPs that are currently undetected by the commercial cyber security solutions providing defense in depth in my enterprise? How can I increase detection and prevent more of these attacks from being successful?

I’ve heard this question stated a few different ways by a few different leaders over the years, and it’s really a challenge we all face. As a scientist it’s my job to build and expand the knowledge in the domain. I’m going to develop a general theory based on what is known and then conduct research to make more specific predictions with a hypothesis that can be tested by experiment. The results can be shared with other scientists so they can validate them, or with engineers who can leverage the new knowledge to develop new solutions. This helps organizations understand where they should invest their limited time and resources to get the best return on investment. As the old saying goes, knowledge is power.

Theory

When developing a theory, you have to really examine and dissect the question. A theory has been extensively tested and is generally accepted, while a hypothesis is a speculative guess that has yet to be tested. My theory for this hard problem was the following:

To defend against threat actors who are conducting active cyber operations using TTPs that are currently undetected by commercial cyber security solutions, the defender has to get inside the threat actor’s attack cycle or OODA loop (between the time the threat actor starts an attack and the time they target the defender’s organization) in order to identify and develop countermeasures that will render the attack unsuccessful and give the defender an advantage.

Whether you’ve spent time in military operations or business operations, it’s widely accepted that if you spin your operations tempo faster than your competitor or adversary, you’ll come out ahead. This is the basis of the decision cycle developed by USAF Colonel John Boyd: the Observe-Orient-Decide-Act (OODA) loop.


The OODA loop focuses on strategic requirements and works well with the Plan-Do-Check-Act (PDCA) cycle, which focuses on operational or tactical requirements; together they enable organizations to run adaptive (cyber defense) cycles. The OODA loop also serves to explain the nature of surprise and shaping operations in a way that unifies Gestalt psychology, cognitive science, and game theory. We want to shape the threat actor’s active operation to the defender’s advantage. The threat actor will be surprised that the TTP they worked so hard to make undetectable was not successful against the defending organization, which in turn can generate doubt and confusion for the threat actor.

Research

The research phase of the scientific method is where scientists identify sources of raw data and information needed to form an explanatory hypothesis with predictions, identify resource requirements, and research what procedures need to be followed to test the hypothesis.

The theory here is that the defending organization needs to observe that an unknown threat actor has started an attack cycle, orient on the threat actor’s undetectable TTPs, decide which observable patterns are indicators for the TTP, and act to defend against the TTP. The research needs to figure out how to take this general theory and form a more specific, testable hypothesis.

The scientist has to also consider the human elements at play in the experiment such as the level of maturity of the defending organization and the level of maturity of the threat actors.

The defender had implemented defense in depth, conducted user training, carried out routine cyber hygiene, and had analysts focused on cybercrime and advanced persistent threats (APT) in addition to traditional security operations. In this case the defending organization was in the upper half of the maturity model.

The threat actors are unknown so the level of maturity will be assessed based on the leveraged TTP and level of TTP detection by cyber security solutions. For this hypothesis and experiment we focused only on threat actor TTPs that had little or no detection (< 10%) by security solutions. Threat actors were assessed to be in the mid to upper half of the maturity model if they were able to render their TTP to have little or no detection during active cyber operations.

The scientist has to understand which of the threat actor’s tactics, techniques, and procedures (TTPs) are observable; in other words, the TTP data the defender can see with their own eyes or via technology-based sensors in the cyber terrain. TTPs fall into two general categories: attack patterns and malware.

Attack patterns are blueprints for the process or method used by the threat actor when conducting social engineering attacks, supply chain attacks, communications attacks, software attacks, physical security attacks, or hardware attacks. CAPEC is an example of a free, publicly available, community-developed list of common attack patterns. Threat intelligence analysts identify which attack patterns are present when analyzing a threat actor’s full attack cycle.

https://capec.mitre.org/

Mature defending organizations also leverage attack patterns and common attack pattern ID numbers in security operations and security engineering, for example when conducting penetration testing and secure code review. Attack patterns provide a standardized way to capture knowledge about how specific parts of an attack are designed and executed, present the attacker’s perspective on the problem and the solution, and offer guidance on ways to mitigate the attack’s effectiveness. Attack patterns focus on the human threat actor’s TTPs, whereas malware is the threat actor’s configured and deployed technological TTP.

Malware is the type of TTP we’ll focus on in this example because malware analysis data contains the observable information the defender needs to orient on in order to decide on the best action to take to defend against the TTP. Remember, we don’t know who the threat actors are, how they are delivering the malware TTP to their targeted victims, or which specific organizations are being targeted by the threat actors.

One of the predictions we’ll test is whether there actually are unknown attacks by unknown threat actors that the defender can discover, and whether countermeasures can be developed before the defender’s organization gets hit. Since we don’t know who the threat actor(s) is, how they are delivering the TTP, or who the targeted victims are, the defending organization has no direct indications and warnings that an attack is even happening against other organizations or that their organization will be targeted. The scientist has to look at all the components of the attack during the research phase to discover possible sources of information that can be used.

Every cyber attack has four basic components: threat actors, TTPs, cyber terrain, and defenders. Cyber security scientists have to analyze each of these, to include getting inside the heads of both the threat actor and the defender. We’ve already identified that we don’t know who the threat actor(s) is, what TTP they are using, what cyber terrain they are using to deliver the TTP, or which defender organizations are being targeted. An assumption we are testing is that threat actors have successfully delivered the TTP to unknown defender organizations, so we need to focus on possible indications and warnings we could discover based on the typical courses of action carried out by those defending organizations.

We need to get into the mind of the defender. When a defender discovers malware or a suspicious file inside their organization through triage, an alert user reporting a suspicious email, or during incident response, the defender wants to analyze the suspicious file or malware to understand the observable behaviors and actions it performs. If the defender’s organization doesn’t have internal malware analysis tools, the defender will normally leverage community tools in the public domain such as VirusTotal, ThreatExpert, or other similar online solutions. This presented an opportunity for us to discover active attacks we were previously unaware of, based on what others were seeing, before the attack targeted our organization.

Remember, this experiment example was from 2009; threat intelligence and threat sharing didn’t really emerge in the mainstream cyber security community until around 2011-2012. While there was limited sharing of very basic threat intelligence, it happened primarily in ad hoc silos within industry verticals, and those sources of information weren’t considered during this research, nor was commercially available threat data like block lists. That data was already being used by the defending organization, so the indicators produced during this research experiment would supplement any existing threat sharing and commercially available threat data.

For this research experiment we would focus on malware analysis reports from the public domain that were produced during the previous 24 hours and that 1) had little (< 10%) or no antivirus detection and 2) contained observable command and control communications. Malware analysis information meeting these conditions represents the “undetectable TTPs” leveraged by unknown threat actors who have targeted unknown defending organizations.
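
To make this selection step concrete, here is a minimal sketch in Python of how such a daily filter could look. It assumes the public reports have already been pulled into local records with hypothetical field names (analyzed_at, detections, engines, c2_hosts); real reports from online analysis sites would need to be mapped into something similar.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical local records built from public malware analysis reports.
reports = [
    {
        "sample_sha1": "da39a3ee5e6b4b0d3255bfef95601890afd80709",
        "analyzed_at": datetime.now(timezone.utc) - timedelta(hours=3),
        "detections": 2,                                   # engines that flagged the sample
        "engines": 40,                                     # engines that scanned the sample
        "c2_hosts": ["203.0.113.7", "bad.example.net"],    # observable command and control
    },
    # ... more reports ...
]

def is_candidate_ttp(report, max_ratio=0.10, window_hours=24):
    """Little (< 10%) or no AV detection, observable C2, analyzed in the last 24 hours."""
    recent = datetime.now(timezone.utc) - report["analyzed_at"] <= timedelta(hours=window_hours)
    low_detection = report["detections"] / report["engines"] < max_ratio
    return recent and low_detection and bool(report["c2_hosts"])

candidates = [r for r in reports if is_candidate_ttp(r)]
for r in candidates:
    print(r["sample_sha1"], r["c2_hosts"])
```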

Inside the malware analysis data, command and control communications were selected to be the observable indicators for the threat actor’s TTP (the malware). This data was selected because this is the phase of the attack cycle that is observable within the defending organization’s cyber terrain just after the TTP was delivered and successfully installed on an internal asset as it attempts to notify the threat actor it successfully breached the targeted organization. If we can stop the TTP at this last line of defense, the threat actor will not know where specifically the threat was stopped in the attack cycle after it was delivered to the defender’s organization.

Now that we’ve identified the sources of information needed and the type of information available to us, we can solidify our hypothesis, develop our procedures, and identify resource requirements. While the research phase only required the cyber security scientist, conducting affordable experiments in real-world cyber operations requires leadership buy-in and approval of additional resources. For this effort, the following resources were identified and approved:

3 hours per day for a Threat Intel Analyst to analyze the malware analysis data from the previous 24 hours.

1 hour per day for planning intelligence driven courses of action (COA)

3 hours per day for implementing intelligence driven courses of action for mitigation and countermeasures.

1 hour per day for the lead scientist to oversee and monitor the ongoing experiment and overall research project.

The total human resources needed equaled one full-time employee (3 + 1 + 3 + 1 = 8 hours per day), divided across the different roles, for the 6 months of the experiment. The lead scientist was also covered full time during the research, measurement, and conclusion phases of the scientific method that occurred before and after the 6-month experiment.

No special hardware or software was needed to carry out the experiment.

The research phase of the scientific process aligns to the collection and processing phases of the intelligence cycle. The scientific process needed to develop sources of information to collect and process to help answer the question, just as the intelligence cycle develops sources of information to collect and process in response to a request for information (RFI) or ongoing information needs. This also aligns to the observe phase of the OODA loop, where the defender must observe the information coming from these sources in order to analyze, or orient on, the observations.

Hypothesis

A hypothesis is an educated guess about how things work. Most of the time a hypothesis is written like this: "If _____[I do this] _____, then _____[this]_____ will happen." The hypothesis should be something we can actually test where we can measure if the predictions match the real world observables. This phase of the scientific process is aligned to the intelligence cycle analysis phase and the orientation phase of the OODA loop. Scientists and intelligence analysts want to predict or forecast what will happen based on their analysis.

For this research experiment we can hypothesize the following:

If we monitor online malware analysis sites then we can identify TTPs containing command and control communications where the TTP had little (< 10%) or no antivirus detections that had been submitted and analyzed in the past 24 hours.

If we can identify TTPs containing command and control communications with little (< 10%) or no antivirus detection that were submitted and analyzed during the previous 24 hours then we can produce a daily threat intelligence product on the TTP with the command and control communications as observable indicators and a recommended course of action the defending organization should take to mitigate the threat actor’s TTP.

If we can produce and disseminate the daily threat intelligence report to security operations then we can plan and implement courses of action based on the intelligence.

If we can implement the recommended course of action using the observable indicators provided from the threat intelligence to block the TTP’s command and control communications then we can prevent the attack from being successful since the threat actor doesn’t know where in the attack cycle the defender stopped the TTP.

If we can detect the blocked command and control observable indicators for the TTP within the defender’s cyber terrain then we can identify which internal asset the command and control information is coming from to contain the incident, and through the internal investigation and incident response process we can build out a complete attack cycle for the threat actor’s active cyber operation from delivery (when it first enters the defender’s cyber terrain) to command and control (when it exits the defender’s cyber terrain).

If we can build out the entire attack cycle from delivery through command and control we can then pivot on the entire set of observables directly associated with the threat actor’s use of the external cyber terrain and operationally configured TTP to determine if we have seen attacks by this threat actor before and if the attack is part of a larger campaign.

If we can build out the entire attack cycle from delivery through command and control then we can develop observable indicators for each phase of the attack to enable mitigations and countermeasures to be developed to stop the attack earlier in the attack cycle as it enters and moves through the defender’s cyber terrain.
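
As a simple illustration of those last two predictions, the sketch below (Python) groups observables recovered during incident response by attack-cycle phase so that indicators, and therefore countermeasures, can be developed for each phase rather than only at command and control. The phase names and observable fields are illustrative and not taken from any particular standard.

```python
from collections import defaultdict

# Observables recovered during incident response, tagged with the attack-cycle
# phase in which they were seen (illustrative values only).
observables = [
    {"phase": "delivery",        "type": "sender_address",  "value": "invoice@bad.example.net"},
    {"phase": "delivery",        "type": "attachment_sha1", "value": "da39a3ee5e6b4b0d3255bfef95601890afd80709"},
    {"phase": "installation",    "type": "file_path",       "value": r"C:\Users\Public\svch0st.exe"},
    {"phase": "command_control", "type": "domain",          "value": "bad.example.net"},
]

def indicators_by_phase(observables):
    """Group observables into per-phase indicator sets so mitigations can be
    placed earlier in the attack cycle, not just at the last line of defense."""
    phases = defaultdict(list)
    for obs in observables:
        phases[obs["phase"]].append((obs["type"], obs["value"]))
    return dict(phases)

for phase, indicators in indicators_by_phase(observables).items():
    print(phase, indicators)
```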

Procedures

The procedure phase of the scientific method in this example covered the specific procedures that needed to be carried out during the experiment: in this case, the procedures for the threat intelligence analyst to analyze, produce, and disseminate the daily threat intelligence report, as well as the procedures to be followed for planning and implementing the intelligence-driven courses of action. The follow-on actions for investigations and incident response were not included since the organization already had robust procedures in place for those activities and the primary focus of this experiment was on the discovery and mitigation of unknown attacks by unknown threat actors.
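
A rough sketch of the daily production and dissemination step is below. The report format and the recommended course of action are deliberately simple and hypothetical; the real procedures would use whatever report template and blocking mechanism the organization already has in place.

```python
import csv
import sys
from datetime import date

def build_report_entries(candidates):
    """Turn each candidate TTP into daily report entries: the observable command
    and control indicators plus a recommended course of action."""
    entries = []
    for report in candidates:
        for host in report["c2_hosts"]:
            entries.append({
                "report_date": date.today().isoformat(),
                "sample_sha1": report["sample_sha1"],
                "indicator": host,
                "recommended_coa": "block outbound connections at the proxy/firewall and alert on hits",
            })
    return entries

def write_daily_report(entries, out=sys.stdout):
    """Disseminate the report in a simple CSV that security operations can consume."""
    writer = csv.DictWriter(out, fieldnames=["report_date", "sample_sha1", "indicator", "recommended_coa"])
    writer.writeheader()
    writer.writerows(entries)

# `candidates` would come from the selection step sketched earlier.
candidates = [{"sample_sha1": "da39a3ee5e6b...", "c2_hosts": ["203.0.113.7", "bad.example.net"]}]
write_daily_report(build_report_entries(candidates))
```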

This step of the scientific method is aligned to the production and dissemination phases of the intelligence cycle, the decide and act phases of the OODA loop, the plan and do phases of the PDCA cycle, and the planning and implementation of courses of action in the intelligence feedback cycle.

Experiment

The 2009 experiment was approved for 6 months. Experiments need to run long enough to prove or disprove the hypothesis. The procedures developed for the experiment are carried out during the time frame of the experiment.

Data Analysis – Results of Experiment (Did observations validate predictions?)

At the end of the experiment we enter the data analysis phase of the scientific process to determine if our hypothesis predictions were correct or incorrect based on the real world observations of the experiment. This phase of the scientific process is aligned to the measuring phase of the intelligence feedback cycle and the check phase of the PDCA cycle.

Conclusion

The 2009 experiment I led and used in this example validated the predictions of the hypothesis. We were able to validate that there were unknown attacks taking place against other unknown organizations, based on submissions of malware samples to online malware analysis sites. We were able to validate that we could discover TTPs with little (< 10%) or no antivirus detection and, through analysis, generate a daily threat intelligence report. We were able to validate that we could plan and implement courses of action to mitigate the threat based on the threat intelligence reports. We were able to validate that we got inside the threat actors’ attack cycles and ahead of the threat when data analysis revealed that 72% of the total number of detections at the command and control phase of the attack cycle were attributed to the indicators produced as part of the 6-month experiment. (The remaining 28% of detections at this stage of the attack cycle were attributed to commercially available block lists / black lists the customer was paying for and threat intelligence shared by partners.) We were also able to validate the attack cycle intelligence gain for each of the detections by building out the complete attack cycle and conducting follow-on analysis as part of the incident response process.

We recognized in the conclusion that if the defending organization was the only target, or was among the first victims to be targeted by the threat actor, this effort would have little to no impact since there wouldn’t be an opportunity to get ahead of the threat actor’s attack cycle.

We also recognized and recommended that organizations could benefit from establishing more robust threat information sharing to allow greater opportunities to get inside the threat actor’s attack cycles and to increase the discovery of unknown threat actors and campaigns.

The conclusion phase of the scientific method is where we present the results of our experiment to include lessons learned, recommendations for follow up experiments, new standard operating procedures, and possible technology based solutions development. This step of the scientific method closely aligns to the feedback part of the intelligence feedback cycle and the act part of the PDCA cycle. An example of this is providing feedback on the information quality of the threat intelligence reports produced during the experiment. Was the threat intelligence Accurate? Timely? Usable? Complete? Precise? Reliable? Relevant? Predictive? Or Tailored? This helps push quality improvement and justification for continued investment.

I hope by sharing this example of applying the scientific method to a hard problem in real-world cyber operations that it helps others understand the role of cyber security scientists and how they might fit within your organization.


Tuesday, September 9, 2014

Science of Security: Does Your Cyber Security Team Include Cyber Security Scientists?


If you haven’t heard of the “Science of Security” before, you’re not alone. This post will take a quick look at the Science of Security and the core foundational themes within the discipline to help provide some insight into why cyber security scientists should be part of an organization’s cyber security team.

Many cyber security teams today struggle with making the leap from analyzing raw security data and identifying patterns in security information to being able to expand or produce new knowledge and enable predictability. Knowledge is the layer in the data pyramid between the information layer and the intelligence layer. A cyber security scientist, in a broad sense, is one engaging in a systematic activity to acquire knowledge in the cyber security domain. They help turn the raw security data and information into usable knowledge the organization can take advantage of.

The Science of Security term is well known within leading academic and government cyber security / information assurance centers and is considered by experts to be one of the fundamental “game-changing” concepts in cyber security.

Many cyber security university graduates entering the workforce today have been involved with Science of Security academic research projects. Organizations need to look at creating security scientist positions on their security teams to take advantage of this more fruitful way to ground research, and to nurture and sustain progress in the kinds of cyber security solutions that benefit the organization.

There are also many Science of Security scientists like myself who have conducted scientific research in real-world, large scale cyber operations running both limited scope experiments as well as at-scale predictability experiments across global enterprises that are validated by analysis of real-world observations and feedback.

The Science of Security term has been around since 2010 when an independent science and technology advisory committee for the U.S. Department of Defense concluded there is a science of (cyber) security discipline. The committee made recommendations that the DOD sponsor multiple cyber security science based centers and projects within universities and other research centers.


The following year, 2011, the White House released “Trustworthy Cyberspace: Strategic Plan For The Federal Cybersecurity Research And Development Program” formally establishing the Science of Security as 1 of 4 key strategic thrusts for U.S. Federal cybersecurity R&D programs.


The United States government also signed a Science of Security Joint Statement of Understanding with the governments of Canada and the United Kingdom in 2011 establishing 7 core themes that together form the foundational basis for the Science of Security discipline. The core themes are strongly inter-related, and mutually inform and benefit each other. They are:

·         Attack Analysis
·         Common Language
·         Core Principles
·         Measurable Security
·         Agility
·         Risk
·         Human Factors


I’ve spent the last couple decades working a wide range of cyber & intelligence positions inside the Defense and Intelligence Community with the last several years focused on the Science of Security core theme of Attack Analysis.

In this theme we apply the scientific method to the analysis of cyber attacks. The scientific method is also what many intelligence analysts use during the analysis and production phase of the intelligence cycle.

In the data pyramid, the intelligence layer sits between the knowledge layer and the wisdom layer. The knowledge produced during attack analysis enables us to produce predictable intelligence products that can be validated for accuracy with observations and reported back through the intelligence feedback process.

Attack analysis scientists seek to understand and explain the attack. The analysis is driven by the data and information available to the scientist, but generally includes areas such as the following (a minimal record structure capturing these areas is sketched after the list):

·         The threat actor (type of threat actor, sophistication level, technology preferences, operating tempo, objectives, etc)
·         If this attack is part of a larger campaign by the threat actor
·         The threat actor’s tactics, techniques, and procedures (TTPs) to include attack patterns, tools, and malware
·         The threat actor’s use of the cyber terrain (People – Cyber Persona – Logical Layer (top 6 layers of the OSI model) – Physical Layer – Geographic Layer)
·         Identification of observables for different phases of the attack lifecycle that are indicators of the threat actor’s attack
·         The threat actor’s exploit target within the defender’s cyber terrain (Configuration, Vulnerability, or Weakness)
·         Analysis of the vulnerability score, the weakness score customized for the defender’s mission, and scoring of how susceptible the defender is to the attack
·         Identification of courses of action (COA) the defender should take to mitigate or defend against the attack
·         If the attack resulted in an incident, what actions did the threat actor take and what was the objective of those actions (cover tracks, data destruction, data modification, data theft, etc)
·         The defender’s use of their cyber terrain across the five layers
·         The defender’s tactics, techniques, and procedures (TTPs) to include tools and defender courses of action (COAs)
·         Analysis / measurement of the defender’s operations tempo and policies
·         Analysis of the threat actor’s operational tempo vs the defender’s operational tempo to determine threat susceptibility predictions
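
One way to keep these analysis areas consistent from attack to attack is to capture them in a common record structure. The sketch below is a minimal, hypothetical example in Python; the field names simply mirror the list above rather than any published standard.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class AttackAnalysisRecord:
    """Minimal record of an analyzed attack, roughly one field per area above."""
    threat_actor_type: str                                      # e.g. "criminal", "APT", "insider"
    sophistication: str                                         # assessed maturity of the actor
    campaign: Optional[str] = None                              # larger campaign, if identified
    attack_patterns: List[str] = field(default_factory=list)    # e.g. CAPEC IDs
    malware: List[str] = field(default_factory=list)            # sample hashes or family names
    observables_by_phase: Dict[str, List[str]] = field(default_factory=dict)
    exploit_target: Optional[str] = None                        # configuration, vulnerability, or weakness
    recommended_coas: List[str] = field(default_factory=list)
    actions_on_objective: List[str] = field(default_factory=list)

record = AttackAnalysisRecord(
    threat_actor_type="criminal",
    sophistication="mid",
    attack_patterns=["CAPEC-98"],                               # phishing
    observables_by_phase={"command_control": ["bad.example.net"]},
)
print(record)
```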


The knowledge produced from attack analysis can be shared with other scientists through publication to enable validation of the theory in academic or research laboratory environments, shared as cyber threat intelligence with those working in operational environments, or shared with engineers to develop the next generation of security solutions.

If you are going to share knowledge with others, you should consider using a common language and well-defined core principles. We hear from the data science community all the time that most data scientists spend 50% to 80% of their time just wrangling data into usable formats. The Science of Security core theme of Common Language focuses on the construction of a common language(s) and set of core principles about which the security community can develop a shared understanding, which will facilitate the testing of hypotheses and validation of concepts.

Common languages and well-defined core principles are also strongly inter-related with the core theme of Measurable Security. We want to be able to measure how secure a device is compared to another device, rank a group of weaknesses, or measure risk in standardized, repeatable ways.
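
As a toy illustration of what “standardized, repeatable” can mean in practice, the sketch below ranks a few weaknesses with a fixed, documented scoring function. The factors and weights here are invented for the example; real efforts in this space, such as CVSS for vulnerabilities and CWSS for weaknesses, define their own factors and formulas.

```python
def weakness_score(prevalence, exploit_ease, mission_impact):
    """Toy composite score on a 0-100 scale. Each factor is rated 0-10 and the
    weights (illustrative only) reflect how much the defender's mission cares."""
    weights = {"prevalence": 0.2, "exploit_ease": 0.3, "mission_impact": 0.5}
    raw = (weights["prevalence"] * prevalence
           + weights["exploit_ease"] * exploit_ease
           + weights["mission_impact"] * mission_impact)
    return round(raw * 10, 1)

weaknesses = {
    "hard-coded credentials": weakness_score(4, 8, 9),
    "missing output encoding": weakness_score(7, 6, 5),
}
for name, score in sorted(weaknesses.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{score:5.1f}  {name}")
```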

A good example of activity in this area that has been developed through government, industry, and academia collaboration is the Making Security Measurable effort led by Mitre. These common languages and formats are both human and machine readable. The use of machine-readable formats connects us to the Science of Security core theme of Agility.


In the Science of Security core theme of Agility, one of the key focuses is security automation to include areas such as continuous monitoring, continuous diagnostics, semi-automated and automated courses of action. Automated Courses of Action (ACOAs) are strategies that incorporate decisions made and actions taken in response to cyber situations. Automation frees humans to do what they do well – think, ask questions, and make judgments about complex situations.  

Automation allows the speed of response to approach the speed of attack rather than relying on human-speed responses. It’s fairly common knowledge that if a defender wants to get ahead of the threat actor, the defender needs to spin the defense cycle faster than the threat actor spins the attack cycle. Automation is aimed at helping the defender increase the spin rate of the defense cycle to enable better resiliency against the attack cycle.
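
Here is a minimal sketch of that idea: a check that runs at machine speed over outbound log events, matches them against the blocked command and control indicators, and queues a containment action without waiting for a human to notice the hit. The event fields and the action record are hypothetical; in practice the output would feed the organization’s ticketing or orchestration workflow.

```python
# Command and control indicators already blocked at the egress point.
BLOCKED_C2 = {"203.0.113.7", "bad.example.net"}

def containment_actions(egress_events):
    """Yield an automated course of action for every hit on a blocked C2 indicator."""
    for event in egress_events:
        if event["destination"] in BLOCKED_C2:
            yield {
                "action": "isolate_and_ticket",
                "internal_asset": event["source_ip"],
                "indicator": event["destination"],
                "reason": "outbound contact with blocked command and control",
            }

egress_events = [
    {"source_ip": "10.1.4.23", "destination": "bad.example.net"},
    {"source_ip": "10.1.7.80", "destination": "www.example.com"},
]
for action in containment_actions(egress_events):
    print(action)
```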

The U.S. Department of Homeland Security described this in the 2011 paper “Enabling Distributed Security in Cyberspace” which explores the idea of a healthy, resilient – and fundamentally more secure – cyber ecosystem of the future, in which cyber participants, including cyber devices, are able to work together in near-real time to anticipate and prevent cyber attacks.


Another key focus of the Agility theme is Interoperability. DHS describes three types of interoperability that are fundamental to integrating the many disparate participants into a comprehensive cyber defense system that can create new intelligence and make and implement decisions at machine speed:

1.  Semantic Interoperability: The ability of each sending party to communicate data and have receiving parties understand the message in the sense intended by the sending party.
2.  Technical Interoperability: The ability for different technologies to communicate and exchange data based upon well defined and widely adopted interface standards.
3.  Policy Interoperability: Common business processes related to the transmission, receipt, and acceptance of data among participants.

Interoperability enables common operational pictures and shared situational awareness to emerge and disseminate rapidly. The creation of new kinds of intelligence (such as fused sensor inputs), coupled with rapid learning at both the machine and human levels, could fundamentally change the cyber security ecosystem.
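
A small sketch of the first two interoperability types working together: both parties agree on a JSON structure (technical interoperability) and on what each field means (semantic interoperability), and the receiver rejects messages that don’t carry the agreed fields. The field names are invented for the example rather than taken from STIX or any other real exchange format.

```python
import json

# Fields both parties have agreed on and interpret the same way.
REQUIRED_FIELDS = {"indicator", "indicator_type", "first_seen", "confidence"}

def publish(indicator):
    """Sender side: serialize the indicator in the agreed structure."""
    return json.dumps(indicator)

def consume(message):
    """Receiver side: parse the message and reject it if the agreed fields are missing."""
    data = json.loads(message)
    missing = REQUIRED_FIELDS - data.keys()
    if missing:
        raise ValueError(f"message missing agreed fields: {sorted(missing)}")
    return data

msg = publish({
    "indicator": "bad.example.net",
    "indicator_type": "domain",
    "first_seen": "2014-09-20",
    "confidence": "medium",
})
print(consume(msg))
```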

Within cyber security, all three types of interoperability are being enabled through an approach that has been refined over the past decade by many in industry, academia, and government. Here are some examples.

·         Enumerations such as common attack patterns (CAPEC) or public vulnerabilities (CVE).
·         Languages and Formats for Structured Threat Information eXpression (STIX), Cyber Observable eXpression (CYBOX), and Malware (MAEC).
·         Knowledge Repositories such as security best practices, security benchmarks, and security checklists.

Automation and interoperability are exciting areas that hold a lot of promise for helping to increase the spin rate of the defenders operational tempo. They are enablers that teach machines how to read and write the languages developed by the community. This lays the foundation for future work where we can better organize and more formally represent the domain knowledge using technology such as semantic web ontologies.

Ontologies in turn would allow machines to understand the meaning of the data. Once machines understand the meaning of the data, we can enable them to reason about domain knowledge and infer new knowledge from existing knowledge. This in turn could enable further automated courses of action in areas that require reasoning before deciding on an action to take.
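
A minimal sketch of that kind of machine reasoning, using plain (subject, predicate, object) tuples instead of a real ontology language such as OWL: the single rule below says that if an indicator belongs to a malware family, and that family is used by a campaign, the machine can infer that the indicator is relevant to the campaign.

```python
# Existing domain knowledge as (subject, predicate, object) triples.
triples = {
    ("bad.example.net", "indicator_of", "FamilyX"),
    ("FamilyX", "used_by", "CampaignAlpha"),
}

def infer(triples):
    """One inference rule: indicator_of(i, f) and used_by(f, c) => relevant_to(i, c)."""
    inferred = set()
    for (i, p1, f) in triples:
        if p1 != "indicator_of":
            continue
        for (f2, p2, c) in triples:
            if f2 == f and p2 == "used_by":
                inferred.add((i, "relevant_to", c))
    return inferred - triples

print(infer(triples))   # {('bad.example.net', 'relevant_to', 'CampaignAlpha')}
```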

The Science of Security is on the cutting edge of security R&D, and security scientists are leading the charge in the discovery of new domain knowledge. Organizations should consider hiring cyber security scientists to help develop a strong, rigorous scientific foundation for cyber security while providing structure and organization to a broad-based body of knowledge in the domain.