It’s crucial to establish a Purple Team in your cybersecurity department. However, once this team is organized and clearly defined, the next phase is to establish the cadence with which Purple Team activities occur and the scope of those activities. There is considerable room for environment-specific discretion in these activities, but clear examples of best practices and proven techniques are available. Purple teaming activities can be compared to executing a sprint within an agile workflow or scrum team. A Purple Team engagement should typically be a two- or three-week cycle that encompasses both assessment and remediation efforts. This requires discipline from both the Red and Blue Teams and also helps scope the planned activities to a reasonable and achievable set of objectives.
Let’s assume you decide on a two-week period for all activities in a purple team engagement. The activities within the engagement include planning, assessing, collaborating, remediating, and reporting. It is important to note that these activities do not follow a strict order. Planning should initiate the engagement, but additional planning will occur throughout the engagement period.
Within the planning phase, the Red and Blue Teams must first collaborate on what the outcome of the two-week engagement period should be. This phase is imperative and is used to determine what gaps or issues need to be addressed from a program perspective. Planning concludes with a set of defined actions to perform, but it begins with the establishment of objectives. What are the questions that the team expects to answer once the engagement is complete? Objectives should be clear, concise, and unambiguous, and above all they must be achievable. Some examples of objectives for a purple team engagement:
Determine the ability to detect data exfiltration via DNS tunnelling
Evaluate effectiveness of lateral movement detection through analysis of netflow data
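To make an objective like the first one concrete, the Blue Team might prototype the analytic they intend to validate before the engagement begins. The sketch below is a minimal, hypothetical heuristic for flagging DNS queries that resemble tunneled payloads (long or high-entropy subdomain labels); the thresholds are illustrative assumptions, not tuned values, and a production detection would account for CDNs, DGA lists, and multi-part TLDs.

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Shannon entropy of a string, in bits per character."""
    if not s:
        return 0.0
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def is_suspicious_query(qname: str, entropy_threshold: float = 3.5,
                        length_threshold: int = 50) -> bool:
    """Flag DNS queries whose subdomain portion is unusually long or
    high-entropy -- common traits of encoded tunneling payloads.
    Thresholds here are illustrative placeholders."""
    labels = qname.rstrip(".").split(".")
    # Inspect only the leftmost labels; most tunneling tools carry the
    # encoded data there, ahead of the registered domain and TLD.
    subdomain = "".join(labels[:-2]) if len(labels) > 2 else ""
    if len(subdomain) >= length_threshold:
        return True
    return shannon_entropy(subdomain) >= entropy_threshold
```

Running the Red Team’s actual tunneling tool against a rule like this turns the objective into a measurable pass/fail question rather than an open-ended hunt.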
The objectives will drive the plan for testing activities, but how are the objectives determined? What questions do we need answered by our activities? Locally generated threat intelligence, often gathered during the incident response process, is an excellent resource. If there are gaps in understanding how a previous malicious actor was able to perform their activities, those gaps can become the questions that drive our objectives.
With the objectives set, the respective teams can plan their activities. All planned activities should be measured against whether they will support completion of an objective. This may be outside the “comfort zone” for the Blue Team, which traditionally operates in a more reactive environment. The Blue Team cannot simply sit back and wait to see what is thrown at them. With a shared knowledge of the objectives, they must plan the activities that will provide the data to comparatively evaluate various detection and prevention mechanisms. Alternative theories should be tested to enable proper data collection across a range of potential solutions. The teams should plan how and what data will be collected, with the end goal of supporting effective analysis and, ultimately, process refinement.
Though the plan should be robust, it is true that “No plan survives first contact with the enemy.” This is not to say that plans should be abandoned, but they should be refined throughout the assessment process.
If a planned attack path is blocked, what alternative methods are available that still support the engagement objectives? Red team activities must be managed to meet tactical milestones, and this is best accomplished by dividing larger objectives into discrete and manageable tasks for which progress is more easily measured. Given that any engagement is time constrained, it is vital to rapidly identify blockers that may degrade the ability to meet overall objectives. Artificially bypassing a defense isn’t “cheating” if the result is meaningful data that can drive improvement. A door that is locked today may be open tomorrow, and there is much greater value in using the limited time available meaningfully.
The Blue Team must be continuously evaluating their performance against expectations. If a planned method of detection does not appear, how can it be refined? What data are we not collecting that might provide us with indicators of compromise? Do we need to refine our methods of parsing data to detect signal through noise? It is likely that blue teams will identify not only gaps in technical capabilities, but in knowledge of adversary techniques as well. These gaps should drive research, but also collaboration.
For assessment activities, the Red Team must be extremely targeted and specific, with a focus on achieving their goal within a short period of time. Thus the Red Team should start with a big goal and break it down into small phases that can be accomplished in one- to two-week iterations. While assessment activities are occurring, the Blue Team must know what protection mechanisms currently exist against the targeted attacks and how they plan to detect the Red Team activities. Additionally, the Blue Team should be researching what other techniques they might encounter from the Red Team and what additional controls they may need in place. This approach encourages a proactive mindset for both Red and Blue Team activities.
It will be impossible for the Blue Team to accurately gauge whether they are detecting activities if they don’t know what activities are occurring. The answer to the question, “Did you see it?” is much more nuanced than “yes” or “no.” Consider these possible detection outcomes:
A Red Team activity was not logged
A Red Team activity was logged but the data did not generate an alert
An alert was generated but not triaged properly
An alert was acted upon, but the defensive response was ineffective
A defensive response was effective in closing the vector, but not timely enough to prevent the attacker’s objectives
For each of these potential outcomes, there are different lessons learned that will result in different process improvements. Each generates its own questions and threads to pull. But in the first three cases, the Blue Team cannot begin to ask these questions without an awareness that something happened. Thus, to reap the benefits of purple teaming to the fullest, both sides need situational awareness of the totality of actions.
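One way to operationalize these outcomes is to record each Red Team action against an explicit outcome taxonomy, so the after-action review points at the right layer of the defensive pipeline. The sketch below is a hypothetical mapping (the enum names and focus areas are assumptions, not an established framework); teams would adapt the categories to their own pipeline.

```python
from enum import Enum, auto

class DetectionOutcome(Enum):
    """Possible results of a single Red Team action, from the Blue
    Team's perspective (mirrors the list of outcomes above)."""
    NOT_LOGGED = auto()
    LOGGED_NO_ALERT = auto()
    ALERT_NOT_TRIAGED = auto()
    RESPONSE_INEFFECTIVE = auto()
    RESPONSE_TOO_SLOW = auto()

# Each outcome directs improvement work at a different pipeline layer.
IMPROVEMENT_FOCUS = {
    DetectionOutcome.NOT_LOGGED:
        "telemetry coverage: enable or forward the missing log source",
    DetectionOutcome.LOGGED_NO_ALERT:
        "detection logic: write or tune an analytic for this data",
    DetectionOutcome.ALERT_NOT_TRIAGED:
        "SOC process: triage playbooks, alert fidelity, analyst training",
    DetectionOutcome.RESPONSE_INEFFECTIVE:
        "response playbooks: containment steps that actually close the vector",
    DetectionOutcome.RESPONSE_TOO_SLOW:
        "response timing: automation and escalation thresholds",
}

def lesson_for(outcome: DetectionOutcome) -> str:
    """Return the improvement area implied by a detection outcome."""
    return IMPROVEMENT_FOCUS[outcome]
```

Tagging every action this way during the engagement, rather than reconstructing outcomes afterward, is what makes the real-time collaboration discussed next pay off.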
While it is true that Blue Teams can perform forensic activities after an engagement to gather more data, the opportunity to refine and tune techniques is lost once Red Team activities are complete. This means that real-time collaboration is required to fully achieve the return on investment from the engagement. To ensure that this collaboration occurs, engagements should include regular checkpoints that give each team an opportunity to confirm their understanding of the actions of the other. This can take the form of daily stand-up meetings, real-time collaboration through chat, or embedded liaisons from the opposite team. Not every detail needs to be shared, but enough collaboration must occur to allow teams to refine their activities and test alternative responses.
Red Teams benefit from this collaboration as well. Understanding the time required to detect and respond can directly influence the choice of tactics and techniques. If a detection has occurred and a playbook initiated which will ultimately thwart a line of attack, continuing on that path will provide limited value. It is better to understand early that defenses were effective and be able to make an informed decision as to whether to allow the activities to continue or move to the next objective.
Until this point, we have discussed purple engagements in the context of a discrete, time-constrained engagement of two to three weeks. Remediation activities, however, can stretch for months (or longer), requiring resource approvals and procurement cycles. This does not mean that remediation is not an integral aspect of any purple engagement. As issues are discovered, planning can begin on the steps necessary to eliminate the vulnerability, and this plan can be generated collaboratively with the Red Team. Solutions that might appear adequate to the Blue Team can benefit from Red Team inspection, potentially preventing inadequate solutions that result in re-work (or, worse, unremediated vulnerabilities).
Red Teams benefit from remediation planning by gaining a more accurate understanding of the burdens associated with their recommendations. In almost any circumstance, there are multiple possible acceptable remediation solutions. Not all acceptable solutions may be equally “secure,” but through collaboration during the remediation process, Red Teams gain greater understanding of what is feasible. A feasible solution that is acceptable is always preferable to a perfect solution that cannot be implemented due to resource constraints.
Even feasible solutions carry a resource burden for implementation. Red Teams that are exposed to realistic cost/benefit analysis of their recommendations will learn to prioritize recommendations by return on investment. If I can remediate 10 “High” severity findings for the cost of remediating one “Critical,” is it the best course of action to prioritize remediation efforts based only on severity? Exposure to real resource decisions is crucial in helping Red Teams make solid recommendations on the order of remediation. Red Teams that have no such exposure become philosophers on a hill.
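The “ten Highs versus one Critical” trade-off above can be made explicit with a simple risk-per-effort ordering. The sketch below uses entirely hypothetical findings, severity scores, and effort estimates; the point is the ranking function, not the numbers.

```python
# Hypothetical findings: (description, severity_score, effort_in_days).
# Scores and estimates are illustrative placeholders.
FINDINGS = [
    ("Critical: domain-wide cleartext admin credentials", 10.0, 40.0),
    ("High: SMB signing disabled",                         7.0,  2.0),
    ("High: stale service-account passwords",              7.0,  4.0),
    ("High: overly broad file-share permissions",          7.0,  3.0),
]

def prioritize(findings):
    """Order findings by risk reduced per day of remediation effort,
    rather than by raw severity alone."""
    return sorted(findings, key=lambda f: f[1] / f[2], reverse=True)

for name, severity, effort in prioritize(FINDINGS):
    print(f"{severity / effort:5.2f} risk/day  {name}")
```

Under this ordering, the cheap “High” fixes land ahead of the expensive “Critical,” which is exactly the kind of counter-intuitive result that resource-aware Red Teams learn to defend to leadership.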
Reports that are informed by both offensive and defensive activities provide a more holistic assessment of the environment, but they also carry greater credibility. When both sides concur on the need to devote resources to remediate issues, those recommendations are more likely to carry the day with decision makers.
Recommendations that are informed and tempered by resource constraints are more readily implementable. Recommendations that have the buy-in of those charged with implementation are more likely to be carried to fruition.
In short, the final page of any assessment should carry the signatures of both team leads. The report recommendations are the culmination of the engagement. We began with objectives to learn unknown aspects of our environment. Armed with newfound knowledge, a jointly signed report communicates to leadership that a professional examination was conducted and delivered recommendations free of tribal politics.
During the course of any engagement, threads will be uncovered that cannot be pulled within the scope of the current effort. The report should not neglect the consensus decision on the way forward. What have we uncovered that we should examine next? In addition to providing solid recommendations, a consensus on next steps will help garner the support to resource follow-on endeavors.
The most important thing to keep in mind regarding the reporting process of purple teaming is that it is iterative and dynamic, a result of each collaborative exercise.