
Micro-segmentation from strategy to execution

Learn how to plan and execute your micro-segmentation project in AlgoSec’s guide.

Overview

Micro-segmentation is a technique to create secure zones in networks. It lets companies isolate workloads from one another and introduce tight controls over internal access to sensitive data. This makes network security more granular.

Micro-segmentation is an “upgrade” to network segmentation.

Companies have long relied on firewalls, VLANs, and access control lists (ACLs) to segment their networks. Network segmentation is a key defense-in-depth strategy, segregating and protecting company data and limiting attackers’ lateral movement.

Consider a physical intruder who enters a gated community. Despite having breached the gate, the intruder cannot freely enter the houses in the community because, in addition to the outside gate, each house has locks on its door. Micro-segmentation takes this one step further – even if the intruder breaks into a house, the intruder cannot access all the rooms.

What is Micro-segmentation?

Organizations frequently implement micro-segmentation to block lateral movement. Two common threats that rely on lateral movement are insider threats and ransomware.

Insider threats involve employees or contractors gaining access to data that they are not authorized to access.

Ransomware is a type of malware attack in which the attacker locks and encrypts the victim’s data and then demands a payment to unlock and decrypt the data. If an attacker takes over one desktop or one server in your estate and deploys malware, you want to reduce the “blast radius” and make sure that the malware can’t spread throughout the entire data center.

And if you decide not to pay the ransom? Datto’s Global State of the Channel Ransomware Report informs us that:

  • The cost of downtime is 23x greater than the average ransom requested in 2019.

  • Downtime costs due to ransomware are up by 200% year-over-year.

Why Micro-segment?

With software-defined networks, such as Cisco ACI and VMware NSX, micro-segmentation can be achieved without deploying additional controls such as firewalls. Because the data center is software-driven, the fabric has built-in filtering capabilities. This means that you can introduce policy rules without adding new hardware.

SDN solutions can filter flows both inside the data center (east-west traffic) and flows entering or exiting the data center (north-south traffic).

The SDN technology supporting your data center eliminates many of the earlier barriers to micro-segmentation.

Yet, although a software-defined fabric makes segmentation possible, there are still many challenges to making it a reality.

The SDN Solution

A good filtering policy has three requirements:

1 – Allows all business traffic

The last thing you want is to write a micro-segmentation policy and have it break necessary business communication, causing applications to stop functioning.

2 – Allows nothing else

By default, all other traffic should be denied.

3 – Is future-proof

“More of the same” changes in the network environment shouldn’t break rules. If you write your policies too narrowly, then any change in the network, such as a new server or application, could cause something to stop working. Write with scalability in mind (see the sketch below).
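To make the future-proofing point concrete, here is a minimal sketch in Python – with hypothetical IP addresses and a simplified rule format, not any vendor’s syntax – contrasting a narrow, host-specific rule with a subnet-based rule that keeps working when another server is added to the same tier:

```python
# Minimal sketch (not vendor syntax): a narrow, host-specific rule vs. a
# broader, subnet-based rule that survives "more of the same" changes.
import ipaddress

# Narrow rule: breaks as soon as a new web server (e.g., 10.2.0.13) is added.
narrow_rule = {"src": ["10.2.0.11", "10.2.0.12"], "dst": "10.3.3.3", "service": "tcp/5432"}

# Future-proof rule: any server added to the web tier's subnet is still covered.
scalable_rule = {"src": "10.2.0.0/24", "dst": "10.3.3.3", "service": "tcp/5432"}

def matches(rule, src_ip, dst_ip, service):
    """Return True if a flow matches the rule (source may be a list of hosts or a CIDR)."""
    src = rule["src"]
    if isinstance(src, list):
        src_ok = src_ip in src
    else:
        src_ok = ipaddress.ip_address(src_ip) in ipaddress.ip_network(src)
    return src_ok and dst_ip == rule["dst"] and service == rule["service"]

print(matches(narrow_rule, "10.2.0.13", "10.3.3.3", "tcp/5432"))    # False - new server breaks
print(matches(scalable_rule, "10.2.0.13", "10.3.3.3", "tcp/5432"))  # True  - still allowed
```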

How do organizations achieve these requirements? They need to know what the traffic flows are as well as what should be allowed and what should be denied.

This is difficult because most traffic is undocumented. There is no clear record of the applications in the data center and what network flows they depend on. To get accurate information, you need to perform a “discovery” process.

What is a Good Filtering Policy?




Discovery

You need to find out which traffic needs to be allowed and then you can decide what not to allow. Two common ways to implement a discovery process are traffic-based discovery and content-based discovery.


Traffic-Based Discovery


Traffic-based discovery is the process of understanding traffic flows: Observe the traffic that is traversing the data center, analyze it, and identify the intent of the flows by mapping them to the applications they support.

You can collect the raw traffic with a traffic sniffer/network TAP or use a NetFlow feed.


Content-Based or Data-Based Approach


In the content-based approach, you organize the data center systems into segments based on the sensitivity of the data they process. For example, an eCommerce application may process credit card information, which is regulated by the PCI DSS standard. Therefore, you need to identify the servers supporting the eCommerce application and separate them in your filtering policy.
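As a rough illustration, the content-based approach can start from nothing more than an inventory that maps each server to the class of data it processes. The sketch below (hypothetical host names and data classes) groups servers into candidate segments by data sensitivity:

```python
# Minimal sketch of the content-based approach: group servers into segments by
# the sensitivity of the data they process. Host names and classes are hypothetical.
from collections import defaultdict

# Hypothetical inventory: which class of data each server processes.
inventory = {
    "ecom-web-01": "public",
    "ecom-app-01": "cardholder-data",   # handles credit card numbers -> PCI DSS scope
    "ecom-db-01":  "cardholder-data",
    "hr-app-01":   "employee-pii",
}

segments = defaultdict(list)
for host, data_class in inventory.items():
    segments[data_class].append(host)

# Servers in PCI scope end up in their own segment, to be separated in the policy.
print(segments["cardholder-data"])   # ['ecom-app-01', 'ecom-db-01']
```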


Discovering traffic flows within a data center




Using NetFlow for Traffic Mapping

The easiest traffic source on which to base application discovery is NetFlow. Most routers and switches can be configured to emit a NetFlow feed without requiring the deployment of agents throughout the data center.

The flows in the NetFlow feed are clustered into business applications based on recurring IP addresses and correlations in time. For example, if an HTTPS connection from a client at 172.7.1.11 to 10.3.3.3 is observed at 10 AM, and a PostgreSQL connection from the same 10.3.3.3 to 10.1.1.1 is observed 0.5 seconds later, it’s clear that all three systems support a single application, which can be labeled with a name such as “Trading System”.
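The sketch below illustrates this clustering idea in Python. The flow records, IP addresses, and the two-second correlation window are illustrative assumptions, and real NetFlow processing is considerably more involved; the point is only that flows sharing endpoints and occurring close together in time are grouped into one candidate application:

```python
# Illustrative sketch of the clustering idea described above: flows whose
# endpoints overlap and that occur close together in time are grouped into one
# candidate business application. Records and IPs are hypothetical; this is not
# a NetFlow parser.

flows = [
    # (timestamp_seconds, src_ip,      dst_ip,     service)
    (36000.0, "172.7.1.11", "10.3.3.3", "tcp/443"),   # client -> Trading System front end
    (36000.5, "10.3.3.3",   "10.1.1.1", "tcp/5432"),  # front end -> PostgreSQL back end
    (41234.0, "10.3.3.7",   "10.1.1.1", "tcp/5432"),  # different app, same DB host, much later
]

def cluster_flows(flows, window=2.0):
    """Group flows into applications: a flow joins an existing group if it shares
    an IP with the group and was observed within `window` seconds of it."""
    groups = []  # each group: {"ips": set, "last_seen": float, "flows": list}
    for ts, src, dst, svc in sorted(flows):
        for g in groups:
            if (src in g["ips"] or dst in g["ips"]) and ts - g["last_seen"] <= window:
                g["ips"].update({src, dst})
                g["last_seen"] = ts
                g["flows"].append((ts, src, dst, svc))
                break
        else:
            groups.append({"ips": {src, dst}, "last_seen": ts, "flows": [(ts, src, dst, svc)]})
    return groups

for i, g in enumerate(cluster_flows(flows), 1):
    print(f"candidate application {i}: {sorted(g['ips'])}")
# candidate application 1 is the Trading System; the later flow lands in its own group.
```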


Identifying traffic flows in common, based on shared IP addresses


NetFlow often produces thousands of “thin flow” records (one IP to another IP), even for a single application. In the example above, there may be a NetFlow record for every client desktop. It is important to aggregate them into “fat flows” (e.g., a single flow that covers all the clients in the 172.7.1.0/24 range). In addition to avoiding an explosion in the number of flows, aggregation also provides a higher-level understanding and future-proofs the policies against fluctuations in IP address allocation.
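A minimal sketch of this aggregation step, assuming the client networks are /24 subnets (in practice the prefix length has to come from your IP address plan); the addresses and services are illustrative:

```python
# Minimal sketch of aggregating "thin" per-client flow records into one "fat"
# flow per client subnet (a fixed /24 here), so the resulting rule tolerates
# IP-address churn within the range.
import ipaddress
from collections import defaultdict

thin_flows = [
    ("172.7.1.11", "10.3.3.3", "tcp/443"),
    ("172.7.1.12", "10.3.3.3", "tcp/443"),
    ("172.7.1.57", "10.3.3.3", "tcp/443"),
]

fat_flows = defaultdict(int)
for src, dst, svc in thin_flows:
    subnet = ipaddress.ip_network(f"{src}/24", strict=False)  # collapse each client IP to its /24
    fat_flows[(str(subnet), dst, svc)] += 1

for (src_net, dst, svc), count in fat_flows.items():
    print(f"{src_net} -> {dst} {svc}  (aggregates {count} thin flows)")
# 172.7.1.0/24 -> 10.3.3.3 tcp/443  (aggregates 3 thin flows)
```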

Using the discovery platform in the AlgoSec Security Management Suite to identify the flows, in combination with information from your firewalls, can help you decide where to put the boundaries of your segments and which policies to put in these filters.




Defining Logical Segments

Once you have discovered the business applications whose traffic is traversing the data center (using traffic-based discovery) and have also identified the data sensitivity (using a content-based approach), you are well positioned to define your segments.

Bear in mind that all the traffic that is confined to a segment is allowed. Traffic crossing between segments is blocked by default – and needs to be explicitly allowed by a policy rule.

There are two potential starting points:

  1. Segregate the systems processing sensitive data into their own segments. You may have to do this anyway for regulatory reasons.

  2. Segregate networks connecting to client systems (desktops, laptops, wireless networks) into “human-zone” segments. Client systems are often the entry points of malware, and are always the source of malicious insider attacks.

Then place the remaining servers into segments, one segment per business application. Doing so saves you from writing explicit policy rules to allow traffic that stays within a single business application.
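Putting the two starting points together, a segment plan can be captured as simply as a mapping from segment names to member subnets and hosts. The names and addresses below are hypothetical; the helper just answers which segment a given IP belongs to:

```python
# Minimal sketch of a segment plan: sensitive-data segments, "human-zone"
# segments for client networks, and one segment per remaining business application.
import ipaddress

segments = {
    # 1. Systems processing sensitive data get their own segments.
    "pci-cardholder-data": ["10.5.0.0/24"],
    # 2. Client networks become "human-zone" segments.
    "human-zone-office": ["172.7.1.0/24", "172.7.2.0/24"],
    # Remaining servers, one segment per discovered business application.
    "app-trading-system": ["10.3.3.3", "10.1.1.1"],
    "app-forex": ["10.3.3.7"],
}

def segment_of(ip):
    """Return the segment a given IP address belongs to, or None if unassigned."""
    addr = ipaddress.ip_address(ip)
    for name, members in segments.items():
        if any(addr in ipaddress.ip_network(m) for m in members):
            return name
    return None

print(segment_of("172.7.1.11"))  # human-zone-office
print(segment_of("10.3.3.3"))    # app-trading-system
```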


Example segment within a data center




Creating the Filtering Policy

Once the segments are defined, we need to write the policy. Traffic confined to a segment is automatically allowed so we don’t need to worry about it anymore. We just need to write policy for traffic crossing micro-segment boundaries.

Eventually, the last rule on the policy must be a default-deny: “from anywhere to anywhere, with any service – DENY.” However, enforcing such a rule in the early days of the micro-segmentation project, before all the rest of the policy is written, risks breaking many applications’ communications. So start with a (totally insecure) default-allow rule until your policy is ready, and then switch to a default-deny on “D-Day” (“deny-day”). We’ll discuss D-Day shortly.
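A minimal sketch of such an ordered policy, with made-up segment names and services: the rules are evaluated top-down, and the last rule is the default, which is flipped from ALLOW to DENY on D-Day.

```python
# Minimal sketch of an ordered filtering policy evaluated top-down, ending in a
# default rule. During the project the default action is ALLOW (with logging);
# on D-Day it is flipped to DENY. Rule contents are hypothetical.
policy = [
    # Cross-segment flow: eCommerce front end to the cardholder-data segment.
    {"src": "app-ecommerce", "dst": "pci-cardholder-data", "service": "tcp/5432", "action": "ALLOW"},
    # Flow from outside the data center: finance users to financial data.
    {"src": "human-zone-finance", "dst": "app-financials", "service": "tcp/443", "action": "ALLOW"},
    # Default rule - switched from ALLOW to DENY on D-Day.
    {"src": "any", "dst": "any", "service": "any", "action": "ALLOW", "log": True},
]

def evaluate(policy, src, dst, service):
    """Return the action of the first matching rule (first match wins)."""
    for rule in policy:
        if (rule["src"] in ("any", src)
                and rule["dst"] in ("any", dst)
                and rule["service"] in ("any", service)):
            return rule["action"]
    return "DENY"  # defensive fallback; the default rule normally matches everything

# On D-Day, flip the final rule:
policy[-1]["action"] = "DENY"
print(evaluate(policy, "human-zone-office", "app-forex", "tcp/5432"))  # DENY after D-Day
```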


What types of rules are we going to be writing?


  • Cross-segment flows – Allowing traffic between segments: e.g., allow the eCommerce servers to access the credit-card data segment.

  • Flows to/from outside the data center – e.g., allow employees in the finance department to connect to financial data within the data center from their machines in the human-zone, or allow access from the Internet to the front-end eCommerce web servers.


Users outside the data center need to access data within the data center




Default Allow – with Logging

To avoid major connectivity disruptions, start your micro-segmentation project gently. Instead of writing a “DENY” rule at the end of the policy, write an “ALLOW” rule – which is clearly insecure – but turn on logging for this ALLOW rule. This creates a log of all connections that match the default-allow rule. Initially you will receive many log entries from the default-allow rule; your goal in the project is to eliminate them.

To do this, you go over the applications you discovered earlier, write the policy rules that support each application’s cross-segment flows, and place them above the default-allow rule. This means that the traffic of each application you handle will no longer match the default-allow rule (it will match the new rules you wrote) – and the number of default-allow logs will decrease.

Keep adding rules, application by application, until the final allow rule no longer generates any logs. At that point, you are ready for the final milestone in the project: D-Day.
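Progress toward that milestone can be tracked by grouping the log entries that still hit the default-allow rule, so each remaining flow can be traced back to an application and given its own rule. The log format below is a hypothetical simplification:

```python
# Minimal sketch of tracking progress toward D-Day: count how many log entries
# still match the default-allow rule, grouped by flow. Log format is hypothetical.
from collections import Counter

# Hypothetical log entries emitted by the default-allow rule: (src, dst, service)
default_allow_logs = [
    ("172.7.3.20", "10.6.0.4", "tcp/8443"),
    ("172.7.3.21", "10.6.0.4", "tcp/8443"),
    ("10.6.0.4",   "10.6.0.9", "tcp/1521"),
]

remaining = Counter(default_allow_logs)
for (src, dst, svc), hits in remaining.most_common():
    print(f"undiscovered flow: {src} -> {dst} {svc} ({hits} hits)")

# When this report stays empty for long enough, you are ready to schedule D-Day.
if not remaining:
    print("No traffic is hitting the default-allow rule - schedule D-Day.")
```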




Preparing for “D-Day”

Once logging generated by the default-allow rule ceases to indicate new flows that need to be added to your filtering policy, you can start preparing for “D-Day.” This is the day that you flip the switch and change the final rule from “default ALLOW” to “default DENY.” Once you do that, all the undiscovered traffic is going to be denied by the filtering fabric, and you will finally have a secured, micro-segmented, data center. This is a big deal!

However, you should realize that D-Day is going to cause a big organizational change. From this day forward, every application developer whose application requires new traffic to cross the data center will need to ask for permission to allow this traffic; they will need to follow a process, which includes opening a change request, and then wait for the change to be implemented. The free-wheeling days are over.

You need to prepare for D-Day. Consider steps such as:

  • Get management buy-in

  • Communicate the change across the organization

  • Set a change control window

  • Have “all hands on deck” on D-Day to quickly correct anything that was missed and is breaking applications



Change Requests & Compliance


Notice that after D-Day, any change in application connectivity requires filing a “change request”. When the information security team evaluates a change request, they need to check whether the request is in line with the “acceptable traffic” policy.

A common method for managing the policy at a high level is to use a table in which each row represents a source segment and each column represents a destination segment. Each cell in the table lists all the services that are allowed from its “row” segment to its “column” segment.

Keeping this table in a machine-readable format, such as an Excel spreadsheet, enables software systems to run a what-if risk check that compares each change request with the acceptable policy and flags any discrepancies before the new rules are deployed.

Such a what-if risk check is also important for regulatory compliance. Regulations such as PCI DSS and ISO 27001 require organizations to define such a policy and to compare themselves to it; demonstrating the policy is often part of the certification or audit.
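As a rough sketch of such a what-if check (hypothetical segment names and services; in practice the acceptable-traffic table would be loaded from the spreadsheet rather than hard-coded):

```python
# Minimal sketch of a "what-if" risk check: compare a proposed change request
# against the segment-to-segment table of acceptable services and flag any
# discrepancy before the rule is deployed. Table contents are hypothetical.

# acceptable[(source_segment, destination_segment)] -> set of allowed services
acceptable = {
    ("human-zone-finance", "app-financials"):      {"tcp/443"},
    ("app-ecommerce",      "pci-cardholder-data"): {"tcp/5432"},
}

def check_change_request(src_segment, dst_segment, service):
    """Return (ok, reason) for a requested flow between two segments."""
    allowed = acceptable.get((src_segment, dst_segment), set())
    if service in allowed:
        return True, "matches the acceptable-traffic policy"
    return False, f"{service} from {src_segment} to {dst_segment} is outside the acceptable-traffic policy"

print(check_change_request("app-ecommerce", "pci-cardholder-data", "tcp/5432"))
print(check_change_request("human-zone-office", "pci-cardholder-data", "tcp/22"))  # flagged
```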

A Blueprint for Creating a Micro-segmentation Policy

The AlgoSec Security Management Suite (ASMS) makes it easy to define and enforce your micro-segmentation strategy inside the data center, ensuring that it does not block critical business services and does meet compliance requirements.

AlgoSec’s powerful AutoDiscovery capabilities help you understand the network flows in your organization. You can automatically connect the recognized traffic flows to the business applications that use them. Once the segments are established, AlgoSec seamlessly manages the network security policy across your entire hybrid network estate. AlgoSec proactively checks every proposed firewall rule change request against the segmentation strategy to ensure that the change doesn’t break it, introduce risk, or violate compliance requirements.


AlgoSec enforces micro-segmentation by:


  • Generating a custom report on compliance enforced by the micro-segmentation policy

  • Identifying unprotected network flows that do not cross any firewall and are not filtered for an application

  • Automatically identifying changes that violate the micro-segmentation strategy

  • Automatically implementing network security changes

  • Automatically validating changes


Security zones in AlgoSec’s AppViz



Enabling Micro-segmentation with AlgoSec

AlgoSec, a global cybersecurity leader, empowers organizations to secure application connectivity by automating connectivity flows and security policy, anywhere. The AlgoSec platform enables the world’s most complex organizations to gain visibility, reduce risk and process changes at zero-touch across the hybrid network. AlgoSec’s patented application-centric view of the hybrid network enables business owners, application owners, and information security professionals to talk the same language, so organizations can deliver business applications faster while achieving a heightened security posture. Over 1,800 of the world’s leading organizations trust AlgoSec to help secure their most critical workloads across public cloud, private cloud, containers, and on-premises networks, while taking advantage of almost two decades of leadership in Network Security Policy Management. See what securely accelerating your digital transformation, move-to-cloud, infrastructure modernization, or micro-segmentation initiatives looks like at www.algosec.com.


Want to learn more about how AlgoSec can help enable micro-segmentation?


Schedule a demo.
