

- AlgoSec | 5 Types of Firewalls for Enhanced Network Security
Firewall Change Management | 5 Types of Firewalls for Enhanced Network Security
Asher Benbenisty · 2 min read · Published 10/25/23

Firewalls form the first line of defense against intrusive hackers trying to infiltrate internal networks and steal sensitive data. They act as a barrier between networks, clearly defining the perimeter of each. The earliest generation of packet-filtering firewalls was rudimentary compared to today's next-generation firewalls, but cybercrime threats were also less sophisticated. Since then, cybersecurity vendors have added new security features to firewalls in response to emerging cyber threats. Today, organizations can choose between many different types of firewalls designed for a wide variety of purposes. Optimizing your organization's firewall implementation requires understanding the differences between firewalls and the network layers they protect.

How Do Firewalls Work?

Firewalls protect networks by inspecting data packets as they travel from one place to another. These packets are organized according to the Transmission Control Protocol/Internet Protocol (TCP/IP) suite, which provides a standard way to structure data in transit. This suite is a more concise counterpart of the general OSI model commonly used to describe computer networks. These frameworks allow firewalls to interpret incoming traffic according to strictly defined standards. Security experts use these standards to create rules that tell firewalls what to do when they detect unusual traffic.
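Rules of this kind can be sketched as an ordered list of match conditions evaluated first-match-wins against a packet's header fields. The following is a minimal illustration, not the configuration format of any real firewall product; all names and addresses are made up for the example.

```python
# First-match evaluation of firewall rules against a packet's header
# fields. Rule fields left as None act as wildcards. Illustrative
# sketch only; real firewalls match on many more fields.
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Packet:
    src_ip: str
    dst_ip: str
    protocol: str               # "tcp", "udp", or "icmp"
    dst_port: Optional[int] = None

@dataclass(frozen=True)
class Rule:
    action: str                 # "allow" or "deny"
    src_ip: Optional[str] = None
    protocol: Optional[str] = None
    dst_port: Optional[int] = None

def evaluate(rules: list[Rule], packet: Packet, default: str = "deny") -> str:
    """Return the action of the first matching rule, else the default."""
    for rule in rules:
        if rule.src_ip is not None and rule.src_ip != packet.src_ip:
            continue
        if rule.protocol is not None and rule.protocol != packet.protocol:
            continue
        if rule.dst_port is not None and rule.dst_port != packet.dst_port:
            continue
        return rule.action
    return default  # implicit deny when nothing matches

rules = [
    Rule(action="deny", src_ip="203.0.113.9"),           # block a known-bad host
    Rule(action="allow", protocol="tcp", dst_port=443),  # allow HTTPS
]
print(evaluate(rules, Packet("198.51.100.7", "10.0.0.5", "tcp", 443)))  # allow
print(evaluate(rules, Packet("203.0.113.9", "10.0.0.5", "tcp", 443)))   # deny
```

The implicit final deny mirrors the common default-deny posture: traffic that no rule explicitly permits is dropped.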
The OSI model has seven layers:

- Application
- Presentation
- Session
- Transport
- Network
- Data link
- Physical

Most of the traffic that reaches your firewall will use one of three major protocols: TCP, UDP, or ICMP (the first two operate at the Transport layer; ICMP at the Network layer). Many security experts focus on TCP rules because this protocol uses a three-way handshake to establish a reliable two-way connection. The earliest firewalls operated only on Network-layer information such as source and destination IP addresses and protocols, along with port numbers taken from Transport-layer headers. Later firewalls added deeper Transport-layer and Application-layer functionality. The latest next-generation firewalls go even further, allowing organizations to enforce identity-based policies directly from the firewall.

Related read: Host-Based vs. Network-Based Firewalls

1. Traditional Firewalls

Packet-Filtering Firewalls

Packet-filtering firewalls examine only Network-layer data, filtering out traffic according to the network address, the protocol used, or source and destination port data. Because they do not track the connection state of individual data packets, they are also called stateless firewalls. These firewalls are simple and do not support advanced inspection features. However, they offer low latency and high throughput, making them ideal for certain low-cost inline security applications.

Stateful Inspection Firewalls

When stateful firewalls inspect data packets, they capture details about active sessions and connection states. Recording this data provides visibility into the Transport layer and allows the firewall to make more complex decisions. For example, a stateful firewall can mitigate a denial-of-service attack by comparing a spike in incoming traffic against its rules for making new connections; stateless firewalls have no historical record of connections to look up. These firewalls are also called dynamic packet-filtering firewalls.
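The connection tracking that makes a firewall "stateful" can be sketched as a session table: outbound connections are recorded, and inbound packets are admitted only when they belong to a session the firewall has already seen. This is an illustrative sketch with made-up names, not a real conntrack implementation.

```python
# Minimal connection-tracking table for a stateful firewall sketch.
# A stateless filter could not make the inbound check below, because
# it keeps no memory of past connections.
class StatefulFirewall:
    def __init__(self) -> None:
        # Each session: (local_ip, local_port, remote_ip, remote_port)
        self.sessions: set[tuple] = set()

    def outbound(self, local_ip: str, local_port: int,
                 remote_ip: str, remote_port: int) -> None:
        # Record the new session so reply traffic can be matched later.
        self.sessions.add((local_ip, local_port, remote_ip, remote_port))

    def inbound_allowed(self, remote_ip: str, remote_port: int,
                        local_ip: str, local_port: int) -> bool:
        # Admit only packets that are replies within a known session.
        return (local_ip, local_port, remote_ip, remote_port) in self.sessions

fw = StatefulFirewall()
fw.outbound("10.0.0.5", 51000, "93.184.216.34", 443)
print(fw.inbound_allowed("93.184.216.34", 443, "10.0.0.5", 51000))  # True: reply to our session
print(fw.inbound_allowed("203.0.113.9", 443, "10.0.0.5", 51000))    # False: unsolicited
```

Real implementations also track TCP state transitions and expire idle entries; the table lookup above is the core idea.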
They are generally more secure than stateless firewalls but may introduce latency, because it takes time to inspect every data packet traveling through the network.

Circuit-Level Gateways

Circuit-level gateways act as a proxy between two devices attempting to connect with one another. These firewalls work at the Session layer of the OSI model, performing the TCP handshake on behalf of a protected internal server. This hides valuable information about the internal host, preventing attackers from conducting reconnaissance on potential targets. Instead of inspecting individual data packets, these firewalls translate internal IP addresses to registered Network Address Translation (NAT) addresses. NAT rules allow organizations to protect servers and endpoints by keeping their internal IP addresses out of public view.

2. Next-Generation Firewalls (NGFWs)

Traditional firewalls only address threats at a few layers of the OSI model. Advanced threats can bypass these Network- and Transport-layer protections to attack web applications directly. To address them, firewalls must be able to analyze individual users, devices, and data assets as they travel through complex enterprise networks. Next-generation firewalls achieve this by looking beyond the port and protocol data of individual packets and sessions, granting visibility into sophisticated threats that simpler firewalls would overlook. For example, a traditional firewall may block traffic from an IP address known for conducting denial-of-service attacks. Hackers can bypass this by continuously changing IP addresses, which may allow malicious traffic to reach vulnerable assets. A next-generation firewall may notice that all of this incoming traffic carries the same malicious content. It may act as a TCP proxy and limit the number of new connections made per second.
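A per-second cap on new connections like the one just described can be sketched as a sliding-window rate limiter. The threshold and names below are illustrative choices, not taken from any particular product.

```python
# Sliding-window limit on new connections per second, the kind of
# control an NGFW acting as a TCP proxy might apply during a flood.
from collections import deque

class ConnectionRateLimiter:
    def __init__(self, max_per_second: int) -> None:
        self.max_per_second = max_per_second
        self.times: deque = deque()  # timestamps of recently admitted connections

    def admit(self, now: float) -> bool:
        # Drop timestamps older than one second, then test the budget.
        while self.times and now - self.times[0] >= 1.0:
            self.times.popleft()
        if len(self.times) >= self.max_per_second:
            return False  # over budget: refuse before completing the handshake
        self.times.append(now)
        return True

limiter = ConnectionRateLimiter(max_per_second=3)
results = [limiter.admit(t) for t in (0.0, 0.1, 0.2, 0.3, 1.5)]
print(results)  # [True, True, True, False, True]
```

The fourth attempt at t=0.3 exceeds the three-per-second budget and is refused; by t=1.5 the window has slid past the earlier timestamps, so the connection is admitted again.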
When illegitimate connections fail the TCP handshake, it can simply drop them without overloading the organization's internal systems. This is just one example of what next-gen firewalls are capable of. Most modern firewall products combine a wide variety of technologies to provide comprehensive perimeter security against sophisticated cyber attacks.

How Do NGFWs Enhance Network Security?

- Deep Packet Inspection (DPI): NGFWs go beyond basic packet filtering by inspecting the content of data packets. They analyze the actual data payload, not just header information. This allows them to identify and block threats hidden within the packet content, such as malware, viruses, and suspicious patterns.
- Application-Level Control: NGFWs can identify and control applications and services running on the network. This enables administrators to define and enforce policies based on specific applications rather than just port numbers. For example, you can allow or deny access to social media sites or file-sharing applications.
- Intrusion Prevention Systems (IPS): NGFWs often incorporate intrusion prevention capabilities. They can detect and prevent known and emerging cyber threats by comparing network traffic patterns against a database of known attack signatures. This proactive approach helps protect against a wide range of cyberattacks.
- Advanced Threat Detection: NGFWs use behavioral analysis and heuristics to detect and block unknown or zero-day threats. By monitoring network traffic for anomalies, they can identify suspicious behavior and act to mitigate potential threats.
- User and Device Identification: NGFWs can associate network traffic with specific users or devices, even in complex network environments. This awareness allows for more granular security policies and helps in tracking and responding to security incidents effectively.
- Integration with the Security Ecosystem: NGFWs often integrate with other security solutions, such as antivirus software, intrusion detection systems (IDS), and security information and event management (SIEM) systems. This collaborative approach provides a multi-layered defense strategy.
- Security Automation: NGFWs can automate threat response and mitigation. For example, they can isolate compromised devices from the network or initiate other predefined actions to contain threats swiftly. In a multi-layered security environment, these firewalls often enforce the policies established by security orchestration, automation, and response (SOAR) platforms.
- Content Filtering: NGFWs can filter web content, providing URL filtering and content categorization. This helps organizations enforce internet usage policies and block access to potentially harmful or inappropriate websites. Some NGFWs can even detect outgoing user credentials (like an employee's Microsoft account password) and prevent that content from leaving the network.
- VPN and Secure Remote Access: NGFWs often include VPN capabilities to secure remote connections, which is crucial for remote workers and branch offices. Advanced firewalls may also be able to identify malicious patterns in external VPN traffic, protecting organizations from threat actors hiding behind encrypted VPN providers.
- Cloud-Based Threat Intelligence: Many NGFWs leverage cloud-based threat intelligence services to stay updated with the latest threat information. This real-time intelligence helps NGFWs identify and block emerging threats more effectively.
- Scalability and Performance: NGFWs are designed to handle the increasing volume of traffic in modern networks. They offer improved performance and scalability, ensuring that security does not compromise network speed.
- Logging and Reporting: NGFWs generate detailed logs and reports of network activity.
These logs are valuable for auditing, compliance, and forensic analysis, helping organizations understand and respond to security incidents.

3. Proxy Firewalls

Proxy firewalls are also called application-level gateways or gateway firewalls. They define which applications a network can support, increasing security but demanding continuous attention to maintain network functionality and efficiency. A proxy firewall provides a single point of access where organizations can assess the threat posed by the applications they use. It conducts deep packet inspection and uses a proxy-based architecture to mitigate the risk of Application-layer attacks. Many organizations use proxy servers to segment the parts of their network most likely to come under attack. Proxy firewalls can monitor the core internet protocols these servers use across every application they support. The proxy firewall centralizes application activity into a single server and provides visibility into each data packet processed. This allows the organization to maintain a high level of security on servers that make tempting cyberattack targets. However, these servers won't be able to support new applications without additional firewall configuration. These firewalls work well in highly segmented networks that allow organizations to restrict access to sensitive data without impacting usability and production.

4. Hardware Firewalls

Hardware firewalls are physical devices that secure the flow of traffic between devices in a network. Before cloud computing became prevalent, most firewalls were physical hardware devices. Now, organizations can choose to secure on-premises network infrastructure using hardware firewalls that manage the connections between routers, switches, and individual devices. While the initial cost of acquiring and configuring a hardware firewall can be high, the ongoing overhead costs are smaller than what software firewall vendors charge (often an annual license fee).
Even so, this cost structure makes it difficult for growing organizations to rely entirely on hardware devices: there is always a chance you end up paying for equipment you never use at full capacity. Hardware firewalls offer a few advantages over software firewalls:

- They avoid using network resources that could otherwise go to value-generating tasks.
- They may cost less over time than a continuously renewed software firewall subscription.
- Centralized logging and monitoring can make hardware firewalls easier to manage than complex software-based deployments.

5. Software Firewalls

Many firewall vendors provide virtualized versions of their products as software. They typically charge an annual licensing fee for their firewall-as-a-service product, which runs on any suitably provisioned server or device. Some software firewall configurations require the software to be installed on every computer in the network, which can increase the complexity of deployment and maintenance over time. If firewall administrators forget to update a single device, it may become a security vulnerability. These firewalls also lack their own operating systems and dedicated system resources; they must draw computing power and memory from the devices they are installed on, leaving less available for mission-critical tasks. However, software firewalls carry a few advantages compared to hardware firewalls:

- The initial subscription-based cost is much lower, and many vendors offer a pricing structure that ensures you don't pay for resources you don't use.
- Software firewalls do not take up any physical space, making them ideal for smaller organizations.
- Deploying a software firewall often takes only a few clicks, while deploying hardware can involve complex wiring and time-consuming testing.
Advanced Threats and Firewall Solutions

Most firewalls are well equipped to block simple threats, but advanced threats can still cause problems. Many types of advanced threats are designed to bypass standard firewall policies:

- Advanced Persistent Threats (APTs) often compromise high-level user accounts and slowly spread through the network using lateral movement. They may gather information and account credentials over weeks or months before exfiltrating data undetected. By moving slowly, these threats avoid triggering firewall rules.
- Credential-based attacks bypass simple firewall rules by using genuine user credentials. Since most firewall policies trust authenticated users, attackers can easily bypass rules by stealing account credentials. Simple firewalls can't distinguish between normal traffic and malicious traffic from an authenticated, signed-in user.
- Malicious insiders can be incredibly difficult to detect. These are genuine, authenticated users who have decided to act against the organization's interests. They may already know how the firewall system works, or have privileged access to firewall configurations and policies.
- Combination attacks may target multiple security layers with separate, independent attacks. For example, your cloud-based firewalls may face a Distributed Denial of Service (DDoS) attack while a malicious insider exfiltrates information from the cloud. These tactics allow hackers to coordinate attacks and cover their tracks.

Only next-generation firewalls have security features that can address these types of attack. Anti-data-exfiltration tools may prevent users from sending their login credentials to unsecured destinations, or prevent large-scale data exfiltration altogether. Identity-based policies may block authenticated users from accessing assets they do not routinely use.
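An identity-based policy of the kind just mentioned can be sketched as a check of each request against the set of assets a user routinely accesses. The user profiles and the "challenge" response below are purely illustrative assumptions, not a real product's policy model.

```python
# Identity-based policy sketch: valid credentials alone are not enough.
# Requests for assets outside a user's routine set trigger extra
# scrutiny, one way to catch credential-based attacks.
routine_assets = {
    "alice": {"crm", "email"},
    "bob": {"email", "build-server"},
}

def review_access(user: str, asset: str) -> str:
    # Unknown users or unusual destinations are challenged rather
    # than allowed automatically.
    if asset in routine_assets.get(user, set()):
        return "allow"
    return "challenge"  # e.g. step-up authentication, or raise an alert

print(review_access("alice", "email"))         # allow
print(review_access("alice", "build-server"))  # challenge: outside her routine set
```

In practice the routine-asset sets would be learned from traffic history rather than hard-coded, but the decision logic is the same.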
Firewall Configuration and Security Policies

The success of any firewall implementation is determined by the quality of its security rules. These rules decide which types of traffic the firewall will allow to pass and which it will block. In a modern network environment, this is done using four basic types of firewall rules:

- Access Control Lists (ACLs). These identify the users who have permission to access a certain resource or asset. They may also dictate which operations are allowed on that resource or asset.
- Network Address Translation (NAT) rules. These rules protect internal devices by hiding their original IP addresses from the public internet. This makes it harder for hackers to gain unauthorized access to system resources, because they can't easily target individual devices from outside the network.
- Stateful packet filtering. This is the process of inspecting the data packets in each connection and deciding what to do with data flows that do not appear genuine. Stateful firewalls keep track of existing connections, allowing them to verify that incoming data claiming to be part of an established connection really belongs to it.
- Application-level gateways. These rules provide application-level protection, preventing hackers from disguising malicious traffic as data from (or for) an application. To perform this kind of inspection, the firewall must know what normal traffic looks like for each application on the network and be able to match incoming traffic with those applications.

Network Performance and Firewalls

Firewalls can impact network performance and introduce latency. Optimizing network performance with firewalls is a major challenge in any firewall implementation project.
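The NAT rules described above amount to maintaining a translation table: outbound packets get a rewritten public source address, and a mapping is kept so replies can be translated back. The addresses come from documentation ranges and the port-allocation scheme is a deliberate simplification.

```python
# Source-NAT sketch: rewrite internal addresses to the firewall's
# public address and keep a mapping for reply traffic. Illustrative
# only; real NAT also handles protocols, timeouts, and port reuse.
PUBLIC_IP = "198.51.100.1"

class NatTable:
    def __init__(self) -> None:
        self.next_port = 40000
        # (public_ip, public_port) -> (internal_ip, internal_port)
        self.mappings: dict = {}

    def translate_outbound(self, internal_ip: str, internal_port: int) -> tuple:
        public = (PUBLIC_IP, self.next_port)
        self.next_port += 1
        self.mappings[public] = (internal_ip, internal_port)
        return public  # what the outside world sees

    def translate_inbound(self, public_ip: str, public_port: int):
        # Replies with no mapping are dropped: internal hosts stay hidden.
        return self.mappings.get((public_ip, public_port))

nat = NatTable()
seen_outside = nat.translate_outbound("10.0.0.5", 51000)
print(seen_outside)                           # ('198.51.100.1', 40000)
print(nat.translate_inbound(*seen_outside))   # ('10.0.0.5', 51000)
print(nat.translate_inbound(PUBLIC_IP, 9))    # None: no mapping, dropped
```

Because inbound packets without a mapping return nothing, outside hosts cannot reach internal devices they were never contacted by, which is the protective property NAT rules rely on.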
Firewall experts use a few different approaches to reduce latency and maintain fast, reliable network performance:

- Installing hardware firewalls on high-volume routes, since separate physical devices won't draw computing resources away from other network devices.
- Using software firewalls in low-volume situations where flexibility is important. Being able to quickly configure firewall rules to adapt to changing business conditions can make a major difference in overall network performance.
- Configuring servers to efficiently block unwanted traffic, which is a continuous process. Server administrators should avoid overloading perimeter firewalls with denied outbound requests. Firewall administrators should distribute unwanted traffic across multiple firewalls and routers instead of allowing it to concentrate on one or two devices, and should reduce the complexity of the firewall rule base by minimizing overlapping rules.
- AlgoSec | Operation “Red Kangaroo”: Industry’s First Dynamic Analysis of 4M Public Docker Container Images
Cloud Security | Operation "Red Kangaroo": Industry's First Dynamic Analysis of 4M Public Docker Container Images
Rony Moshkovich · 2 min read · Published 12/1/20

Linux containers aren't new. In fact, the technology was invented 20 years ago. In 2013, Docker entered the scene and revolutionized Linux containers by offering an easy-to-use command line interface (CLI), an engine, and a registry server. Combined, these technologies concealed the complexity of building and running containers behind one common industry standard. As a result, Docker's popularity has skyrocketed, rivalling virtual machines and transforming the industry. To help locate and share Docker container images, Docker offers a service called Docker Hub. Its main feature, repositories, allows the development community to push (upload) and pull (download) container images. With Docker Hub, anyone in the world can download and execute any public image as if it were a standalone application. Today, Docker Hub hosts over 4 million public Docker container images. With 8 billion pulls (downloads) in January 2020 and growing, its annualized image pulls should top 100 billion this year. For comparison, Google Play has 2.7M Android apps in its store, with a download rate of 84 billion a year. How many container images currently hosted on Docker Hub are malicious or potentially harmful? What sort of damage can they inflict? What if a Docker container image downloaded and executed malware at runtime? Is there a reliable way to tell?
What if a compromised Docker container image were downloaded by an unsuspecting customer and used as a parent image to build and then deploy a new container image into production, effectively publishing an application with a backdoor built into it? Is there any way to stop that from happening? At Prevasio, we asked ourselves these questions many times. What we decided to do has never been done before.

The Challenge

At Prevasio, we have built a dynamic analysis sandbox that uses the same principle as a conventional sandbox that "detonates" malware in a safe environment. The only difference is that instead of detonating an executable file, such as a Windows PE file or a Linux ELF binary, Prevasio Analyzer first pulls (downloads) an image from any container registry and then detonates it in its own virtual environment, outside the organization's or customer's infrastructure. Using our solution, we dynamically analyzed all 4 million container images hosted on Docker Hub. To handle such a massive volume of images, Prevasio Analyzer ran non-stop for one month across 800 machines in parallel. The results of our dynamic scan reveal that:

- 51 percent of all containers had "critical" vulnerabilities, while 13 percent were classified as "high" and four percent as "moderate".
- Six thousand containers were riddled with cryptominers, hacking tools/pen-testing frameworks, and backdoor trojans. While many cryptominers and hacking tools may not be malicious per se, they present a potentially unwanted issue to an enterprise.
- More than 400 container images (with nearly 600,000 pulls) contained weaponized Windows malware crossing over into the world of Linux. This crossover is directly due to the proliferation of cross-platform code (e.g. GoLang, .NET Core, and PowerShell Core).

Our analysis of malicious containers also shows that quite a few images contain a dynamic payload.
That is, an image in its original form contains no malicious binary. However, at runtime it might be scripted to download the source of a coinminer, then compile and execute it. A dynamic analysis sandbox such as Prevasio Analyzer is the only kind of solution that provides a behavioral analysis of Docker containers. It is built to reveal the malicious intentions of Docker containers by executing them in its own virtual environment, exposing the full scope of their behavior. The whitepaper with our findings is available here.
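The "dynamic payload" pattern described above can be sketched as a behavioral heuristic over the events a sandbox observes at runtime: flag containers that fetch code over the network, compile it, and execute the resulting binary they did not ship with. The event names and the sequence itself are illustrative assumptions, not Prevasio's actual detection format.

```python
# Behavioral heuristic sketch: flag a container whose runtime event
# trace contains a fetch -> compile -> execute sequence, the classic
# dynamic-payload pattern. Event vocabulary is made up for the example.
SUSPICIOUS_SEQUENCE = ["network_fetch", "compile", "execute_new_binary"]

def has_dynamic_payload(events: list) -> bool:
    """True if the suspicious events occur in order (not necessarily adjacent)."""
    it = iter(events)
    # `step in it` advances the iterator, so order is enforced.
    return all(step in it for step in SUSPICIOUS_SEQUENCE)

benign = ["start", "read_config", "listen"]
miner = ["start", "network_fetch", "write_file", "compile", "execute_new_binary"]
print(has_dynamic_payload(benign))  # False
print(has_dynamic_payload(miner))   # True
```

A static scan of the image would see neither trace; only observing the running container, as a dynamic analysis sandbox does, exposes the sequence.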
- AlgoSec | How To Reduce Attack Surface: 6 Proven Tactics
Cyber Attacks & Incident Response | How To Reduce Attack Surface: 6 Proven Tactics
Tsippi Dach · 2 min read · Published 12/20/23

Security-oriented organizations continuously identify, monitor, and manage internet-connected assets to protect them from emerging attack vectors and potential vulnerabilities. Security teams go through every element of the organization's security posture, from firewalls and cloud-hosted assets to endpoint devices and entry points, looking for opportunities to reduce security risks. This process is called attack surface management. It provides a comprehensive view of the organization's cybersecurity posture, with a neatly organized list of entry points, vulnerabilities, and weaknesses that hackers could exploit in a cyberattack. Attack surface reduction is an important element of any organization's overall cybersecurity strategy. Security leaders who understand the organization's weaknesses can invest resources into filling the most critical gaps first and worry about low-priority threats later.

What assets make up your organization's attack surface?

Your organization's attack surface is the complete set of entry points and vulnerabilities that an attacker could exploit to gain unauthorized access. The more entry points your network has, the larger its attack surface. Most security leaders divide their attention between two broad types of attack surface:

The digital attack surface. This includes all network equipment and business assets used to transfer, store, and communicate information.
It is susceptible to phishing attempts, malware, ransomware attacks, and data breaches. Cybercriminals may infiltrate these assets by bypassing technical security controls, compromising unsecured apps or APIs, or guessing weak passwords.

The physical attack surface. This includes business assets that employees, partners, and customers interact with physically, such as hardware located inside data centers and USB access points. Even access control systems for office buildings and other non-cyber assets may be included. These assets can play a role in attacks involving social engineering, insider threats, and other malicious actors who operate in person. Although the two attack surfaces are distinct, many of their vulnerabilities and potential entry points overlap in real-life threat scenarios. For example, thieves might steal laptops from an unsecured retail location and leverage sensitive data on those devices to launch further attacks against the organization's digital assets. Organizations that take steps to minimize their attack surface can reduce the risks associated with this kind of threat.

Known Assets, Unknown Assets, and Rogue Assets

All physical and digital business assets fall into one of three categories:

- Known assets are apps, devices, and systems that the security team has authorized to connect to the organization's network. These assets are included in risk assessments and protected by robust security measures, such as network segmentation and strict permissions.
- Unknown assets include systems and web applications that the security team is not aware of. These are not authorized to access the network and may represent a serious security threat. Shadow IT applications may fall into this category, along with employee-owned mobile devices storing sensitive data and unsecured IoT devices.
- Rogue assets connect to the network without authorization, but they are known to security teams.
These may include unauthorized user accounts, misconfigured assets, and unpatched software. A major part of managing your organization's attack surface involves identifying and remediating these risks.

Attack Vectors Explained: Minimize Risk by Following Potential Attack Paths

When conducting attack surface analysis, security teams must carefully assess how threat actors might discover and compromise the organization's assets while carrying out an attack. This requires the team to combine elements of vulnerability management with risk management, working through the cyberattack kill chain the way a hacker would. Some cybercriminals leverage technical vulnerabilities in operating systems and app integrations. Others prefer to exploit poor identity and access management policies, or to trick privileged employees into giving up their authentication credentials. Many cyberattacks involve multiple steps carried out by different teams of threat actors. For example, one hacker may specialize in gaining initial access to secured networks while another focuses on escalating privileges. To successfully reduce your organization's attack surface, you must follow potential attacks through these steps and discover what their business impact might be. This provides the insight you need to manage newly discovered vulnerabilities and protect business assets from cyberattack. Some common attack vectors include:

- API vulnerabilities. APIs allow organizations to automate the transfer of data, including scripts and code, between different systems. Many APIs run on third-party servers managed by vendors who host the software for customers. These interfaces can introduce vulnerabilities that internal security teams aren't aware of, reducing visibility into the organization's attack surface.
- Unsecured software plugins.
Plugins are optional add-ons that enhance existing apps with new features or functionality. They are usually made by third-party developers who may require customers to send them data from internal systems. If this transfer is not secured, hackers may intercept it and use that information to attack the system.
- Unpatched software. Software developers continuously release security patches that address emerging threats and vulnerabilities, but not all users implement these patches the moment they are released. The delay gives attackers a key opportunity to learn about a vulnerability (which can be as easy as reading the patch changelog) and exploit it before the patch is installed.
- Misconfigured security tools. Authentication systems, firewalls, and other security tools must be properly configured to produce optimal security benefits. Attackers who discover misconfigurations can exploit them to gain entry to the network.
- Insider threats. This is one of the most common attack vectors, yet it can be the hardest to detect. Any employee entrusted with sensitive data could accidentally send it to the wrong person, resulting in a data breach. Malicious insiders may take steps to cover their tracks, using their privileged permissions and knowledge of the organization to go unnoticed.

6 Tactics for Reducing Your Attack Surface

1. Implement Zero Trust

The Zero Trust security model assumes that data breaches are inevitable and may already have occurred. This adds new layers to the problems attack surface management resolves, but it can dramatically improve overall resilience and preparedness. When you develop security policies using the Zero Trust framework, you impose strong limits on what hackers can and cannot do after gaining initial access to your network. Zero Trust architecture blocks attackers from conducting lateral movement, escalating their privileges, and breaching critical data.
For example, IoT devices are a common entry point into many networks because they typically don't receive the same level of security as on-premises workstations. At the same time, many apps and systems are configured to automatically trust connections from internet-enabled sensors and peripheral devices. Under a Zero Trust framework, these connections would require additional authentication, and the systems they connect to would also need to authenticate themselves before receiving data. Multi-factor authentication is another part of the Zero Trust framework that can dramatically improve operational security. Without it, most systems must assume that anyone with the right username and password combination is a legitimate user; in a compromised-credential scenario, this is obviously not the case. Organizations that build network infrastructure on Zero Trust principles reduce both the number of entry points they expose to attackers and the value of those entry points. If hackers do compromise part of the network, they will be unable to move quickly between network segments and may not stay unnoticed for long.

2. Remove Unnecessary Complexity

Unknown assets are one of the main barriers to operational security excellence. Security teams can't effectively protect systems, apps, and users they don't have detailed information about. Any rogue or unknown assets the organization is responsible for are almost certainly attractive entry points for hackers. Arbitrarily complex systems can be very difficult to document and inventory properly. This is a particularly challenging problem for security leaders at large enterprises that grow through acquisitions. Managing a large portfolio of acquired companies can be incredibly complex, especially when every company has its own security systems, tools, and policies to take into account.
Security leaders generally don't have the authority to consolidate complex systems on their own. However, you can reduce complexity and simplify security controls throughout the environment in several key ways:

Reduce the organization's dependence on legacy systems. End-of-life systems that no longer receive maintenance and support should be replaced with modern equivalents quickly.

Group assets, users, and systems together. Security groups should be assigned on the basis of least-privileged access, so that every user has only the minimum permissions necessary to accomplish their tasks.

Centralize access control management. Ad-hoc access control management quickly leads to unknown vulnerabilities and weaknesses popping up unannounced. Implement a robust identity and access management system so you can create identity-based policies for managing user access.

3. Perform Continuous Vulnerability Monitoring

Your organization's attack surface is constantly changing. New threats are emerging, old ones are getting patched, and your IT environment is supporting new users and assets on a daily basis. Being able to continuously monitor these changes is one of the most important aspects of Zero Trust architecture. The tools you use to support attack surface management should generate alerts when assets are exposed to known risks, allow you to confirm the remediation of detected risks, and provide ample information about the risks they uncover.

Some of the things you can do to make this happen include:

Investing in a continuous vulnerability monitoring solution. Vulnerability scans are useful for finding out where your organization stands at any given moment. Scheduling these scans to occur at regular intervals allows you to build a standardized process for vulnerability monitoring and remediation.

Building a transparent network designed for visibility. Your network should not obscure important security details from you.
Unfortunately, many third-party security tools and services do exactly that. Make sure both you and your third-party security partners are invested in building observability into every aspect of your network.

Prioritizing security expenditure based on risk. Once you can observe the way users, data, and assets interact on the network, you can begin prioritizing security initiatives based on their business impact. This allows you to focus on high-risk tasks first.

4. Use Network Segmentation to Your Advantage

Network segmentation is critical to the Zero Trust framework. When your organization's different subnetworks are separated from one another by strictly protected boundaries, it's much harder for attackers to travel laterally through the network. Limiting access between parts of the network helps streamline security processes while reducing risk.

There are several ways you can segment your network. Most organizations already perform some degree of segmentation by encrypting highly classified data. Others enforce network segmentation principles when differentiating between production and development environments. But for organizations to truly benefit from network segmentation, security leaders must carefully define boundaries between every segment and enforce authentication policies designed for each boundary. This requires in-depth knowledge of the business roles and functions of the users who access those segments, and the ability to configure security tools to inspect and enforce access control rules.

For example, any firewall can block traffic between two network segments. A next-generation firewall can conduct identity-based inspection that allows traffic from authorized users through – even if they are using mobile devices the firewall has never seen before.

5. Implement a Strong Encryption Policy

Encryption policies are an important element of many different compliance frameworks.
HIPAA, PCI-DSS, and many other regulatory frameworks specify particular encryption policies that organizations must follow to be compliant. These standards are based on the latest research in cryptographic security and on threat intelligence reports that outline hackers' capabilities. Even if your organization is not actively seeking regulatory compliance, you should use these frameworks as a starting point for building your own encryption policy. Your organization's risk profile is largely the same whether you seek regulatory certification or not – and accidentally deploying outdated encryption policies can introduce preventable vulnerabilities into an otherwise strong security posture.

Your organization's encryption policy should detail every type of data that should be encrypted and the cipher suite you'll use to encrypt that data. This will necessarily include critical assets like customer financial data and employee payroll records, but it also includes relatively low-impact assets like public Wi-Fi connections at retail stores. In each case, you must implement a modern cipher suite that meets your organization's security needs and replace legacy devices that do not support the latest encryption algorithms. This is particularly important in retail and office settings, where hardware routers, printers, and other devices may no longer support secure encryption.

6. Invest in Employee Training

To truly build security resilience into any company culture, it's critical to explain why these policies must be followed and what kinds of threats they address. One of the best ways to administer standardized security compliance training is by leveraging a corporate learning platform across the organization, so that employees can internalize security policies through scenario-based training courses. This is especially valuable in organizations suffering from persistent shadow IT usage.
When employees understand the security vulnerabilities that shadow IT introduces into the environment, they're far less likely to ignore security policies for the sake of convenience.

Security simulations and awareness campaigns can have a significant impact on training initiatives. When employees know how to identify threat actors at work, they are much less likely to fall victim to them. However, achieving meaningful improvement may require devoting a great deal of time and energy to phishing simulation exercises over time – not everyone is going to get it right in the first month or two.

These initiatives can also provide clear insight and data on how prepared your employees are overall. This data can make a valuable contribution to your attack surface reduction campaign. You may be able to pinpoint departments – or even individual users – who need additional resources and support to improve their resilience against phishing and social engineering attacks. Successfully managing this aspect of your risk assessment strategy will make it much harder for hackers to gain control of privileged administrative accounts.
- AlgoSec | Securely accelerating application delivery
Security Policy Management

Securely accelerating application delivery

Jeff Yeger · 2 min read · Published 11/15/21

In this guest blog, Jeff Yager from IT Central Station (soon to be PeerSpot) discusses how actual AlgoSec users have been able to securely accelerate their app delivery.

These days, it is more important than ever for business owners, application owners, and information security professionals to speak the same language. That way, their organizations can deliver business applications more rapidly while achieving a heightened security posture. AlgoSec's patented platform enables the world's most complex organizations to gain visibility and process changes at zero-touch across the hybrid network. IT Central Station members discussed these benefits of AlgoSec, along with related issues, in their reviews on the site.

Application Visibility

AlgoSec allows users to discover, identify, map, and analyze business applications and security policies across their entire networks. For instance, Jacob S., an IT security analyst at a retailer, reported that the overall visibility AlgoSec gives into his network security policies is high. He said, "It's very clever in the logic it uses to provide insights, especially into risks and cleanup tasks. It's very valuable. It saved a lot of hours on the cleanup tasks for sure. It has saved us days to weeks."

"AlgoSec absolutely provides us with full visibility into the risk involved in firewall change requests," said Aaron Z., a senior network and security administrator at an insurance company that deals with patient health information that must be kept secure.
He added, "There is a risk analysis piece of it that allows us to go in and run that risk analysis against it, figuring out what rules we need to be able to change, then make our environment a little more secure. This is incredibly important for compliance and security of our clients."

Also impressed with AlgoSec's overall visibility into network security policies was Christopher W., a vice president and head of information security at a financial services firm, who said, "What AlgoSec does is give me the ability to see everything about the firewall: its rules, configurations and usage patterns." AlgoSec gives his team all the visibility they need to make sure they can keep the firewall tight. As he put it, "There is no perimeter anymore. We have to be very careful what we are letting in and out, and Firewall Analyzer helps us to do that."

For a cyber security architect at a tech services company, the platform helps him gain visibility into application connectivity flows. He remarked, "We have Splunk, so we need a firewall/security expert view on top of it. AlgoSec gives us that information and it's a valuable contributor to our security environment."

Application Changes and Requesting Connectivity

AlgoSec accelerates application delivery and security policy changes with intelligent application connectivity and change automation. A case in point is Vitas S., a lead infrastructure engineer at a financial services firm, who appreciates the full visibility into the risk involved in firewall change requests. He said, "[AlgoSec] definitely allows us to drill down to the level where we can see the actual policy rule that's affecting the risk ratings. If there are any changes in ratings, it'll show you exactly how to determine what's changed in the network that will affect it. It's been very clear and intuitive."

A senior technical analyst at a maritime company has been equally pleased with the full visibility.
He explained, "That feature is important to us because we're a heavily risk-averse organization when it comes to IT control and changes. It allows us to verify, for the most part, that the controls that IT security is putting in place are being maintained and tracked at the security boundaries."

A financial services firm with more than 10 cluster firewalls deployed AlgoSec to check the compliance status of their devices and reduce the number of rules in each of their policies. According to Mustafa K., their network security engineer, "Now, we can easily track the changes in policies. With every change, AlgoSec automatically sends an email to the IT audit team. It increases our visibility of changes in every policy."

Speed and Automation

The AlgoSec platform automates application connectivity and security policy across a hybrid network so clients can move quickly and stay secure. For Ilya K., a deputy information security department director at a computer software company, utilizing AlgoSec translates into an increase in the security and accuracy of firewall rules. He said, "AlgoSec ASMS brings a holistic view of network firewall policy and automates firewall security management in very large-sized environments. Additionally, it speeds up the changes in firewall rules with a vendor-agnostic approach."

"The user receives the information if his request is within the policies and can continue the request," said Paulo A., a senior information technology security analyst at an integrator. He then noted, "Or, if it is denied, the applicant must adjust their request to stay within the policies. The time spent for this without AlgoSec is up to one week, whereas with AlgoSec, in a maximum of 15 minutes we have the request analyzed."

The results of this capability include greater security, a faster request process, and the ability to automate the implementation of rules.
Srdjan, a senior technical and integration designer at a large retailer, concurred: "By automating some parts of the work, business pressure is reduced since we now deliver much faster. I received feedback from our security department that their FCR approval process is now much easier. The network team is also now able to process FCRs much faster and with more accuracy."

To learn more about what IT Central Station members think about AlgoSec, visit https://www.itcentralstation.com/products/algosec-reviews
- AlgoSec | Taking Control of Network Security Policy
Security Policy Management

Taking Control of Network Security Policy

Jeff Yeger · 2 min read · Published 8/30/21

In this guest blog, Jeff Yager from IT Central Station describes how AlgoSec is perceived by real users and shares how the solution meets their expectations for visibility and monitoring.

Business-driven visibility

A network and security engineer at a comms service provider agreed, saying, "The complete and end-to-end visibility and analysis [AlgoSec] provides of the policy rule base is invaluable and saves countless time and effort." On a related front, according to Srdjan, a senior technical and integration designer at a major retailer, AlgoSec provides a much easier way to process first call resolutions (FCRs) and get visibility into traffic. He said, "With previous vendors, we had to guess what was going on with our traffic and we were not able to act accordingly. Now, we have all sorts of analyses and reports. This makes our decision process, firewall cleanup, and troubleshooting much easier."

Organizations large and small find it imperative to align security with their business processes. AlgoSec provides unified visibility of security across public clouds, software-defined and on-premises networks, including business applications and their connectivity flows. For Mark G., an IT security manager at a sports company, the solution handles rule-based analysis. He said, "AlgoSec provides great unified visibility into all policy packages in one place.
We are tracking insecure changes and getting better visibility into network security environment – either on-prem, cloud or mixed."

Notifications are what stood out to Mustafa K., a network security engineer at a financial services firm. He is now easily able to track changes in policies with AlgoSec, noting that "with every change, it automatically sends an email to the IT audit team and increases our visibility of changes in every policy."

Security policy and network analysis

AlgoSec's Firewall Analyzer delivers visibility and analysis of security policies, and enables users to discover, identify, and map business applications across their entire hybrid network by instantly visualizing the entire security topology – in the cloud, on-premises, and everything in between.

"It is definitely helpful to see the details of duplicate rules on the firewall," said Shubham S., a senior technical consultant at a tech services company. He gets a lot of visibility from Firewall Analyzer. As he explained, "It can define the connectivity and routing. The solution provides us with full visibility into the risk involved in firewall change requests."

A user at a retailer with more than 500 firewalls required automation and reported that "this was the best product in terms of the flexibility and visibility that we needed to manage [the firewalls] across different regions. We can modify policy according to our maintenance schedule and time zones."

A network & collaboration engineer at a financial services firm likewise added that "we now have more visibility into our firewall and security environment using a single pane of glass.
We have a better audit of what our network and security engineers are doing on each device and are now able to see how much we are compliant with our baseline."

Arieh S., a director of information security operations at a multinational manufacturing company, also used Tufin but prefers AlgoSec, which "provides us better visibility for high-risk firewall rules and ease of use."

"If you are looking for a tool that will provide you clear visibility into all the changes in your network and help people prepare well with compliance, then AlgoSec is the tool for you," stated Miracle C., a security analyst at a security firm. He added, "Don't think twice; AlgoSec is the tool for any company that wants clear analysis into their network and policy management."

Monitoring and alerts

Other IT Central Station members enjoy AlgoSec's monitoring and alerts features. Sulochana E., a senior systems engineer at an IT firm, said, "[AlgoSec] provides real-time monitoring, or at least close to real time. I think that is important. I also like its way of organizing. It is pretty clear. I also like their reporting structure – the way we can use AlgoSec to clear a rule base, like covering and hiding rules." For example, if one of his customers is concerned about different standards, like ISO or PCI levels, they can all do the same compliance from AlgoSec. He added, "We can even track the change monitoring and mitigate their risks with it. You can customize the workflows based on their environment. I find those features interesting in AlgoSec."

AlgoSec helps in terms of firewall monitoring. That was the use case that mattered for Alberto S., a senior networking engineer at a manufacturing company. He said, "Automatic alerts are sent to the security team so we can react quicker in case something goes wrong or a threat is detected going through the firewall. This is made possible using the simple reports."

Sulochana E.
concluded by adding that "AlgoSec has helped to simplify the job of security engineers because you can always monitor your risks and know that your particular configurations are up-to-date, so it reduces the effort of the security engineers."

To learn more about what IT Central Station members think about AlgoSec, visit our reviews page. To schedule your personal AlgoSec demo or speak to an AlgoSec security expert, click here.
- AlgoSec | The shocking truth about Network Cloud Security in 2025
Cloud Network Security

The shocking truth about Network Cloud Security in 2025

Iris Stein · 2 min read · Published 2/10/25

The cloud's come a long way, baby. Remember when it was just a buzzword tossed around in boardrooms? Now, it's the engine powering our digital world. But this rapid evolution has left many cloud network security managers grappling with a new reality – and a bit of an identity crisis.

Feeling the heat? You're not alone. The demands on cloud security professionals are skyrocketing. We're expected to be masters of hybrid environments, navigate a widening skills gap, and stay ahead of threats evolving at warp speed. Let's break down the challenges:

Hybrid is the new normal: Gartner predicts that by 2025, a whopping 90% of organizations will be running hybrid cloud environments. This means juggling the complexities of both on-premises and cloud security, demanding a broader skillset and a more holistic approach. Forget silos – we need to be fluent in both worlds.

The skills gap is a chasm: (ISC)²'s 2022 Cybersecurity Workforce Study revealed a global cybersecurity workforce gap of 3.4 million. This talent shortage puts immense pressure on existing security professionals to do more with less. We're stretched thin, and something's gotta give.

Threats are evolving faster than ever: The cloud introduces new attack vectors and vulnerabilities we haven't even imagined yet. McAfee reported a staggering 630% increase in cloud-native attacks. Staying ahead of these threats requires constant vigilance, continuous learning, and a proactive mindset.
Level up your cloud security game

So, how can you thrive in this chaotic environment and ensure your career (and your company's security posture) doesn't go down in flames? Here's your survival guide:

Automate or die: Manual processes are a relic of the past. Embrace automation tools to manage complex security policies, respond to threats faster, and free up your time for strategic initiatives. Think of it as your force multiplier in the fight against complexity.

Become a cloud-native ninja: Deepen your understanding of cloud platforms like AWS, Azure, and GCP. Master their security features, best practices, and quirks. The more you know, the more you can protect.

Sharpen your soft skills: Technical chops alone won't cut it. Communication, collaboration, and problem-solving are critical. You need to clearly articulate security risks to stakeholders, build bridges with different teams, and drive solutions.

Never stop learning: The cloud is a moving target. Continuous learning is no longer optional – it's essential. Attend conferences, devour online courses, and stay informed about the latest security trends and technologies. Complacency is the enemy.

Introducing AlgoSec Cloud Enterprise (ACE): Your cloud security wingman

Let's face it, managing security across a hybrid cloud environment can feel like herding cats. That's where AlgoSec Cloud Enterprise (ACE) steps in. ACE is a comprehensive cloud network security suite that gives you the visibility, automation, and control you need to secure your applications and keep the business humming.

Gain X-Ray Vision into Your Hybrid Cloud: See everything, know everything. ACE gives you complete visibility across your entire environment, from on-premises servers to cloud platforms. No more blind spots, no more surprises.

Enforce Security Policies Like a Boss: Consistent security policies are the bedrock of a strong security posture.
ACE makes it easy to define and enforce policies across all your applications, no matter where they reside.

Conquer Compliance with Confidence: Staying compliant can feel like a never-ending struggle. ACE simplifies compliance management across your hybrid environment, helping you meet regulatory requirements without breaking a sweat.

Accelerate App Delivery Without Sacrificing Security: In today's fast-paced world, speed is key. ACE empowers you to accelerate application delivery without compromising security. Move fast, break things – but not your security posture.

Proactive Risk Prevention: ACE goes beyond basic security checks with more than 150 network security policy risk checks, proactively identifying and mitigating potential vulnerabilities before they can be exploited.

Ready to unlock the true power of the cloud while fortifying your defenses? Learn more about AlgoSec Cloud Enterprise today and take control of your cloud security destiny.
- AlgoSec | How to Perform a Network Security Risk Assessment in 6 Steps
Uncategorized

How to Perform a Network Security Risk Assessment in 6 Steps

Tsippi Dach · 2 min read · Published 1/18/24

For your organization to implement robust security policies, it must have clear information on the security risks it is exposed to. An effective IT security plan must take the organization's unique set of systems and technologies into account. This helps security professionals decide where to deploy limited resources for improving security processes.

Cybersecurity risk assessments provide clear, actionable data about the quality and success of the organization's current security measures. They offer insight into the potential impact of security threats across the entire organization, giving security leaders the information they need to manage risk more effectively. Conducting a comprehensive cyber risk assessment can help you improve your organization's security posture, address security-related production bottlenecks in business operations, and make sure security team budgets are wisely spent.

This kind of assessment is also a vital step in the compliance process. Organizations must undergo information security risk assessments in order to meet regulatory requirements set by different authorities and frameworks, including:

The Health Insurance Portability and Accountability Act (HIPAA)
The International Organization for Standardization (ISO)
The National Institute of Standards and Technology (NIST) Cybersecurity Framework
The Payment Card Industry Data Security Standard (PCI DSS)
The General Data Protection Regulation (GDPR)

What is a Security Risk Assessment?
Your organization's security risk assessment is a formal document that identifies, evaluates, and prioritizes cyber threats according to their potential impact on business operations. Categorizing threats this way allows cybersecurity leaders to manage the risk level associated with them in a proactive, strategic way. The assessment provides valuable data about vulnerabilities in business systems and the likelihood of cyber attacks against those systems. It also provides context into mitigation strategies for identified risks, which helps security leaders make informed decisions during the risk management process.

For example, a security risk assessment may find that the organization is overly reliant on its firewalls and access control solutions. If a threat actor uses phishing or social engineering to bypass these defenses (or take control of them entirely), the entire organization could suffer a catastrophic data breach. In this case, the assessment may recommend investing in penetration testing and advanced incident response capabilities.

Organizations that neglect to invest in network security risk assessments won't know their weaknesses until after they are actively exploited. By the time hackers launch a ransomware attack, it's too late to consider whether your antivirus systems are properly configured against malware.

Who Should Perform Your Organization's Cyber Risk Assessment?

A dedicated internal team should take ownership of the risk assessment process. The process will require technical personnel with a deep understanding of the organization's IT infrastructure. Executive stakeholders should also be involved because they understand how information flows in the context of the organization's business logic and can provide broad insight into its risk management strategy. Small businesses may not have the resources necessary to conduct a comprehensive risk analysis internally.
While a variety of assessment tools and solutions are available on the market, partnering with a reputable managed security service provider is the best way to ensure an accurate outcome. Adhering to a consistent methodology is vital, and experienced vulnerability assessment professionals ensure the best results.

How to Conduct a Network Security Risk Assessment

1. Develop a comprehensive asset map

The first step is accurately mapping out your organization's network assets. If you don't have a clear idea of exactly what systems, tools, and applications the organization uses, you won't be able to manage the risks associated with them. Keep in mind that human user accounts should be counted as assets as well. The Verizon 2023 Data Breach Investigations Report shows that the human element is involved in nearly three-quarters of all breaches. The better you understand your organization's human users and their privilege profiles, the more effectively you can protect them from potential threats and secure critical assets.

Ideally, all of your organization's users should be assigned and managed through a centralized system. For Windows-based networks, Active Directory is usually the solution that comes to mind; your organization may have a different system in place if it uses a different operating system.

Also, don't forget about information assets like trade secrets and intellectual property. Cybercriminals may target these assets in order to extort the organization. Your asset map should show you exactly where these critical assets are stored and provide context into which users have permission to access them. Log and track every single asset in a central database that you can quickly access and easily update. Assign a security value to each asset as you go and categorize each one by access level. Here's an example of how you might want to structure that categorization:

Public data. This is data you've intentionally made available to the public.
It includes web page content, marketing brochures, and any other information of no consequence in a data breach scenario. Confidential data. This data is not publicly available. If the organization shares it with third parties, it is only under a non-disclosure agreement. Sensitive technical or financial information may end up in this category. Internal use only. This term refers to data that is not allowed outside the company, even under non-disclosure terms. It might include employee pay structures, long-term strategy documents, or product research data. Intellectual property. Any trade secrets, issued patents, or copyrighted assets are intellectual property. The value of the organization depends in some way on this information remaining confidential. Compliance restricted data. This category includes any data that is protected by regulatory or legal obligations. For a HIPAA-compliant organization, that would include patient data, medical histories, and protected personal information. This database will be one of the most important security assessment tools you use throughout the remaining steps. 2. Identify security threats and vulnerabilities Once you have a comprehensive asset inventory, you can begin identifying risks and vulnerabilities for each asset. There are many different types of tests and risk assessment tools you can use for this step. Automating the process whenever possible is highly recommended, since it may otherwise become a lengthy manual task. Vulnerability scanning tools can automatically assess your network and applications for vulnerabilities associated with known threats. The scan’s results will tell you exactly what kinds of threats your information systems are susceptible to, and provide some information about how you can remediate them. Be aware that these scans can only determine your vulnerability to known threats.
They won’t detect insider threats or zero-day vulnerabilities, and some scanners may overlook security tool misconfigurations that attackers can take advantage of. You may also wish to conduct a security gap analysis. This will provide you with comprehensive information about how your current security program compares to an established standard like CMMC or PCI DSS. This won’t help protect against zero-day threats, but it can uncover information security management problems and misconfigurations that would otherwise go unnoticed. To take this step to the next level, you can conduct penetration testing against the systems and assets your organization uses. This will validate vulnerability scan and gap analysis data while potentially uncovering unknown vulnerabilities in the process. Pentesting replicates real attacks on your systems, providing deep insight into just how feasible those attacks may be from a threat actor’s perspective. When assessing the different risks your organization faces, try to answer the following questions: What is the most likely business outcome associated with this risk? Will the impact of this risk include permanent damage, like destroyed data? Would your organization be subject to fines for compliance violations associated with this risk? Could your organization face additional legal liabilities if someone exploited this risk? 3. Prioritize risks according to severity and likelihood Once you’ve conducted vulnerability scans and assessed the different risks that could impact your organization, you will be left with a long list of potential threats. This list will include more risks and hazards than you could possibly address all at once. The next step is to go through the list and prioritize each risk according to its potential impact and how likely it is to happen. If you implemented penetration testing in the previous step, you should have precise data on how likely certain attacks are to take place.
Your team will tell you how many steps they took to compromise confidential data, which authentication systems they had to bypass, and what other security functionalities they disabled. Every additional step reduces the likelihood of a cybercriminal carrying out the attack successfully. If you do not implement penetration testing, you will have to conduct an audit to assess the likelihood of attackers exploiting your organization’s vulnerabilities. Industry-wide threat intelligence data can give you an idea of how frequent certain types of attacks are. During this step, you’ll have to balance the likelihood of exploitation with the severity of the potential impact for each risk. This will require research into the remediation costs associated with many cyberattacks. Remediation costs should include business impact – such as downtime, legal liabilities, and reputational damage – as well as the cost of paying employees to carry out remediation tasks. Assigning internal IT employees to remediation tasks implies the opportunity cost of diverting them from their usual responsibilities. The more completely you assess these costs, the more accurate your assessment will be. 4. Develop security controls in response to risks Now that you have a comprehensive overview of the risks your organization is exposed to, you can begin developing security controls to address them. These controls should provide visibility and functionality to your security processes, allowing you to prevent attackers from exploiting your information systems and detect them when they make an attempt. There are three main types of security control available to the typical organization: Physical controls prevent unauthorized access to sensitive locations and hardware assets. Security cameras, door locks, and live guards all contribute to physical security. These controls prevent external attacks from taking place on premises. 
Administrative controls are policies, practices, and workflows that secure business assets and provide visibility into workplace processes. These are vital for protecting against credential-based attacks and malicious insiders. Technical controls include purpose-built security tools like hardware firewalls, encrypted data storage solutions, and antivirus software. Depending on their configuration, these controls can address almost any type of threat. These categories have further sub-categories that describe how the control interacts with the threat it is protecting against. Most controls protect against more than one type of risk, and many controls will protect against different risks in different ways. Here are some of the functions of different controls that you should keep in mind: Detection-based controls trigger alerts when they discover unauthorized activity happening on the network. Intrusion detection systems (IDS) and security information and event management (SIEM) platforms are examples of detection-based solutions. When you configure one of these systems to detect a known risk, you are implementing a detection-based technical control. Prevention-based controls block unauthorized activity from taking place altogether. Authentication protocols and firewall rules are common examples of prevention-based security controls. When you update your organization’s password policy, you are implementing a prevention-based administrative control. Correction and compensation-based controls focus on remediating the effects of cyberattacks once they occur. Disaster recovery systems and business continuity solutions are examples. When you copy a backup database to an on-premises server, you are establishing physical compensation-based controls that will help you recover from potential threats. 5. 
Document the results and create a remediation plan Once you’ve assessed your organization’s exposure to different risks and developed security controls to address those risks, you are ready to condense them into a cohesive remediation plan . You will use the data you’ve gathered so far to justify the recommendations you make, so it’s a good idea to present that data visually. Consider creating a risk matrix to show how individual risks compare to one another based on their severity and likelihood. High-impact risks that have a high likelihood of occurring should draw more time and attention than risks that are either low-impact, unlikely, or both. Your remediation plan will document the steps that security teams will need to take when responding to each incident you describe. If multiple options exist for a particular vulnerability, you may add a cost/benefit analysis of multiple approaches. This should provide you with an accurate way to quantify the cost of certain cyberattacks and provide a comparative cost for implementing controls against that type of attack. Comparing the cost of remediation with the cost of implementing controls should show some obvious options for cybersecurity investment. It’s easy to make the case for securing against high-severity, high-likelihood attacks with high remediation costs and low control costs. Implementing security patches is an example of this kind of security control that costs very little but provides a great deal of value in this context. Depending on your organization’s security risk profile, you may uncover other opportunities to improve security quickly. You will probably also find opportunities that are more difficult or expensive to carry out. You will have to pitch these opportunities to stakeholders and make the case for their approval. 6. Implement recommendations and evaluate the effectiveness of your assessment Once you have approval to implement your recommendations, it’s time for action. 
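The risk matrix described above can be sketched as a small script. The risk names, 1–5 scales, and scores below are entirely hypothetical and only illustrate the prioritization logic:

```python
# Illustrative risk matrix: the risks, likelihood/impact values (1-5
# scales), and the multiplicative scoring rule are all hypothetical.
RISKS = [
    {"name": "Unpatched web server", "likelihood": 4, "impact": 5},
    {"name": "Stolen laptop",        "likelihood": 2, "impact": 3},
    {"name": "Insider data theft",   "likelihood": 1, "impact": 5},
]

def priority(risk):
    """Combine likelihood and impact into a single comparable score."""
    return risk["likelihood"] * risk["impact"]

# Highest-priority risks first, as a remediation plan would order them.
for risk in sorted(RISKS, key=priority, reverse=True):
    print(f'{priority(risk):>2}  {risk["name"]}')
```

High-impact, high-likelihood items surface at the top, which is exactly the ordering a remediation plan should follow.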
Your security team can now assign each item in the remediation plan to the team member responsible and oversee their completion. Be sure to allow a realistic time frame for each step in the process to be completed – especially if your team is not actively executing every task on its own. You should also include steps for monitoring the effectiveness of their efforts and documenting the changes they make to your security posture. This will provide you with key performance metrics that you can compare with future network security assessments moving forward, and help you demonstrate the value of your remediation efforts overall. Once you have implemented the recommendations, you can monitor and optimize the performance of your information systems to ensure your security posture adapts to new threats as they emerge. Risk assessments are not static processes, and you should be prepared to conduct internal audits and simulate the impact of configuration changes on your current deployment. You may wish to repeat your risk evaluation and gap analysis step to find out how much your organization’s security posture has changed. You can use automated tools like AlgoSec to conduct configuration simulations and optimize the way your network responds to new and emerging threats. Investing time and energy into these tasks now will lessen the burden of your next network security risk assessment and make it easier for you to gain approval for the recommendations you make in the future.
- AlgoSec | NACL best practices: How to combine security groups with network ACLs effectively
AWS NACL best practices: How to combine security groups with network ACLs effectively Prof. Avishai Wool 2 min read 8/28/23 Published Like all modern cloud providers, Amazon adopts the shared responsibility model for cloud security. Amazon guarantees secure infrastructure for Amazon Web Services, while AWS users are responsible for maintaining secure configurations. That requires using multiple AWS services and tools to manage traffic. You’ll need to develop a set of inbound rules for incoming connections between your Amazon Virtual Private Cloud (VPC) and all of its Elastic Compute Cloud (EC2) instances and the rest of the Internet. You’ll also need to manage outbound traffic with a series of outbound rules. Your Amazon VPC provides you with several tools to do this. The two most important ones are security groups and Network Access Control Lists (NACLs). Security groups are stateful firewalls that secure inbound traffic for individual EC2 instances. Network ACLs are stateless firewalls that secure inbound and outbound traffic for VPC subnets. Managing AWS VPC security requires configuring both of these tools appropriately for your unique security risk profile. This means planning your security architecture carefully to align it with the rest of your security framework. For example, your firewall rules impact the way AWS Identity and Access Management (IAM) handles user permissions. Some (but not all) IAM features can be implemented at the network firewall layer of security. Before you can manage AWS network security effectively, you must familiarize yourself with how AWS security tools work and what sets them apart.
Everything you need to know about security groups vs NACLs AWS security groups explained: Every AWS account has a single default security group assigned to the default VPC in every Region. It is configured to allow inbound traffic from network interfaces assigned to the same group, using any protocol and any port. It also allows all outbound traffic using any protocol and any port. Your default security group will also allow all outbound IPv6 traffic once your VPC is associated with an IPv6 CIDR block. You can’t delete the default security group, but you can create new security groups and assign them to AWS EC2 instances. Each security group can only contain up to 60 rules, but you can set up to 2500 security groups per Region. You can associate many different security groups with a single instance, potentially combining hundreds of rules. These are all allow rules that permit traffic to flow according to the ports and protocols specified. For example, you might set up a rule that authorizes inbound traffic over IPv6 for Linux SSH commands and sends it to a specific destination. This could be different from the destination you set for other TCP traffic. Security groups are stateful, which means that requests sent from your instance will be allowed to flow regardless of inbound traffic rules. Similarly, VPC security groups automatically allow responses to inbound traffic to flow out regardless of outbound rules. However, since security groups do not support deny rules, you can’t use them to block a specific IP address from connecting with your EC2 instance. Be aware that Amazon EC2 automatically blocks email traffic on port 25 by default – but this is not included as a specific rule in your default security group. AWS NACLs explained: Your VPC comes with a default NACL configured to automatically allow all inbound and outbound network traffic. Unlike security groups, NACLs filter traffic at the subnet level.
That means that Network ACL rules apply to every EC2 instance in the subnet, allowing users to manage AWS resources more efficiently. Every subnet in your VPC must be associated with a Network ACL. Any single Network ACL can be associated with multiple subnets, but each subnet can only be assigned to one Network ACL at a time. Every rule has its own rule number, and Amazon evaluates rules in ascending order. The most important characteristic of NACL rules is that they can deny traffic. Amazon evaluates these rules when traffic enters or leaves the subnet – not while it moves within the subnet. You can access more granular data on data flows using VPC flow logs. Since Amazon evaluates NACL rules in ascending order, make sure that you place deny rules earlier in the table than rules that allow traffic to multiple ports. You will also have to create specific rules for IPv4 and IPv6 traffic – AWS treats these as two distinct types of traffic, so rules that apply to one do not automatically apply to the other. Once you start customizing NACLs, you will have to take into account the way they interact with other AWS services. For example, Elastic Load Balancing won’t work if your NACL contains a deny rule excluding traffic from 0.0.0.0/0 or the subnet’s CIDR. You should create specific inclusions for services like Elastic Load Balancing, AWS Lambda, and AWS CloudWatch. You may need to set up specific inclusions for third-party APIs, as well. You can create these inclusions by specifying ephemeral port ranges that correspond to the services you want to allow. For example, NAT gateways use ports 1024 to 65535. This is the same range covered by AWS Lambda functions, but it’s different than the range used by Windows operating systems. When creating these rules, remember that unlike security groups, NACLs are stateless. That means that when responses to allowed traffic are generated, those responses are subject to NACL rules. 
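The evaluation semantics described above (ascending rule numbers, first match wins, and an implicit deny if nothing matches) can be sketched in a few lines of Python. The rule numbers, CIDRs, and port ranges below are hypothetical examples, not AWS defaults:

```python
# Sketch of NACL rule evaluation: rules are checked in ascending
# rule-number order, the first match decides, and unmatched traffic
# hits the implicit deny ("*" rule) at the end of every NACL.
import ipaddress

NACL_RULES = [
    (90,  "deny",  "203.0.113.0/24", range(0, 65536)),    # block a bad subnet first
    (100, "allow", "0.0.0.0/0",      range(443, 444)),    # HTTPS from anywhere
    (110, "allow", "0.0.0.0/0",      range(1024, 65536)), # ephemeral response ports
]

def evaluate(src_ip, dst_port, rules=NACL_RULES):
    for _num, action, cidr, ports in sorted(rules):
        if ipaddress.ip_address(src_ip) in ipaddress.ip_network(cidr) \
                and dst_port in ports:
            return action
    return "deny"  # implicit deny at the end of every NACL

print(evaluate("198.51.100.7", 443))  # allow
print(evaluate("203.0.113.9", 443))   # deny: rule 90 matches first
```

Note how placing the deny rule at number 90 guarantees it is considered before the broad allow at 100 – the ordering point made above.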
Misconfigured NACLs deny traffic responses that should be allowed, leading to errors, reduced visibility, and potential security vulnerabilities. How to configure and map NACL associations A major part of optimizing NACL architecture involves mapping the associations between security groups and NACLs. Ideally, you want to enforce a specific set of rules at the subnet level using NACLs, and a different set of instance-specific rules at the security group level. Keeping these rulesets separate will prevent you from setting inconsistent rules and accidentally causing unpredictable performance problems. The first step in mapping NACL associations is using the Amazon VPC console to find out which NACL is associated with a particular subnet. Since NACLs can be associated with multiple subnets, you will want to create a comprehensive list of every association and the rules they contain. To find out which NACL is associated with a subnet: Open the Amazon VPC console. Select Subnets in the navigation pane. Select the subnet you want to inspect. The Network ACL tab will display the ID of the ACL associated with that network, and the rules it contains. To find out which subnets are associated with a NACL: Open the Amazon VPC console. Select Network ACLs in the navigation pane. Click over to the column entitled Associated With. Select a Network ACL from the list. Look for Subnet associations on the details pane and click on it. The pane will show you all subnets associated with the selected Network ACL. Now that you know the difference between security groups and NACLs and can map the associations between your subnets and NACLs, you’re ready to implement some security best practices that will help you strengthen and simplify your network architecture.
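If you prefer scripting this mapping over clicking through the console, the same subnet-to-NACL map can be derived from the JSON that the aws ec2 describe-network-acls CLI command returns. The IDs below are hypothetical and the response is heavily trimmed to the fields this sketch needs:

```python
# Building a subnet -> NACL map from a (trimmed, hypothetical)
# `aws ec2 describe-network-acls` response.
response = {
    "NetworkAcls": [
        {"NetworkAclId": "acl-0a1b2c3d",
         "Associations": [{"SubnetId": "subnet-111"}, {"SubnetId": "subnet-222"}]},
        {"NetworkAclId": "acl-9e8f7a6b",
         "Associations": [{"SubnetId": "subnet-333"}]},
    ]
}

def subnet_to_nacl(resp):
    """Each subnet maps to exactly one NACL; a NACL may cover many subnets."""
    return {assoc["SubnetId"]: acl["NetworkAclId"]
            for acl in resp["NetworkAcls"]
            for assoc in acl["Associations"]}

print(subnet_to_nacl(response))
```

Because each subnet can only be associated with one NACL at a time, the dictionary keys are guaranteed to be unique, which makes this a convenient inventory to diff between audits.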
5 best practices for AWS NACL management Pay close attention to default NACLs, especially at the beginning Since every VPC comes with a default NACL, many AWS users jump straight into configuring their VPC and creating subnets, leaving NACL configuration for later. The problem here is that every subnet associated with your VPC will inherit the default NACL. This allows all traffic to flow into and out of the network. Going back and building a working security policy framework will be difficult and complicated – especially if adjustments are still being made to your subnet-level architecture. Taking time to create custom NACLs and assign them to the appropriate subnets as you go will make it much easier to keep track of changes to your security posture as you modify your VPC moving forward. Implement a two-tiered system where NACLs and security groups complement one another Security groups and NACLs are designed to complement one another, yet not every AWS VPC user configures their security policies accordingly. Mapping out your assets can help you identify exactly what kind of rules need to be put in place, and may help you determine which tool is the best one for each particular case. For example, imagine you have a two-tiered web application with web servers in one security group and a database in another. You could establish inbound NACL rules that allow external connections to your web servers from anywhere in the world (enabling port 443 connections) while strictly limiting access to your database (by only allowing port 3306 connections for MySQL). Look out for ineffective, redundant, and misconfigured deny rules Amazon recommends placing deny rules first in the sequential list of rules that your NACL enforces. Since you’re likely to enforce multiple deny rules per NACL (and multiple NACLs throughout your VPC), you’ll want to pay close attention to the order of those rules, looking for conflicts and misconfigurations that will impact your security posture. 
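As an illustration of the kind of review this requires, the sketch below flags deny rules that an earlier, broader allow rule would always pre-empt. It uses simplified (rule number, action, CIDR) tuples rather than the real AWS rule format:

```python
# Illustrative check for "shadowed" deny rules: because NACL rules are
# evaluated in ascending order, an allow rule with a lower number whose
# CIDR covers the deny rule's CIDR will always match first, so the deny
# never takes effect. Rule tuples are simplified for this sketch.
import ipaddress

def shadowed_denies(rules):
    shadowed = []
    ordered = sorted(rules)
    for i, (num, action, cidr) in enumerate(ordered):
        if action != "deny":
            continue
        net = ipaddress.ip_network(cidr)
        for prev_num, prev_action, prev_cidr in ordered[:i]:
            if prev_action == "allow" and net.subnet_of(ipaddress.ip_network(prev_cidr)):
                shadowed.append((num, prev_num))
    return shadowed

rules = [(100, "allow", "0.0.0.0/0"), (200, "deny", "203.0.113.0/24")]
print(shadowed_denies(rules))  # [(200, 100)] -- the deny can never match
```

Swapping the rule numbers (deny first, allow second) makes the report empty, which is the ordering Amazon recommends.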
Similarly, you should pay close attention to the way security group rules interact with your NACLs. Even misconfigurations that are harmless from a security perspective may end up impacting the performance of your instance, or causing other problems. Regularly reviewing your rules is a good way to prevent these mistakes from occurring. Limit outbound traffic to the required ports or port ranges When creating a new NACL, you have the ability to apply inbound or outbound restrictions. There may be cases where you want to set outbound rules that allow traffic from all ports. Be careful, though. This may introduce vulnerabilities into your security posture. It’s better to limit access to the required ports, or to specify the corresponding port range for outbound rules. This establishes the principle of least privilege to outbound traffic and limits the risk of unauthorized access that may occur at the subnet level. Test your security posture frequently and verify the results How do you know if your particular combination of security groups and NACLs is optimal? Testing your architecture is a vital step towards making sure you haven’t left out any glaring vulnerabilities. It also gives you a good opportunity to address misconfiguration risks. This doesn’t always mean actively running penetration tests with experienced red team consultants, although that’s a valuable way to ensure best-in-class security. It also means taking time to validate your rules by running small tests with an external device. Consider using AWS flow logs to trace the way your rules direct traffic and using that data to improve your work. How to diagnose security group rules and NACL rules with flow logs Flow logs allow you to verify whether your firewall rules follow security best practices effectively. You can follow data ingress and egress and observe how data interacts with your AWS security rule architecture at each step along the way. 
This gives you clear visibility into how efficient your route tables are, and may help you configure your internet gateways for optimal performance. Before you can use the Flow Log CLI, you will need to create an IAM role that includes a policy granting users the permission to create, configure, and delete flow logs. Flow logs are available at three distinct levels, each accessible through its own console: Network interfaces VPCs Subnets You can use the ping command from an external device to test the way your instance’s security group and NACLs interact. Your security group rules (which are stateful) will allow the response ping from your instance to go through. Your NACL rules (which are stateless) will not allow the outbound ping response to travel back to your device. You can look for this activity through a flow log query. Here is a quick tutorial on how to create a flow log query to check your AWS security policies. First you’ll need to create a flow log in the AWS CLI. This is an example of a flow log query that captures all traffic for a specified network interface. It delivers the flow logs to a CloudWatch log group with permissions specified in the IAM role: aws ec2 create-flow-logs \ --resource-type NetworkInterface \ --resource-ids eni-1235b8ca123456789 \ --traffic-type ALL \ --log-group-name my-flow-logs \ --deliver-logs-permission-arn arn:aws:iam::123456789101:role/publishFlowLogs Assuming your test pings represent the only traffic flowing between your external device and EC2 instance, you’ll get two records that look like this: 2 123456789010 eni-1235b8ca123456789 203.0.113.12 172.31.16.139 0 0 1 4 336 1432917027 1432917142 ACCEPT OK 2 123456789010 eni-1235b8ca123456789 172.31.16.139 203.0.113.12 0 0 1 4 336 1432917094 1432917142 REJECT OK To parse this data, you’ll need to familiarize yourself with flow log syntax.
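Because each version-2 record is a single space-separated line, parsing one is straightforward. The field names used as dictionary keys below follow the default record format; the sample line is the ACCEPT record shown above:

```python
# A version-2 flow log record is one space-separated line with 14
# fields, in the default field order.
FIELDS = ["version", "account_id", "interface_id", "srcaddr", "dstaddr",
          "srcport", "dstport", "protocol", "packets", "bytes",
          "start", "end", "action", "log_status"]

def parse_record(line):
    """Zip the whitespace-split tokens against the default field names."""
    return dict(zip(FIELDS, line.split()))

record = parse_record(
    "2 123456789010 eni-1235b8ca123456789 203.0.113.12 172.31.16.139 "
    "0 0 1 4 336 1432917027 1432917142 ACCEPT OK")
print(record["srcaddr"], record["action"])  # 203.0.113.12 ACCEPT
```

Filtering a batch of parsed records on action == "REJECT" is a quick way to isolate the traffic your NACLs and security groups are dropping.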
Default flow log records contain 14 arguments, although you can also expand custom queries to return more than double that number: Version tells you the version currently in use. Default flow log records use Version 2. Expanded custom requests may use Version 3 or 4. Account-id tells you the account ID of the owner of the network interface that traffic is traveling through. The record may display as unknown if the network interface is part of an AWS service like a Network Load Balancer. Interface-id shows the unique ID of the network interface for the traffic currently under inspection. Srcaddr shows the source of incoming traffic, or the address of the network interface for outgoing traffic. In the case of IPv4 addresses for network interfaces, it is always the private IPv4 address. Dstaddr shows the destination of outgoing traffic, or the address of the network interface for incoming traffic. In the case of IPv4 addresses for network interfaces, it is always the private IPv4 address. Srcport is the source port for the traffic under inspection. Dstport is the destination port for the traffic under inspection. Protocol refers to the corresponding IANA traffic protocol number. Packets describes the number of packets transferred. Bytes describes the number of bytes transferred. Start shows the time when the first data packet was received. This could be up to one minute after the network interface transmitted or received the packet. End shows the time when the last data packet was received. This can be up to one minute after the network interface transmitted or received the data packet. Action describes what happened to the traffic under inspection: ACCEPT means that traffic was allowed to pass. REJECT means the traffic was blocked, typically by security groups or NACLs. Log-status confirms the status of the flow log: OK means data is logging normally.
NODATA means no network traffic to or from the network interface was detected during the specified interval. SKIPDATA means some flow log records are missing, usually due to internal capacity constraints or other errors. Going back to the example above, the flow log output shows that a user sent a command from a device with the IP address 203.0.113.12 to the network interface’s private IP address, which is 172.31.16.139. The security group’s inbound rules allowed the ICMP traffic to travel through, producing an ACCEPT record. However, the NACL did not let the ping response go through, because it is stateless. This generated the REJECT record that followed immediately after. If you configure your NACL to permit outbound ICMP traffic and run this test again, the second flow log record will change to ACCEPT. Amazon Web Services (AWS) is one of the most popular options for organizations looking to migrate their business applications to the cloud. It’s easy to see why: AWS offers high capacity, scalable and cost-effective storage, and a flexible, shared responsibility approach to security. Essentially, AWS secures the infrastructure, and you secure whatever you run on that infrastructure. However, this model does throw up some challenges. What exactly do you have control over? How can you customize your AWS infrastructure so that it isn’t just secure today, but will continue delivering robust, easily managed security in the future? The basics: security groups AWS offers virtual firewalls to organizations, for filtering traffic that crosses their cloud network segments. The AWS firewalls are managed using a concept called Security Groups. These are the policies, or lists of security rules, applied to an instance – a virtualized computer in the AWS estate.
AWS Security Groups are not identical to traditional firewalls, and they have some unique characteristics and functionality that you should be aware of. We’ve discussed them in detail in video lesson 1: the fundamentals of AWS Security Groups, but the crucial points are as follows. First, security groups do not deny traffic – that is, all the rules in security groups are positive, and allow traffic. Second, while security group rules can be set to specify a traffic source, or a destination, they cannot specify both on the same rule. This is because AWS always sets the unspecified side (source or destination) as the instance to which the group is applied. Finally, a single security group can be applied to multiple instances, or multiple security groups can be applied to a single instance: AWS is very flexible. This flexibility is one of the unique benefits of AWS, allowing organizations to build bespoke security policies across different functions and even operating systems, mixing and matching them to suit their needs. Adding Network ACLs into the mix To further enhance and enrich its security filtering capabilities, AWS also offers a feature called Network Access Control Lists (NACLs). Like security groups, each NACL is a list of rules, but there are two important differences between NACLs and security groups. The first difference is that NACLs are not directly tied to instances, but are tied to the subnet within your AWS virtual private cloud that contains the relevant instance. This means that the rules in a NACL apply to all of the instances within the subnet, in addition to all the rules from the security groups. So a specific instance inherits all the rules from the security groups associated with it, plus the rules associated with a NACL which is optionally associated with a subnet containing that instance. As a result, NACLs have a broader reach, and affect more instances than a security group does.
The second difference is that NACLs can be written to include an explicit action, so you can write ‘deny’ rules – for example, to block traffic from a particular set of IP addresses which are known to be compromised. The ability to write ‘deny’ actions is a crucial part of NACL functionality. It’s all about the order As a consequence, when you have the ability to write both ‘allow’ rules and ‘deny’ rules, the order of the rules becomes important. If you switch the order of the rules between a ‘deny’ and ‘allow’ rule, then you’re potentially changing your filtering policy quite dramatically. To manage this, AWS uses the concept of a ‘rule number’ within each NACL. By specifying the rule number, you can identify the correct order of the rules for your needs. You can choose which traffic you deny at the outset, and which you then actively allow. As such, with NACLs you can manage security tasks in a way that you cannot do with security groups alone. However, we did point out earlier that an instance inherits security rules from both the security groups and the NACLs – so how do these interact? The order by which rules are evaluated is this: For inbound traffic, AWS’s infrastructure first assesses the NACL rules. If traffic gets through the NACL, then all the security groups that are associated with that specific instance are evaluated, and the order in which this happens within and among the security groups is unimportant because they are all ‘allow’ rules. For outbound traffic, this order is reversed: the traffic is first evaluated against the security groups, and then finally against the NACL that is associated with the relevant subnet. You can see me explain this topic in person in my new whiteboard video.
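That inbound evaluation order can be reduced to a minimal sketch, with the NACL and the security groups modeled as plain Python callables. This illustrates the logic only; it is not an AWS API:

```python
# Sketch of inbound evaluation order: traffic must first pass the
# subnet's NACL; only then are the instance's security groups checked,
# and any single matching allow rule admits it (order is irrelevant
# because security groups contain only allow rules).
def inbound_allowed(packet, nacl_evaluate, sg_allow_rules):
    if nacl_evaluate(packet) == "deny":      # step 1: stateless NACL
        return False
    return any(rule(packet) for rule in sg_allow_rules)  # step 2: SGs

# Hypothetical usage: the NACL admits only HTTPS, and one security
# group rule allows port 443.
nacl = lambda p: "allow" if p["port"] == 443 else "deny"
sg_rules = [lambda p: p["port"] == 443]
print(inbound_allowed({"port": 443}, nacl, sg_rules))  # True
print(inbound_allowed({"port": 22}, nacl, sg_rules))   # False
```

For outbound traffic you would simply swap the two steps, mirroring the reversed order described above.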
- AlgoSec | Building a Blueprint for a Successful Micro-segmentation Implementation
Micro-segmentation · Building a Blueprint for a Successful Micro-segmentation Implementation · Prof. Avishai Wool · 2 min read · Published 6/22/20

Avishai Wool, CTO and co-founder of AlgoSec, looks at how organizations can implement and manage SDN-enabled micro-segmentation strategies.

Micro-segmentation is regarded as one of the most effective methods to reduce an organization's attack surface, and a lack of it has often been cited as a contributing factor in some of the largest data breaches and ransomware attacks. One of the key reasons enterprises have been slow to embrace it is that it can be complex and costly to implement – especially in traditional on-premises networks and data centers. There, creating internal zones usually means installing extra firewalls, changing routing, even adding cabling to police the traffic flows between zones, and managing the additional filtering policies manually.

However, as many organizations move to virtualized data centers using Software-Defined Networking (SDN), some of these cost and complexity barriers are lifted. In SDN-based data centers the networking fabric has built-in filtering capabilities, making internal network segmentation much more accessible without having to add new hardware. SDN's flexibility enables advanced, granular zoning: in principle, data center networks can be divided into hundreds, or even thousands, of microsegments. This offers levels of security that would previously have been impossible – or at least prohibitively expensive – to implement in traditional data centers.
However, capitalizing on the potential of micro-segmentation in virtualized data centers does not eliminate all the challenges. It still requires the organization to deploy a filtering policy that the micro-segmented fabric will enforce, and writing this policy is the first, and largest, hurdle that must be cleared.

The requirements of a micro-segmentation policy

A correct micro-segmentation filtering policy has three high-level requirements:

It allows all business traffic – The last thing you want is a micro-segmentation policy that blocks necessary business communication and causes applications to stop functioning.
It allows nothing else – By default, all other traffic should be denied.
It is future-proof – 'More of the same' changes in the network environment shouldn't break rules. If you write your policies too narrowly, then when something in the network changes, such as a new server or application, something will stop working. Write with scalability in mind.

A micro-segmentation blueprint

Now that you know what you are aiming for, how can you actually achieve it? First of all, your organization needs to know what its traffic flows are – what traffic should be allowed. To get this information, you can perform a 'discovery' process. Only once you have this information can you establish where to place the borders between the microsegments in the data center, and how to devise and manage the security policies for each of the segments in the network environment. I welcome you to download AlgoSec's new eBook, where we explain in detail how to implement and manage micro-segmentation.

AlgoSec Enables Micro-segmentation

The AlgoSec Security Management Suite (ASMS) employs the power of automation to make it easy to define and enforce your micro-segmentation strategy inside the data center, ensure that it does not block critical business services, and meet compliance requirements.
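The three policy requirements listed earlier can be illustrated with a short sketch. This is a hypothetical, simplified model – the segment CIDRs, ports and the POLICY structure are invented for the example. Writing rules against whole subnets, rather than individual hosts, is what makes the policy future-proof, and the fall-through return is the default deny.

```python
from ipaddress import ip_address, ip_network

# Hypothetical micro-segmentation policy: each entry allows one business
# flow between two segments on one port. Rules name subnets, not hosts,
# so adding another server to an existing tier doesn't break anything.
POLICY = [
    # (source segment,  destination segment,  port)
    ("10.1.0.0/24",     "10.2.0.0/24",        5432),  # web tier -> database tier
    ("10.0.9.0/24",     "10.2.0.0/24",        22),    # jump hosts -> database tier
]

def flow_allowed(src_ip, dst_ip, port):
    for src_net, dst_net, allowed_port in POLICY:
        if (ip_address(src_ip) in ip_network(src_net)
                and ip_address(dst_ip) in ip_network(dst_net)
                and port == allowed_port):
            return True    # requirement 1: business traffic is allowed
    return False           # requirement 2: everything else is denied
```

A newly deployed web server such as 10.1.0.99 is covered by the existing web-tier rule without any policy change (requirement 3), while a flow on an unlisted port is denied.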
AlgoSec supports micro-segmentation by:

Providing application discovery based on NetFlow information
Identifying unprotected network flows that do not cross any firewall and are not filtered for an application
Automatically identifying changes that would violate the micro-segmentation strategy
Automatically implementing network security changes
Automatically validating changes

The bottom line is that implementing an effective network micro-segmentation strategy is now possible. It requires careful planning and implementation, but when carried out following a proper blueprint, and with the automation capabilities of the AlgoSec Security Management Suite, it provides you with stronger security without sacrificing business agility. Find out more about how micro-segmentation can help you boost your security posture, or request your personal demo.
- AlgoSec | Stop hackers from poisoning the well: Protecting critical infrastructure against cyber-attacks
Cyber Attacks & Incident Response · Stop hackers from poisoning the well: Protecting critical infrastructure against cyber-attacks · Tsippi Dach · 2 min read · Published 3/31/21

Attacks on water treatment plants show just how vulnerable critical infrastructure is to hacking – here's how these vital services should be protected.

Criminals plotting to poison a city's water supply is a recurring theme in TV and movie thrillers, such as 2005's Batman Begins. But as we've seen recently, it's more than just a plot device: it's a cyber-threat that is all too real. During the past 12 months, there have been two high-profile attacks on water treatment systems that serve local populations, both with the aim of causing harm to citizens.

The first was in April 2020, targeting a plant in Israel. Intelligence sources said that hackers gained access to the plant and tried altering the chlorine levels in drinking water – but luckily the attack was detected and stopped. And in early February, a hacker gained access to the water system of Oldsmar, Florida and tried to pump in a dangerous amount of sodium hydroxide. The hacker succeeded in starting to add the chemical, but luckily a worker spotted what was happening and reversed the action. But what could have happened if those timely interventions had not been made?

These incidents are a clear reminder that critical national infrastructure is vulnerable to attacks – and that those attacks will keep on happening, with the potential to impact the lives of millions of people. And of course, the Covid-19 pandemic has further highlighted how essential critical infrastructure is to our daily lives.
So how can better security be built into critical infrastructure systems, to stop attackers breaching them and disrupting day-to-day operations? It's a huge challenge, because of the variety and complexity of the networks and systems in use across different industry sectors worldwide.

Different systems but common security problems

For example, water and power utilities run large numbers of cyber-physical systems: industrial equipment such as turbines, pumps and switches, managed in turn by a range of different industrial control systems (ICS). These were not designed with security in mind: they are simply machines with computerized controllers that enact the instructions they receive from operators. The communications between the operator and the controllers travel over IP-based networks – which, without proper network defenses, means the controllers can be reached over the Internet. This is the vector that hackers exploit.

As such, irrespective of the differences between ICS controls, the security challenges for all critical infrastructure organizations are similar: hackers must be stopped from infiltrating networks, and if they do breach the organization's defenses, they must be prevented from moving laterally across networks and gaining access to critical systems. This makes network segmentation one of the core strategies for securing critical infrastructure: keep operational systems separate from other networks in the organization and from the public Internet, and surround them with security gateways so that they cannot be accessed by unauthorized people. In the attack examples we mentioned earlier, properly implemented segmentation would have prevented a hacker from reaching the PC that controls the water plant's pumps and valves.
Damaging ransomware attacks, which also exploit internal network connections and pathways to spread rapidly and cause maximum disruption, have increased over the past year. Organizations should therefore also employ security best practices to block or limit the impact of ransomware on their critical systems. These best practices have not changed significantly since 2017's massive WannaCry and NotPetya attacks, so organizations would be wise to check that they are applying them on their own networks.

Protecting critical infrastructure against cyber-attacks is a complex challenge because of the sheer diversity of systems in each sector. However, the established security measures we've outlined here are extremely effective in protecting these vital systems – and in turn, protecting all of us.
- AlgoSec | Router Honeypot for an IRC Bot
Cloud Security · Router Honeypot for an IRC Bot · Rony Moshkovich · 2 min read · Published 9/13/20

In our previous post, we provided some details about a new fork of Kinsing malware, a Linux malware that propagates across misconfigured Docker platforms and compromises them with a coinminer. Several days ago, the attackers behind this malware uploaded a new ELF executable b_armv7l to the compromised server dockerupdate[.]anondns[.]net.

The executable b_armv7l is based on known source code of Tsunami (also known as Kaiten), and is built with the uClibc toolchain:

$ file b_armv7l
b_armv7l: ELF 32-bit LSB executable, ARM, EABI4 version 1 (SYSV), dynamically linked, interpreter /lib/ld-uClibc.so.0, with debug_info, not stripped

Unlike glibc, the C library normally used with Linux distributions, uClibc is smaller and is designed for embedded Linux systems, such as IoT devices. The malicious b_armv7l was therefore built with a clear intention to install it on devices such as routers, firewalls, gateways, network cameras, NAS servers, etc.

Some of the binary's strings are encrypted. With the help of the Hex-Rays decompiler, one can clearly see how they are decrypted:

memcpy(&key, "xm@_;w,B-Z*j?nvE|sq1o$3\"7zKC4ihgfe6cba~&5Dk2d!8+9Uy:", 0x40u);
memcpy(&alphabet, "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ. ", 0x40u);
for (i = 0; i <= 64; ++i) {
    if (encoded[j] == key[i]) {
        if (psw_or_srv)
            decodedpsw[k] = alphabet[i];
        else
            decodedsrv[k] = alphabet[i];
        ++k;
    }
}

The string decryption routine is trivial – it simply replaces each encrypted string's character found in the array key with the character at the same position in the array alphabet. Using this trick, the critical strings can be decrypted as:

Variable      Encoded string    Decoded string
decodedpsw    $7|3vfaa~8        logmeINNOW
decodedsrv    $7?*$s7
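For readers who want to experiment, here is a small Python re-implementation of the substitution routine above. One caveat: the key string, as it survives web re-encoding in this post, no longer reproduces every character of the table; the early key positions do verify, though – for example, the first five characters of the encoded password decode to "logme".

```python
# Re-implementation of the decompiled substitution decoder.
# KEY is copied from the decompiled binary as printed in this post; if the
# blog rendering dropped or altered characters, later positions may be off.
KEY      = 'xm@_;w,B-Z*j?nvE|sq1o$3"7zKC4ihgfe6cba~&5Dk2d!8+9Uy:'
ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ. "

def decode(encoded: str) -> str:
    out = []
    for ch in encoded:
        i = KEY.find(ch)                 # position of ciphertext char in KEY...
        if i != -1:
            out.append(ALPHABET[i])      # ...maps to the same position in ALPHABET
    return "".join(out)
```

Like the decompiled loop, characters that do not appear in the key are simply skipped.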
- AlgoSec | Bridging the DevSecOps Application Connectivity Disconnect via IaC
Risk Management and Vulnerabilities · Bridging the DevSecOps Application Connectivity Disconnect via IaC · Anat Kleinmann · 2 min read · Published 11/7/22

Anat Kleinmann, AlgoSec Sr. Product Manager and IaC expert, discusses how incorporating Infrastructure-as-Code into DevSecOps can allow teams to take a preventive approach to secure application connectivity.

With customer demands changing at breakneck speed, organizations need to be agile to win in their digital markets. This requires fast and frequent application deployments, forcing DevOps teams to streamline their software development processes. However, without the right security tools placed early in the CI/CD pipeline, these processes can be counterproductive, leading to costly human errors and prolonged application deployment backlogs. This is why organizations need to find the right preventive security approach, and to explore achieving it through Infrastructure-as-Code.

Understanding Infrastructure-as-Code – what does it actually mean?

Infrastructure-as-Code (IaC) is a software development method that describes the complete environment in which the software runs. It contains information about the hardware, networks, and software that are needed to run the application. IaC is also referred to as declarative provisioning or automated provisioning. In other words, IaC enables teams to create an automated and repeatable process for building out an entire environment, which helps eliminate the human errors associated with manual configuration.
The purpose of IaC is to enable developers or operations teams to automatically manage, monitor and provision resources, rather than manually configure discrete hardware devices and operating systems.

What does IaC mean in the context of running applications in a cloud environment?

When using IaC, network configuration files can contain your application's connectivity specifications, which makes them easier to edit, review and distribute. IaC also ensures that you provision the same environment every time, and it minimizes the downtime that can result from security breaches. Using IaC helps you avoid undocumented, ad-hoc configuration changes and allows you to enforce security policies in advance, before the changes reach your network.

Top 5 challenges when not embracing a preventive security approach

Counterintuitive communication channel – When code is reviewed manually, DevOps needs to grant a security manager access to review it and then wait for feedback. This creates a lot of unnecessary back-and-forth between the teams, a highly counterintuitive process.
Mismanagement of DevOps resources – Developers need to work across multiple platforms: developing code in one, checking it in another, testing it in a third and reviewing requests in a fourth. When this happens, developers often will not be alerted to network risks or non-compliance issues as defined by the organization.
Mismanagement of SecOps resources – At the same time, network security managers are bombarded with security review requests and tasks, yet they are expected to be agile – impossible when risk detection is manual.
Inefficient workflow – Sometimes the risk analysis process is skipped and only performed at the end of the CI/CD pipeline, which delays the delivery of the application.
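The idea of enforcing policy before changes reach the network can be sketched as a tiny pre-deployment check. This is a hypothetical illustration, not AlgoSec's product or any real IaC tool: the plain dicts stand in for parsed Terraform or CloudFormation rules, and the risky-port list is invented for the example.

```python
# Hypothetical pre-deployment connectivity check: scan declared IaC rules
# and flag risky ones before anything is provisioned.
RISKY_PORTS = {22, 3389, 3306, 5432}    # admin and database ports

def analyze(rules):
    findings = []
    for rule in rules:
        # Flag any sensitive port exposed to the whole Internet.
        if rule["source"] == "0.0.0.0/0" and rule["port"] in RISKY_PORTS:
            findings.append(
                f"rule '{rule['name']}': port {rule['port']} open to the world")
    return findings

# Example declared configuration: HTTPS open to the world is fine,
# a database port open to the world is not.
declared = [
    {"name": "web-https", "port": 443,  "source": "0.0.0.0/0"},
    {"name": "db-admin",  "port": 5432, "source": "0.0.0.0/0"},
]
```

Run in CI against the configuration files, a check like this surfaces the db-admin exposure while the change is still a code review, not a live network issue.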
Time-consuming review process – The risk analysis review itself can take more than 30 minutes, creating unnecessary and costly bottlenecks that lead to missed rollout deadlines for critical applications.

Why it's important to place security early in the development cycle

Infrastructure-as-Code is a crucial part of DevSecOps practices. The current trend is based on the principle of shift-left, which places security early in the development cycle. This allows organizations to take a proactive, preventive approach rather than a reactive one, and solves the problem of developers leaving security checks and testing for the later stages of a project, often as it nears completion and deployment. A proactive approach is critical because late-stage security checks lead to two problems: security flaws can go undetected and make it into the released software, and security issues detected at the end of the software development lifecycle demand considerably more time, resources and money to remediate than those identified early on.

The Power of IaC Connectivity Risk Analysis and Key Benefits

IaC connectivity risk analysis provides automatic, proactive connectivity risk analysis, enabling a frictionless workflow for DevOps with continuous, customized risk analysis and remediation managed and controlled by the security managers. It enables organizations to use a single source of truth for managing the lifecycle of their applications. Furthermore, security engineers can use IaC to automate the design, deployment, and management of virtual assets across a hybrid cloud environment. With automated security tests, engineers can also continuously test their infrastructure for security issues early in the development phase.
Key benefits

Deliver business applications into production faster and more securely
Enable a frictionless workflow with continuous risk analysis and remediation
Reduce connectivity risks earlier in the CI/CD process
Customizable risk policy to surface only the most critical risks

The Takeaway

Don't get bogged down by security and compliance. By taking a preventive approach using connectivity risk analysis via IaC, you can increase the speed of deployment, reduce misconfiguration and compliance errors, improve the DevOps-SecOps relationship and lower costs.

Next Steps

Let AlgoSec's IaC Connectivity Risk Analysis help you take a proactive, preventive security approach, entering DevOps' workflow early in the game, automatically identifying connectivity risks and providing ways to remediate them. Watch this video or visit us at GitHub to learn how.











