
Search results


  • Worldline | AlgoSec

    Explore AlgoSec's customer success stories to see how organizations worldwide improve security, compliance, and efficiency with our solutions.

WORLDLINE AUTOMATES SECURITY POLICY MANAGEMENT AND IMPROVES VISIBILITY OF NETWORK SECURITY DEVICE CONFIGURATIONS

Organization: Worldline | Industry: Financial Services | Headquarters: Belgium

"With AlgoSec, not only did we improve visibility of our security policy and device configurations, but we were also able to gain tremendous operational savings by automating many of these processes."

Background
Worldline is the European leader in the payments and transactional services industry. Worldline delivers new-generation services, enabling its customers to offer smooth and innovative solutions to the end consumer. A key actor in B2B2C industries, with over 40 years of experience, Worldline supports and contributes to the success of all businesses and administrative services in a perpetually evolving market. Worldline offers a unique and flexible business model built around a global and growing portfolio, thus enabling end-to-end support.

Challenge
Worldline's network is secured with more than 20 firewalls and routers from vendors such as Check Point and Cisco. Even with over 30 employees in the security and networking group, the company was spending a lot of time manually performing security management tasks such as monitoring and tracking security policy changes, conducting risk analysis, validating network schemas, and preparing for PCI-DSS and SAS 70 audits. Additionally, while Worldline had a documented process for implementing firewall changes, there was little visibility into what was actually occurring, and enforcing the process was not trivial. "Manually trying to maintain control of our firewall and router policies was complex because we lacked the proper visibility of the firewall configurations and all of the changes that were occurring," said Massoud Kamran, Senior Security Consultant at Worldline.

Solution
Worldline selected the AlgoSec Security Management solution to automate security policy operations, streamline audit preparation, and validate the security changes being processed. "We chose AlgoSec over other options because the solution leverages the routing information and the topology of firewalls to give us the most reliable visibility into what's going on with network traffic and the security policy," said Kamran.

Results
AlgoSec provides Worldline with an intelligent solution that enables Kamran and his team to find and assess risky rules and easily clean up their rule bases. In addition, Worldline leverages information from AlgoSec's reports to enhance its Security Information and Event Management solution. AlgoSec's comprehensive reporting gives Worldline continuous visibility into the firewall change process and provides evidence for PCI audits. "With AlgoSec, we've improved our visibility into the current infrastructure and reduced the time spent on compliance audits, configuration management and change monitoring," said Kamran. "In particular, time spent preparing evidence for PCI and SAS 70 audits has been cut significantly. Assuming on average we make 700 security policy changes per year, we now save many man-hours just by following AlgoSec's change process and ensuring that the changes don't introduce any risk," concluded Kamran.

  • AlgoSec AutoDiscovery DS - AlgoSec

    AlgoSec AutoDiscovery DS (Download PDF)

  • Zero trust container analysis system - AlgoSec

    Zero trust container analysis system (Download PDF)

  • IT Central Station and CSO PeerPaper Report - AlgoSec

    IT Central Station and CSO PeerPaper Report (Download PDF)

  • AlgoSec | Sunburst Backdoor, Part III: DGA & Security Software

    Cloud Security | Sunburst Backdoor, Part III: DGA & Security Software
Rony Moshkovich | 2 min read | Published 12/22/20

In the previous parts of our blog (part I and part II), we described the most important parts of the Sunburst backdoor functionality and its Domain Generation Algorithm (DGA). This time, let's take a deeper look into the passive DNS requests reported by Open Source Context and Zetalytics.

The valid DNS requests generated by the malware fall into 2 groups:

DNS requests that encode a local domain name
DNS requests that encode data

The first type of DNS request allows splitting long domain names into separate requests. These requests are generated by the malware's functions GetPreviousString() and GetCurrentString(). In general, the format of a DNS request that encodes a domain name looks like this:

USER_ID.NUM.COMPUTER_DOMAIN[.]appsync-api.us-west-2[.]avsvmcloud[.]com

where:

USER_ID is an 8-byte user ID that uniquely identifies a compromised host, encoded as a 15-character string
NUM is the number of the domain-name part – either 0 or 1, encoded as a character
COMPUTER_DOMAIN is the encoded local computer domain

Let's try decoding the following 3 DNS requests:

olc62cocacn7u2q22v02eu.appsync-api.us-west-2.avsvmcloud.com
r1qshoj05ji05ac6eoip02jovt6i2v0c.appsync-api.us-west-2.avsvmcloud.com
lt5ai41qh5d53qoti3mkmc0.appsync-api.us-west-2.avsvmcloud.com

String 1

Let's start with the 1st string in the list: olc62cocacn7u2q22v02eu.appsync-api.us-west-2.avsvmcloud.com. In this string, the first 15 characters are an encoded USER_ID: "olc62cocacn7u2q". Once it is base-64 decoded, as explained in the previous post, it becomes a 9-byte array:

86 7f 2f be f9 fb a3 ae c4

The first byte in this array is a XOR key: 0x86. Once applied to the 8 bytes that follow it, we get the 8-byte user ID – let's write it down, we will need it later:

f9 a9 38 7f 7d 25 28 42

Next, let's take the NUM part of the encoded domain: it's the character "2" located at position #15 (counting from 0) of the encrypted domain. To decode the NUM value, we take the first character of the encrypted domain, take the remainder of its division by 36, and subtract the position that the character at #15 occupies in the string "0123456789abcdefghijklmnopqrstuvwxyz":

num = domain[0] % 36 - "0123456789abcdefghijklmnopqrstuvwxyz".IndexOf(domain.Substring(15, 1));

The result is 1. That means the decrypted domain will be the 2nd part of a full domain name; the first part must have its NUM decoded as 0. The COMPUTER_DOMAIN part of the encrypted domain is "2v02eu". Once decoded, using the previously explained method, the computer domain name becomes "on.ca".
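To make these two calculations easy to reproduce, here is a minimal C# sketch. It is our own code, not the malware's: DecodeUserId() assumes the 9 raw bytes already recovered by the substitution ("base-64") decoding described in part II, and DecodeNum() is simply the formula quoted above; the helper names are ours.

using System;
using System.Linq;

static class SunburstDomainDecode
{
    // raw9[0] is the XOR key; the remaining 8 bytes are the encoded user ID.
    static byte[] DecodeUserId(byte[] raw9)
    {
        var userId = new byte[8];
        for (int i = 0; i < 8; i++)
            userId[i] = (byte)(raw9[i + 1] ^ raw9[0]);
        return userId;
    }

    // First character mod 36, minus the alphabet index of the character at position #15.
    static int DecodeNum(string encodedDomain)
    {
        const string alphabet = "0123456789abcdefghijklmnopqrstuvwxyz";
        return encodedDomain[0] % 36 - alphabet.IndexOf(encodedDomain.Substring(15, 1));
    }

    static void Main()
    {
        var raw = new byte[] { 0x86, 0x7f, 0x2f, 0xbe, 0xf9, 0xfb, 0xa3, 0xae, 0xc4 };
        Console.WriteLine(string.Join(" ", DecodeUserId(raw).Select(b => b.ToString("x2")))); // f9 a9 38 7f 7d 25 28 42
        Console.WriteLine(DecodeNum("olc62cocacn7u2q22v02eu")); // 1
    }
}

Running it against the first request reproduces the user ID and NUM values derived above.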
String 2

Let's decode the second passive DNS request from our list: r1qshoj05ji05ac6eoip02jovt6i2v0c.appsync-api.us-west-2.avsvmcloud.com. Just as before, the decoded 8-byte user ID is:

f9 a9 38 7f 7d 25 28 42

The NUM part of the encoded domain, located at position #15 (counting from 0), is the character "6". Let's decode it by taking the first character ("r" = 114), taking the remainder of its division by 36 (114 % 36 = 6), and subtracting the position of the character "6" in "0123456789abcdefghijklmnopqrstuvwxyz", which is 6. The result is 0. That means the decrypted domain will be the 1st part of the full domain name.

The COMPUTER_DOMAIN part of the encrypted domain is "eoip02jovt6i2v0c". Once decoded, it becomes "city.kingston.". Next, we need to match the 2 decrypted domains by the user ID, which is f9 a9 38 7f 7d 25 28 42 in both cases, and concatenate the first and second parts of the domain. The result is "city.kingston.on.ca".

String 3

Here comes the most interesting part. Let's try to decrypt string #3 from our list of passive DNS requests: lt5ai41qh5d53qoti3mkmc0.appsync-api.us-west-2.avsvmcloud.com. The decoded user ID is not relevant here, because the decoded NUM part is -29. It's neither 0 nor 1, so what kind of request is this? If we ignore the NUM part and decode the domain name using the old method, we get "thx8xb", which does not look like a valid domain name.

Cases like this are not noise, nor artificially crafted artifacts that showed up among the DNS requests. This is a different type of DNS request. Instead of encoding local domain names, these requests contain data. They are generated by the malware's function GetNextStringEx(), and the encryption method is different as well.

Let's decrypt this request. First, we decode the encrypted domain using the same base-64 method as before. The string is decoded into 14 bytes:

7c a5 4d 64 9b 21 c1 74 a6 59 e4 5c 7c 7f

Let's decode these bytes, starting from the 2nd byte, using the first byte as a XOR key. We get:

7c d9 31 18 e7 5d bd 08 da 25 98 20 00 03

In this array, the 8 bytes at offsets #1-#8 (d9 31 18 e7 5d bd 08 da) are an encoded 8-byte User ID, and the XOR key used to encode it is selected from the 2 bytes at offsets #10 and #11 (98 20). Let's decode the User ID:

for (int i = 0; i < 8; i++)
{
    bytes[i + 1] ^= bytes[11 - i % 2];
}

The decoded byte array becomes:

7c f9 a9 38 7f 7d 25 28 42 25 98 20 00 03

The User ID is the 8 bytes at offsets #1-#8: f9 a9 38 7f 7d 25 28 42. Does it look familiar? Indeed, it's the same User ID we've seen before, when we decoded "city.kingston.on.ca".

The next 3 bytes, at offsets #9-#11, are: 25 98 20. They pack two numbers: 2 and 0x59820. The first number, 2, is the size of the data that follows – this data is the last 2 bytes, 00 03. The number 0x59820, or 366,624 in decimal, is a timestamp. It's a number of 4-second periods of time since 1 January 2010. To obtain the real timestamp, we need to multiply it by 15 to get minutes, then add those minutes to 1 January 2010:

var date = (new DateTime(2010, 1, 1, 0, 0, 0, DateTimeKind.Utc)).AddMinutes(timestamp * 15);

For the number 0x59820, the timestamp becomes 16 July 2020 12:00:00 AM – that's the day when the DNS request was made.

The remaining 2 bytes, 00 03, encode the state of 8 security products, indicating whether each one of them is running or stopped. The 8 security products are:

Windows Live OneCare / Windows Defender
Windows Defender Advanced Threat Protection
Microsoft Defender for Identity
Carbon Black
CrowdStrike
FireEye
ESET
F-Secure

2 states for 8 products require 2 * 8 = 16 bits = 2 bytes. The 2 bytes 00 03 in binary form are:

00 00 00 00 00 00 00 11

Here, the least-significant bits 11 indicate that the first product in the list, Windows Live OneCare / Windows Defender, is reported both as 'running' (1) and as 'stopped' (1).
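Putting the pieces of this walkthrough together, here is a minimal C# sketch of the data-request decoding. It is our reconstruction, not the malware's own code: the 4-bit size / 20-bit timestamp split is inferred so that the example bytes 25 98 20 yield the values 2 and 0x59820 shown above, and the post does not specify which bit of each 2-bit pair means 'running' versus 'stopped', so the lower bit is labelled 'running' here.

using System;

static class SunburstDataDecode
{
    // Product list as given above, in the same order as the bit pairs.
    static readonly string[] ProductNames =
    {
        "Windows Live OneCare / Windows Defender",
        "Windows Defender Advanced Threat Protection",
        "Microsoft Defender for Identity",
        "Carbon Black", "CrowdStrike", "FireEye", "ESET", "F-Secure"
    };

    static void DecodeDataRequest(byte[] bytes)
    {
        // The first byte is a XOR key for everything that follows it.
        for (int i = 1; i < bytes.Length; i++)
            bytes[i] ^= bytes[0];

        // Bytes #1-#8 hold the User ID, XOR-ed with a key taken alternately
        // from bytes #11 and #10 (the loop shown in the post).
        for (int i = 0; i < 8; i++)
            bytes[i + 1] ^= bytes[11 - i % 2];
        Console.WriteLine("User ID: " + BitConverter.ToString(bytes, 1, 8));

        // Bytes #9-#11 pack a 4-bit size and a 20-bit timestamp (2 and 0x59820 in the example).
        int size = bytes[9] >> 4;
        int timestamp = ((bytes[9] & 0x0F) << 16) | (bytes[10] << 8) | bytes[11];

        // Convert the timestamp as the post does: multiply by 15 to get minutes since 1 January 2010 (UTC).
        var date = new DateTime(2010, 1, 1, 0, 0, 0, DateTimeKind.Utc).AddMinutes(timestamp * 15);
        Console.WriteLine($"Timestamp: {date:u}");

        if (size < 2) return; // 'ping' requests (see the notes below) carry no product data

        // Two bits per product, starting from the least-significant bits.
        int mask = (bytes[12] << 8) | bytes[13];
        for (int i = 0; i < 8; i++)
        {
            bool running = ((mask >> (2 * i)) & 1) != 0;
            bool stopped = ((mask >> (2 * i + 1)) & 1) != 0;
            if (running || stopped)
                Console.WriteLine($"{ProductNames[i]}: running={running}, stopped={stopped}");
        }
    }

    static void Main()
    {
        // The 14 example bytes decoded from lt5ai41qh5d53qoti3mkmc0.
        DecodeDataRequest(new byte[] { 0x7c, 0xa5, 0x4d, 0x64, 0x9b, 0x21, 0xc1, 0x74,
                                       0xa6, 0x59, 0xe4, 0x5c, 0x7c, 0x7f });
    }
}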
Now we know that, apart from the local domain, the trojanised SolarWinds software running on the same compromised host in the "city.kingston.on.ca" domain has also reported the status of the Windows Defender software.

What Does it Mean?

As explained in the first part of our description, the malware is capable of stopping the services of security products by manipulating registry service keys under an Administrator account. It's likely that the attackers are using DNS queries as a C2 channel to first understand what security products are present. Next, the same channel is used to instruct the malware to stop/deactivate these services, before the 2nd-stage payload, the TearDrop backdoor, is deployed.

Armed with this knowledge, let's decode other passive DNS requests, printing the cases when the compromised host reports running security software.

NOTES:
As a special case, if the data size field is 0 or 1, the timestamp field is not followed by any data. This type of DNS request is generated by the malware's function GetNextString(). It is called 'a ping' in the listing below.
If the first part of the domain name is missing, the recovered domain name is prepended with '*'.
The malware takes the time difference in minutes, then divides it by 30, and then converts the result from double to int; as a result of this conversion, the timestamps are truncated to the earliest half hour.

2D82B037C060515C  SFBALLET
  Data: Windows Live OneCare / Windows Defender [running] 11/07/2020 12:00:00 AM
  Pings: 12/07/2020 12:30:00 AM

70DEE5C062CFEE53  ccscurriculum.c
  Data: ESET [running] 17/04/2020 4:00:00 PM
  Pings: 20/04/2020 5:00:00 PM

AB902A323B541775  mountsinai.hospital
  Pings: 4/07/2020 12:30:00 AM

9ACC3A3067DC7FD5  *ripta.com
  Data: ESET [running] 12/09/2020 6:30:00 AM
  Pings: 13/09/2020 7:30:00 AM, 14/09/2020 9:00:00 AM

CB34C4EBCB12AF88  DPCITY.I7a
  Data: ESET [running] 26/06/2020 5:00:00 PM
  Pings: 27/06/2020 6:30:00 PM, 28/06/2020 7:30:00 PM, 29/06/2020 8:30:00 PM, 29/06/2020 8:30:00 PM

E5FAFE265E86088E  *scroot.com
  Data: CrowdStrike [running] 25/07/2020 2:00:00 PM
  Pings: 26/07/2020 2:30:00 PM, 26/07/2020 2:30:00 PM, 27/07/2020 3:00:00 PM, 27/07/2020 3:00:00 PM

426030B2ED480DED  *kcpl.com
  Data: Windows Live OneCare / Windows Defender [running] 8/07/2020 12:00:00 AM
        Carbon Black [running] 8/07/2020 12:00:00 AM

The full list of decoded pDNS requests can be found here. An example of a working implementation is available at this repo.

  • Enhancing Zero Trust WP - AlgoSec

    Enhancing Zero Trust WP (Download PDF)

  • AlgoSec launches its AI-powered Security Platform, to securely manage application-centric connectivity and remediate risk in real time

    AlgoSec launches its AI-powered Security Platform, to securely manage application-centric connectivity and remediate risk in real time

The new release deploys advanced AI for fast and accurate application discovery, and provides clear visualization and mapping of application connectivity and potential security risks in complex hybrid environments.

September 25, 2024

RIDGEFIELD PARK, NJ, September 25, 2024 – Global cybersecurity leader AlgoSec has launched its newest Security Management platform version, featuring advanced artificial intelligence (AI) technology that provides an application-centric security approach and a clearer picture of risks and their impact. With this new release, the AlgoSec platform enables users to accurately identify the business applications running in their complex hybrid network and leverage intelligent change automation to streamline security change processes, thus improving security and agility.

"Security professionals are overwhelmed with a barrage of alerts that provide no context between critical threats and minor issues," said Eran Shiff, VP Product of AlgoSec. "By mapping applications, security teams can understand their criticality, automate changes and prioritize alerts that truly matter, saving countless hours through automation."

Gartner predicts that by 2027, 50 percent of critical enterprise applications will reside outside of centralized public cloud locations, underscoring the complexity that network infrastructures face. Today's networks are 100 times more complex than they were 10 years ago, and the pace of deployment and development at which security teams are expected to work is 100 times faster. AI-powered application discovery enhances a security team's ability to detect and respond to threats in real time. An application-centric approach automates change management processes, identifies security risks and mitigates them before they impact the network infrastructure.

"In today's evolving cyber landscape, it's essential that we rapidly identify and prioritize threats as they occur," said Robert Eldridge, Security Solutions Director of Natilik. "AlgoSec's AI-powered platform helps us deliver proactive network visibility and risk mitigation to our clients, keeping them ahead of potential threats."

Securing hybrid infrastructures relies on four pillars that are essential to AlgoSec's platform update:

● AI-driven application discovery – An advanced AI feature designed to automatically discover and identify the business applications that are running by correlating them to security changes that have been made.

● Intelligent and automated application connectivity change – New enhancements allow security professionals to directly adjust existing Microsoft Azure Firewall rules for new application connections. Additionally, there's added support for application awareness in Check Point R80+ firewalls.

● Reduce risk exposure and minimize attack surface – New features focus on tightening security posture and minimizing potential vulnerabilities. The release streamlines Microsoft Azure Firewall rule management by identifying and recommending the removal of unused rules, and reduces risk exposure by automatically generating change management tickets to eliminate overly permissive rules.
Additionally, it ensures compliance with the latest ASD-ISM regulations.

● Better visibility across complex hybrid networks – AlgoSec has enriched its capabilities to support visibility of network security devices including NSX-T Gateway Firewall, Azure Load Balancer, and Google Cloud map and traffic path (in early availability).

To learn more about updates to the AlgoSec Security Management platform, click here. AlgoSec will demonstrate the key capabilities of release A33 during its upcoming annual AlgoSummit user event. To register, click here.

About AlgoSec

AlgoSec, a global cybersecurity leader, empowers organizations to secure application connectivity and cloud-native applications throughout their multi-cloud and hybrid network. Trusted by more than 1,800 of the world's leading organizations, AlgoSec's application-centric approach enables secure acceleration of business application deployment by centrally managing application connectivity and security policies across the public clouds, private clouds, containers, and on-premises networks. Using its unique vendor-agnostic deep algorithm for intelligent change management automation, AlgoSec enables the acceleration of digital transformation projects, helps prevent business application downtime, and substantially reduces manual work and exposure to security risks. AlgoSec's policy management and CNAPP platforms provide a single source for visibility into security and compliance issues within cloud-native applications as well as across the hybrid network environment, to ensure ongoing adherence to internet security standards, industry, and internal regulations. Learn how AlgoSec enables application owners, information security experts, DevSecOps, and cloud security teams to deploy business applications up to 10 times faster while maintaining security at https://www.algosec.com .

MEDIA CONTACT:
Megan Davis
Alloy, on behalf of AlgoSec
[email protected]

  • AlgoSec | How to optimize the security policy management lifecycle

    Risk Management and Vulnerabilities | How to optimize the security policy management lifecycle
Tsippi Dach | 2 min read | Published 8/9/23

Information security is vital to business continuity. Organizations trust their IT teams to enable innovation and business transformation, but need them to safeguard digital assets in the process. This leads some leaders to feel that their information security policies are standing in the way of innovation and business agility. Instead of rolling out a new enterprise application and provisioning it for full connectivity from the start, security teams demand weeks or months of time to secure those systems before they're ready.

But this doesn't mean that cybersecurity is a bottleneck to business agility. The need for speedier deployment doesn't automatically translate to increased risk. Organizations that manage application connectivity and network security policies using a structured lifecycle approach can improve security without compromising deployment speed.

Many challenges stand between organizations and their application and network connectivity goals. Understanding each stage of the lifecycle approach to security policy change management is key to overcoming these obstacles.

Challenges to optimizing security policy management

Complex enterprise infrastructure and compliance requirements

A medium-sized enterprise may have hundreds of servers, systems, and security solutions like firewalls in place. These may be spread across several different cloud providers, with additional inputs from SaaS vendors and other third-party partners. Add in strict regulatory compliance requirements like HIPAA, and the risk management picture gets much more complicated. Even voluntary frameworks like NIST heavily impact an organization's information security posture, acceptable use policies, and more – without the added risk of non-compliance. Before organizations can optimize their approach to security policy management, they must have visibility and control over an increasingly complex landscape. Without this, making meaningful progress on data classification and retention policies is difficult, if not impossible.

Modern workflows involve non-stop change

When information technology teams deploy or modify an application, it's in response to an identified business need. When those deployments get delayed, there is a real business impact. IT departments now need to implement security measures earlier, faster, and more comprehensively than they used to. They must conduct risk assessments and security training processes within ever-smaller timeframes, or risk exposing the organization to vulnerabilities and security breaches.

Strong security policies need thousands of custom rules

There is no one-size-fits-all solution for managing access control and data protection at the application level. Different organizations have different security postures and security risk profiles. Compliance requirements can change, leading to new security requirements that demand implementation. Enterprise organizations that handle sensitive data and adhere to strict compliance rules must severely restrict access to information systems.
It's not easy to achieve PCI DSS compliance or adhere to GDPR security standards solely through automation – at least, not without a dedicated change management platform like AlgoSec. Effectively managing an enormous volume of custom security rules and authentication policies requires access to scalable security resources under a centralized, well-managed security program. Organizations must ensure their security teams are equipped to enforce data security policies successfully.

Inter-department communication needs improvement

Application delivery managers, network architects, security professionals, and compliance managers must all contribute to the delivery of new application projects. Achieving clear channels of communication between these different groups is no easy task. In most enterprise environments, these teams speak different technical languages. They draw their data from internally siloed sources, and rarely share comprehensive documentation with one another. In many cases, one or more of these groups are only brought in after everyone else has had their say, which significantly limits the amount of influence they can have. The lifecycle approach to managing IT security policies can help establish a standardized set of security controls that everyone follows. However, it also requires better communication and security awareness from stakeholders throughout the organization.

The policy management lifecycle addresses these challenges in five stages

Without a clear security policy management lifecycle in place, most enterprises end up managing security changes on an ad hoc basis. This puts them at a disadvantage, especially when security resources are stretched thin on incident response and disaster recovery initiatives. Instead of adopting a reactive approach that delays application releases and reduces productivity, organizations can leverage the lifecycle approach to security policy management to address vulnerabilities early in the application development lifecycle. This leaves additional resources available for responding to security incidents, managing security threats, and proactively preventing data breaches.

Discover and visualize application connectivity

The first stage of the security policy management lifecycle revolves around mapping how your apps connect to each other and to your network. The more detail you can include in this map, the better prepared your IT team will be for handling the challenges of policy management. Performing this discovery process manually can cost enterprise-level security teams a great deal of time and accuracy. There may be thousands of devices on the network, with a complex web of connections between them. Any errors that enter the framework at this stage will be amplified through the later stages – it's important to get things right from the start.

Automated tools help IT staff improve the speed and accuracy of the discovery and visualization stage. This helps everyone – technical and non-technical staff included – understand which apps need to connect and work together properly. Automated tools help translate these needs into language that the rest of the organization can understand, reducing the risk of misconfiguration down the line.

Plan and assess security policy changes

Once you have a good understanding of how your apps connect with each other and your network setup, you can plan changes more effectively.
You want to make sure these changes will allow the organization's apps to connect with one another and work together without increasing security risks. It's important to adopt a vulnerability-oriented perspective at this stage. You don't want to accidentally introduce weak spots that hackers can exploit, or establish policies that are too complex for your organization's employees to follow.

This process usually involves translating application connectivity requests into network operations terms. Your IT team will have to check whether the proposed changes are necessary, and predict what the results of implementing those changes might be. This is especially important for cloud-based apps that may change quickly and unpredictably. At the same time, security teams must evaluate the risks and determine whether the changes are compliant with security policy. Automating these tasks as part of a regular cycle ensures the data is always relevant and saves valuable time.

Migrate and deploy changes efficiently

The process of deploying new security rules is complex, time-consuming, and prone to error. It often stretches the capabilities of security teams that already have a wide range of operational security issues to address at any given time. In between managing incident response and regulatory compliance, they must now also manually update thousands of security rules across a fleet of complex network assets.

This process gets a little easier when guided by a comprehensive security policy change management framework. But most organizations don't unlock the true value of the security policy management lifecycle until they adopt automation. Automated security policy management platforms enable organizations to design rule changes intelligently, migrate rules automatically, and push new policies to firewalls through a zero-touch interface. They can even validate whether the intended changes were applied correctly. This final step is especially important. Without it, security teams must manually verify whether their new policies successfully address the vulnerabilities the way they're supposed to. This doesn't always happen, leaving security teams with a false sense of security.

Maintain configurations using templates

Most firewalls accumulate thousands of rules as security teams update them against new threats. Many of these rules become outdated and obsolete over time, but remain in place nonetheless. This adds a great deal of complexity to day-to-day tasks like change management, troubleshooting, and compliance auditing. It can also impact the performance of firewall hardware, which decreases the overall lifespan of expensive physical equipment.

Configuration changes and maintenance should include processes for identifying and eliminating rules that are redundant, misconfigured, or obsolete. The cleaner and better-documented the organization's rulesets are, the easier subsequent configuration changes will be. Rule templates provide a simple solution to this problem. Organizations that create and maintain comprehensive templates for their current firewall rulesets can easily modify, update, and change those rules without having to painstakingly review and update individual devices manually.

Decommission obsolete applications completely

Every business application will eventually reach the end of its lifecycle.
However, many organizations keep decommissioned security policies in place for one of two reasons: oversight that stems from unstandardized or poorly documented processes, or fear that removing policies will negatively impact other, active applications. As these obsolete security policies pile up, they force the organization to spend more time and resources updating their firewall rulesets. This adds bloat to firewall security processes and increases the risk of misconfigurations that can lead to cyber attacks.

A standardized, lifecycle-centric approach to security policy management makes space for the structured decommissioning of obsolete applications and the rules that apply to them. This improves change management and ensures the organization's security posture is optimally suited for later changes. At the same time, it provides comprehensive visibility that reduces oversight risks and gives security teams fewer unknowns to fear when decommissioning obsolete applications.

Many organizations believe that security stands in the way of the business – particularly when it comes to changing or provisioning connectivity for applications. It can take weeks, or even months, to ensure that all the servers, devices, and network segments that support the application can communicate with each other while blocking access to hackers and unauthorized users. It's a complex and intricate process. This is because, for every single application update or change, networking and security teams need to understand how it will affect the information flows between the various firewalls and servers the application relies on, and then change connectivity rules and security policies to ensure that only legitimate traffic is allowed, without creating security gaps or compliance violations.

As a result, many enterprises manage security changes on an ad-hoc basis: they move quickly to address the immediate needs of high-profile applications or to resolve critical threats, but have little time left over to maintain network maps, document security policies, or analyze the impact of rule changes on applications. This reactive approach delays application releases, can cause outages and lost productivity, increases the risk of security breaches, and puts the brakes on business agility. But it doesn't have to be this way. Nor is it necessary for businesses to accept greater security risk to satisfy the demand for speed.

Accelerating agility without sacrificing security

The solution is to manage application connectivity and network security policies through a structured lifecycle methodology, which ensures that the right security policy management activities are performed in the right order, through an automated, repeatable process. This dramatically speeds up application connectivity provisioning and improves business agility, without sacrificing security and compliance. So, what is the network security policy management lifecycle, and how should network and security teams implement a lifecycle approach in their organizations?

Discover and visualize

The first stage involves creating an accurate, real-time map of application connectivity and the network topology across the entire organization, including on-premise, cloud, and software-defined environments. Without this information, IT staff are essentially working blind, and will inevitably make mistakes and encounter problems down the line.
Security policy management solutions can automate the application connectivity discovery, mapping, and documentation processes across the thousands of devices on the network – a task that is enormously time-consuming and labor-intensive if done manually. In addition, the mapping process can help business and technical groups develop a shared understanding of application connectivity requirements.

Plan and assess

Once there is a clear picture of application connectivity and the network infrastructure, you can start to plan changes more effectively – ensuring that proposed changes will provide the required connectivity while minimizing the risks of introducing vulnerabilities, causing application outages, or creating compliance violations. Typically, this involves translating application connectivity requests into networking terminology, analyzing the network topology to determine if the changes are really needed, conducting an impact analysis of proposed rule changes (particularly valuable with unpredictable cloud-based applications), performing a risk and compliance assessment, and assessing inputs from vulnerability scanners and SIEM solutions. Automating these activities as part of a structured lifecycle keeps data up to date, saves time, and ensures that these critical steps are not omitted – helping avoid configuration errors and outages.

Migrate and deploy

Deploying connectivity and security rules can be a labor-intensive and error-prone process. Security policy management solutions automate the critical tasks involved, including designing rule changes intelligently, automatically migrating rules, and pushing policies to firewalls and other security devices – all with zero touch if no problems or exceptions are detected. Crucially, the solution can also validate that the intended changes have been implemented correctly. This last step is often neglected, creating the false impression that application connectivity has been provided, or that vulnerabilities have been removed, when in fact there are time bombs ticking in the network.

Maintain

Most firewalls accumulate thousands of rules which become outdated or obsolete over the years. Bloated rulesets not only add complexity to daily tasks such as change management, troubleshooting, and auditing, but they can also impact the performance of firewall appliances, resulting in decreased hardware lifespan and increased TCO. Cleaning up and optimizing security policies on an ongoing basis can prevent these problems. This includes identifying and eliminating or consolidating redundant and conflicting rules; tightening overly permissive rules; reordering rules; and recertifying expired ones. A clean, well-documented set of security rules helps to prevent business application outages, compliance violations, and security gaps, and reduces management time and effort.
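To make the maintain stage a little more concrete, here is a small, hypothetical C# sketch of the kind of rule-base review described above. The Rule record, its fields, and the thresholds are illustrative placeholders only, not an AlgoSec API; in practice an automated policy management tool performs this analysis across the whole estate.

using System;
using System.Collections.Generic;

// Hypothetical rule record; real rule exports vary by vendor and tool.
record Rule(string Name, string Source, string Destination, string Service,
            long HitCount, DateTime? LastHit, DateTime? Expiry);

static class RuleBaseReview
{
    // Flags rules that look unused, expired, or overly permissive,
    // so they can be reviewed before the next change window.
    static IEnumerable<string> FindCleanupCandidates(IEnumerable<Rule> rules, DateTime now)
    {
        foreach (var r in rules)
        {
            if (r.HitCount == 0 || (r.LastHit.HasValue && (now - r.LastHit.Value).TotalDays > 180))
                yield return $"{r.Name}: unused for 6+ months - candidate for removal";
            if (r.Expiry.HasValue && r.Expiry.Value < now)
                yield return $"{r.Name}: expired on {r.Expiry:d} - recertify or remove";
            if (r.Source == "any" && r.Destination == "any" && r.Service == "any")
                yield return $"{r.Name}: any/any/any - overly permissive, tighten";
        }
    }

    static void Main()
    {
        var rules = new[]
        {
            new Rule("allow-legacy-ftp", "any", "any", "any", 0, null, null),
            new Rule("web-dmz-443", "10.0.0.0/8", "dmz-web", "tcp/443", 120345, DateTime.UtcNow.AddDays(-2), null)
        };
        foreach (var finding in FindCleanupCandidates(rules, DateTime.UtcNow))
            Console.WriteLine(finding);
    }
}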
Decommission

Every business application eventually reaches the end of its life, but when an application is decommissioned, its security policies are often left in place, either through oversight or from fear that removing policies could negatively affect active business applications. These obsolete or redundant security policies increase the enterprise's attack surface and add bloat to the firewall ruleset. The lifecycle approach reduces these risks. It provides a structured and automated process for identifying and safely removing redundant rules as soon as applications are decommissioned, while verifying that their removal will not impact active applications or create compliance violations.

We recently published a white paper that explains the five stages of the security policy management lifecycle in detail. It's a great primer for any organization looking to move away from a reactive, fire-fighting response to security challenges, to an approach that addresses the challenges of balancing security and risk with business agility. Download your copy here.

  • AlgoSec | Removing insecure protocols In networks

    Risk Management and Vulnerabilities | Removing insecure protocols in networks
Matthew Pascucci | 2 min read | Published 7/15/14

Insecure Service Protocols and Ports

Okay, we all have them… they're everyone's dirty little network security secrets that we try not to talk about. They're the protocols that we don't mention in a security audit or to other people in the industry for fear that we'll be publicly embarrassed. Yes, I'm talking about cleartext protocols, which are running rampant across many networks. They're in place because they work, and they work well, so no one has had a reason to upgrade them. Why upgrade something if it's working, right? Wrong. These protocols need to go the way of records, 8-tracks and cassettes (many of these protocols were fittingly developed during the same era). You're putting your business and data at serious risk by running these insecure protocols. There are many insecure protocols that are exposing your data in cleartext, but let's focus on the three most widely used ones: FTP, Telnet and SNMP.

FTP (File Transfer Protocol)

This is by far the most popular of the insecure protocols in use today. It's the king of all cleartext protocols and one that needs to be smitten from your network before it's too late. The problem with FTP is that all authentication is done in cleartext, which leaves little room for the security of your data. To put things into perspective, FTP was first released in 1971, almost 45 years ago. In 1971 the price of gas was 40 cents a gallon, Walt Disney World had just opened, and a company called FedEx was established. People, this was a long time ago. You need to migrate from FTP and start using an updated and more secure method for file transfers, such as HTTPS, SFTP or FTPS. These three protocols use encryption on the wire and during authentication to secure the transfer of files and login.

Telnet

If FTP is the king of all insecure file transfer protocols, then Telnet is the supreme ruler of all cleartext network terminal protocols. Just like FTP, Telnet was one of the first protocols that allowed you to remotely administer equipment. It became the de facto standard until it was discovered that it passes authentication in cleartext. At this point you need to hunt down all equipment that is still running Telnet and replace it with SSH, which uses encryption to protect authentication and data transfer. This shouldn't be a huge change unless your gear cannot support SSH. Many appliances or networking gear running Telnet will either need SSH enabled or the OS upgraded. If neither of these options is appropriate, you need to get new equipment, case closed. I know money is an issue at times, but if you're running a 45-year-old protocol on your network with no ability to update it, you need to rethink your priorities. The last thing you want is an attacker gaining control of your network via Telnet. It's game over at this point.

SNMP (Simple Network Management Protocol)

This is one of those sneaky protocols that you don't think is going to rear its ugly head and bite you, but it can! There are multiple versions of SNMP, and you need to be particularly careful with versions 1 and 2.
For those not familiar with SNMP, it's a protocol that enables the management and monitoring of remote systems. Once again, the community strings can be sent in cleartext, and if you have access to these credentials you can connect to the system and start gaining a foothold on the network, including managing devices, applying new configurations, or gaining in-depth monitoring details of the network. In short, it's a great help for attackers if they can get hold of these credentials. Luckily, version 3 of SNMP has enhanced security that protects you from these types of attacks, so you must review your network and make sure that SNMP v1 and v2 are not being used.

These are just three of the more popular but insecure protocols that are still in heavy use across many networks today. By performing an audit of your firewalls and systems to identify these protocols, preferably using an automated tool such as AlgoSec Firewall Analyzer, you should be able to pretty quickly create a list of these protocols in use across your network (a simple illustration of this kind of scan appears in the sketch at the end of this post). It's also important to proactively analyze every change to your firewall policy (again, preferably with an automated tool for security change management) to make sure no one introduces insecure protocol access without proper visibility and approval.

Finally, don't feel bad telling a vendor or client that you won't send data using these protocols. If they're making you use them, there's a good chance that there are other security issues going on in their network that you should be concerned about. It's time to get rid of these protocols. They've had their usefulness, but the time has come for them to be sunset for good.
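As promised above, here is a small, hypothetical C# sketch that scans an exported firewall rule list for the cleartext services discussed in this post. The CSV layout, field positions, and port list are assumptions made for the example; a real audit would work from your firewall vendor's export format or, better, an automated tool.

using System;
using System.IO;
using System.Linq;

class InsecureProtocolAudit
{
    // Cleartext services discussed in this post, keyed by their well-known ports.
    static readonly (int Port, string Name)[] Insecure =
    {
        (21, "FTP"), (23, "Telnet"), (161, "SNMP v1/v2c"), (162, "SNMP v1/v2c trap")
    };

    // Scans a hypothetical CSV export of firewall rules ("rule name,destination port,...")
    // and prints any rule that still allows one of the cleartext services.
    static void Main(string[] args)
    {
        foreach (var line in File.ReadLines(args[0]).Skip(1)) // skip the header row
        {
            var fields = line.Split(',');
            if (fields.Length < 2 || !int.TryParse(fields[1], out var port))
                continue;

            var match = Insecure.FirstOrDefault(p => p.Port == port);
            if (match.Name != null)
                Console.WriteLine($"Rule '{fields[0]}' still permits {match.Name} (port {port}) - plan a migration.");
        }
    }
}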
