
Search results


  • AlgoSec | NGFW vs UTM: What you need to know

Firewall Change Management | Sam Erdheim | 2 min read | Published 2/19/13

Podcast: Differences between UTM and NGFW

In our recent webcast discussion alongside panelists from Fortinet, NSS Labs and General Motors, we examined the State of the Firewall in 2013. We received more audience questions during the webcast than time allowed for, so we'd like to answer them through several blog posts in a Q&A format with the panelists. By far the most-asked question leading up to and during the webcast was: "What's the difference between a UTM and a Next-Generation Firewall?" Here's how our panelists responded:

Pankil Vyas, Manager – Network Security Center, GM

UTMs usually come as a bundled feature set; an NGFW is also a bundle, but licensing can be selective. Depending on the firewall's function on the network, some UTM features might not be useful, creating performance issues and sometimes firewall conflicts with packet flows.

Nimmy Reichenberg, VP of Strategy, AlgoSec

Different people give different answers to this question, but if we refer to Gartner, who are certainly a credible source, a UTM consolidates many security functions (email security, AV, IPS, URL filtering, etc.) and is tailored mostly to SMBs in terms of management capabilities, throughput, support, etc. An NGFW is an enterprise-grade product that at the very least includes IPS capabilities and application awareness (layer 7 control). You can refer to the Gartner paper titled "Defining the Next-Generation Firewall" for more information.

Ryan Liles, Director of Testing Services, NSS Labs

There really aren't any differences between a UTM and an NGFW. The technologies used in the two are essentially the same, and they generally have the same capabilities. UTM devices are typically classified with lower throughput ratings than their NGFW counterparts, but for all practical purposes the differences are in marketing. The term NGFW was coined by vendors working with Gartner to create a class of products capable of fitting into an enterprise network while containing all of the features of a UTM. The reason for the name shift is that there was a pervasive line of thought stating that a device capable of all of the functions of a UTM/NGFW would never be fast enough to run in an enterprise network. As hardware progressed, the ability of these devices to hit multi-gigabit speeds began to prove that they were indeed capable of enterprise deployment. Rather than try to fight the sentiment that a UTM could never fit into an enterprise, the NGFW was born.

Patrick Bedwell, VP of Products, Fortinet

There are several definitions in the market for both terms. Analyst firms IDC and Gartner provided the original definitions. IDC defined a UTM as a security appliance that combines firewall, gateway antivirus, and intrusion detection/intrusion prevention (IDS/IPS). Gartner defined an NGFW as a single device with integrated IPS with deep packet scanning, standard first-generation firewall capabilities (NAT, stateful protocol inspection, VPN, etc.) and the ability to identify and control applications running on the network.
Since their initial definitions, the terms have been used interchangeably by customers as well as vendors. Depending on whom you speak with, a UTM can include NGFW features like application ID and control, and an NGFW can include UTM features like gateway antivirus. The terms are often used synonymously, as both represent a single device with consolidated functionality. At Fortinet, for example, we offer customers the ability to deploy a FortiGate device as a pure firewall, an NGFW (enabling features like Application Control or user- and device-based policy enforcement) or a full UTM (enabling additional features like gateway AV, WAN optimization, and so forth). Customers can deploy as much or as little of the technology on the FortiGate device as they need to match their requirements.

If you missed the webcast, you can view it on-demand. We invite you to continue this debate and discussion by commenting here on the blog or via the Twitter hashtag.

  • AlgoSec | The importance of bridging NetOps and SecOps in network management

DevOps | Tsippi Dach | 2 min read | Published 4/16/21

Tsippi Dach, Director of Communications at AlgoSec, explores the relationship between NetOps and SecOps and explains why they are the perfect partnership.

The IT landscape has changed beyond recognition in the past decade or so. The vast majority of businesses now operate largely in the cloud, which has had a notable impact on their agility and productivity. A recent survey of 1,900 IT and security professionals found that 41 percent of organizations are running more of their workloads in public clouds, compared to just one-quarter in 2019. Even businesses that were not digitally mature enough to take full advantage of the cloud will have dramatically altered their strategies in order to support remote working at scale during the COVID-19 pandemic.

However, with cloud innovation so high up the boardroom agenda, security is often left lagging behind, creating a vulnerability gap that businesses can ill afford in the current heightened risk landscape. The same survey found the leading concern about cloud adoption was network security (58%). Managing organizations' networks and their security should go hand-in-hand but, as reflected in the survey, there is no clear ownership of public cloud security. Responsibility is scattered across SecOps, NOCs and DevOps, and they don't collaborate in a way that aligns with business interests. We know through experience that this siloed approach hurts security, so what should businesses do about it? How can they bridge the gap between NetOps and SecOps to keep their network assets secure and prevent missteps?

Building a case for NetSecOps

Today's digital infrastructure demands the collaboration, perhaps even the convergence, of NetOps and SecOps in order to achieve maximum security and productivity. While the majority of businesses do have open communication channels between the two departments, there is still a large proportion of network and security teams working in isolation. This creates unnecessary friction, which can be problematic for service-based businesses that are trying to deliver the best possible end-user experience.

The reality is that NetOps and SecOps share several commonalities. They are both responsible for critical aspects of a business and have to navigate constantly evolving environments, often under extremely restrictive conditions. Agility is particularly important for security teams in order for them to keep pace with emerging technologies, yet deployments are often stalled or abandoned at the implementation phase due to misconfigurations or poor execution.

As enterprises continue to deploy software-defined networks and public cloud architecture, security has become even more important to the network team, which is why this convergence needs to happen sooner rather than later. We somehow need to insert the network security element into the NetOps pipeline and seamlessly make it just another step in the process. If we had a way to automatically check whether network connectivity is already enabled as part of the pre-delivery testing phase, it could at least save us the heartache of deploying something that will not work.
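As a rough illustration of that idea, the sketch below checks a list of required application flows before a release is promoted. It is a minimal example only: the hostnames, ports and timeout are hypothetical, and a real pipeline stage would typically query a policy management tool such as AlgoSec rather than probe sockets directly.

```python
import socket

# Hypothetical flows the new release depends on: (host, TCP port).
REQUIRED_FLOWS = [
    ("app-db.internal.example.com", 5432),
    ("payments-api.internal.example.com", 443),
]

def flow_is_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def pre_delivery_connectivity_check() -> bool:
    """Fail the pipeline stage if any required flow is not reachable."""
    missing = [(h, p) for h, p in REQUIRED_FLOWS if not flow_is_open(h, p)]
    for host, port in missing:
        print(f"BLOCKED: {host}:{port} is not reachable - open a change request first")
    return not missing

if __name__ == "__main__":
    raise SystemExit(0 if pre_delivery_connectivity_check() else 1)
```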
Thankfully, there are tools available that can bring SecOps and NetOps closer together, such as Cisco ACI, Cisco Secure Workload and the AlgoSec Security Management Solution. Cisco ACI, for instance, is a tightly coupled, policy-driven solution that integrates software and hardware, allowing for greater application agility and data center automation. Cisco Secure Workload (previously known as Tetration) is a micro-segmentation and cloud workload protection platform that offers multi-cloud security based on a zero-trust model. When combined with AlgoSec, Cisco Secure Workload is able to map existing application connectivity and automatically generate and deploy security policies on different network security devices, such as ACI contracts, firewalls, routers and cloud security groups. So, while Cisco Secure Workload takes care of enforcing security at each and every endpoint, AlgoSec handles network management. This is NetOps and SecOps convergence in action, allowing for 360-degree oversight of network and security controls for threat detection across entire hybrid and multi-vendor frameworks.

While the utopian harmony of NetOps and SecOps may be some way off, using existing tools, processes and platforms to bridge the divide between the two departments can mitigate the 'silo effect', resulting in stronger, safer and more resilient operations.

We recently hosted a webinar with Doug Hurd from Cisco and Henrik Skovfoged from Conscia discussing how you can bring NetOps and SecOps teams together with Cisco and AlgoSec. You can watch the recorded session here.

  • AlgoSec | The Comprehensive 9-Point AWS Security Checklist

Cloud Security | Rony Moshkovich | 2 min read | Published 2/20/23

A practical AWS security checklist will help you identify and address vulnerabilities quickly, and in the process ensure your cloud security posture is up to date with industry standards. This post will walk you through an 8-point AWS security checklist. We'll also share AWS security best practices and how to implement them.

The AWS shared responsibility model

The AWS shared responsibility model is a paradigm that describes how security duties are split between AWS and its clients. Under this approach, AWS provides the secure cloud infrastructure, while customers remain responsible for protecting their own programs, data, and other assets.

AWS's responsibility: According to this model, AWS maintains the security of the cloud itself. This encompasses the network, the hypervisor, the virtualization layer, and the physical protection of data centers. AWS also offers clients a range of security features and services, including monitoring tools, load balancers, access restrictions, and encryption.

Customer responsibility: As a customer, you are responsible for configuring AWS security measures to suit your needs and for safeguarding your information, systems, programs, and operating systems. Customer responsibility includes implementing reasonable access restrictions, maintaining user profiles and credentials, and watching for security issues in your own environment.

Let's compare the security responsibilities of AWS and its customers:

Party    | Responsible for
AWS      | Security of the cloud: physical data centers, network, hypervisor and virtualization layer, plus the managed security services offered to customers
Customer | Security in the cloud: operating systems, applications and data, IAM configuration and credentials, and monitoring of their own workloads

Comprehensive 8-point AWS security checklist

1. Identity and access management (IAM)
2. Logical access control
3. Storage and S3
4. Asset management
5. Configuration management
6. Release and deployment management
7. Disaster recovery and backup
8. Monitoring and incident management

Identity and access management (IAM)

IAM is a web service that helps you manage your company's AWS access and security. It allows you to control who has access to your resources and what they can do with your AWS assets. Here are several IAM best practices (a short sketch follows this list):

- Replace long-lived access keys with IAM roles. Use IAM roles to provide AWS services and apps with the necessary permissions.
- Ensure that users only have permission to use the resources they need, by implementing the concept of least privilege.
- Whenever communicating between a client and an ELB, use secure SSL/TLS versions.
- Use IAM policies to specify rights for user groups and centralize access management.
- Use IAM password policies to impose strict password requirements on all users.
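To make the IAM items above concrete, here is a minimal boto3 sketch, assuming Python and suitably privileged credentials; the policy name and bucket ARN are hypothetical. It tightens the account password policy and creates a least-privilege, read-only policy scoped to a single bucket instead of granting broad S3 access.

```python
import json
import boto3

iam = boto3.client("iam")

# Enforce a strict account-wide password policy (values are illustrative).
iam.update_account_password_policy(
    MinimumPasswordLength=14,
    RequireSymbols=True,
    RequireNumbers=True,
    RequireUppercaseCharacters=True,
    RequireLowercaseCharacters=True,
    MaxPasswordAge=90,
    PasswordReusePrevention=24,
)

# Least-privilege example: read-only access to one specific bucket.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-reports-bucket",
                "arn:aws:s3:::example-reports-bucket/*",
            ],
        }
    ],
}

iam.create_policy(
    PolicyName="ReportsReadOnly",
    PolicyDocument=json.dumps(policy_document),
)
```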
Logical access control

Logical access control involves controlling who accesses your AWS resources and deciding which actions users can perform on those resources. You can do this by allowing or denying access to specific people based on their position, job function, or other criteria. Logical access control best practices include the following:

- Separate sensitive information from less-sensitive information in systems and data using network partitioning.
- Confirm user identity and restrict the use of shared user accounts. You can use robust authentication techniques, such as MFA and biometrics.
- Protect remote connectivity and keep offsite access to vital systems and data to a minimum by using VPNs.
- Track network traffic and spot suspicious behavior using intrusion detection and prevention systems (IDS/IPS).
- Access remote systems over unsecured networks using Secure Shell (SSH).

Storage and S3

Amazon S3 is a scalable object storage service where data may be stored and retrieved. The following are some storage and S3 best practices (a short sketch follows this list):

- Classify your data to determine access limits depending on the data's sensitivity.
- Establish object lifecycle rules and versioning to control data retention and destruction.
- Monitor storage and audit access to your S3 buckets using Amazon S3 access logging.
- Handle encryption keys and encrypt confidential information in S3 using the AWS Key Management Service (KMS).
- Create reports on the current state and metadata of the items stored in your S3 buckets using Amazon S3 Inventory.
- Use Amazon RDS to create a relational database for storing critical asset information.
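Here is a minimal sketch of the storage items above, again assuming boto3 and hypothetical bucket and key names: it blocks public access, turns on default encryption with a KMS key, and enables versioning and access logging to a separate log bucket.

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "example-reports-bucket"    # hypothetical
LOG_BUCKET = "example-access-logs"   # hypothetical
KMS_KEY_ID = "alias/example-s3-key"  # hypothetical

# Block all forms of public access to the bucket.
s3.put_public_access_block(
    Bucket=BUCKET,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

# Default server-side encryption with a customer-managed KMS key.
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": KMS_KEY_ID,
                }
            }
        ]
    },
)

# Versioning plus access logging, as recommended in the checklist.
s3.put_bucket_versioning(
    Bucket=BUCKET, VersioningConfiguration={"Status": "Enabled"}
)
s3.put_bucket_logging(
    Bucket=BUCKET,
    BucketLoggingStatus={
        "LoggingEnabled": {"TargetBucket": LOG_BUCKET, "TargetPrefix": f"{BUCKET}/"}
    },
)
```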
Asset management

Asset management involves tracking physical and virtual assets so that they can be protected and maintained. The following are some asset management best practices:

- Determine all assets and their locations by conducting routine inventory evaluations.
- Delegate ownership and accountability so that each asset is cared for and kept safe.
- Deploy physical and digital safeguards to stop unauthorized access or theft.
- Don't use expired SSL/TLS certificates.
- Define standard settings to guarantee that all assets are safe and functional.
- Monitor asset consumption and performance to spot potential problems and opportunities for improvement.

Configuration management

Configuration management involves monitoring and maintaining server configurations, software versions, and system settings. Some configuration management best practices are:

- Use version control systems to handle and track modifications. These systems can also help you avoid misconfiguration of documents and code.
- Automate configuration updates and deployments to decrease user error and boost consistency.
- Implement security measures, such as firewalls and intrusion detection infrastructure, to monitor and safeguard configurations.
- Use configuration baselines to design and implement standard configurations across all platforms.
- Conduct frequent vulnerability inspections and penetration testing to discover and patch configuration-related security vulnerabilities.

Release and deployment management

Release and deployment management involves ensuring the secure release of software and systems. Here are some best practices for managing releases and deployments:

- Use version control solutions to oversee and track modifications to software code and other IT resources.
- Conduct extensive screening and quality assurance (QA) before publishing new software or updates.
- Use automation technologies to organize and distribute software upgrades and releases.
- Implement security measures like firewalls and intrusion detection systems.

Disaster recovery and backup

Backup and disaster recovery are essential elements of every organization's AWS environment, and AWS provides a range of services to assist clients in protecting their data. The best practices for backup and disaster recovery on AWS include:

- Establish recovery point objectives (RPO) and recovery time objectives (RTO). This guarantees backup and recovery operations can fulfill the company's needs.
- Archive and back up data using AWS products like Amazon S3, flow logs, Amazon CloudFront and Amazon Glacier.
- Use AWS solutions like AWS Backup and AWS Elastic Disaster Recovery to streamline backup and recovery.
- Use a backup retention policy to ensure that backups are stored for the proper amount of time.
- Frequently test backup and recovery procedures to ensure they work as intended.
- Build redundancy across multiple regions so crucial data remains accessible during a regional outage.
- Watch for problems that can affect backup and disaster recovery procedures.
- Document disaster recovery and backup procedures so you can perform them successfully in the case of an actual disaster.
- Use encryption for backups to safeguard sensitive data.
- Automate backup and recovery procedures so human mistakes are less likely to occur.

Monitoring and incident management

Monitoring and incident management enable you to track your AWS environment and respond to any issues. AWS monitoring and incident management best practices include the following (a short sketch follows this list):

- Monitor API traffic and look for security risks with AWS CloudTrail.
- Use Amazon CloudWatch to track logs, performance, and resource usage.
- Track changes to AWS resources and monitor for compliance problems using AWS Config.
- Combine and rank security warnings from multiple AWS accounts and services using AWS Security Hub.
- Use AWS Lambda and other AWS services to implement automated incident response procedures.
- Establish an incident response plan that specifies roles and responsibilities and defines a clear escalation path.
- Exercise incident response procedures frequently to make sure the plan works.
- Check for flaws in third-party applications and apply fixes quickly.
- Use proactive monitoring to find potential security problems before they become incidents.
- Train your staff on incident response best practices, so they respond effectively in case of an incident.
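As a small illustration of the CloudTrail item above, the boto3 sketch below pulls recent ConsoleLogin events and flags failed sign-ins. The 24-hour window and the notion of "failed" used here are simplifications for the example.

```python
import json
from datetime import datetime, timedelta, timezone

import boto3

cloudtrail = boto3.client("cloudtrail")

def failed_console_logins(hours: int = 24):
    """Yield (user, time) for ConsoleLogin events in the last `hours` that failed."""
    start = datetime.now(timezone.utc) - timedelta(hours=hours)
    paginator = cloudtrail.get_paginator("lookup_events")
    pages = paginator.paginate(
        LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "ConsoleLogin"}],
        StartTime=start,
    )
    for page in pages:
        for event in page["Events"]:
            # CloudTrailEvent is a JSON document; failed logins report "Failure".
            detail = json.loads(event["CloudTrailEvent"])
            if detail.get("responseElements", {}).get("ConsoleLogin") == "Failure":
                yield event.get("Username", "unknown"), event["EventTime"]

if __name__ == "__main__":
    for user, when in failed_console_logins():
        print(f"Failed console login for {user} at {when}")
```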
Top challenges of AWS security

DoS attacks: A distributed denial of service (DDoS) attack poses a huge security risk to AWS systems. An attacker bombards a network with traffic from several sources, straining its resources and rendering it inaccessible to authorized users. Your DevOps team should have a thorough plan to mitigate this sort of danger, and AWS offers tools and services, such as AWS Shield, to help fight DDoS attacks.

Outsider AWS compromise: Hackers can use several strategies to gain illegal access to your AWS account. For example, they may use social engineering or exploit software flaws. Once outsiders gain access, they may use data exfiltration techniques to steal your data, or initiate attacks on other crucial systems.

Insider threats: Insiders with permission to access your AWS resources often pose a huge risk. They can damage the system by modifying or stealing data and intellectual property. Only grant access to authorized users, limit the access level for each user, and monitor the system to detect suspicious activity in real time.

Root account access: The root account has complete control over an AWS account and the highest degree of access. Your security team should use the root account only when necessary. Follow AWS best practices when granting administrative access to IAM users and third parties, so that only those who genuinely need that level of access have it.

Security best practices when using AWS

Set strong authentication policies: A key element of AWS security is a strict authentication policy. Implement password rules demanding solid passwords and regular password changes. Multi-factor authentication (MFA) is a recommended security measure for access control: a user provides two or more factors, such as an ID, password, and token code, to gain access. Using MFA can improve the security of your account and limit access to resources like Amazon Machine Images (AMIs). A short sketch follows at the end of this section.

Differentiate security of the cloud vs. security in the cloud: Recall the AWS shared responsibility model: the customer handles configuring and managing access to cloud services, while AWS provides a secure cloud infrastructure with physical security controls, firewalls, intrusion detection systems, and encryption. To secure your data and applications, follow the shared responsibility model; for example, use IAM roles and policies when setting up virtual private clouds (VPCs).

Keep compliance up to date: AWS holds compliance certifications for standards such as HIPAA, PCI DSS, and SOC 2, which are essential for ensuring your organization's compliance with industry standards. While NIST doesn't offer certifications, it provides a framework to keep your security posture current, and AWS data centers comply with NIST security guidelines, which helps customers adhere to those standards. As an AWS client, you must ensure that your setup complies with all legal obligations by keeping up with changes to your industry's compliance regulations. You should continuously monitor, audit, and remediate your environment for compliance, using services such as AWS Config and AWS CloudTrail logs. You can also use Prevasio to identify and remediate non-compliance events quickly; it enables customers to ensure their compliance with industry and government standards.
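Here is a small boto3 sketch of the MFA point above, assuming read permissions on IAM: it reports console users who have no MFA device registered.

```python
import boto3

iam = boto3.client("iam")

def console_users_without_mfa():
    """Return IAM user names that have a console password but no MFA device."""
    flagged = []
    for page in iam.get_paginator("list_users").paginate():
        for user in page["Users"]:
            name = user["UserName"]
            try:
                iam.get_login_profile(UserName=name)  # raises if no console password
            except iam.exceptions.NoSuchEntityException:
                continue  # programmatic-only account, no console access
            if not iam.list_mfa_devices(UserName=name)["MFADevices"]:
                flagged.append(name)
    return flagged

if __name__ == "__main__":
    for name in console_users_without_mfa():
        print(f"User without MFA: {name}")
```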
The final word on AWS security

You need a credible AWS security checklist to ensure your environment is secure. Cloud Security Posture Management (CSPM) solutions produce AWS security checklists, providing a comprehensive report that identifies gaps in your security posture and processes for closing them. With a CSPM tool like Prevasio, you can audit your AWS environment and identify misconfigurations that may lead to vulnerabilities. It comes with a vulnerability assessment and anti-malware scan that can help you detect malicious activity immediately. In the process, your AWS environment becomes secure and compliant with industry standards. Prevasio is a cloud-native application protection platform (CNAPP): it combines CSPM, CIEM and all the other important cloud security features into one tool, giving you better visibility of your cloud security on one platform. Try Prevasio today!

  • AlgoSec | Managing the switch – Making the move to Cisco Meraki

Application Connectivity Management | Jeremiah Cornelius | 2 min read | Published 1/4/24

Challenges with managing Cisco Meraki in a complex enterprise environment

We have worked closely with Cisco for many years in large, complex environments and have developed integrations to support a variety of Cisco solutions for our joint customers. In recent years we have seen increased interest in the use of Cisco Meraki devices by enterprises that are also AlgoSec customers. In this post, we will highlight some of the AlgoSec capabilities that can quickly add value for Meraki customers.

Meeting the Enterprise

The Cisco Meraki MX is a multifunctional security and SD-WAN enterprise appliance with a wide set of capabilities to address multiple use cases from an all-in-one device. Organizations across all industries rely on the MX to deliver secure connectivity to hub locations or multi-cloud environments. The MX is 100% cloud-managed, so installation and remote management are truly zero-touch, making it ideal for distributed branches, campuses, and data center locations.

In our talks with AlgoSec customers and partner architects, it is evident that the benefits that originally made Meraki MX popular in commercial deployments are just as appealing to enterprises. Many enterprises are now faced with waves of expansion in employees working from home and burgeoning demands for scalable remote access, along with increasing network demands from regional centers. The leader of one security team I spoke with put it very well: "We are deploying to 1,200 locations in four global regions, planned to be 1,500 by year's end. The choice of Meraki is for us a 'no-brainer.' If you haven't already, I know that you're going to see this become a more popular option with many big operations."

Natural Companions – AlgoSec ASMS and Cisco Meraki MX

This is a natural situation to meet enhanced requirements with AlgoSec ASMS, reinforcing Meraki's impressive capabilities and scale as a combined, enterprise-class solution. ASMS brings to the table traffic planning and visualization, rule optimization and management, and a solution that addresses enterprise-level requirements for policy reporting and compliance auditing. At AlgoSec, we're proud of AlgoSec FireFlow's ability to model the security-connected state of any given endpoints across an entire enterprise. Now our customers with Meraki MX can extend this technology that they know and trust, analyze real traffic in complex deployments, and gain an understanding of the requirements and impact of changes delivered to the users and applications connected by their Meraki deployments.

As it's unlikely that your needs, or those of any data center and enterprise, are met by a single vendor and model, AlgoSec unifies operations of the Meraki MX with those of other technologies, such as enterprise NGFWs and software-defined network fabrics. Our application-centric approach means that Meraki MX can be a component in delivering solutions for zero trust and microsegmentation with other Cisco technology like Cisco ACI, and with other third parties.
Cisco Meraki – Product Demo

If all of this sounds interesting, take a look for yourself to see how AlgoSec helps with common challenges in these enterprise environments.

More Where This Came From

The AlgoSec integration with Cisco Meraki MX is delivering solutions our customers want. If you want to discover more about the Meraki and AlgoSec joint solution, contact us at AlgoSec! We work together with Cisco teams and resellers and will be glad to schedule a meeting to share more details or walk through a more in-depth demo.

  • AlgoSec | Errare humanum est

Cloud Security | Rony Moshkovich | 2 min read | Published 11/25/21

Nick Ellsmore is an Australian cybersecurity professional whose thoughts on the future of cybersecurity are always insightful. Having a deep respect for Nick, I really enjoyed listening to his latest podcast, "Episode 79: Making the cyber sector redundant with Nick Ellsmore". As Nick opened the door to debate on "all the mildly controversial views" he put forward in the podcast, I decided to take a stab at a couple of his points. For some mysterious reason, these points touched a nerve. So, here we go.

Nick: The cybersecurity industry, we spent so long trying to get people to listen to us and take the issue seriously, you know, we're now getting that, you know.

Are businesses really responding because we were trying to get people to listen to us? Let me rephrase the question: are businesses really spending more on cybersecurity because we were trying to get people to listen to us? The "cynical me" says no. Businesses are spending more on cybersecurity because they are losing more to cyber incidents. It's not the number of incidents; it's their impact that is increasingly becoming devastating. Over the last ten years there were plenty of front-page headliners that shattered even seemingly unshakable businesses and government bodies. Think of the Target attack in 2013, the Bank of Bangladesh heist in 2016, the Equifax breach in 2017, the SolarWinds hack in 2020; the list goes on. We all know how Uber tried to bribe attackers to sweep stolen customer data under the rug. But how many companies have succeeded in doing so without being caught? How many cyber incidents have never been disclosed?

These headliners don't stop. Each of them is another reputational blow: impacted stock prices, rolled heads, stressed-out PR teams trying to play down the issue, knee-jerk reactions to acquire snake-oil-selling startups, and so on. We're not even talking about skewed election results (a topic for another discussion). Each one comes at a considerable cost. So no wonder many geniuses now realise that spending on cybersecurity can actually mitigate those risks. It's not our perseverance that finally started paying off. It's their pockets that started hurting.

Nick: I think it's important that we don't lose sight of the fact that this is actually a bad thing to have to spend money on. Like, the reason that we're doing this is not healthy. .. no one gets up in the morning and says, wow, I can't wait to, you know, put better locks on my doors.

It's not locks we sell. We sell gym membership. We want people to do something now to stop bad things from happening in the future. It's a concept of hygiene, insurance, prevention, health checks. People are free not to pursue these steps and to run their business the way they used to, until they get hacked, land on the front page, first wondering "Why me?" and then appointing a scapegoat.

Nick: And so I think we need to remember that, in a sense, our job is to create the entire redundancy of this sector.
Like, if we actually do our job well, then we all have to go and do something else, because security is no longer an issue.

It won't happen, for two main reasons.

Émile Durkheim believed in a "society of saints". Unfortunately, it is a utopia. Greed, hunger, jealousy and poverty are never-ending companions of the human race that will constantly fuel crime. Some are induced by wars, some by corrupt regimes, some by sanctions, some by imperfect laws. But in the end there will always be Haves and Have Nots, and therefore fundamental inequality. And that will feed crime.

"Errare humanum est", Seneca. To err is human. Because of human errors, there will always be vulnerabilities in code. Because of human nature (and, as its derivatives, geopolitical or religious tension, domination, competition, nationalism, the fight for resources), there will always be people willing to and capable of exploiting those vulnerabilities. Mix those two ingredients and you get a perfect recipe for cybercrime. Multiply that by never-ending computerisation, automation and digital transformation, and you get a constantly growing attack surface. No matter how well we do our job, we can only control cybercrime and keep the lid on it; we can't eradicate it. Thinking we could would be utopic.

Another important consideration here is budget constraints. Building proper security is never fun; it's a tedious process that burns cash but produces no tangible outcome. Imagine a project with an allocated budget B to build a product P with a feature set F, in a timeframe T. Quite often, such a project will be underfinanced, potentially leading to a poor choice of coders, overcommitted promises and unrealistic expectations. Eventually leading to this (oldie, but goldie):

Add cybersecurity to this picture, and you'll get an extra step that seemingly complicates everything even further:

The project investors will undoubtedly question why that extra step was needed. Is there a new feature that no one else has? Is there a unique solution to an old problem? None of that? Then what's the justification for such over-complication? Planning for proper built-in cybersecurity is often perceived as FUD. If it's not tangible, why do we need it? Customers won't see it. No one will see it. Scary stories in the press? Nah, that'll never happen to us.

In some way, extra budgeting for cybersecurity is anti-capitalistic in nature. It increases the product cost and, therefore, its price, making it less competitive. It defeats the purpose of outsourcing product development, often making outsourcing impossible. From the business point of view, putting "Sec" into "DevOps" does not make sense. That's OK. No need. Until it all gloriously hits the fan, and then we go back to step 1. Then, maybe, just maybe, the customer will say, "If we had budgeted for that extra step, then maybe we would have been better off".

  • AlgoSec | 4 tips to manage your external network connections

Auditing and Compliance | Joanne Godfrey | 2 min read | Published 8/10/15

Last week our CTO, Professor Avishai Wool, presented a technical webinar on the do's and don'ts for managing external connectivity to and from your network. We kicked off the webinar by polling the audience (186 people) on how many permanent external connections into their enterprise network they have:

- 40% have fewer than 50 external connections
- 31% have 50-250 external connections
- 24% have more than 250 external connections
- 5% wish they knew how many external connections they have!

Clearly this is a very relevant issue for many enterprises, and one which can have a profound effect on security. The webinar covered a wide range of best practices for managing the external connectivity lifecycle, and I highly recommend that you view the full presentation. But in the meantime, here are a few key issues that you should be mindful of when considering how to manage external connectivity to and from your network:

Network Segmentation

While there has to be an element of trust when you let an external partner into your network, you must do all you can to protect your organization from attacks through these connections. This includes placing your servers in a demilitarized zone (DMZ), segregating them with firewalls, restricting traffic in both directions from the DMZ, and using additional controls such as web application firewalls, data leak prevention and intrusion detection.

Regulatory Compliance

Bear in mind that if the data being accessed over the external connection is regulated, both your systems and the related peer's systems are now subject to that regulation. So if the network connection touches credit card data, both sides of the connection are in scope, and outsourcing the processing and management of regulated data to a partner does not let you off the hook.

Maintenance

Sometimes you will have to make changes to your external connections, either due to planned maintenance work by your IT team or the peer's team, or as a result of unplanned outages. Dealing with changes that affect external connections is more complicated than internal maintenance, as it will probably require coordinating with people outside your organization and tweaking existing workflows, while adhering to any contractual or SLA obligations. As part of this process, remember that you'll need to ensure that your information systems allow your IT teams to recognize external connections and provide access to the relevant technical information in the contract, while supporting the amended workflows.

Contracts

In most cases there is a contract that governs all aspects of the external connection, including technical and business issues. The technical points will include issues such as IP addresses and ports, technical contact points, SLAs, testing procedures and the physical location of servers. It's important, therefore, that this contract is adhered to whenever dealing with technical issues related to external connections.

These are just a few tips and issues to be aware of. To watch the webinar from Professor Wool in full, check out the recording here.

  • AlgoSec | Cloud Application Security: Threats, Benefits, & Solutions

Cloud Security | Rony Moshkovich | 2 min read | Published 6/29/23

As your organization adopts a hybrid IT infrastructure, there are more ways for hackers to steal your sensitive data. This is why cloud application security is a critical part of data protection: it allows you to secure your cloud-based applications from cyber threats while ensuring your data is safe. This post will walk you through cloud application security, including its importance. We will also discuss the main cloud application security threats and how to mitigate them.

What is Cloud Application Security

Cloud application security refers to the security measures taken to protect cloud-based assets throughout their development lifecycle. These measures form a framework of policies, tools, and controls that protect your cloud against cyber threats. Here is a list of security measures that cloud application security may involve:

- Compliance with industry standards, such as CIS benchmarks, to prevent data breaches.
- Identity management and access controls to prevent unauthorized access to your cloud-based apps.
- Data encryption and tokenization to protect sensitive data.
- Vulnerability management through vulnerability scanning and penetration testing.
- Network perimeter security, such as firewalls, to prevent unwanted access.

The following are some of the assets that cloud security affects:

- Third-party cloud providers like Amazon AWS, Microsoft Azure, and Google GCP.
- Collaborative applications like Slack and Microsoft Teams.
- Data servers.
- Computer networks.

Why is Cloud Application Security Important

Cloud application security has become more relevant as businesses have migrated their data to the cloud in recent years. This is especially true for companies with a multi-cloud environment, which creates a larger attack surface for hackers to exploit. According to IBM, the average cost of a data breach in 2022 was $4.35 million, an increase of 2.6% from the previous year. The report also revealed that it took an average of 287 days to find and stop a data breach in a cloud environment. That is enough time for hackers to steal sensitive data and seriously damage your assets. Here are more things that can go wrong if organizations don't pay attention to cloud security:

- Brand image damage: A security breach may damage a brand's reputation and cause a decline in client confidence. During a breach, your company's servers may be down for days or weeks, which means customers who paid for your services will not get access in that time. They may end up harming your brand's image through word of mouth.
- Lost consumer trust: Consumer confidence is tough to restore after being lost due to a security breach. Customers could migrate to rivals they believe to be more secure.
- Organizational disruption: A security breach may cause system failures that prevent employees from working, which in turn affects productivity. You may also have to let go of employees tasked with ensuring cloud security.
- Data loss: You may lose sensitive data, such as client information, resulting in legal penalties. Trade secret theft may also affect the survival of your organization, as competitors may steal your only leverage in the industry.
- Compliance violations: You may be fined for failing to comply with industry regulations such as GDPR. You may also face legal consequences for failing to protect consumer data.
What are the Major Cloud Application Security Threats

The following is a list of the major cloud application security threats (a short sketch of a misconfiguration check follows this list):

- Misconfigurations: Misconfigurations are errors made when setting up cloud-based applications. They can occur due to human error, lack of expertise, or mismanagement of cloud resources. Examples include weak passwords, unsecured storage buckets, and unsecured ports. Hackers may use these misconfigurations to access critical data in your public cloud.
- Insecure data sharing: This is the unauthorized or unintended sharing of sensitive data between users. Insecure data sharing can happen due to a misconfiguration or inappropriate access controls. It can lead to data loss, breaches, and non-compliance with regulatory standards.
- Limited visibility into network operations: This is the inability to monitor and control your cloud infrastructure and its apps. Limited network visibility prevents you from quickly identifying and responding to cyber threats; many vulnerabilities may go undetected for a long time, and cybercriminals may exploit these weak points to gain access to sensitive data.
- Account hijacking: This is a situation where a hacker gains unauthorized access to a legitimate user's cloud account. Attackers may use various social engineering tactics to steal login credentials, such as phishing, password spraying, and brute-force attacks. Once they access the user's cloud account, they can steal data or damage assets from within.
- Employee negligence and inadequately trained personnel: This threat occurs when employees are not adequately trained to recognize, report and prevent cyber risks, or when they unintentionally or intentionally engage in risky behavior. For example, they could share login credentials with unauthorized users or set weak passwords that let attackers into your public cloud. Rogue employees can also intentionally give away your sensitive data.
- Compliance risks: Your organization faces cloud computing risks when it is non-compliant with industry regulations such as GDPR, PCI-DSS, and HIPAA. These risks include data breaches and exposure of sensitive information, which may result in fines, legal repercussions, and reputational harm.
- Data loss: Data loss is a severe security risk for cloud applications. It may happen for several reasons, including hardware malfunction, natural calamities, or cyber attacks. Consequences may include the loss of customer trust and legal penalties.
- Outdated security software: SaaS vendors regularly release updates to address new vulnerabilities and threats. Failing to update your security software on a regular basis may leave your system vulnerable; hackers may exploit flaws in outdated SaaS apps to gain access to your cloud.
- Insecure APIs: APIs are a crucial part of cloud services but can pose a severe security risk if improperly secured. Insecure APIs and other endpoint infrastructure can lead to severe system breaches, including complete system takeover by hackers and elevated privileged access.
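To make the misconfiguration threat above more concrete, here is a minimal boto3 sketch, assuming Python and read-only S3 permissions, that lists buckets with no public access block configured at all. It covers only one narrow class of misconfiguration; a CSPM tool checks far more than this.

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def buckets_without_public_access_block():
    """Return bucket names that have no public access block configuration."""
    exposed = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            s3.get_public_access_block(Bucket=name)
        except ClientError as err:
            if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
                exposed.append(name)
            else:
                raise
    return exposed

if __name__ == "__main__":
    for name in buckets_without_public_access_block():
        print(f"Review bucket configuration: {name}")
```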
How to Mitigate Cloud Application Security Risks

The following is a list of measures to mitigate cloud app security risks:

- Conduct a thorough risk analysis: This entails identifying possible security risks and assessing their potential effects, then prioritizing remediation depending on severity. By conducting risk analysis on a regular basis, you can keep your cloud environment secure: you'll quickly understand your security posture and select the right security policies.
- Implement a firm access control policy: Access control policies ensure that only authorized users gain access to your data. They also define the level of access to sensitive data based on your employees' roles. A robust access control policy comprises features such as multi-factor authentication, role-based access control, least-privilege access, and strong password policies.
- Use encryption: Encryption is a crucial security measure that protects sensitive data in transit and at rest. This way, if attackers intercept data in transit, it is only useful to them if they have the decryption key. Cloud encryption building blocks you can use include the Advanced Encryption Standard (AES), Rivest-Shamir-Adleman (RSA), and Transport Layer Security (TLS).
- Set up data backup and disaster recovery policies: A data backup policy ensures data can be fully recovered in case of a breach: you can always restore lost data from your backups. Backup systems also help reduce the impact of cyberattacks, as you can restore normal operations quickly. Disaster recovery policies establish the protocols and procedures for restoring critical systems during a major disaster, so your data security stays intact even when disaster strikes.
- Keep a constant watch over cloud environments: Security issues in cloud settings can only be spotted through continuous monitoring. Cloud security posture management tools like Prevasio can help you monitor your cloud for such issues. With its layer analysis feature, you'll know the exact area in your cloud to fix and how to fix it.
- Test and audit cloud security controls regularly: Security controls help you detect and mitigate potential security threats in your cloud. Examples include firewalls, intrusion detection systems, and database encryption. Auditing these controls helps identify gaps so you can take corrective action to restore their effectiveness. Regularly evaluating your security controls reduces the risk of security incidents in your cloud.
- Implement a security awareness training program: Security awareness training helps educate employees on cloud best practices. When employees learn commonly overlooked security protocols, they reduce the risk of data breaches due to human error. Organize regular assessments with your employees to determine their weak points. This way, you'll reduce the chances of hackers gaining access to your cloud through tactics such as phishing and ransomware.
- Use the security tools and services that cloud providers offer: Cloud service providers like AWS, Azure, and Google Cloud Platform (GCP) offer security tools and services such as web application firewalls (WAF), runtime application self-protection (RASP), intrusion detection and prevention systems, and identity and access management (IAM) controls. You can strengthen the security of your cloud environments by utilizing these tools. However, you should not rely solely on these features to ensure a secure cloud; you also need to implement your own cloud security best practices.
- Implement an incident response strategy: A security incident response strategy describes the measures to take during a cyber attack. It provides the procedures and protocols to bring the system back to normal in case of a breach. Designing incident response plans helps reduce downtime and minimizes the impact of damage from cyber attacks.
- Apply the paved road security approach in DevSecOps processes: DevSecOps environments require security to be integrated into development workflows and tools, so that cloud security becomes integral to the app development process. The paved road security approach provides a secure baseline that DevSecOps teams can use for continuous monitoring and automated remediation.

Automate your cloud application security practices

Using on-premise security practices, such as manual compliance checks, to mitigate cloud application security threats can be tiring, and your security team may struggle to keep up as your cloud footprint grows. Cloud vendors can automate many of the processes needed to maintain a secure cloud, and they provide cloud security tools to help you achieve and maintain compliance with industry standards. You can improve visibility into your cloud infrastructure by utilizing these solutions; they also spot security challenges in real time and offer remediations. For example, Prevasio's cloud security solutions monitor cloud environments continually from the cloud, spotting possible security threats and vulnerabilities using AI and machine learning.

What Are Cloud Application Security Solutions?

Cloud application security solutions are designed to protect apps and other assets in the cloud. Unlike point devices, cloud application security solutions are deployed from the cloud, giving you a comprehensive cybersecurity approach for your IT infrastructure. These solutions protect the entire system instead of a single point of vulnerability, which makes managing your cybersecurity strategy easier. Here are some examples of cloud application security solutions:

1. Cloud Security Posture Management (CSPM): CSPM tools enable monitoring and analysis of cloud settings for security risks and vulnerabilities. They locate incorrect configurations, non-compliant resources, and other security concerns that might endanger cloud infrastructure.
2. Cloud Workload Protection Platform (CWPP): This cloud application security solution provides real-time protection for workloads in cloud environments by detecting and mitigating threats as they occur, regardless of where the workloads are deployed. CWPP solutions offer security features such as network segmentation, file integrity monitoring, and vulnerability scanning. Using CWPP products will help you optimize your cloud application security strategy.
3. Cloud Access Security Broker (CASB): CASB products give users visibility into and control over the data and apps they access in the cloud. These solutions help businesses enforce security guidelines and monitor user behavior in cloud settings, lowering the risk of data loss, leakage, and unauthorized access. CASB products also help with malware detection.
4. Runtime Application Self-Protection (RASP): This solution addresses security issues that may arise while a program is running. It identifies potential threats and vulnerabilities during runtime and thwarts them immediately. RASP capabilities include input validation, runtime hardening, and dynamic application security testing.
5. Web Application and API Protection (WAAP): These products are designed to protect your organization's web applications and APIs. They monitor outgoing and incoming web app and API traffic to detect malicious activity. WAAP products can block unauthorized access attempts and protect against threats like SQL injection and cross-site scripting.
6. Data Loss Prevention (DLP): DLP products are intended to stop the loss or leaking of private information in cloud settings. These technologies keep track of sensitive data in use and at rest, and can enforce rules to stop unauthorized people from losing or accessing it.
7. Security Information and Event Management (SIEM) systems: SIEM systems track and analyze security incidents and events in cloud settings in real time. They help firms detect and respond to security issues rapidly, reducing the impact of security breaches.

Cloud-Native Application Protection Platform (CNAPP)

Prevasio's CNAPP raises the bar for cloud security. It combines CSPM, CIEM, IAM, CWPP, and more in one tool. A CNAPP delivers a complete security solution with sophisticated threat detection and mitigation capabilities for packaged workloads, microservices, and cloud-native applications. The CNAPP can find and eliminate security issues in your cloud systems before hackers can exploit them. With its layer analysis feature, you can quickly fix potential vulnerabilities in your cloud: it pinpoints the exact layer of code where the error lives, saving you time and effort. CNAPP also offers a visual dynamic analysis of your cloud environment, letting you grasp the state of your cloud security at a glance and know exactly where to go. CNAPP is also a scalable cloud security solution: the cloud-native design of Prevasio's CNAPP enables it to expand dynamically and offer real-time protection against new threats.

Let Prevasio Solve Your Cloud Application Security Needs

Cloud security is paramount to protecting sensitive data and upholding a company's reputation in the modern digital age. To stay agile against the constantly changing security issues in cloud settings, Prevasio's Cloud-Native Application Protection Platform (CNAPP) offers an all-inclusive solution. From layer analysis to visual dynamic analysis, CNAPP gives you the tools you need to keep your cloud secure. You can rely on Prevasio to properly manage your cloud application security needs. Try Prevasio today!

  • AlgoSec | Sunburst Backdoor, Part II: DGA & The List of Victims

    Previous Part of the analysis is available here. Next Part of the analysis is available here. Update from 19 December 2020: ‍Prevasio... Cloud Security Sunburst Backdoor, Part II: DGA & The List of Victims Rony Moshkovich 2 min read Rony Moshkovich Short bio about author here Lorem ipsum dolor sit amet consectetur. Vitae donec tincidunt elementum quam laoreet duis sit enim. Duis mattis velit sit leo diam. Tags Share this article 12/17/20 Published Previous Part of the analysis is available here . Next Part of the analysis is available here . Update from 19 December 2020: Prevasio would like to thank Zetalytics for providing us with an updated (larger) list of passive (historic) DNS queries for the domains generated by the malware. As described in the first part of our analysis, the DGA (Domain Generation Algorithm) of the Sunburst backdoor produces a domain name that may look like: fivu4vjamve5vfrtn2huov[.]appsync-api.us-west-2[.]avsvmcloud[.]com The first part of the domain name (before the first dot) consists of a 16-character random string, appended with an encoded computer’s domain name. This is the domain in which the local computer is registered. From the example string above, we can conclude that the encoded computer’s domain starts from the 17th character and up until the dot (highlighted in yellow): fivu4vjamve5vfrt n2huov In order to encode a local computer’s domain name, the malware uses one of 2 simple methods: Method 1 : a substitution table, if the domain name consists of small letters, digits, or special characters ‘-‘, ‘_’, ‘.’ Method 2 : base64 with a custom alphabet, in case of capital letters present in the domain name Method 1 In our example, the encoded domain name is “n2huov” . As it does not have any capital letters, the malware encodes it with a substitution table “rq3gsalt6u1iyfzop572d49bnx8cvmkewhj” . For each character in the domain name, the encoder replaces it with a character located in the substitution table four characters right from the original character. In order to decode the name back, all we have to do is to replace each encoded character with another character, located in the substitution table four characters left from the original character. To illustrate this method, imagine that the original substitution table is printed on a paper strip and then covered with a card with 6 perforated windows. Above each window, there is a sticker note with a number on it, to reflect the order of characters in the word “n2huov” , where ‘n’ is #1, ‘2’ is #2, ‘h’ is #3 and so on: Once the paper strip is pulled by 4 characters right, the perforated windows will reveal a different word underneath the card: “domain” , where ‘d’ is #1, ‘o’ is #2, ‘m’ is #3, etc.: A special case is reserved for such characters as ‘0’ , ‘-‘ , ‘_’ , ‘.’ . These characters are encoded with ‘0’ , followed with a character from the substitution table. An index of that character in the substitution table, divided by 4, provides an index within the string “0_-.” . The following snippet in C# illustrates how an encoded string can be decoded: static string decode_domain( string s) { string table = "rq3gsalt6u1iyfzop572d49bnx8cvmkewhj" ; string result = "" ; for ( int i = 0 ; i < s.Length; i++) { if (s[i] != '0' ) { result += table[(table.IndexOf(s[i]) + table.Length - 4 ) % table.Length]; } else { if (i < s.Length - 1 ) { if (table.Contains(s[i + 1 ])) { result += "0_-." 
[table.IndexOf(s[i + 1 ]) % 4 ]; } else { break ; } } i++; } } return result; } Method 2 This method is a standard base64 encoder with a custom alphabet “ph2eifo3n5utg1j8d94qrvbmk0sal76c” . Here is a snippet in C# that provides a decoder: public static string FromBase32String( string str) { string table = "ph2eifo3n5utg1j8d94qrvbmk0sal76c" ; int numBytes = str.Length * 5 / 8 ; byte [] bytes = new Byte[numBytes]; int bit_buffer; int currentCharIndex; int bits_in_buffer; if (str.Length < 3 ) { bytes[ 0 ] = ( byte )(table.IndexOf(str[ 0 ]) | table.IndexOf(str[ 1 ]) << 5 ); return System.Text.Encoding.UTF8.GetString(bytes); } bit_buffer = (table.IndexOf(str[ 0 ]) | table.IndexOf(str[ 1 ]) << 5 ); bits_in_buffer = 10 ; currentCharIndex = 2 ; for ( int i = 0 ; i < bytes.Length; i++) { bytes[i] = ( byte )bit_buffer; bit_buffer >>= 8 ; bits_in_buffer -= 8 ; while (bits_in_buffer < 8 && currentCharIndex < str.Length) { bit_buffer |= table.IndexOf(str[currentCharIndex++]) << bits_in_buffer; bits_in_buffer += 5 ; } } return System.Text.Encoding.UTF8.GetString(bytes); } When the malware encodes a domain using Method 2, it prepends the encrypted string with a double zero character: “00” . Following that, extracting a domain part of an encoded domain name (long form) is as simple as: static string get_domain_part( string s) { int i = s.IndexOf( ".appsync-api" ); if (i > 0 ) { s = s.Substring( 0 , i); if (s.Length > 16 ) { return s.Substring( 16 ); } } return "" ; } Once the domain part is extracted, the decoded domain name can be obtained by using Method 1 or Method 2, as explained above: if (domain.StartsWith( "00" )) { decoded = FromBase32String(domain.Substring( 2 )); } else { decoded = decode_domain(domain); } Decrypting the Victims’ Domain Names To see the decoder in action, let’s select 2 lists: List #1 Bambenek Consulting has provided a list of observed hostnames for the DGA domain. List #2 The second list has surfaced in a Paste bin paste , allegedly sourced from Zetalytics / Zonecruncher . NOTE: This list is fairly ‘noisy’, as it has non-decodable domain names. By feeding both lists to our decoder, we can now obtain a list of decoded domains, that could have been generated by the victims of the Sunburst backdoor. DISCLAIMER: It is not clear if the provided lists contain valid domain names that indeed belong to the victims. It is quite possible that the encoded domain names were produced by third-party tools, sandboxes, or by researchers that investigated and analysed the backdoor. The decoded domain names are provided purely as a reverse engineering exercise. The resulting list was manually processed to eliminate noise, and to exclude duplicate entries. Following that, we have made an attempt to map the obtained domain names to the company names, using Google search. Reader’s discretion is advised as such mappings could be inaccurate. Decoded Domain Mapping (Could Be Inaccurate) hgvc.com Hilton Grand Vacations Amerisaf AMERISAFE, Inc. kcpl.com Kansas City Power and Light Company SFBALLET San Francisco Ballet scif.com State Compensation Insurance Fund LOGOSTEC Logostec Ventilação Industrial ARYZTA.C ARYZTA Food Solutions bmrn.com BioMarin Pharmaceutical Inc. 
AHCCCS.S Arizona Health Care Cost Containment System nnge.org Next Generation Global Education cree.com Cree, Inc (semiconductor products) calsb.org The State Bar of California rbe.sk.ca Regina Public Schools cisco.com Cisco Systems pcsco.com Professional Computer Systems barrie.ca City of Barrie ripta.com Rhode Island Public Transit Authority uncity.dk UN City (Building in Denmark) bisco.int Boambee Industrial Supplies (Bisco) haifa.edu University of Haifa smsnet.pl SMSNET, Poland fcmat.org Fiscal Crisis and Management Assistance Team wiley.com Wiley (publishing) ciena.com Ciena (networking systems) belkin.com Belkin spsd.sk.ca Saskatoon Public Schools pqcorp.com PQ Corporation ftfcu.corp First Tech Federal Credit Union bop.com.pk The Bank of Punjab nvidia.com NVidia insead.org INSEAD (non-profit, private university) usd373.org Newton Public Schools agloan.ads American AgCredit pageaz.gov City of Page jarvis.lab Erich Jarvis Lab ch2news.tv Channel 2 (Israeli TV channel) bgeltd.com Bradford / Hammacher Remote Support Software dsh.ca.gov California Department of State Hospitals dotcomm.org Douglas Omaha Technology Commission sc.pima.gov Arizona Superior Court in Pima County itps.uk.net IT Professional Services, UK moncton.loc City of Moncton acmedctr.ad Alameda Health System csci-va.com Computer Systems Center Incorporated keyano.local Keyano College uis.kent.edu Kent State University alm.brand.dk Sydbank Group (Banking, Denmark) ironform.com Ironform (metal fabrication) corp.ncr.com NCR Corporation ap.serco.com Serco Asia Pacific int.sap.corp SAP mmhs-fla.org Cleveland Clinic Martin Health nswhealth.net NSW Health mixonhill.com Mixon Hill (intelligent transportation systems) bcofsa.com.ar Banco de Formosa ci.dublin.ca. Dublin, City in California siskiyous.edu College of the Siskiyous weioffice.com Walton Family Foundation ecobank.group Ecobank Group (Africa) corp.sana.com Sana Biotechnology med.ds.osd.mi US Gov Information System wz.hasbro.com Hasbro (Toy company) its.iastate.ed Iowa State University amr.corp.intel Intel cds.capilanou. Capilano University e-idsolutions. IDSolutions (video conferencing) helixwater.org Helix Water District detmir-group.r Detsky Mir (Russian children’s retailer) int.lukoil-int LUKOIL (Oil and gas company, Russia) ad.azarthritis Arizona Arthritis and Rheumatology Associates net.vestfor.dk Vestforbrænding allegronet.co. Allegronet (Cloud based services, Israel) us.deloitte.co Deloitte central.pima.g Pima County Government city.kingston. City of Kingston staff.technion Technion – Israel Institute of Technology airquality.org Sacramento Metropolitan Air Quality Management District phabahamas.org Public Hospitals Authority, Caribbean parametrix.com Parametrix (Engineering) ad.checkpoint. Check Point corp.riotinto. Rio Tinto (Mining company, Australia) intra.rakuten. Rakuten us.rwbaird.com Robert W. Baird & Co. (Financial services) ville.terrebonn Ville de Terrebonne woodruff-sawyer Woodruff-Sawyer & Co., Inc. fisherbartoninc Fisher Barton Group banccentral.com BancCentral Financial Services Corp. taylorfarms.com Taylor Fresh Foods neophotonics.co NeoPhotonics (optoelectronic devices) gloucesterva.ne Gloucester County magnoliaisd.loc Magnolia Independent School District zippertubing.co Zippertubing (Manufacturing) milledgeville.l Milledgeville (City in Georgia) digitalreachinc Digital Reach, Inc. 
deniz.denizbank DenizBank thoughtspot.int ThoughtSpot (Business intelligence) lufkintexas.net Lufkin (City in Texas) digitalsense.co Digital Sense (Cloud Services) wrbaustralia.ad W. R. Berkley Insurance Australia christieclinic. Christie Clinic Telehealth signaturebank.l Signature Bank dufferincounty. Dufferin County mountsinai.hosp Mount Sinai Hospital securview.local Securview Victory (Video Interface technology) weber-kunststof Weber Kunststoftechniek parentpay.local ParentPay (Cashless Payments) europapier.inte Europapier International AG molsoncoors.com Molson Coors Beverage Company fujitsugeneral. Fujitsu General cityofsacramento City of Sacramento ninewellshospita Ninewells Hospital fortsmithlibrary Fort Smith Public Library dokkenengineerin Dokken Engineering vantagedatacente Vantage Data Centers friendshipstateb Friendship State Bank clinicasierravis Clinica Sierra Vista ftsillapachecasi Apache Casino Hotel voceracommunicat Vocera (clinical communications) mutualofomahaban Mutual of Omaha Bank
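For readers who want to reproduce the first decoding step themselves, below is a cleaned-up, self-contained C# version of the two snippets shown earlier (the Method 1 substitution decoder plus the domain-part extractor), applied to the sample DGA domain from the top of this post. It is a reverse-engineering sketch based on the code as published here, not the malware's original source; encoded parts that start with "00" would go through the Method 2 base32 decoder instead.

using System;

class SunburstDgaSample
{
    // Method 1: substitution-table decoder, as described above.
    static string DecodeDomain(string s)
    {
        const string table = "rq3gsalt6u1iyfzop572d49bnx8cvmkewhj";
        string result = "";
        for (int i = 0; i < s.Length; i++)
        {
            if (s[i] != '0')
            {
                // Step four positions back (left) in the substitution table.
                result += table[(table.IndexOf(s[i]) + table.Length - 4) % table.Length];
            }
            else
            {
                // '0' escapes one of the characters "0_-.".
                if (i < s.Length - 1)
                {
                    if (table.IndexOf(s[i + 1]) >= 0)
                        result += "0_-."[table.IndexOf(s[i + 1]) % 4];
                    else
                        break;
                }
                i++;
            }
        }
        return result;
    }

    // Drop everything from ".appsync-api" and skip the 16-character prefix.
    static string GetDomainPart(string s)
    {
        int i = s.IndexOf(".appsync-api");
        if (i > 0)
        {
            s = s.Substring(0, i);
            if (s.Length > 16)
                return s.Substring(16);
        }
        return "";
    }

    static void Main()
    {
        string dga = "fivu4vjamve5vfrtn2huov.appsync-api.us-west-2.avsvmcloud.com";
        string encoded = GetDomainPart(dga);                        // "n2huov"
        Console.WriteLine($"{encoded} -> {DecodeDomain(encoded)}"); // n2huov -> domain
    }
}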

  • AlgoSec | Navigating DORA: How to ensure your network security and compliance strategy is resilient

    The Digital Operational Resilience Act (DORA) is set to transform how financial institutions across the European Union manage and... Network Security Navigating DORA: How to ensure your network security and compliance strategy is resilient Joseph Hallman 2 min read Joseph Hallman Short bio about author here Lorem ipsum dolor sit amet consectetur. Vitae donec tincidunt elementum quam laoreet duis sit enim. Duis mattis velit sit leo diam. Tags Share this article 12/19/24 Published The Digital Operational Resilience Act (DORA) is set to transform how financial institutions across the European Union manage and mitigate ICT (Information and Communications Technology) risks. With the official compliance deadline in January 2025, organizations are under pressure to ensure their systems can withstand and recover from disruptions—an urgent priority in an increasingly digitized financial ecosystem. DORA introduces strict requirements for ICT risk management, incident reporting, and third-party oversight, aiming to bolster the operational resilience of financial firms. But what are the key deadlines and penalties, and how can organizations ensure they stay compliant? Key Timelines and Penalties Under DORA Compliance deadline: January 2025 – Financial firms and third-party ICT providers must have operational resilience frameworks in place by this deadline. Regular testing requirements – Companies will need to conduct resilience testing regularly, with critical institutions potentially facing enhanced testing requirements. Penalties for non-compliance – Fines for failing to comply with DORA’s mandates can be substantial. Non-compliance could lead to penalties of up to 2% of annual turnover, and repeated breaches could result in even higher sanctions or operational restrictions. Additionally, firms face reputational risks if they fail to meet incident reporting and recovery expectations. Long term effect- DORA increases senior management's responsibility for ICT risk oversight, driving stronger internal controls and accountability. Executives may face liability for failing to manage risks, reinforcing the focus on compliance and governance. These regulations create a dynamic challenge, as organizations not only need to meet the initial requirements by 2025, but also adapt to the changes as the standards continue to evolve over time. Firewall rule recertification The Digital Operational Resilience Act (DORA) emphasizes the need for financial institutions in the EU to ensure operational resilience in the face of technological risks. While DORA does not explicitly mandate firewall rule recertification , several of its broader requirements apply to the management and oversight of firewall rules and the overall security infrastructure, which would include periodic firewall rule recertification as part of maintaining a robust security posture. A few of the key areas relevant to firewall rules and the necessity for frequent recertification are highlighted below. ICT Risk Management Framework- Article 6 requires financial institutions to implement a comprehensive ICT (Information and Communication Technology) risk management framework. This includes identifying, managing, and regularly testing security policies, which would encompass firewall rules as they are a critical part of network security. Regular rule recertification helps to ensure that firewall configurations are up-to-date and aligned with security policies. 
Detection Solutions- Article 10 mandates that financial entities must implement effective detection solutions to identify anomalies, incidents, and cyberattacks. These solutions are required to have multiple layers of control, including defined alert thresholds that trigger incident response processes. Regular testing of these detection mechanisms is also essential to ensure their effectiveness, underscoring the need for ongoing evaluations of firewall configurations and rules ICT Business Continuity Policy- Article 11 emphasizes the importance of establishing a comprehensive ICT business continuity policy. This policy should include strategic approaches to risk management, particularly focusing on the security of ICT third-party providers. The requirement for regular testing of ICT business continuity plans, as stipulated in Article 11(6), indirectly highlights the need for frequent recertification of firewall rules. Organizations must document and test their plans at least once a year, ensuring that security measures, including firewalls, are up-to-date and effective against current threats. Backup, Restoration, and Recovery- Article 12 outlines the procedures for backup, restoration, and recovery, necessitating that these processes are tested periodically. Entities must ensure that their backup and recovery systems are segregated and effective, further supporting the requirement for regular recertification of security measures like firewalls to protect backup systems against cyber threats. Crisis Communication Plans- Article 14 details the obligations regarding communication during incidents, emphasizing that organizations must have plans in place to manage and communicate risks related to the security of their networks. This includes ensuring that firewall configurations are current and aligned with incident response protocols, necessitating regular reviews and recertifications to adapt to new threats and changes in the operational environment. In summary, firewall rule recertification supports the broader DORA requirements for maintaining ICT security, managing risks, and ensuring network resilience through regular oversight and updates of critical security configurations. How AlgoSec helps meet regulatory requirements AlgoSec provides the tools, intelligence, and automation necessary to help organizations comply with DORA and other regulatory requirements while streamlining ongoing risk management and security operations. Here’s how: 1. Comprehensive network visibility AlgoSec offers full visibility into your network, including detailed insights into the application connectivity that each firewall rule supports. This application-centric approach allows you to easily identify security gaps or vulnerabilities that could lead to non-compliance. With AlgoSec, you can maintain continuous alignment with regulatory requirements like DORA by ensuring every firewall rule is tied to an active, relevant application. This helps ensure compliance with DORA's ICT risk management framework, including continuous identification and management of security policies (Article 6). Benefit : With this deep visibility, you remain audit-ready with minimal effort, eliminating manual tracking of firewall rules and reducing the risk of errors. 2. Automated risk and compliance reports AlgoSec automates compliance checks across multiple regulations, continuously analyzing your security policies for misconfigurations or risks that may violate regulatory requirements. 
This includes automated recertification of firewall rules, ensuring your organization stays compliant with frameworks like DORA's ICT Risk Management (Article 6). Benefit : AlgoSec saves your team significant time and reduces the likelihood of costly mistakes, while automatically generating audit-ready reports that simplify your compliance efforts. 3. Incident reporting and response DORA mandates rapid detection, reporting, and recovery during incidents. AlgoSec’s intelligent platform enhances incident detection and response by automatically identifying firewall rules that may be outdated or insecure and aligning security policies with incident response protocols. This helps ensure compliance with DORA's Detection Solutions (Article 10) and Crisis Communication Plans (Article 14). Benefit : By accelerating response times and ensuring up-to-date firewall configurations, AlgoSec helps you meet reporting deadlines and mitigate breaches before they escalate. 4. Firewall policy management AlgoSec simplifies firewall management by taking an application-centric approach to recertifying firewall rules. Instead of manually reviewing outdated rules, AlgoSec ties each firewall rule to the specific application it serves, allowing for quick identification of redundant or risky rules. This ensures compliance with DORA’s requirement for regular rule recertification in both ICT risk management and continuity planning (Articles 6 and 11). Benefit : Continuous optimization of security policies ensures that only necessary and secure rules are in place, reducing network risk and maintaining compliance. 5. Managing third-party risk DORA emphasizes the need to oversee third-party ICT providers as part of a broader risk management framework. AlgoSec integrates seamlessly with other security tools, providing unified visibility into third-party risks across your hybrid environment. With its automated recertification processes, AlgoSec ensures that security policies governing third-party access are regularly reviewed and aligned with business needs. Benefit : This proactive management of third-party risks helps prevent potential breaches and ensures compliance with DORA’s ICT Business Continuity requirements (Article 11). 6. Backup, Restoration, and Recovery AlgoSec helps secure backup and recovery systems by recertifying firewall rules that protect critical assets and applications. DORA’s Backup, Restoration, and Recovery (Article 12) requirements emphasize that security controls must be periodically tested. AlgoSec automates these tests, ensuring your firewall rules support secure, segregated backup systems. Benefit : Automated recertification prevents outdated or insecure rules from jeopardizing your backup processes, ensuring you meet regulatory demands. Stay ahead of compliance with AlgoSec Meeting evolving regulations like DORA requires more than a one-time adjustment—it demands a dynamic, proactive approach to security and compliance. AlgoSec’s application-centric platform is designed to evolve with your business, continuously aligning firewall rules with active applications and automating the process of policy recertification and compliance reporting. By automating key processes such as risk assessments, firewall rule management, and policy recertification, AlgoSec ensures that your organization is always prepared for audits. 
Continuous monitoring and real-time alerts keep your security posture compliant with DORA and other regulations, while automated reports simplify audit preparation, minimizing the time spent on compliance and reducing human error. With AlgoSec, businesses not only meet compliance regulations but also enhance operational efficiency, improve security, and maintain alignment with global standards. As DORA and other regulatory frameworks evolve, AlgoSec helps you ensure that compliance is an integral, seamless part of your operations. Read our latest whitepaper and watch a short video to learn more about our application-centric approach to firewall rule recertification.
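If you want to reason about what such a recertification sweep involves under the hood, here is a toy C# sketch. The rule model, the 180-day review window, and the sample rules are assumptions made for illustration; they are not DORA requirements or AlgoSec internals. The core idea is simply to flag any rule that is overdue for review or no longer tied to an owning application.

using System;
using System.Collections.Generic;

// Requires C# 9+ (records). Illustrative rule model only.
record FirewallRule(string Id, string Application, DateTime LastRecertified);

class RecertificationSweep
{
    static void Main()
    {
        TimeSpan reviewWindow = TimeSpan.FromDays(180);   // assumed policy, not a DORA figure
        var rules = new List<FirewallRule>
        {
            new("FW-1021", "payments-api", DateTime.UtcNow.AddDays(-45)),
            new("FW-0007", "", DateTime.UtcNow.AddDays(-400)),
        };

        foreach (var rule in rules)
        {
            bool overdue = DateTime.UtcNow - rule.LastRecertified > reviewWindow;
            bool orphaned = string.IsNullOrEmpty(rule.Application);
            if (overdue || orphaned)
                Console.WriteLine($"{rule.Id}: flag for recertification" +
                                  (orphaned ? " (no owning application)" : ""));
        }
    }
}

The point of automating this application-centric check is that the flag is raised continuously, rather than once a year during audit preparation.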

  • AlgoSec | How to secure your LAN (Local Area Network)

    How to Secure Your Local Area Network In my last blog series we reviewed ways to protect the perimeter of your network and then we took... Firewall Change Management How to secure your LAN (Local Area Network) Matthew Pascucci 2 min read Matthew Pascucci Short bio about author here Lorem ipsum dolor sit amet consectetur. Vitae donec tincidunt elementum quam laoreet duis sit enim. Duis mattis velit sit leo diam. Tags Share this article 11/12/13 Published How to Secure Your Local Area Network In my last blog series we reviewed ways to protect the perimeter of your network and then we took it one layer deeper and discussed securing the DMZ . Now I’d like to examine the ways you can secure the Local Area Network, aka LAN, also known as the soft underbelly of the beast. Okay, I made that last part up, but that’s what it should be called. The LAN has become the focus of attack over the past couple years, due to companies tightening up their perimeter and DMZ. It’s very rare you’ll you see an attacker come right at you these days, when they can trick an unwitting user into clicking a weaponized link about “Cat Videos” (Seriously, who doesn’t like cat videos?!). With this being said, let’s talk about a few ways we can protect our soft underbelly and secure our network. For the first part of this blog series, let’s examine how to secure the LAN at the network layer. LAN and the Network Layer From the network layer, there are constant things that can be adjusted and used to tighten the posture of your LAN. The network is the highway where the data traverses. We need protection on the interstate just as we need protection on our network. Protecting how users are connecting to the Internet and other systems is an important topic. We could create an entire series of blogs on just this topic, but let’s try to condense it a little here. Verify that you’re network is segmented – it better be if you read my last article on the DMZ – but we need to make sure nothing from the DMZ is relying on internal services. This is a rule. Take them out now and thank us later. If this is happening, you are just asking for some major compliance and security issues to crop up. Continuing with segmentation, make sure there’s a guest network that vendors can attach to if needed. I hate when I go to a client/vendor’s site and they ask me to plug into their network. What if I was evil? What if I had malware on my laptop that’s now ripping throughout your network because I was dumb enough to click a link to a “Cat Video”? If people aren’t part of your company, they shouldn’t be connecting to your internal LAN plain and simple. Make sure you have egress filtering on your firewall so you aren’t giving complete access for users to pillage the Internet from your corporate workstation. By default users should only have access to port 80/443, anything else should be an edge case (in most environments). If users need FTP access there should be a rule and you’ll have to allow them outbound after authorization, but they shouldn’t be allowed to rush the Internet on every port. This stops malware, botnets, etc. that are communicating on random ports. It doesn’t protect everything since you can tunnel anything out of these ports, but it’s a layer! Set up some type of switch security that’s going to disable a port if there are different or multiple MAC addresses coming from a single port. This stops hubs from being installed in your network and people using multiple workstations. 
Also, attempt to set up NAC (Network Access Control) to get a much better understanding of what's connecting to your network while giving you complete control of those ports and access to resources from the LAN. In our next LAN security-focused blog, we'll move from the network up the stack to the application layer.
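To tie the egress-filtering advice above to something concrete, here is a toy C# sketch of the kind of review you can run against an exported outbound rule set. The EgressRule model and the sample rules are hypothetical; the point is simply that anything outbound to "Any" beyond ports 80/443 should be an explicit, justified exception rather than the default.

using System;
using System.Collections.Generic;
using System.Linq;

// Requires C# 9+ (records). Hypothetical export format for outbound rules.
record EgressRule(string Source, string Destination, int Port);

class EgressAudit
{
    static void Main()
    {
        var allowedDefaultPorts = new HashSet<int> { 80, 443 };
        var rules = new List<EgressRule>
        {
            new("user-vlan", "Any", 443),
            new("user-vlan", "Any", 21),    // FTP straight to the Internet
            new("user-vlan", "Any", 6667),  // IRC, a classic botnet C2 port
        };

        // Flag broad outbound access on anything other than web ports.
        foreach (var rule in rules.Where(r => r.Destination == "Any"
                                           && !allowedDefaultPorts.Contains(r.Port)))
        {
            Console.WriteLine($"Review: {rule.Source} -> {rule.Destination}:{rule.Port}");
        }
    }
}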

  • AlgoSec | Emerging Tech Trends – 2023 Perspective

    1. Application-centric security Many of today’s security discussions focus on compromised credentials, misconfigurations, and malicious... Cloud Security Emerging Tech Trends – 2023 Perspective Ava Chawla 2 min read Ava Chawla Short bio about author here Lorem ipsum dolor sit amet consectetur. Vitae donec tincidunt elementum quam laoreet duis sit enim. Duis mattis velit sit leo diam. Tags Share this article 11/24/22 Published 1. Application-centric security Many of today’s security discussions focus on compromised credentials, misconfigurations, and malicious or unintentional misuse of resources. Disruptive technologies from Cloud to smart devices and connected networks mean the attack surface is growing. Security conversations are increasingly expanding to include business-critical applications and their dependencies. Organizations are beginning to recognize that a failure to take an application-centric approach to security increases the potential for unidentified, unmitigated security gaps and vulnerabilities. 2. Portable, agile, API & automation driven enterprise architectures Successful business innovation requires the ability to efficiently deploy new applications and make changes without impacting downstream elements. This means fast deployments, optimized use of IT resources, and application segmentation with modular components that can seamlessly communicate. Container security is here to stay Containerization is a popular solution that reduces costs because containers are lightweight and contain no OS. Let's compare this to VMs, like containers, VMs allow the creation of isolated workspaces on a single machine. The OS is part of the VM and will communicate with the host through a hypervisor. With containers, the orchestration tool manages all the communication between the host OS and each container. Aside from the portability benefit of containers, they are also easily managed via APIs, which is ideal for modular, automation-driven enterprise architectures. The growth of containerized applications and automation will continue. Lift and Shift left approach will thrive Many organizations have started digital transformation journeys that include lift and shift migrations to the Cloud. A lift and shift migration enables organizations to move quickly, however, the full benefits of cloud are not realized. Optimized cloud architectures have cloud automation mechanisms deployed such as serverless (i.e – AWS Lamda), auto-scaling, and infrastructure as code (IaC) (i.e – AWS Cloud Formation) services. Enterprises with lift and shift deployments will increasingly prioritize a re-platform and/or modernization of their cloud architectures with a focus on automation. Terraform for IaC is the next step forward With hybrid cloud estates becoming increasingly common, Terraform-based IaC templates will increasingly become the framework of choice for managing and provisioning IT resources through machine-readable definition files. This is because Terraform, is cloud-agnostic, supporting all three major cloud service providers and can be used for on-premises infrastructure enabling a homogenous IaC solution across multi-cloud and on-premises. 3. Smart Connectivity & Predictive Technologies The growth of connected devices and AI/ML has led to a trend toward predictive technologies. Predictive technologies go beyond isolated data analysis to enable intelligent decisions. At the heart of this are smart, connected devices working across networks whose combined data 1. 
enables intelligent data analytics and 2. provides the means to build the robust labeled data sets required for accurate ML (Machine Learning) algorithms. 4. Accelerated adoption of agentless, multi-cloud security solutions Over 98% of organizations have elements of cloud across their networks. These organizations need robust cloud security but have yet to understand what that means. Most organizations are early in implementing cloud security guardrails and are challenged by the following: Misunderstanding the CSP (Cloud Service Provider) shared responsibility model Lack of visibility across multi-cloud networks Missed cloud misconfigurations Takeaways Cloud security posture management platforms are the current go-to solution for attaining broad compliance and configuration visibility. Cloud-Native Application Protection Platforms (CNAPP) are in their infancy. CNAPP applies an integrated approach with workload protection and other elements. CNAPP will emerge as the next iteration of must-have cloud security platforms.

  • AlgoSec | Sunburst Backdoor, Part III: DGA & Security Software

    In the previous parts of our blog ( part I and part II ), we have described the most important parts of the Sunburst backdoor... Cloud Security Sunburst Backdoor, Part III: DGA & Security Software Rony Moshkovich 2 min read Rony Moshkovich Short bio about author here Lorem ipsum dolor sit amet consectetur. Vitae donec tincidunt elementum quam laoreet duis sit enim. Duis mattis velit sit leo diam. Tags Share this article 12/22/20 Published In the previous parts of our blog ( part I and part II ), we have described the most important parts of the Sunburst backdoor functionality and its Domain Generation Algorithm (DGA). This time, let’s have a deeper look into the passive DNS requests reported by Open-Source Context and Zetalytics . The valid DNS requests generated by the malware fall into 2 groups: DNS requests that encode a local domain name DNS requests that encode data The first type of DNS requests allows splitting long domain names into separate requests. These requests are generated by the malware’s functions GetPreviousString() and GetCurrentString() . In general, the format of a DNS request that encodes a domain name may look like: USER_ID.NUM.COMPUTER_DOMAIN[.]appsync-api.us-west-2[.]avsvmcloud[.]com where: USER_ID is an 8-byte user ID that uniquely identifies a compromised host, encoded as a 15-character string NUM is a number of a domain name – either 0 or 1, encoded as a character COMPUTER_DOMAIN is an encoded local computer domain Let’s try decoding the following 3 DNS requests: olc62cocacn7u2q22v02eu.appsync-api.us-west-2.avsvmcloud.com r1qshoj05ji05ac6eoip02jovt6i2v0c.appsync-api.us-west-2.avsvmcloud.com lt5ai41qh5d53qoti3mkmc0.appsync-api.us-west-2.avsvmcloud.com String 1 Let’s start from the 1st string in the list: olc62cocacn7u2q22v02eu.appsync-api.us-west-2.avsvmcloud.com. In this string, the first 15-character string is an encoded USER_ID : “olc62cocacn7u2q” . Once it is base-64 decoded, as explained in the previous post, it becomes a 9-byte byte array: 86 7f 2f be f9 fb a3 ae c4 The first byte in this byte array is a XOR key: 0x86 . Once applied to the 8 bytes that follow it, we get the 8-byte user ID – let’s take a note and write it down, we will need it later: f9 a9 38 7f 7d 25 28 42 Next, let’s take the NUM part of the encoded domain: it’s a character “2” located at the position #15 (starting from 0) of the encrypted domain. In order to decode the NUM number, we have to take the first character of the encrypted domain, take the reminder of its division by 36 , and subtract the NUM ‘s position in the string “0123456789abcdefghijklmnopqrstuvwxyz” : num = domain[0] % 36 – “0123456789abcdefghijklmnopqrstuvwxyz”.IndexOf(domain.Substring(15, 1)); The result is 1 . That means the decrypted domain will be the 2nd part of a full domain name. The first part must have its NUM decoded as 0. The COMPUTER_DOMAIN part of the encrypted domain is “2v02eu” . Once decoded, using the previously explained method, the decoded computer domain name becomes “on.ca” . String 2 Let’s decode the second passive DNS request from our list: r1qshoj05ji05ac6eoip02jovt6i2v0c.appsync-api.us-west-2.avsvmcloud.com Just as before, the decoded 8-byte user ID becomes: f9 a9 38 7f 7d 25 28 42 The NUM part of the encoded domain, located at the position #15 (starting from 0), is a character “6” . 
Let’s decode it, by taking the first character ( “r” = 114 ), take the reminder of its division by 36 ( 114 % 36 = 6 ), and subtracting the position of the character “6” in the “0123456789abcdefghijklmnopqrstuvwxyz” , which is 6 . The result is 0 . That means the decrypted domain will be the 1st part of the full domain name. The COMPUTER_DOMAIN part of the encrypted domain is “eoip02jovt6i2v0c” . Once decoded, it becomes “city.kingston.” Next, we need to match 2 decrypted domains by the user ID, which is f9 a9 38 7f 7d 25 28 42 in both cases, and concatenate the first and the second parts of the domain. The result will be “city.kingston.on.ca” . String 3 Here comes the most interesting part. Lets try to decrypt the string #3 from our list of passive DNS requests: lt5ai41qh5d53qoti3mkmc0.appsync-api.us-west-2.avsvmcloud.com The decoded user ID is not relevant, as the decoded NUM part is a number -29 . It’s neither 0 nor 1 , so what kind of domain name that is? If we ignore the NUM part and decode the domain name, using the old method, we will get “thx8xb” , which does not look like a valid domain name. Cases like that are not the noise, and are not some artificially encrypted artifacts that showed up among the DNS requests. This is a different type of DNS requests. Instead of encoding local domain names, these types of requests contain data. They are generated by the malware’s function GetNextStringEx() . The encryption method is different as well. Let’s decrypt this request. First, we can decode the encrypted domain, using the same base-64 method, as before . The string will be decoded into 14 bytes: 7c a5 4d 64 9b 21 c1 74 a6 59 e4 5c 7c 7f Let’s decode these bytes, starting from the 2nd byte, and using the first byte as a XOR key. We will get: 7c d9 31 18 e7 5d bd 08 da 25 98 20 00 03 In this array, the bytes marked in yellow are an 8-byte User ID, encoded with a XOR key that is selected from 2 bytes marked in red. Let’s decode User ID: for ( int i = 0 ; i < 8 ; i++) { bytes[i + 1 ] ^= bytes[ 11 - i % 2 ]; } The decoded byte array becomes: 7c f9 a9 38 7f 7d 25 28 42 25 98 20 00 03 The User ID part in marked in yellow. Does it look familiar? Indeed, it’s the same User ID we’ve seen before, when we decoded “city.kingston.on.ca” . The next 3 bytes marked in red are: 25 98 20 . 2 0x59820 The first number 2 stands for the size of data that follows – this data is 00 03 (selected in green). The number 0x59820 , or 366,624 in decimal, is a timestamp. It’s a number of 4-second periods of time since 1 January 2010. To obtain the real time stamp, we need to multiple it by 15 to get minutes, then add those minutes to 1 January 2010: var date = ( new DateTime( 2010 , 1 , 1 , 0 , 0 , 0 , DateTimeKind.Utc)).AddMinutes(timestamp * 15 ); For the number 0x59820 , the time stamp becomes 16 July 2020 12:00:00 AM – that’s the day when the DNS request was made. The remaining 2 bytes, 00 03 , encrypt the state of 8 security products, to indicate whether each one of them is running or whether it is stopped. The 8 security products are: Windows Live OneCare / Windows Defender Windows Defender Advanced Threat Protection Microsoft Defender for Identity Carbon Black CrowdStrike FireEye ESET F-Secure 2 states for 8 products require 2 * 8 = 16 bits = 2 bytes. The 2 bytes 00 03 in binary form are: 00 00 00 00 00 00 00 11 Here, the least-significant bits 11 identify that the first product in the list, Windows Live OneCare / Windows Defender, is reported as ‘running’ ( 1 ) and as ‘stopped’ ( 1 ). 
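Before moving on, here is the user-ID and NUM decoding from String 1 above in runnable form, as a short self-contained C# sketch. It starts from the nine base32-decoded bytes shown earlier for "olc62cocacn7u2q" (the custom base32 decoder itself is listed in Part II); the class and variable names are ours.

using System;
using System.Linq;

class SunburstUserIdDemo
{
    static void Main()
    {
        // Base32-decoded first 15 characters of "olc62cocacn7u2q22v02eu..."
        byte[] decoded = { 0x86, 0x7f, 0x2f, 0xbe, 0xf9, 0xfb, 0xa3, 0xae, 0xc4 };

        // The first byte is a XOR key applied to the remaining 8 bytes.
        byte key = decoded[0];
        byte[] userId = decoded.Skip(1).Select(b => (byte)(b ^ key)).ToArray();
        Console.WriteLine(BitConverter.ToString(userId));    // F9-A9-38-7F-7D-25-28-42

        // The NUM field: which half of the split domain name this request carries.
        string domain = "olc62cocacn7u2q22v02eu";
        const string alphabet = "0123456789abcdefghijklmnopqrstuvwxyz";
        int num = domain[0] % 36 - alphabet.IndexOf(domain.Substring(15, 1));
        Console.WriteLine(num);                               // 1 -> second half of the domain
    }
}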
Now we know that apart from the local domain, the trojanised SolarWinds software running on the same compromised host on “city.kingston.on.ca” domain has also reported the status of the Windows Defender software. What Does it Mean? As explained in the first part of our description, the malware is capable of stopping the services of security products, be manipulating registry service keys under Administrator account. It’s likely that the attackers are using DNS queries as a C2 channel to first understand what security products are present. Next, the same channel is used to instruct the malware to stop/deactivate these services, before the 2nd stage payload, TearDrop Backdoor, is deployed. Armed with this knowledge, let’s decode other passive DNS requests, printing the cases when the compromised host reports a running security software. NOTES: As a private case, if the data size field is 0 or 1 , the timestamp field is not followed with any data. Such type of DNS request is generated by the malware’s function GetNextString() . It is called ‘a ping’ in the listing below. If the first part of the domain name is missing, the recovered domain name is pre-pended with ‘*’ . The malware takes the time difference in minutes, then divides it by 30 and then converts the result from double type to int type; as a result of such conversion, the time stamps are truncated to the earliest half hour. 2D82B037C060515C SFBALLET Data: Windows Live OneCare / Windows Defender [running] 11/07/2020 12:00:00 AM Pings: 12/07/2020 12:30:00 AM 70DEE5C062CFEE53 ccscurriculum.c Data: ESET [running] 17/04/2020 4:00:00 PM Pings: 20/04/2020 5:00:00 PM AB902A323B541775 mountsinai.hospital Pings: 4/07/2020 12:30:00 AM 9ACC3A3067DC7FD5 *ripta.com Data: ESET [running] 12/09/2020 6:30:00 AM Pings: 13/09/2020 7:30:00 AM 14/09/2020 9:00:00 AM CB34C4EBCB12AF88 DPCITY.I7a Data: ESET [running] 26/06/2020 5:00:00 PM Pings: 27/06/2020 6:30:00 PM 28/06/2020 7:30:00 PM 29/06/2020 8:30:00 PM 29/06/2020 8:30:00 PM E5FAFE265E86088E *scroot.com Data: CrowdStrike [running] 25/07/2020 2:00:00 PM Pings: 26/07/2020 2:30:00 PM 26/07/2020 2:30:00 PM 27/07/2020 3:00:00 PM 27/07/2020 3:00:00 PM 426030B2ED480DED *kcpl.com Data: Windows Live OneCare / Windows Defender [running] 8/07/2020 12:00:00 AM Carbon Black [running] 8/07/2020 12:00:00 AM Full list of decoded pDNS requests can be found here . An example of a working implementation is available at this repo. Schedule a demo Related Articles Navigating Compliance in the Cloud AlgoSec Cloud Mar 19, 2023 · 2 min read 5 Multi-Cloud Environments Cloud Security Mar 19, 2023 · 2 min read Convergence didn’t fail, compliance did. Mar 19, 2023 · 2 min read Speak to one of our experts Speak to one of our experts Work email* First name* Last name* Company* country* Select country... Short answer* By submitting this form, I accept AlgoSec's privacy policy Schedule a call
