Palo Alto Networks CTO Talks Securing ‘Code to Cloud’


Palo Alto Networks held its annual Code to Cloud Cybersecurity Summit Thursday, focusing on cloud, DevOps and security. Experts discussed trends, opportunities and challenges with coding and the cloud.

Recently, Palo Alto Networks’ Unit 42 issued a cloud threat report finding that the average security team takes six days to resolve a security alert. Its State of Cloud-Native Security Survey revealed 90% of organizations cannot detect, contain and resolve cyberthreats within an hour. Unit 42 also recently published new API threat research, which found that 14.9% of attacks in late 2022 targeted cloud-hosted deployments.

Among the speakers at the event was Ory Segal, chief technology officer at Palo Alto Networks Prisma Cloud, who joined a panel on how cloud security can be aligned with the aggressive development cycle under which developers work.

Prior to the event, he spoke to TechRepublic about defending the software development process and cloud-native application protection platforms (CNAPPs). (Figure A)

Figure A

Ory Segal, chief technology officer at Palo Alto Networks.


CNAPP as a platform

TR: What constitutes a CNAPP (cloud-native application protection platform) now? What falls under that banner, and how do you untangle the different approaches to it when it comes to DevOps security, when it comes to … [reducing] vulnerabilities in applications lifted to the cloud or written for cloud environments?

Segal: Different companies get to the point where they can be considered CNAPPs based on their journey. Some started from container security, like Twistlock (acquired by Palo Alto Networks) or Aqua Security, for example. Some arrived … from cloud security posture management. So it really depends on who you ask. But I like Gartner’s point of view: The emphasis is on holistic cloud-native security, so it’s not about “cloud security,” “workload security” or “code security.” It’s about providing a platform that allows you to apply the right types of security controls throughout the development lifecycle, from the moment you start coding to the point in time when you are deployed and monitoring the workloads. And under that fall many, many different categories of products, not all of which would be directly thought of as a part of CNAPP.

TR: What are some good examples of CNAPP within the development cascade or cycle? Is CNAPP a blanket term for any DevSecOps?

Segal: So obviously, scanning infrastructure-as-code templates as you develop software to make sure that you are not embedding any kind of risks or misconfigurations on the left; doing software composition analysis to avoid or prevent the risk [of bad code or vulnerabilities] from getting deployed. There is also static analysis, something that today we are exploring but are not yet offering; I think SAST (static application security testing), DAST (dynamic application security testing) and IAST (interactive application security testing), application security testing in general, are all parts of that.
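
To make the shift-left point concrete, here is a minimal sketch of the kind of infrastructure-as-code check Segal describes, written against a simplified, Terraform-plan-like structure invented for illustration. Real scanners apply hundreds of such policies; this is not Prisma Cloud’s actual implementation.

```python
# Minimal sketch of an infrastructure-as-code check, assuming a simplified,
# Terraform-plan-like dict. Illustrative only: it shows the "scan before
# deploy" idea, not any vendor's real policy engine.

def find_open_ssh_ingress(plan: dict) -> list[str]:
    """Return addresses of security group rules that expose SSH to the world."""
    findings = []
    for resource in plan.get("resources", []):
        if resource.get("type") != "aws_security_group_rule":
            continue
        values = resource.get("values", {})
        if (
            values.get("type") == "ingress"
            and values.get("from_port", 0) <= 22 <= values.get("to_port", 0)
            and "0.0.0.0/0" in values.get("cidr_blocks", [])
        ):
            findings.append(resource.get("address", "<unknown>"))
    return findings

if __name__ == "__main__":
    example_plan = {
        "resources": [
            {
                "address": "aws_security_group_rule.ssh_in",
                "type": "aws_security_group_rule",
                "values": {
                    "type": "ingress",
                    "from_port": 22,
                    "to_port": 22,
                    "cidr_blocks": ["0.0.0.0/0"],
                },
            }
        ]
    }
    print(find_open_ssh_ingress(example_plan))  # ['aws_security_group_rule.ssh_in']
```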

SEE: Sticking to the traditional playbook is a mistake for cloud security (TechRepublic)

TR: And further to the right more toward production?

Segal: And then as you build the product, scanning and securing artifacts, accompanying the process of deployment to the cloud, monitoring and protecting the workloads as they run. And that includes runtime protection, WAF (web application firewall), [application programming interface] security, and things that are actually more related to security operations centers monitoring the workloads.

Securing the software development pipeline

TR: With all of these applications that fall under CNAPP, is there an area that is not sufficiently addressed by most of the solutions available?

Segal: Yes, on top of that, and something that we are currently exploring as a result of our acquisition of Cider Security — and something that most disregard or haven’t yet thought about — is the security of the CI/CD (continuous integration/continuous delivery) pipeline itself, which in modern development environments constitutes a very sophisticated and complex application in its own right.

TR: But isn’t the CI/CD pipeline just the beads in the necklace, as it were? What, in concrete terms, is the distinction between the CI/CD pipeline and the step-wise DevOps code-to-cloud processes?

Segal: It’s not the application that you are building for your customers, but rather the application that you are using to build your own software; third-party libraries that you’re bringing in, for example, or, if we’re using Jenkins or CircleCI to build code and generate artifacts, are we securing those points as well? Because I can write the most secure cloud-native application and deploy it, but if somebody can somehow tamper with the pipeline itself — with my build and deployment process — all of the security that I’m embedding in my own code counts for nothing.

TR: Because somebody can just poison the pipeline.

Segal: They can embed malware, as we saw happen to SolarWinds in 2020 and have seen numerous times lately. And so this is something that we’re also now considering a part of CNAPP, even though you won’t often see it described that way.
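
As one illustration of why pipeline integrity matters, the following sketch shows a deployment step refusing to ship an artifact whose digest no longer matches what the build recorded. The file names and the build-manifest format are hypothetical assumptions, not the API of any particular CI system.

```python
# Illustrative sketch only: one way a deployment step can detect pipeline
# tampering is to compare the artifact it received against the digest the
# build step recorded earlier. The manifest format and paths are hypothetical.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file in streaming fashion."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(artifact: Path, manifest: Path) -> bool:
    """Refuse to deploy if the artifact no longer matches what the build recorded."""
    expected = json.loads(manifest.read_text())["sha256"]
    return sha256_of(artifact) == expected

# Usage (hypothetical file names):
#   ok = verify_artifact(Path("app.tar.gz"), Path("build-manifest.json"))
#   if not ok:
#       raise SystemExit("artifact digest mismatch: possible tampering")
```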

How the public cloud creates vulnerabilities for CI/CD

TR: How are cloud-based, open-source codebases and hybrid work affecting CI/CD?

Segal: The way we used to build software — and I’m not talking about the languages and the frameworks, I’m talking simply about the build process itself — we would run source code management locally, on a server; not even in a data center, but on our own IT infrastructure. We would pull and push code locally, build, and then burn it onto a CD and ship it to our customers. Today, most of the organizations that we work with use some kind of Git repository, completely on the public internet, and they are using more and more services to do the build: Jenkins, GitLab, CircleCI, for example, most of which are consumed as build-as-a-service platforms.

TR: So, not local in any sense and not protected within a perimeter?

Segal: In essence, the entire workflow is hosted on the public internet to some extent. Additionally, developers often use their own laptops to develop, often accessing their Git repositories through a browser. And if they happen to receive and respond to a phishing email or another social engineering attack, an actor could manipulate them and steal, for example, session tokens from the browser, which would then give the attacker direct access to the GitHub repository. From there, they can begin to poison the development process. From the point of view of zero trust, we are exposing the most sensitive points in the way we develop software today, and it’s not very well controlled. So, no, there is no perimeter anymore.

Protecting the supply chain

TR: In terms of protecting the supply chain, going back to other products designed to ensure the hygiene of the CI/CD pipeline, I am aware of products out there, some open source, like in-toto, which ensures signatures for every step in the development process so there are no points left invisible and vulnerable.

Segal: I’ve looked at that project. A few months ago, we acquired a startup in Israel called Cider that was really a pioneer in this space. And as part of that acquisition, we are creating a new security module that applies security guardrails to the CI/CD pipeline.

TR: What does this do for security teams?

Segal: For a security person, it “turns on the lights,” illuminating the development pipelines, because today IT security and application security teams are completely out of the loop when it comes to the CI/CD process. We have shifted from a waterfall model to a shipping model, which means a large percentage of our customers are pushing code multiple times a day, or multiple times a week. There’s a lot of competitive pressure on teams to develop and push more and more new things every week, so developers are super busy with coding functionality. Even expecting them to use static code analysis is a bit out there. In this paradigm, the IT security or application security teams cannot be the choke points. They cannot be blockers; they must be perceived as assisting.

TR: And what does that mean in practice?

Segal: That means they cannot stop processes to scan each and every piece of code that is being pushed. And they definitely don’t have any visibility into the nature of the CI/CD pipelines, where developers are pushing code to, what the artifacts and dependencies are, or whether there are risks, such as whether build-as-a-service plugins have access to code.

TR: By ‘artifacts,’ you mean binaries?

Segal: It could be binaries, container images, serverless function code and even EC2 (Amazon’s cloud compute service) machine images. It includes all the third-party packages, usually packaged as images or functions ready to be pushed to the cloud.
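
To illustrate the kind of pipeline guardrail and visibility Segal describes, here is a minimal, hypothetical sketch that flags build steps pulling third-party plugins by floating tag rather than a pinned commit, one common way a poisoned plugin can slip into a build. The pipeline layout is a simplified assumption and not tied to any specific CI product.

```python
# A minimal "guardrail" sketch in the spirit of the CI/CD security module
# discussed above (not its actual implementation): flag pipeline steps that
# reference third-party build plugins by a floating tag instead of a full,
# pinned commit SHA. The pipeline dict layout is a simplified assumption.
import re

PINNED = re.compile(r"@[0-9a-f]{40}$")  # pinned to a full 40-character commit SHA

def unpinned_steps(pipeline: dict) -> list[str]:
    """Return 'job: plugin' entries for steps that are not pinned to a commit."""
    findings = []
    for job_name, job in pipeline.get("jobs", {}).items():
        for step in job.get("steps", []):
            uses = step.get("uses")
            if uses and not PINNED.search(uses):
                findings.append(f"{job_name}: {uses}")
    return findings

if __name__ == "__main__":
    pipeline = {
        "jobs": {
            "build": {
                "steps": [
                    {"uses": "some-org/setup-tool@v1"},  # floating tag: flagged
                    {"uses": "other-org/cache@4c2c4f8f1a2b3c4d5e6f708192a3b4c5d6e7f809"},  # pinned
                ]
            }
        }
    }
    print(unpinned_steps(pipeline))  # ['build: some-org/setup-tool@v1']
```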

Palo Alto Networks Prisma Cloud to enhance CI/CD security

TR: So you are coming out with a Palo Alto Prisma Cloud product specific to securing CI/CD.

Segal: Yes, we’re planning to add a CI/CD security module to the Prisma Cloud platform to help secure the software supply chain. You start by onboarding your cloud accounts, your code repositories and your build processes, and then we start scanning everything. We will scan your code on the left. We will scan those related artifacts — the container images, for example — when they are built, and we will apply runtime protection on the right. And the whole thing is governed and operated by the Cloud Security team, which is responsible for the end-to-end process for everything up until you push it to the cloud: making sure that the cloud account is secure, and making sure that you don’t have any risky assets being deployed to the cloud.

SEE: Why cloud security has a “forest for trees” problem (TechRepublic)

TR: Obviously, shifting left is paramount because once you have deployed flawed or vulnerable codebases to the cloud, you have created a hydra, right?

Segal: One line of code, for example, in a file that you write, goes into a repository that can generate multiple container images that get deployed into many, many different clusters on multiple cloud accounts. And so if you were to play that kind of whack-a-mole and attack the problem on the right, you would have to go and fix and patch thousands of instances of the same problem.

How Palo Alto Networks avoids the ‘hydra problem’

TR: If you wait until it’s already out there, you are dealing with not one problem, but thousands. It becomes a disseminated problem. How do you fix that?

Segal: Think about it this way: You make a mistake in the code of a shopping cart feature in your application, which is now deployed to 5,000 containers that are running redundantly to support the traffic on multiple clouds — Google Cloud, AWS, Azure, whatever — in multiple regions. Now, you get a scanning alert from the runtime side saying you have 5,000 instances that are vulnerable. If your platform is intelligent enough, you can map it all the way back to that bad line of code and the specific commit by the specific developer. You can open a ticket for that developer to fix the problem and resolve it in those thousands of instances. You will also want to prioritize these issues: Let’s say you’re looking at the results at the code level, and you see a thousand problems that you have to fix. How do you know which problem is the most severe? If you have information from the live environment, you can distinguish vulnerable code being used in a mission-critical production environment from a problem that exists only in your staging environment, which is not as severe and is certainly not an imminent threat. These are the kinds of things that a CNAPP, supposedly, allows you to do.
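
A rough sketch of the “map it back and prioritize” idea Segal outlines: collapse thousands of per-instance runtime alerts into one issue per source commit and rank production findings first. The alert fields and the image-to-commit mapping are assumptions for illustration; a real CNAPP would derive them from build metadata.

```python
# Hedged sketch of "map the runtime alert back to the commit and prioritize."
# The alert fields and the image -> commit mapping are invented for illustration.
from collections import defaultdict

def triage(alerts: list[dict], image_to_commit: dict[str, str]) -> list[tuple]:
    """Collapse per-instance alerts into one issue per (CVE, commit),
    ranking production findings above staging, then by blast radius."""
    grouped = defaultdict(lambda: {"instances": 0, "prod": False})
    for alert in alerts:
        commit = image_to_commit.get(alert["image"], "<unknown>")
        key = (alert["cve"], commit)
        grouped[key]["instances"] += 1
        if alert["environment"] == "production":
            grouped[key]["prod"] = True
    return sorted(
        ((cve, commit, v["instances"], v["prod"]) for (cve, commit), v in grouped.items()),
        key=lambda row: (not row[3], -row[2]),  # production first, then most instances
    )

if __name__ == "__main__":
    alerts = [
        {"cve": "CVE-2021-44228", "image": "shop:1.4.2", "environment": "production"},
        {"cve": "CVE-2021-44228", "image": "shop:1.4.2", "environment": "staging"},
    ]
    # Both alerts collapse into one issue against the commit that built the image.
    print(triage(alerts, {"shop:1.4.2": "commit a1b2c3d"}))
```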

TR: Well, that is critical because it potentially saves a lot of time?

Segal: That’s right, because there are millions of potential dependencies and really you only need to focus on the ones that are relevant. Having that runtime visibility, and not only looking at the static side, is what can make a big difference. In Prisma Cloud, for example, our Cloud Workload Protection registers which software packages are actually loaded into memory in the running containers. And this is gold. This data is exactly what you need in order to know how to prioritize what you want to fix first.
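
As a simple illustration of the prioritization Segal describes, the sketch below intersects statically detected vulnerable packages with the set of packages observed loaded in memory at runtime. Both input sets are hypothetical; in practice the runtime data would come from sensors on the running workloads, as he notes.

```python
# Illustrative only: prioritize vulnerable packages that are actually loaded
# at runtime over ones that merely sit in the image. The package names and
# both input sets are hypothetical examples.

def prioritize(vulnerable_packages: set[str], loaded_at_runtime: set[str]) -> tuple[set[str], set[str]]:
    """Split static findings into 'fix first' (loaded in memory) and 'fix later' (unused)."""
    fix_first = vulnerable_packages & loaded_at_runtime
    fix_later = vulnerable_packages - loaded_at_runtime
    return fix_first, fix_later

if __name__ == "__main__":
    sca_findings = {"log4j-core-2.14.1", "commons-text-1.9", "old-dev-tool-0.3"}
    in_memory = {"log4j-core-2.14.1", "spring-web-5.3.20"}
    urgent, deferred = prioritize(sca_findings, in_memory)
    print(urgent)    # {'log4j-core-2.14.1'}  -> loaded in the running workload
    print(deferred)  # present in the image but never loaded
```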


