Video: Integrating Security in AI Code Generation Process | Summary: AI-generated code is now influenced by security systems early in the development process.
Video: Redefining AppSec: Securing Software Supply Chains in the AI-Coding Era | Duration: 3608s | Summary: Redefining AppSec: Securing Software Supply Chains in the AI-Coding Era | Chapters: Redefining Application Security (19.215s), DevSecOps Era Challenges (223.235s), AI-Driven Security Evolution (715.135s), Application Security Challenges (1007.15s), DevSecOps Evolution Challenges (1484.365s), Practical Application Security (1597.2201s), AI and AppSec Future (2045.61s), Advice for Security (2389.1s), Application Security Approach (2587.74s)
Transcript for "Redefining AppSec: Securing Software Supply Chains in the AI-Coding Era":
Good morning, afternoon, or evening, depending on where you are. Welcome to today's session where together we'll be redefining application security. It's quite an ambitious goal in an era where AI is writing code faster than we can secure it. I think it's fair to say that many of us are feeling the pressure of all these fragmented tools, siloed dashboards, and frankly a backlog of alerts that never seems to shrink, no matter how many scanners, how many point solutions we throw at it. So essentially today, it's no longer about just finding a vulnerability. It's about understanding if that vulnerability is reachable, if that vulnerability is exploitable within a given environment. And during today's webinar, we're going to focus on how you can achieve better coverage and how you can provide remediation that actually fits natural developer workflows. Now to walk us through this, I have three guests. The cool part is that they all bring a different perspective to this discussion. We have Katie, research manager at IDC. We have Yoav, one of our CTOs at Armis. And we have Andrew, chief security officer at Vetcor. I'm going to ask all three of you to just briefly introduce yourself, your role, the company, your workforce, so that everybody on this call today has a bit of a better understanding of the perspective that you bring to this discussion. So maybe Katie, we can start with you. Sure. Hi everyone. I'm Katie Norton. I am a research manager here at IDC, and I lead all of our application security and software supply chain security research. So I spend a lot of time trying to understand the market and the vendors behind the tools organizations use to secure their applications. Great. Thanks so much, Katie. What about you, Andrew? Welcome. Thank you. Pleasure to be here. I'm Andrew Wilder. I'm the Chief Security Officer at Vetcor. Vetcor is a veterinary consolidator.
We own about 1,000 veterinary hospitals in North America, so trying to protect our people, our pets, our veterinarians. Interesting, I like that mission, thank you. What about you, Yoav? Welcome. Thank you, Maxim and team. I'm Yoav. I'm part of the leadership team here at Armis, focusing on making sure that we continuously deliver to our customers, at least a few of the product lines and some of the incubation here at Armis. So very excited about the next stage of where application security is going. Yeah, this is really cool because we have our CTO vendor perspective, we have a research perspective, we have the end user perspective with Andrew. So this should be fun. Just sharing the agenda for today. After my welcome, I'll pass it over to Katie who is going to give you a bit of a state of the market and the trends that she's seeing specifically when it comes to application security. Yoav will do something similar but then more from a technology point of view, and that should give us enough stuff to talk about, a good foundation to have this roundtable with the four of us before I eventually wrap up and hopefully you walk away with some tangible learnings, with something new that you didn't know before you joined this webinar. Katie, the floor is all yours. I'll let you share your presentation. All right. So, you know, application security has always been shaped by how we built software. You know, in the past, when release cycles were centralized, you know, security operated as a final gate. In the present, when development became distributed and continuous, security tools moved into pipelines and workflows. And now, AI-assisted development is changing the equation again. Code is being generated faster. It's being reviewed differently, and it's being deployed with less direct human oversight than before. And so yet again, the security model's gonna have to evolve to match.
So over the next several minutes, I wanna walk through sort of each of these eras and talk a little bit about what worked, what changed, and what organizations should really prioritize as they prepare for what comes next. So in the waterfall era, the past, you know, this was characterized by really long release cycles, centralized teams, infrequent deployments. Security functioned as a final assurance step. And, you know, security teams also operated separately from development. You know, these were often AppSec specialists or QA groups or even external testers that reviewed applications only after they were considered complete. From a tooling perspective, testing was manual. It was late stage. SAST and DAST scans were run just before deployment. And it was assumed that there was time at the end of the cycle to address these issues. And so when remediation wasn't feasible, we relied on compensating controls like a WAF to manage our exposures. And the outcomes we saw reflected this structure. Fixes were costly. They delayed deployments. And, you know, security was generally perceived as a bottleneck or a blocker. And while this model provided a lot of control and easy auditability, it really depended on having time at the end of the cycle. And when development moved to agile and continuous delivery, that assumption broke down. So that kinda leads us to where we are now, right, in the present, in the DevSecOps era, where application development, you know, changed significantly. Monolithic applications gave way to microservices. Teams became smaller. They're more autonomous and responsible for their own services end to end. Release cycles compressed from quarterly to weekly to now continuous. And, you know, the result was a fundamentally different operating model that's faster, more distributed, more complex. And so application security had to adapt, right?
Tools moved out of centralized security teams and embedded directly into developer and DevOps workflows. Scope expanded to cover not just application code, but dependencies, APIs, containers, infrastructure as code. All of these capabilities, we added them incrementally, often to solve very specific problems for specific teams. I wanna hop away from the slide for a second to look at a little bit of data from my research that illustrates this dynamic. On the left, we see the average coverage across common application security testing tools. Most capabilities cluster roughly around half of internally developed applications. That means, like, even foundational tools like SAST and SCA are really not consistently applied across the full portfolio at an average organization. And on the right, you also see tooling decision making changed as well. Developers began to play a really central role. And in a lot of organizations, they really strongly influenced the outcomes around security tools. So when you think about this together, right, it explains a key pattern in the DevSecOps model. Tools are adopted close to developer workflows, which improved relevance, but it often created overlapping coverage and uneven visibility. And so that's where we now saw these aggregation and consolidation layers start to emerge, like application security posture management. Not because we lack tools, but because we needed coherence around these decentralized decisions that were happening. So, you know, coming back to where we're at today, as tooling moved closer to development, people and processes shifted too. The mantra, as we all know, became shift left: you know, move security earlier in the life cycle, closer to the developer. And in principle, this made sense, right? And it still makes sense. But in practice, it really added a demanding cognitive load to developers.
They're now expected to write secure code, interpret findings from a variety of different tools, be able to prioritize them and remediate. And that's all while delivering features, maintaining reliability, and managing operational complexity. And, you know, security has become one more responsibility absorbed into an already full role. And the outcomes we've gotten really reflect this. We see positive things like security-related delays have declined, but the gains are uneven. You know, we still see large vulnerability backlogs. And we've seen an improvement in risk visibility, but it remains inconsistent. And it creates what I'd call a confidence gap in organizations. Organizations express to me confidence in their AppSec capabilities even when coverage and prioritization vary widely and significantly. So ultimately, where we're kinda at right now is this coexistence of both progress and frustration. And I wanna take a beat before we look ahead and address sort of what is this current inflection point. Because, you know, when we look at AI-assisted development, it's already changing the security equation, not just the future. And this shift happened fast. In late 2022, large language models became broadly accessible to the average person. You know, tools like ChatGPT moved into everyday use almost immediately. And AI coding assistants like GitHub Copilot appeared, and they were quickly adopted by developers. And what started as curiosity moved way faster, I think, than any security or development model anticipated into routine use. And that's, you know, what the data on this slide shows, how quickly that adoption took hold. On the left, we see a substantial share of enterprise developers now actively use AI coding assistants in their day to day workflow.
And as you see in the middle, it shows that impact on output: a meaningful portion of application code is now AI generated, and organizations are reporting that roughly one third to one half of their code base comes from AI. That changes the volume and velocity of code creation. And finally, on the right, we see how that code is treated. Developers report a significant share of AI-generated code is accepted without revision. And this isn't negligence. It's just reflecting that productivity pressure that AI tools are designed to address. But it's increasing that distance between intent and implementation. And, you know, this matters for application security because risk is now introduced earlier. It's moving faster, and it's reviewed under tighter constraints. And so the volume of code requiring analysis has increased. But security team capacity has not scaled proportionally. So it's kind of in this context that we look at the next evolution of application security and why the operating model needs to shift again. So when we look ahead to the next several years, as application security evolves, it's really shaped now by this AI-driven software creation. And from a tooling perspective, what I see happening is a key shift in where security intervenes. AI coding tools don't inherently generate secure code. And so the change now is that security systems are increasingly influencing how AI code generation happens upstream, before the code is produced, instead of scanning afterwards. Policies, constraints, and contextual controls are applied directly to the AI system itself, shaping its outputs rather than relying on developers or scanners to interpret or enforce that security guidance after the fact. But at the same time, you know, scanning doesn't go away, but its role is changing and it's becoming more AI native itself.
AI-native scanning is becoming the standard, driven by the ways in which AI can understand application behavior, business logic, exploitability in ways that more static or deterministic rule-based scanning can't. With the people and process, the routine security work that's happening is now getting delegated to agents that operate, you know, 24/7. This doesn't remove the humans and their responsibilities, but it changes them. Developers are not expected to manage security mechanics in this model. Their role centers on expressing intent, reviewing the outcomes, and validating that, you know, the system or application behaves correctly. Security teams are also going to evolve. Instead of, you know, being the guards that execute the controls, manually tuning these tools, they focus on creating the policies, the guardrails, the risk thresholds that the agents enforce at scale. Their work also moves more towards supervision and governance rather than hands-on execution. And the outcomes we see are pragmatic, right? They're not perfect. But what we will start to see is that both the use of security at the point of generation plus continuous agent-driven remediation is gonna help slow the introduction of new vulnerabilities and reduce backlogs over time. The emphasis will also move away from whether developers know security towards whether AI-driven systems are operating within trusted boundaries. And at the same time, you know, I'd be remiss not to mention that new risks are also going to emerge related to AI, including prompt injection, model manipulation, and a variety of other things that expand the attack surface in ways that the controls we have in place now weren't designed to address. So as I land my presentation today, and what, you know, this new phase of application security requires, I think there's kind of like three key things.
The first is that AI-native security that can interpret application behavior, not just match patterns, is going to become, you know, the standard at AI-driven code volumes. The value comes from contextual judgment, not from more findings. And secondly, when AI is involved, it operates effectively only with full context. Isolated tools can't reason about risk because they only see fragments. The next phase of AppSec requires platform-level integration that connects code, dependencies, pipelines, runtime into a unified risk model. Context becomes the foundation for accurate prioritization and meaningful noise reduction. Third, validation has to become systemic. Developer-invoked scans and post-generation gates can't keep pace with AI-driven output. Security coverage has to default to continuous, passive validation at the platform level. Generation-time controls can help shape output, and independent validation can help confirm integrity and detect drift. So together, all of these shifts sort of redefine what application security is, from, like, a tool chain of scanners to a more integrated, AI-enabled control system that will be able to scale with modern software development rather than trailing behind it. Thank you so much for your valuable research and insights. There's so much we can talk about during the roundtable, but if there's one thing that sticks with me, it's the inflection point that you mentioned. The fact that more code with the same resources is not the way forward. We cannot just throw more scanners, more findings at this. So that inflection point is really important, and the way you frame it as needing platform-level context, a platform-level approach, is really kind of speaking to what we are doing at Armis as well. So speaking of which, Yoav, we saw this research lens, but from your perspective, I bet you did your own bit of research when it comes to the technologies.
Can you share maybe your perspective as well when it comes to application security and what you've been hearing from both customers and prospects? Yeah, absolutely. I mean, I think, you know, we've been working within the realm of application security for a bunch of years at this point. But specifically in the past few years, we've just heard time and time again how much application security has been a disappointing market, meaning customers are buying solutions, but we haven't found customers that are really reaping the fruits of their application security programs. We haven't found a lot of successful AppSec programs out there. And when we dove into what's their hurdle, from a technology and a product perspective, in actually making some of these programs successful, it really kind of resembles almost like a very troublesome data journey that we can describe using this slide right here. So for example, we would hear about the constant fragmentation of tooling, lack of visibility, difficulty in supporting all of the different languages that organizations have to rely on, especially for application-specific use cases, niche languages, or older code bases. We also ran into a lot of partial or failed deployments of these tools. If it's really hard to actually deploy the scanning capability, then security teams live in the dark. And when we work with organizations, I can tell you that many times we find out that they have anywhere from two to 10 times more code repositories, for example, than the security team even knew about, because they're just not getting comprehensive scans in place, or they're only deployed in 10% of the environment and they don't have a good way to actually measure that. The next thing that we've seen is, you know, people complain about false positives and lack of context, and also things that aren't necessarily getting detected.
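That coverage gap becomes measurable once repository discovery is decoupled from the scanner's own inventory: enumerate repositories from the source-control platform, then compare against what the scanner actually sees. A minimal sketch of that measurement, with hypothetical repository names (this is an illustration, not any vendor's actual tooling):

```python
# Illustrative sketch: estimating scanner coverage by comparing an
# independent repository inventory against the scanner's own inventory.

def coverage_gap(discovered_repos, scanned_repos):
    """Return (coverage percentage, sorted list of repos the scanner never saw)."""
    discovered = set(discovered_repos)
    scanned = set(scanned_repos)
    unscanned = sorted(discovered - scanned)
    coverage = 100.0 * len(discovered & scanned) / len(discovered) if discovered else 0.0
    return coverage, unscanned

# Hypothetical data: what the security team's scanner knows about,
# versus what org-wide discovery on the source-control platform finds.
known_to_scanner = ["payments-api", "web-frontend"]
all_repos = ["payments-api", "web-frontend", "legacy-billing",
             "etl-jobs", "infra-scripts", "mobile-app"]

pct, missing = coverage_gap(all_repos, known_to_scanner)
# pct ≈ 33.3, missing lists the four repositories that were never scanned
```

The point of the exercise is the denominator: measured against discovered repositories rather than the scanner's own list, a "fully deployed" tool can turn out to cover only a fraction of the estate.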
What we found is that, from a technology approach, pretty much all of the different scanners, especially in SAST, that were built in the last twenty years or so are using these static template engines, almost like a rules engine that is based on a database of rules, let's say 10,000 rules, that supports anywhere from 15 to 30 languages. And that's how these scanners all work. These static templates, you know, they're not going to catch every variant of a Python SQL injection. They're also not going to gather context dynamically, because every code snippet is different. And so it kind of reminds us of the days of antivirus signature-based detection. Meaning we must have seen some form of an example similar to this in order for us to detect it going forward. And we saw in the days of antivirus that that didn't really pan out well, and at the end of the antivirus days we had to move towards more real-time analysis and dynamic analysis of activity and behavior. And so the same thing is now happening in application security. We're moving away from the static detection capability, I think, to a new world where it's a lot more dynamic. Now, not only are a lot of organizations using static templates, or have been using static templates, to actually detect code vulnerabilities and application security risks, but the next thing that we hear is that it's really hard to actually prioritize the issue and justify why something needs to be fixed. Developers have to do a lot of investigation. There's a lot of friction as a result of it between security and development. And it's no surprise. I mean, we didn't gather enough context in the previous step. And we also had some fragmentation in the tooling from the get-go. So it's going to be really hard to compare all of our issues, make sure they're properly contextualized, investigate them, justify them, in order for developers to actually go and fix them.
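The "static template" limitation Yoav describes can be shown in a few lines. Here is a deliberately toy, signature-style rule for one shape of Python SQL injection; it is not any real scanner's rule set, just an illustration of why a fixed pattern catches the variant it was written for and misses a trivially different one:

```python
import re

# Toy "static template": flag cursor.execute("..." + variable) style
# string concatenation. One rule, one syntactic shape.
CONCAT_RULE = re.compile(r'execute\(\s*".*"\s*\+\s*\w+')

def scan(snippet: str) -> bool:
    """Return True if the signature-style rule flags the snippet."""
    return bool(CONCAT_RULE.search(snippet))

# The exact pattern the rule was written for is caught:
caught = scan('cursor.execute("SELECT * FROM users WHERE id = " + user_id)')

# The same vulnerability expressed as an f-string slips through,
# because the rule only knows the concatenation shape:
missed = scan('cursor.execute(f"SELECT * FROM users WHERE id = {user_id}")')
```

Multiply this by every syntactic variant (`%` formatting, `str.format`, string building across multiple statements) and every language, and the rules database grows without ever closing the gap, which is the antivirus-signature analogy in the passage above.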
And then we delve into more problems that we see around what customers would often refer to as ASPM, which is really about kind of the management and the process of managing the operations of all the risks that they're detecting. So how do you assign it to the right engineers? How do you track it? How do you communicate it? And so the fragmentation that we started with is going to make all of these steps going forward just more difficult. And then by the end of it, it's no surprise that AppSec is really hard to do, because every step is facing difficulties today from a technology perspective. I hope that makes sense, Maxim. Absolutely. With all the pain points that you highlighted, the only thing I can think of is there's gotta be a better way. So let's move to the roundtable and have a discussion around this. I must say, Katie, I really liked your approach of having this framed in kind of a past, present and future structure. So maybe we can try and do something similar for the roundtable if that's okay. So maybe starting with you, Andrew, because you have this, let's say, end user experience. When you look at the past, we know that alert fatigue is kind of a massive hurdle when it comes to application security. Can you paint a bit of a picture of how you were handling this with your teams, how much time you were losing to false positives and all the noise? Just give a bit of an idea of how it was going before you eventually made a change. Yeah, so I mean, I can talk about past moving into present a little bit. I certainly can't talk about the future bit because I don't think anyone has really tackled that yet, but I think the biggest change for us with alert fatigue was the move from a standard kind of SAST/DAST approach to DevSecOps. Every security leader has had the phone call where we're going live with the new app tomorrow and we need a guy from security to sign off on it.
And after getting that phone call too many times, I went back to the leadership and I said, we need to change the way that we're working together. We need to be embedded from the beginning, because I can't give you a sign-off for something that's going live tomorrow when I haven't even seen it, we haven't scanned the code, we don't know how big the problems are. So once we changed to a shift-left or a DevSecOps approach, we were integrated from the beginning, we were able to give the developers weekly or daily or whatever updates on here's the biggest problems with your code, here's the things you need to fix before the next release. And that made it so those last-minute phone calls stopped happening. So that was a bit of a tipping point, if you will; that one phone call too many is like, okay, we can't work this way. We need to shift this to the beginning of the dev life cycle. Absolutely. Is that what you guys are hearing as well from a research perspective, Katie? Yeah, I would say similarly that there really wasn't like an incident, right, that was a tipping point. I think where we're at is this like cumulative realization that more tools and more alerts is not producing materially better security outcomes. You know, for a decade, the dominant model has always been additive. Right? When risk increases, we add something new. SAST for custom code, SCA for open source, IaC, secrets, on and on and on. And, you know, each tool has its purpose. It addresses a really legitimate gap, but none reduced overall complexity on its own. And so while organizations are investing millions of dollars in application security tooling, you know, they're still releasing with known critical vulnerabilities, and they're managing huge backlogs. So that return on investment for scanners, or individual point tools, I think we realize has started to diminish.
So really, like, you know, detection's not translating necessarily straight into improved security outcomes. So this is where we've sort of moved to this more like contextual, connected DevSecOps, you know, all of these models that sort of bring all of these disparate pieces together. Is that what you were hearing from early customers or early, let's say, prospects when you were going to the drawing board as well? Because essentially we built something new here. Is that kind of based on the same starting point? I think what both Andrew and Katie said is spot on. I think if I had to kind of summarize what we were hearing, it's that enterprise application security wasn't really practical. It's not just about scanning and it's not just about detecting, it's how do we operationalize this? How do we actually drive risk down? What does a successful program look like? How do you handle the people element of it? And so everything that we do is all about making things practical for our customers. It's all about, at the end of the day, helping them achieve their objectives. And so I think that's kind of what's been missing. And so going closer to the developers using DevSecOps is one mechanism of that, and shifting left in terms of the tooling, not flooding the developers with too much noise, and just being more accurate and contextual. These are key elements in making it more practical, and there's small nuances in every step of the process to make it work. Yeah, and are there some key learnings, or what does it really take, from a practical perspective, Andrew, to go through this transition? Obviously we'll have people on the webinar who haven't done that yet and probably they're eager to learn from you, like from practical experience. What would you say, what does it take to set up a successful and practical DevSecOps program?
So I think the biggest thing is a cultural shift, top-down support to say this is something that we need to do and change, because the business wants to move fast, right? There's no denying that. And they have to see the benefits of making these kinds of changes. As you move to a DevSecOps approach, you're not necessarily slowing things down, you're just integrating yourself into it from the beginning. So it's really about a cultural shift, a change. If you think about things like IaC, I mean, those are really requiring a cultural shift in the way the developers work, right? They're going to have to go to different repositories, they're going to have to use different tools than they've been used to, so motivating them to want to do that change and to work together, I think those are really important things, and those really require cultural change, human change management, top-down support to make those things happen. Which are not typically the things, when you talk about deploying a cybersecurity tool, you don't talk a lot about human change management, but that's a big part of it. Yeah, did you have resistance to tackle? Like how does that go when you implement a change? I'm sorry, I missed a little bit of the question. Were you dealing with resistance, with people sometimes questioning the change? Sure, sure. So people, they wanna say, look, this is how I've done it, this is how I'm used to doing it, let me work the way that I want to. And so when you start to put guardrails around things, you really need to give people a why and get them to buy into the overall vision that we don't wanna release products with bad code into the wild and have those get compromised, and then we get looked at as a company. So it's really about the whole company mission and being able to understand and reduce the risk overall.
So we had Andrew on board, which is great, but I can only imagine, Katie, if you talk to other CSOs or other engineering leaders, that there's still a lot of misconceptions when it comes to application security, application risks. Can you clarify some of the misconceptions that you hear from the field, that you see in your daily reality? Yeah, some of this I think goes to my point I made earlier, in that I think there's a persistent misconception that more findings means more security. You know, a lot of leaders still equate the amount of things found by scanning with risk reduction. And in practice, what I've seen is that, you know, real risk reduction is driven by a much smaller subset of exploitable, reachable, business-critical vulnerabilities. And so what I generally see is that when programs are optimized for detection counts instead of actual material risk reduction, that's where you get this big problem with noise and big backlogs and developer fatigue. Developers distrust nothing more, there we go, those are the words, than noise: being given a finding, spending half an hour figuring out what it is, and then it not being meaningful, something that actually isn't relevant or a risk. So I think, you know, it's really important that security programs become outcome-driven, not just tracking scans and vulnerabilities. Got it. Yeah, I see you nodding, Yoav. You wanted to add something? No, no, I'm in full agreement. I think Andrew and Katie are spot on. Yeah, okay. So adding more tools is definitely not the way forward. More findings is not a strategy as such. However, when we talk about more tools and we talk about the present, I think there's no way around it. What are your thoughts on those AI-native SAST tools entering the market? To call out the elephant in the room, Anthropic. Is this another tool? Is this a fundamentally different approach?
I'm really curious because this is a hot topic. So I'm kind of curious to hear your take on that. Maybe Yoav, you can start first. Sure. Yeah, I think it's super exciting to see kind of the development of new tools. For us, the whole concept of, like, AI-native SAST is actually not new. I mean, we started working on that about two years ago and we saw the benefits of that right away. And so I'm glad that the market is realizing that more and more. Some of our customers have already been benefiting from that, because just the old technological approach, as we mentioned, created deployment difficulties, lack of context, made investigations harder and created friction, and then all sorts of accuracy issues. The new approach supports AI-native SAST, it supports virtually every language if you do it right, it's incredibly more accurate and provides way more context, and it can really bridge a lot of the gaps that we spoke about in order to actually make application security a lot more practical and relevant for the developers and the security leaders. In regards to some of the newer tools that we're seeing, we still see that there is a need for kind of the enterprise application security program to actually function correctly. So it's not just about using an AI-native SAST in that sense or new capabilities. It still goes back to how do we make this practical? How does this work in a large organization with many development teams? And how do we consistently make sure that the code we produce is good? And so it's not necessarily changing everything, but it is heavily improving the way that we're going about things. Katie, I don't know if you want to add to that. Yeah, I mean, I think, you know, this last week or so was filled with a lot of noise more than anything, to Yoav's point: AI-enhanced or entirely AI-driven, you know, any sort of testing tools, specifically SAST, but we see it in DAST, pen testing, all sorts of other areas, right? This is just the model.
How do we adopt AI to make these tools even better? So I think there was just a bit of an existential AppSec crisis, right? AppSec as a market has been encroached on by other categories for years, right? And now the frontier models are playing in the game, and it totally makes sense. So I think ultimately this is probably good for the detection space because it brings a lot of attention to it. This is how these tools are evolving. So, you know, overall, I think it's a great spotlight on a type of technology. But ultimately, you know, Anthropic wasn't the first; all the other frontier models have also done some code security capabilities as well. Yeah. And how do you look at it from your perspective, Andrew? Because there's always something new and exciting, a new tool, new technology. So how do you go around prioritizing your efforts, your budgets, with all these announcements? Yeah, ruthless prioritization, I think, is the goal of any CISO. The advent of vibe coding means application security for every single person, instead of just the cultural change management that we talked about of AppSec for developers, and that takes a lot of work to get every single person to do it. It reminds me a little bit of all of the challenges that we had with cloud, right? Everything moved to cloud, people were fast, developers were fast, things were in the cloud, and then we said, oh, wait a second, we need to secure all of that stuff. So then we're going backwards after the fact and trying to fix everything. If you do it correctly from the beginning, there's a lot less rework that you have to do, but when you go back and fix things after the fact, you can break stuff and your developers are upset. Imagine your vibe coders, the people in your enterprise who just used one of those LLMs to write the code for a new app; they're not gonna spend any time helping you to fix that thing.
So it really behooves us as an industry to do it right from the beginning, to use those tools that we have now to make sure that all of the vibe coding and coding that's going on in your organization is secure and is scanned now. But how do we do that? How do we make that even bigger cultural change? That's my question. Do you think vibe coding is really that emerging trend or threat that you think the industry is still underestimating, or do you see something else coming down your way? I mean, the change is exponential. I don't know that I would say it's the emerging threat, but it certainly is one. And if you're running an application security program and you're not thinking about vibe coding, then you need to open your eyes, I would say. Absolutely. Vibe coding, definitely important, but other trends that you see coming your way: if we were to look now at the future, are there emerging trends that you think most organizations aren't prepared for just yet? Do you mean in the area of application security and vibe coding threats? Not vibe coding per se, but application security more broadly. Do you see anything coming your way that you think the industry is still not prepared for? You know, off the top of my head, I can't think of anything else besides the vibe coding one. That's pretty big to me right now. But I'm sure that Katie probably knows some other new things that are coming down the pipe that we don't know about yet. Yeah, sure. Maybe from a research perspective. So Katie, go ahead. Yeah, I mean, we often talk about, and we've touched on here, right, how AI is being used in coding, how AI is being used in security. There's also AI-native risk that I touched on very briefly in my presentation, but that's a whole other area of risk.
And actually, when people have been, you know, existentially declaring application security is dead this week, a lot of that fails to take into account all of the risk that's being generated by chatbots, by AI agents, by all of these applications that are enhanced with AI, that bring with them their own distinct vulnerabilities, as well as operating in a totally different way, right? They're non-deterministic, and what that means for both how we secure them and what a vulnerability even is just changes fundamentally. So while, you know, things might be changing in how we do security, I think what we have to secure is only growing. Yeah. And so I think we can conclude on the future by saying that AI has become both the code producer and an attack surface, and organizations need to really rethink how they approach application security. And in just a minute, Joao is also going to show you how we handle this at Armis with our platform approach. But before we do so, I want to make this as practical as possible for our audience, because, as I mentioned, many of our listeners might be in the early stages or might be in a different place today. So I'm going to ask each one of you to give one piece of advice. If you could give one piece of advice to another security leader on this call, or an engineering leader who is currently drowning in alerts and blind spots, what would that piece of advice be? And maybe we can start with you, Andrew. So my piece of advice would be to focus on the real risks that are relevant to you and your organization and focus your resources on addressing those. And if I can give two pieces of advice, I'd say: start that cultural change now, because that takes time and it's worth it. Thank you for the bonus piece of advice, really appreciate it. What about you, Katie?
That's great. So mine, I would say, going back to something I said earlier: if you're drowning in alerts, adding yet another tool is not going to save you. I think you need to go, as an organization, through an effort of looking at where you have overlapping capabilities, and look for how you can bring these things together contextually, and demand that contextual prioritization, that elimination of duplication, before you ask your engineers to fix anything. I often find complexity is the root cause of the issue. So the more that you can streamline your tooling in an organization, the better off you are in terms of making sure that your engineers are getting those real, risk-prioritized things that matter, like Andrew was pointing to. Yeah, it makes total sense. What about you, Joao? Sorry, what would be your golden nuggets here? I'll follow up on the stuff that we've already mentioned, but the change is exponential. I think the amount of new vulnerabilities that we're generating by coding is exponential. The amount of new AI-driven attacks that are coming is exponential. The biggest hurdle that I see with our customers is that they're stuck in budget cycles, and so they can't react fast enough. It prevents them from evaluating new solutions and revisiting things, maybe because they're in a three-year contract. If you're stuck on a three-year contract with something that's not working today, then in three years, God knows how bad it's going to get. And so we're seeing very large, very complex organizations that have traditionally moved slower speed things up, because they realize they have no choice. I think that's the right approach. So, in summary: speed things up, don't forget the cultural change, and don't just throw another scanner at it, I would say. We have some time, so maybe you can quickly walk us through the Armis approach to this. Joao, are you okay to share your slide on this? Absolutely.
So as some of you might have already seen in the news, just a few weeks ago we formally announced the general availability of one of our newer capabilities and product lines within Armis, known as Armis Centrix for Application Security. Effectively, building on everything we've talked about on this call, we've built a multi-agent AI pipeline that can scan all of our customers' code pretty much within twenty-four to forty-eight hours and give them full visibility into everything they're doing. We don't cover just one type of risk that organizations need to focus on, such as SAST or SCA or infrastructure as code. Our goal is to be as comprehensive as possible and provide all of their scanning needs while still using best-in-breed technologies. I mean, we have an incredible AI team here at Armis that enables us to actually do that. And so we look at code, SBOM files, container security. We're soon also going to be able to scan web applications. With that, everything that we detect automatically goes through some of the investigation processes that a lot of developers otherwise have to spend a lot of time on. So we do a deep inspection that helps us eliminate potential false positives, understand how the code is executed, validate findings such as API tokens that are found in the code to make sure that it's an active API token, and also understand how a potential vulnerability that was detected is actually exploitable. How is it reachable? And really finding the right combination of factors that turn an alert into something that's actually critical. And it builds all the justification that the developers can then leverage to understand: why is this a critical problem? How is this a problem? And then, when it comes time to actually tackle the issue, we don't want to just generate a random fix so that they can fix the issue.
Rather, if organizations today, or vendors, are using AI to generate code fixes, it begs the question: how many of those generated fixes actually contain vulnerabilities themselves? And so we came up with a concept called verified fixes, where every fix that we provide to our customers actually goes through multiple verifications to make sure it fixes the vulnerability that was reported, it doesn't introduce new vulnerabilities, and it doesn't break the code or its logic. So there's a lot to it; it's not just about using AI, it's how you use AI. I think we have, if not the most robust pipeline today to actually scan code in a comprehensive manner using AI, then one of them. And I will say the three things that we really optimized for are accuracy, scalability, and cost effectiveness for our customers. So we have dedicated GPUs set up in order to make all of this magic work. And so it's very easy for us to get going with customers, and I'm happy to show it to anybody who's interested. Yeah, thanks so much for sharing. I like the notion of verified fixes and the overall strategy here. If people want to know more about the Armis approach to this, you can visit armis.com/appsec. I think we can wrap things up here, and I just wanted to say a big, big thank you to all three of my panelists. It's really been great to have you all. As I mentioned earlier, the fact that we have the vendor perspective, the research perspective, and the end-user perspective really makes it worthwhile, really makes it key for our audience to walk away with something tangible and practical. So really appreciate that. Thanks for watching, and hopefully we'll see each other soon at the next Armis webinar. Have a good day.
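The "verified fixes" idea discussed in this session, where a candidate fix is accepted only if it (1) removes the reported vulnerability, (2) introduces no new vulnerabilities, and (3) doesn't break the code's logic, can be sketched as a simple verification loop. This is a hypothetical illustration only; the callable names here (`generate_fix`, `rescan`, `run_tests`) are assumptions made for the sketch, not actual Armis APIs:

```python
def verified_fix(generate_fix, rescan, run_tests, code, vuln, max_attempts=3):
    """Accept an AI-generated patch only if all three verifications pass:
    1. the reported vulnerability is gone,
    2. no new vulnerabilities were introduced,
    3. the code's existing behavior (its tests) still holds."""
    baseline = set(rescan(code))          # vulnerabilities known before patching
    for _ in range(max_attempts):
        patched = generate_fix(code, vuln)
        remaining = set(rescan(patched))
        if vuln in remaining:
            continue                      # check 1 failed: vuln still present
        if remaining - baseline:
            continue                      # check 2 failed: new vulns introduced
        if not run_tests(patched):
            continue                      # check 3 failed: logic broken
        return patched                    # all verifications passed
    return None                           # no candidate survived verification
```

In this sketch `code` is treated as an opaque value; in practice `rescan` would be a full scanner run over the patched codebase and `run_tests` the project's own test suite, which is what makes the fix "verified" rather than merely generated.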