Video: Why Endpoint DLP Is the Foundation of Modern Data Security - Webinar | Duration: 2812s | Summary: Why Endpoint DLP Is the Foundation of Modern Data Security - Webinar | Chapters: Welcome and Introduction (33.61s), Evolving Data Challenges (166.305s), Endpoint Risk Analysis (314.375s), Endpoint Risk Concentration (510.505s), Endpoint vs Cloud (642.49s), Data Lineage Tracking (812.025s), Data Posture Management (889.14s), Data Lineage Tracking (976.555s), Data Lineage Tracking (1334.73s), AI-Enhanced Classification (1523.63s), Lineage Capabilities (1708.46s), Insider Risk Types (2096.725s), AI Native DLP (2171.98s), AI-Powered Classification (2363.31s), AI Platform Integration (2555.97s), Q&A and Closing (2648.02s)
Transcript for "Why Endpoint DLP Is the Foundation of Modern Data Security - Webinar": Good evening, everybody. I know we're on time right now. We might just give it a moment or two to see if anybody else joins, and then we'll kick it off. I'll introduce myself really quickly first. My name is Ben Crocker. I'm a senior director of product management at Cyberhaven, and I'm responsible for the endpoint elements, so the stuff that we're going to talk about today is near and dear to my heart. As I say, we'll maybe just give it one more minute to see if anybody else joins, and then we'll kick off. I believe I was muted for a moment there; I'm terribly sorry. Good evening, everybody. Ben Crocker, senior director of product management. We're going to talk about why endpoint DLP is the foundation of modern data security. You'll see, inside the Goldcast interface, there is a Q&A and a chat on the side there. If you have any questions, please feel free to drop them in. I'll try to stop and answer them as we go along or, if necessary, save them until the end. The world has changed, I think, and I think we all know this as we see all of the things in the news, in our own environments, and in the changes in the infrastructure that we use. Data is constantly moving in our environments. Of course, AI is the big topic that's generating the friction right now, but before that, collaboration platforms, SaaS applications, local devices, BYOD, and all of those kinds of things already meant that your data is immediately dispersed across most of your environment. However, DLP as a technology has maybe not been changing quite as rapidly as those things. And so we have to ask a question now, not necessarily about DLP or your data security products themselves.
We're no longer just asking where your data was, but what is actually being done to it — what is it being used for right now? The Cyberhaven Labs team did a whole bunch of investigations on these things, specifically looking at AI and at the tools that we were using to protect our data. If we take agents into account, we were actually finding, over the last two years that we've been running these checks, something like a thousand times more workers — individuals as entities — turning up inside our environments, with AI agents being a huge number of those. But we know about the problem with false positives in the tools we're seeing inside our environments. I'm not telling you anything new when I say that's a really huge number. And the vast majority of what we would call exfiltration of data is actually not coming from whole objects — not something you could scan, or have a hash for, and look for. In fact, it's snippets: compiled snippets of data, cut and paste from here, an image added from there — not complete files — making it really difficult to use traditional mechanisms like hashes or signature matches to pick those things up. Cyberhaven's platform is specifically there to look at those things. We'll talk about how lineage and the capability we have help with that, making it possible to track the movement of that information — even the snippets — inside your environments. And I would say this, I think, because I am an endpoint product manager at Cyberhaven, but the endpoint actually was always the area that had the most significant risk.
If nothing else, because there were many of them, they were often disparately located, geographically connected to different locations, with different users and, of course, different sets of privileges on them — allowing people to do things that maybe weren't necessarily supposed to be done, but also things that would be considered part of their everyday work. A developer is supposed to have code on his laptop. A finance analyst will definitely have data from your financial systems on there. An IT admin will have config files and credentials on an endpoint, and they're supposed to do those things. But the mere fact that they have those, and that those devices are disparately located and the security controls find that difficult to deal with, means that the endpoint was — and probably will remain — the biggest risk for exfiltration. And the difference between then and now is that AI didn't really create that risk on the endpoint, but it may have multiplied it many times over, with regard both to the number of actors you might see and to the speed with which those things occur. Before, we might have considered it human speed: the person could type, and we could get issues or alerts generated in our platforms as fast as a user could perform the actions. Now, if we're looking at an agent or using an AI app of some description, the actor isn't just one person. It's one person that spawns any number of other virtual people, and the decisions that they make come at machine speed. When we look at data movement, those things would, of course, also have come from a USB drive or maybe a personal email.
Now, if you think about ChatGPT, you can put data into five different MCPs in a fraction of a second, and the volume has similarly massively increased when you look at AI and the tools we're working with now. As for where those things happen: still, a lot of these things, if you're connecting to an MCP server, have come from a system where a user — or another agent, of course — has initiated that action. There is an endpoint there. The data that it had, the data it gathered, the data it synthesized are all located there as well. And, again, another figure from the Cyberhaven Labs report from 2026: coming up to 40% of all data movements going into AI tools involve what we might consider sensitive data — whether that was code, company information, finance data, those kinds of things. People are using those tools for all the right reasons, possibly without necessarily understanding the circumstances. While the interface into that data and the tools using it may have changed recently, the fact that those things are resident on the endpoint did not. The first part of this I want to talk about is specifically that endpoint problem. And the tagline here: where intent becomes action, where we see risk concentrate. That's because the endpoint is the sharp end of the actions that are occurring, where user intent is turned into behavior. And those risks — we mentioned these personas before — it's really true that AI has increased their size, not just because it can do so many things, but because we are able to do so many more things. And so a user's interaction with the data they're working with, and the types of data they're working with, definitely change dramatically over time.
So, obviously, we'll see a developer copying source code into an IDE, and it will be AI-enabled, and it will enable them to do many things much faster than before — and we want to enable them to do that, of course. Similarly, the finance analyst looking for a report, or the data scientist, may be using an AI-enabled platform, or putting data into one to help AI process it and do things faster. And, of course, there's the new kind of user we see now: the AI agent, which may in fact be doing those things autonomously — aggregating data, checking all of your emails and giving you a report, or taking the summary of a meeting and responding back to the recipients with their action items. They may have absolutely no approval associated with those things — not in the sense that it was never approved, but in that no one approves each run, each instance; the AI agent is able to do that because we want it to. We want to be able to aggregate those things. So these are definitely not edge cases. They're definitely not limited to a specific location. These are the circumstances we're seeing every day with our data, and we want that to occur, but it has implications and requirements for us to make sure that we're securing it. The other side of that is that we've seen, of course, a change in the industry: a very strong focus, with DSPM and other tools, on cloud only. Our data lives in the cloud — a lot of our data lives in the cloud, there is no doubt about that, and we shouldn't think we don't need to look at it. We absolutely do. But what cloud controls can see are the obvious things that are there. We can see data at rest that might be in a SaaS application, or in your own internal data stores, in Drive or in OneDrive or whatever it might be.
We may see things like access events, file sharing, and permission changes, and we may know the domain or the traffic — where it's going on the edge. But what they won't see, of course, is what the user did before they got to the SaaS app, what they were doing with the cut and paste of the data that came from there — again, those snippets of data that are super important with regard to security. Nor what we might see if we were looking at CLI actions: in the new age of AI-enabled development tools, the Claude Codes, the Codexes, and the Geminis of this world can all be run from the CLI — and in fact, maybe that's preferable in many ways — but they would not be visible to our cloud controls. Those scripts and other background processes that are doing things under the hood are very, very hard for us to monitor and understand what is happening with our data — not only what goes in, but what the data that comes out looks like and its component parts. And then there's unsanctioned app usage: the gray apps, the unmanaged apps you may see, the shadow IT of this world — not visible to systems that focus on the cloud. Another thing that we think is super important, specifically for Cyberhaven, is that while some of those cloud providers may now be looking at adding endpoint capability into their platforms, they're adding it onto a platform that already existed. At Cyberhaven, we've always been there on the endpoint. We believe it's vital to your ability to secure and understand what's going on with your data. And so there are years of endpoint engineering and depth of knowledge that we put into those things — lineage, which I'll talk about in more detail, where we bridge the divide between endpoint and cloud and have lineage that is truly global instead of just local.
To deliver those things, we didn't need to rebuild our agent to cover those endpoint capabilities. And, again, in the AI world, as we talk about the things we're adding into the platform, the shape of the problem we're looking at already existed and we were already looking at it — so there was no rebuild required for us. On the subject of lineage, a brief description. Lineage for us is exactly what you might imagine lineage would be if you were looking at a family tree. We start at the top of a process of movements, creations, or whatever it is, of a piece of data, and we follow it through — on an endpoint, across endpoints, inside the cloud, coming back onto your endpoints — and we create that lineage, that chain of events that occur as we go through. We use that to help understand what is really going on: what the data is, where it's been, why it would be interesting to someone. As I mentioned, we start on the endpoint. We look at all those events — the copies and the pastes, the exports and the downloads, the CLI actions we might be able to look at — but also, of course, the things happening inside a browser. As we work towards a true AI security product inside the platform, we look at prompts, submissions, and responses, but also chat messages and emails and everything else that occurs, as well as local scripts and background processes. So that's the ability to search for those things once on an endpoint and then find them across the rest of the environment when we talk about cloud and SaaS tools. Cyberhaven also has a DSPM component, where we look at data at rest in all of those environments out there that you have for SaaS, where you have your storage and all of those other things.
We look at access and sharing against Google Drive and OneDrive and the other SaaS apps that are important to you, looking at the vast volume of data that might be there. We hope, of course, that those SaaS environments are well managed and under control, and there are other tools out there that will help you do that. But our idea is that we can give you those posture findings linked to what we see happen on the endpoint — data that goes into the cloud and, of course, comes from the cloud onto the endpoint and transfers into other locations. And then, as we think about what's happening with AI going forward: again, those snippets of data, those elements that go into a chat or a communication with an AI tool. You're sending in a prompt and getting back a response, and the accumulation of those things — that conversation — is super important. Knowing where those elements of data came from — snippets that might have come from a file on your endpoint, or an upload of a file that was there, or a download of an image that was created — is super important to understand, again added to that lineage, that graph of movement that we see, helping you understand what's really going on. Here I'm going to show a few examples of what you might see if you were looking at that lineage graph. We've got an example of the Cyberhaven console here where I'm showing you an alert, an incident that occurred. In this particular case, you can see we've got a dataset that's marked, and it says it's finance data. And you can see a policy that is blocking flows going to unsanctioned cloud storage. And because these are a little bit small, I've done a pull-out here to show what actually happened.
You can see in this case that the user, B. Matthews, was uploading an XLSX file — a spreadsheet called client master — to Google Drive, blocked by the platform. Again, we've said this piece of data may not go to this location. But what lineage allows us to do is see what else is going on there. In this example, you can see some events beforehand where Brian downloaded the file — but actually, just before that, you can see that Will had downloaded the file as well and put it into the client master, and then it moves along. Immediately we're seeing four steps back in the lineage, and you get an idea of the history of the movement of this object. As we look through the rest of this lineage, you can see that it's actually very long. This content has existed for a long period of time, and we can see other things that were blocked as well — other events that have happened from other users. These are users inside the environment on disparate machines, all connected together in that graph of lineage. Again, a blow-up of this section, where you can see Tavia had actually tried to open the file with Signal. Signal is a blocked application in the environment — we're not supposed to use it. Maybe part of the result of this would be that you get them to uninstall it. But we saw it, it was blocked, and it wasn't allowed to go there. Further up, you can see that the file used to be called something else. And you can see that, again, Brian was blocked — so he's a persistent person; he's been trying again. He renamed the file, and that's where we got the original event that we saw the incident for. But you can see it was uploaded as client master t dot xlsx. So he tried another name. Was it the name that was causing the problem, or was it something else? And then we go even further into the lineage of this — so we've been tracking this for a really long period of time, all of the events associated with it.
And I'll pick out these ones: you can see that Cole, in this case, had taken the new name of the file and actually tried to put it in Dropbox — again, another external storage location where we've said we're not allowing the data to go. Again, part of this whole process of connecting lineage, of joining the dots, of understanding the flow of that information. Then at the bottom here, Cole also thought, well, maybe I can get this out by putting it in Gemini and having Gemini turn out something completely different — it'll transform this data, and I can get it past the DLP controls. Again, in this case, blocked as an action. We're saying, no, we can't send things into those AI apps — prevented from being exfiltrated inside the environment. And even more — there are a lot more of these, of course, that we could show — we can see in this case an external drive. Cole had also actually changed the file: he'd zipped it, as we see as we go through the rest of the lineage, turned it into something completely different, and called it family photos dot 7z. So he put it through the 7-Zip program and tried to put it on a USB drive — blocked; we wouldn't allow it to go there. So you can see all of those avenues of possible exfiltration, all connected to the lineage of where that file was originally seen, how it was classified, how it was put together, and then shown here to say: no way. We're not going to let you put that out through USB. We're not going to allow you to put it into Signal. We're going to block that. You couldn't get it into Gemini. You couldn't do any of those other things. And in fact, we've got a couple more in here: again, tried to put it into Dropbox, tried to — oh, sorry, that's going in one direction, my bad — tried to put it into ChatGPT, again with a completely different name. In fact, in this case we see, further down in the lineage, just changing the file extension on it.
The thought: maybe I could get it out as a DOCX and not as an XLSX — if the rules associated with it were so simple as that, then I might be able to rename it and it would work. I'm going to stop for one second; I can see a question from Tiago. "Is there any feature or coverage that differentiates when data is going to a consumer web application or account — e.g., ChatGPT — versus the enterprise equivalent?" Really good question, Tiago, and yes, there is. Cyberhaven has a browser extension that is installed — or can be installed, managed or otherwise — into your browsers, and we track what we call the cloud acting user: the cloud logged-in account of the user that's there. And you can, of course, differentiate between your corporate account and your personal account. So, for instance, in our case, if I was not logged in to Google with my cyberhaven.com account and I was looking at my personal email and my personal drive, I would not be able to upload those things. For ChatGPT, it's available in their web app and in their thick client app. Actually, ChatGPT and other vendors are working on what they call hooks in the app that allow us to get that information out — that's something we're working on right now. They also have audit APIs — at least ChatGPT has audit APIs — so the cloud connector capability that we have would be able to do that too. So the answer to your question, Tiago, is yes, we actively try to do that. And you're very welcome.
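The account-differentiation logic described in that answer — allow an action when the signed-in cloud account is corporate, block when it's personal or the destination is unsanctioned — can be sketched roughly as follows. All names, domains, and the function itself are illustrative assumptions, not Cyberhaven's actual API:

```python
# Sketch: differentiating corporate vs. personal accounts for the same app,
# using the "cloud acting user" (the account signed in to the destination).
# Illustrative only — not Cyberhaven's actual implementation.

CORPORATE_DOMAINS = {"cyberhaven.com"}                # assumption: your managed domains
SANCTIONED_APPS = {"drive.google.com", "chatgpt.com"}  # assumption: approved destinations

def evaluate_upload(destination: str, logged_in_account: "str | None") -> str:
    """Decide whether an upload may proceed based on destination and account."""
    if destination not in SANCTIONED_APPS:
        return "block"                                # unsanctioned destination
    if logged_in_account is None:
        return "block"                                # session can't be attributed
    domain = logged_in_account.rsplit("@", 1)[-1]
    return "allow" if domain in CORPORATE_DOMAINS else "block"
```

The same app (Google Drive, ChatGPT) gets a different verdict depending on which account is signed in — which is the distinction Tiago asked about.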
And so, part of what we were showing you there with all of those lineage connections: if you rename that object, if you change the file type, if you zip it up, if you try to use a different application, different users on different devices — we believe that context-based data lineage is the answer to your ability to understand what is happening on your endpoint and to track that data. Not only on the endpoint, of course, but into the cloud and back onto the endpoint as well. It's vital. So we think that's closing the gap. Merely seeing that data has moved is something we might be able to do just with our data-in-motion capability, or that you might see through a firewall or the audit logs from those services. Lineage is being able to look at the various elements of what happens to that data. The origin: how and where was that data created? Who created it, with which application, as which user, and what was its classification when it first appeared? Of course, in some of those cases — and I'm sure some of you are Microsoft users — if we think about things like MIP labeling, that would be the kind of thing that may also be on those documents, and we can use that inside Cyberhaven to add classification to those objects and then track them through lineage. The transformation element is another thing that causes problems for many systems out there. Was it edited? If it was, that changed the hash, and sometimes that's enough to mean that some tools can't see it. We saw it in the lineage examples there: just renaming it, zipping it, doing those kinds of things will, in a lot of cases, get you around traditional DLP technologies. Edits, copies, merges, renames, format conversions — lineage is tracking those derivatives. It's not just about the unique identity of that origin file; it's the content that was there.
It's also the intent that's there. As we talk more about how we've integrated AI into the platform: the additional context is based on the kind of intent of that document, not just the content. It's not just that we match strings — we're not just doing old-school DLP and content inspection policies. If we look at that intent and we understand what's in there, then that's all added into the lineage capability so that it allows us to track that stuff. And, of course, movement is important. Which channel did it go through? Was it in an email? Was it sent in Slack? Was it put into Drive or OneDrive, and how was it moved? All of those things are relevant. So lineage is about not just knowing what something is today, but where it was derived from, how it was renamed, the conversions. It's the foundation of our classification service inside the platform. We'll talk about provenance, sensitivity, and those context-based risks as well. Provenance is super important, of course, to allow you to say: hey, this thing was internal; it came from inside the organization. Our AI classification capabilities — I'll talk about those in a moment — are there specifically to help us do that. So lineage and context give you classification that works, as we were saying. We take that lineage — what happened to all of those things, how they moved through the environment — and relatively recently we've added capabilities to use AI, and the information we've got from lineage, to tell you about provenance. Where did this thing come from? It might have changed format five times. It may even be a screenshot of the thing that you really care about. But AI can help make sure that you link those two things together and track them inside the environment. And so provenance: is this internal? Was this partner data? Was this external data before it came in? You can't do that with traditional systems.
Lineage knows about the sensitive source that's there. Add in the AI capability that we've got to look at context as well as content, and provenance becomes something that you can really work with. Sensitivity is similar. It's fine to be able to say: hey, this thing is a PDF document, it's sensitive, or it was labeled as such. But if I were to go back in and edit that thing and remove the sensitive content, shouldn't I change the sensitivity of that object? Or if I pasted something in from another location, shouldn't the sensitivity change too? Again, context says that it's not just the fact that it came from internal or from a particular system, but that it came from the system and has these elements to it. Combining those things together says: hey, this is really important. Our system works really hard to combine the context we get from the scanning that we do immediately, as well as lineage and everything else we know about the object, to give you its real sensitivity. And then there's context-based type detection. Sometimes it's easy to suggest that I can just say no PDFs are allowed out of the environment, or no graphics files are allowed out of the environment. But, as the example says: a file moving from finance to an external cloud account following a resignation — is that different from the same file shared inside a Teams folder internally? Yes, it absolutely is. It's not just the file that's there; it's the event that occurred along with it. And that's the difference when using lineage: looking at events as well as content, and combining those two things to be able to say that under this context, this is a completely different kettle of fish.
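The "same file, different context" point above can be made concrete with a toy scoring function. This is a hedged sketch with made-up weights and category names — not Cyberhaven's actual risk model:

```python
# Sketch: the same file scored differently under different event context.
# Weights and categories are invented for illustration.

def risk_score(sensitivity: str, destination: str, user_flight_risk: bool) -> int:
    """Combine content sensitivity with the context of the movement event."""
    base = {"public": 0, "internal": 2, "confidential": 5}[sensitivity]
    if destination == "external_cloud":
        base += 3            # data leaving the environment
    elif destination == "internal_share":
        base += 0            # normal collaboration path
    if user_flight_risk:
        base += 4            # e.g. a resignation has been submitted
    return base

# Same confidential finance file: shared in an internal Teams folder,
# vs. pushed to a personal cloud account after a resignation.
normal = risk_score("confidential", "internal_share", user_flight_risk=False)
risky = risk_score("confidential", "external_cloud", user_flight_risk=True)
```

Identical content, very different scores — which is exactly why content inspection alone can't make this call.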
So: classification enriched with provenance, sensitivity, and context, applied to every object that we detect inside the environment. Using those AI capabilities, we get really great results with regard to false positives. I'll show you a Linea AI result in a moment, but when we apply Linea AI to those things, we get a fantastic reduction in false positives, leading you to the true events and the real things that you need to act upon. And, Yannick, I can see a question from you: "Are semicon-specific files also supported, like GDSII?" I actually don't know that file type, Yannick, which probably immediately suggests to me that it may not be. But the only time a file type is truly relevant to us is if we want to do content inspection — if you want to read it and look for a particular string or an element in there, that's when it matters to us. All of the other elements here are about being able to track the object through the context of its existence, its movement, the users that have touched it. So it doesn't mean we can't classify that object, because we can look at all of those other things. The last thing I'd say is: if those GDSII (or GDS2, I guess) files can be converted into something that can be scanned as an image or as text, then the possibility is there for us to support it in the platform. So, not a definite answer for you, but in general, lineage is there to make sure we don't have to be able to read the content. If we see that object and we can trace it through the environment, then we can apply a lot of our capability for contextual, lineage-based classification straight away.
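The answer to Yannick's question rests on one mechanism: a derivative can inherit its origin's classification through the lineage chain even when its own content can't be inspected (an unreadable binary format, a renamed file, a zip archive). A minimal sketch of that propagation, using hypothetical structures rather than Cyberhaven's data model:

```python
# Sketch: propagating classification along a lineage chain so derivatives
# inherit the origin's label even when their content can't be read.
# Illustrative names only.

from dataclasses import dataclass

@dataclass
class Node:
    name: str
    parent: "Node | None" = None
    label: "str | None" = None   # set only if the content was actually scanned

def effective_label(node: "Node | None") -> "str | None":
    """Walk up the lineage until a scanned ancestor supplies a label."""
    while node is not None:
        if node.label is not None:
            return node.label
        node = node.parent
    return None

origin = Node("client_master.xlsx", label="finance-confidential")
renamed = Node("client_master_t.xlsx", parent=origin)
zipped = Node("family_photos.7z", parent=renamed)  # unreadable, label inherited
```

The zipped file never gets scanned, but tracing its parents recovers the origin's classification — the contextual, lineage-based classification the talk describes.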
David, your question: if you have different data flow rules for different clients, is Cyberhaven intelligent enough to understand each client's data flow? Yes — the answer to that is yes. In our particular layout of the platform, every instance of the capability to look at the classification of content, whether contextual or content-based, is individual to each client. It only knows about the stuff from that client, unless we're teaching it with the kind of generic detection rules that we distribute to everybody. When we look at some of the classification rules — and I'll show you some examples in a moment — they're created by us and they're generic. You are able to modify them: either copy them and use them for yourselves, or add to them to make them very specific to your environment. But the detection engines and the AI capability are only using your information. So the answer is yes: if you have an environment for your company versus somebody else's company, the data doesn't flow between the two, and it's specific to your data. Another thing that we really think is important about lineage — and we mentioned the false positive reduction — is the acceleration of investigation: leading you immediately to the things that are interesting and relevant to you. I showed in those examples before a very large lineage — and, of course, I deliberately chose a very large one to show you all the different events that could occur. But think about a system that doesn't have lineage: every alert is created as a potential incident, and everything that you bring up is equally important in that way.
If any of you have used traditional forensic tools, you might be building a timeline yourself and creating those connections by hand. Lineage does that for you. It shows you the connection between all those objects; it shows you that it was named this before, and it was this user, and it was on this endpoint — and it does that immediately, to show you the contextual and historical reasons why this thing would be interesting. That alert is instantly contextualized with the history that is there. You see the origin, the transformations, the movements, and you can get a clear answer about whether it's a risky action or a normal workflow. And not only can you do that in your policies, which you can configure yourselves, but the AI capability in the platform is looking for the needles in the haystack: the truly out-of-the-box actions that we see, the things that are definitely not normal, the anomalous behaviors. The Linea AI capability is there to do that for you as well. In fact, there's a capability specifically for Linea AI to generate incidents for you, to say: hey, we found this thing, and it's totally not normal — Ben has never been to this website and never downloaded or uploaded content to it, or he's never had access to this location. We'll bring that up and set it, of course, as a higher-priority incident for you, and you'll be able to look at it straight away. So that's allowing security teams, as it says, to focus immediately on the incidents that matter. We also mentioned the difference between the local lineage that some vendors may say they already have, and the global lineage that we believe we provide. We can get local lineage for you on the endpoint: if a file moved from one folder to the next and was actioned by an individual app, we can see that.
Our endpoint will do that for you as well. But because we combine the Cyberhaven platform with that endpoint capability, plus DSPM and our cloud connectors, you can follow data not only on the endpoint but into the cloud, onto another endpoint, and through to some other location. That's exactly the lineage I showed before, where Cole and Tavia and Brian were all touching the same data across different endpoints and different cloud locations for the upload and the download. Tying all of those things together is what global lineage does, and it lets you answer contextual questions: something came from this location, was transformed three different times, and turned into something different, and we can still map it all together. So again: built from the endpoint up, not bolted on to a cloud product. Every endpoint has a user, and in the new world some of those entities may be agents, but a really big proportion of data breaches and data loss still comes from insiders. They don't look like attacks; they look like normal people doing normal work. From the medium risk, like the negligent employee or the temporary insider who may have access to something and then lose it, all the way up to the flight risk and the retaliator, as we call them. All of those are connected to the way they interact with data, the kind of data they have, and how they can move it around. So we think the endpoint is hugely important for picking those up: it lets us understand their normal behaviors and actions.
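As a rough illustration of the global-lineage idea above, consider stitching events from different endpoints and cloud connectors into one chain by following each event's source to its destination. The event shape and `trace` function here are invented for the sketch; the real platform's model is certainly richer.

```python
# Hypothetical sketch of "global lineage": events from different devices
# and cloud locations are linked src -> dst into one continuous path,
# mirroring the Cole/Tavia example described above.
events = [
    {"actor": "cole",  "device": "laptop-1",
     "src": "sharepoint://designs/spec.docx", "dst": "file://laptop-1/spec.docx"},
    {"actor": "cole",  "device": "laptop-1",
     "src": "file://laptop-1/spec.docx", "dst": "gdrive://shared/spec-v2.docx"},
    {"actor": "tavia", "device": "laptop-2",
     "src": "gdrive://shared/spec-v2.docx", "dst": "file://laptop-2/spec-v2.docx"},
]

def trace(origin, events):
    """Follow src -> dst links to rebuild the full path of one piece of
    data across endpoints and cloud locations."""
    by_src = {e["src"]: e for e in events}
    chain, cursor = [], origin
    while cursor in by_src:
        e = by_src[cursor]
        chain.append((e["actor"], e["device"], e["dst"]))
        cursor = e["dst"]
    return chain

path = trace("sharepoint://designs/spec.docx", events)
for actor, device, dst in path:
    print(actor, device, dst)
```

Local lineage would only see the hops on one device; joining the cloud hop in the middle is what turns three disconnected alerts into one story.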
Of course, the leavers and the joiners at either end of the employment life cycle are often slightly more risky than everyone else: they're new to the environment, or they're about to leave. The platform can help you differentiate there. We can mark specific users as such, and we have an option to elevate the risk rating associated with them, so if they're performing actions you wouldn't normally expect, we can pick them up and show them to you. We've talked about AI a little before, and I want to expand on why it makes such a big difference to what's happening on the endpoints. Shadow AI usage, again tying back to the question about differentiating corporate accounts from personal accounts: one of your users has a real preference for ChatGPT, but in your organization Copilot is the mandated AI. That usage is hard to regulate for, and of course you want users to do the best work they can, but picking it up is really difficult. Agentic workflows are autonomous or semi-autonomous, and they move data between devices so much faster than we've ever seen before. And some of the controls in your environment can't look at what's happening inside a prompt coming through HTTPS from a web browser or a thick client app, or the traffic never touches the cloud, so your CASB can't pick it up. What we think AI-native DLP looks like in practice is this: you must have visibility from the endpoint, and you must be able to see into the AI applications working there.
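The joiner/leaver point above can be sketched as a simple risk multiplier on a user's life-cycle stage. The thresholds and multipliers here are entirely invented to make the shape of the idea visible; they are not Cyberhaven's actual risk model.

```python
# Hypothetical sketch: elevate the risk rating of users at either end of
# the employment life cycle, as described above. Values are invented.
from datetime import date

def risk_multiplier(start, end_notice, today):
    """New joiners and announced leavers get an elevated risk rating."""
    if end_notice is not None:
        return 3.0  # flight risk: the user has given (or been given) notice
    tenure_days = (today - start).days
    if tenure_days < 30:
        return 2.0  # new to the environment
    return 1.0      # established employee

print(risk_multiplier(date(2024, 1, 2), None, date(2024, 1, 20)))            # 2.0
print(risk_multiplier(date(2020, 1, 2), date(2024, 2, 1), date(2024, 1, 20)))  # 3.0
print(risk_multiplier(date(2020, 1, 2), None, date(2024, 1, 20)))            # 1.0
```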
That means lineage-based enforcement. You can't just use content; it doesn't work. It won't tell you what's really happening inside those flows, and a lot of the time it isn't possible to scan content fast enough to keep up with the movement. So lineage-aware, context-aware enforcement is hugely important. Apply that to agentic workflows, which have massively expanded the size of your workforce: one human user may have ten, twenty, who knows how many AI or agentic users associated with them. We think it's super important that this is part of a platform you already have, one that was built to manage data risk and look at data movement. Endpoint visibility is the only constant available when agents control data, and that shift is already happening: from the user acting in the app, to acting via AI, to acting via an agent, so you need to connect those things together. One session inside an AI app, or inside a browser with an AI, may contact who knows how many MCP servers or tools, and if you don't know what they are, you've got no way of knowing what might be happening to the data. Those agents execute somewhere, and they execute from the endpoint, which is why it matters so much. And then there's semantic risk: understanding the content going into those things, both going in and coming out, because of course you need to know where it's coming out and going to afterwards. Lineage was built to do exactly that.
So take what we already have with Linea AI: using models to give you contextual awareness of what's in the environment, classifying the content, but also classifying the locations it's going to. We have a catalog of 800-plus AI applications, and we classify them based on what they do with the content, where they're hosted, who made them, and so on. It's super important to have that classification information available so you can make good decisions about what's happening. And the last part here is to say that while AI is part of the problem, and a big part of why you need control over your endpoint, it's a big part of the solution as well. I've mentioned Linea AI inside the platform a couple of times, and I've got small examples here. The top one is an AI-generated incident, where the AI looked at the activity and said that the user Dave Stewart from our environment, another member of the PM team, uploaded Japanese technical documents containing machine learning implementation details. Number one, the documents were in Japanese, and the language didn't obfuscate them from the platform: it could read and understand them, because large language models can do exactly that kind of thing. Then he took them and put them into his personal Gmail account. We were asked earlier whether we could differentiate personal from corporate accounts; in this case, Dave definitely put it into gmail.com under his personal account. The document, which comes from a particular dataset in our platform, in this case research and technical documents, was uploaded to the external webmail site. It was immediately picked up and set as critical, flagged in this case as both AI and policy.
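The "classify the destination, not just the content" point can be sketched as a catalog lookup, loosely in the spirit of the 800-plus-app catalog mentioned above. The catalog entries, field names, and risk tiers below are all invented for illustration.

```python
# Hypothetical sketch of an AI application catalog: each known AI
# destination carries attributes (hosting, data handling, sanctioned or
# not) that drive a risk decision. Entries are invented examples.
AI_APP_CATALOG = {
    "chatgpt.example":      {"hosting": "public", "trains_on_input": True,  "sanctioned": False},
    "copilot.corp.example": {"hosting": "tenant", "trains_on_input": False, "sanctioned": True},
}

def app_risk(domain):
    entry = AI_APP_CATALOG.get(domain)
    if entry is None:
        return "unknown"  # unclassified AI destination: treat with caution
    if not entry["sanctioned"] or entry["trains_on_input"]:
        return "high"
    return "low"

print(app_risk("chatgpt.example"))       # high
print(app_risk("copilot.corp.example"))  # low
print(app_risk("mystery-ai.example"))    # unknown
```

The useful property is that a policy can key off the destination's attributes rather than maintaining a brittle allow/deny list of hostnames.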
There was a policy covering it, but the AI actually ratcheted up the severity of the incident, saying, hey, this is super important; it's definitely something you don't want to occur. And at the bottom, I mentioned the classification capability. There is an AI classification capability: we put what we call Cyberhaven labels on objects when we see them in the platform. They're defined in plain English. These are ones we generate, but you can take them, test them, and tune them; you can add exceptions, definitions, or other refinements to the plain English definition the LLM uses to pick them up. So you can see there: reports detailing specific outcomes for security investigations, which is a security operations report, but it doesn't apply to these other things. Adding in, taking out, tuning them like that. Similarly for financial documents, and for billing and invoicing. We provide a whole group of default labels, but the system is there for you to create your own based on any set of characteristics you can describe, with the ability to test and tune. You can see the button there to upload a file and check whether your written English definition of the label matches what it should. Next, I'll show you something I think is going to happen more and more in many platforms as we go forward. In this case it's Claude open on my desktop, specifically Claude Code, with a capability we're working on in the back end called the Cyberhaven analyst plug-in.
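To show the shape of the test-and-tune loop for plain-English labels, here is a minimal sketch. In the product an LLM interprets the definition; here a keyword match stands in for the model so the include/exception mechanics are visible. The label names, fields, and matching logic are all invented for illustration.

```python
# Hypothetical sketch: labels defined in plain terms with exceptions,
# plus a classify() you can test against sample files. A keyword match
# stands in for the LLM that would interpret the real definitions.
LABELS = {
    "Security operations report": {
        "definition": ["security", "investigation"],  # any term matches
        "exceptions": ["marketing"],                  # any term excludes
    },
    "Billing and invoicing": {
        "definition": ["invoice"],
        "exceptions": [],
    },
}

def classify(text):
    text = text.lower()
    hits = []
    for name, rule in LABELS.items():
        if any(term in text for term in rule["definition"]) and \
           not any(term in text for term in rule["exceptions"]):
            hits.append(name)
    return hits

print(classify("Quarterly security investigation summary"))
print(classify("Marketing overview of our security investigation tooling"))
print(classify("Invoice #1042 for services rendered"))
```

The "upload a file and test" button described above is exactly this loop: run a candidate document through the definition, inspect the result, then tighten the definition or add an exception.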
It integrates with the platform's APIs to let me do tasks that would otherwise take multiple systems, or that would be difficult to achieve or represent in the UI of our platform, or frankly any other platform. In this case I've asked: which policies are ready to block? There's a skill included here called blocking analysis. It asks how far back I want to look; I said the last seven days. You can see it runs through a bunch of steps and says, we generated a report, let's have a look. And I get a blocking readiness assessment: we went through all of these policies, and you can see which are active, which are currently blocking, and which have "block" in the name but aren't actually blocking. I think we're all going to see that AI interaction, whether via MCPs or other capabilities in our platforms, is only going to increase over time, and it has to be part of the solution to the wider problems we have with security on our devices and infrastructure, specifically our endpoints. And that is actually the last slide in the deck for me. I know there's one more question, from Yannick, and we'll get to that in a moment. What I do want to say is: if you've got DLP and you want to see what Cyberhaven can do, or you need to look at what's going on with AI in your environment, you can do that directly from the website; there's an option to book a demo. And if any of you are already Cyberhaven customers, well, firstly, thank you for trusting and working with Cyberhaven.
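The blocking-readiness report described above can be sketched as a simple aggregation over policies and their recent matches. The data shapes, the seven-day window, and the "few matches means ready to flip" heuristic are all invented here to illustrate the idea, not the plug-in's actual logic.

```python
# Hypothetical sketch of a "blocking readiness" report: for each policy,
# summarize its mode and recent match volume, and suggest alert-only
# policies that look safe to switch to blocking. All values invented.
def blocking_readiness(policies, matches):
    rows = []
    for p in policies:
        hits = [m for m in matches if m["policy"] == p["name"]]
        rows.append({
            "policy": p["name"],
            "mode": "blocking" if p["block"] else "alert-only",
            "matches_7d": len(hits),
            # Heuristic: alert-only policies with low recent volume are
            # candidates to flip to blocking with little disruption.
            "ready_to_block": (not p["block"]) and len(hits) <= 5,
        })
    return rows

policies = [
    {"name": "Source code to personal webmail", "block": True},
    {"name": "Financial docs to USB",           "block": False},
]
matches = [{"policy": "Financial docs to USB"}] * 3

report_rows = blocking_readiness(policies, matches)
for row in report_rows:
    print(row)
```

An assistant-style integration would gather the inputs over the platform's APIs and render a table like this; the analysis itself is just a join and a threshold.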
And if you're looking at any of the other capabilities we've mentioned here, like AI classification or DSPM, we do have a jump start option going on at the moment where you can get in and look at what's happening: look at your Google Drive, look at your OneDrive. You're already protecting data in motion, and we can show you what's happening with your data at rest. So that was the last slide for me, and I know there's a question from Yannick: if classification in Cyberhaven is already being done automatically, would you still use Microsoft labels in SharePoint? It's a good question, Yannick, and the answer is: we can, if you would like to. Inside the labels capability, or inside policy and the definitions we have in the platform, you can apply them. So if you want Microsoft labels to be the final decision on what's sensitive, you can put them into policy and use them. In the very near future, the labels capability we've just talked about, which is based on what we see from DSPM scanning, will include third party labels, which would cover SharePoint and the like, and you can include those as well. So it would be totally possible to rely on your labels if you wanted to, or use them as an extra source of information. And in the not too distant future, you could use Cyberhaven to prove that your labels are being applied and used properly; go the opposite way around and understand what's really going on. So yes, if you want to, but it's not a requirement: Cyberhaven will do those things for you with our own labels, Cyberhaven labels as we call them, and we can additionally integrate the ones from Microsoft. Alright, and I think we are one minute over time. You're very welcome, Yannick; it was my pleasure. But we are one minute over.
Thank you everyone for joining. If you've got any more questions, please throw them into the Q&A now. Otherwise, I think we're at the end of the session. Thank you very much, and I wish you all a very good day. Thanks, everyone.