Video: Making Sense of Summer 2025 AI Trends: Agentic AI Edition | Duration: 48:52 | Summary: Making Sense of Summer 2025 AI Trends: Agentic AI Edition | Chapters: Welcome and Introduction (0:00), Introducing Agentic AI (1:35), AI Research Introduction (3:14), Agent Evolution Spectrum (4:34), Transforming Systems of Record (8:46), Data Access Evolution (14:55), Identifying Workflow Choke Points (18:23), Agent Ready Workflows (19:42), Agent Readiness Assessment (21:40), Orchestrated AI Ecosystems (23:55), AI Agent Orchestration (24:56), Orchestrating AI Agents (26:34), Orchestrating Agent Systems (27:35), Interoperable Agent Systems (28:58), AI ROI Evolution (30:26), Measuring AI ROI (33:06), Maximizing AI ROI (35:41), Q&A and Grammarly Pitch (38:52), Q&A and Conclusion (40:41)
Transcript for "Making Sense of Summer 2025 AI Trends: Agentic AI Edition": Alright. Hello. Hello. Good morning, good afternoon, good evening, everyone. Thank you all so much for joining. Wow, it is great to see so many people joining. The numbers are ticking up. People from all over the world. This is awesome. Alright. My name is Luke Behnke, and I lead our enterprise product management team here at Grammarly. And I'm excited to be your host for today's session on making sense of AI trends here in the summer of 2025, specifically the agentic AI edition of AI trends. Very excited to talk through that with you all and with our special guest. Before we get started (we'll get started here in just a minute), just a couple of housekeeping notes. There are a lot of people on this webinar. Don't worry, you're all automatically muted, and just so you're aware, there's no way to unmute. If you need to enable closed captioning, please click the CC button at the bottom of your webinar window, also pictured here. And we would love to hear from you throughout this session, so find the little Q&A widget down at the bottom of the screen, and we'll have people working behind the scenes to try to answer as many questions as we can during this webinar. We'll also do a brief Q&A at the end as well. And don't worry about taking notes. Maybe you're using your AI notetaker, but we'll also be following up with the recording of this session for all attendees, so you will not miss a thing, I promise. Alright, let's get started. So, what we're gonna talk about today: we did a version of this webinar, Tim and I, back in January, and it was one of our best-attended webinars in Grammarly history. I think Tim is a good draw. A lot of people were excited about that. We got such a positive response that we decided to do it again, and this time specifically focusing on agentic AI trends. 
Agentic AI is the word of the year so far in AI, I think, so a lot to cover there. Things are moving very fast. I think we're all aware of that, feeling like sometimes there's increasing complexity, things changing faster than we can all keep up with. And so our goal is to zero in on what really matters and give you the information you need to make smart decisions now and into the future. We'll specifically focus on three trends in agentic AI that we believe have the biggest impact on companies as we enter the second half of 2025. Hard to believe we're already entering the second half of 2025. These trends are: how AI will impact our current systems of record, like our CRMs, our ERPs, etcetera; how we start getting agents to work together; and lastly, how we're going to assess impact, and what those signals are. And so to help me unpack those three trends, I'd love to introduce you all to my cohost for today, Tim Sanders. Tim, super excited to have you on today. Do you mind introducing yourself to the audience? No problem. Great to be with you, Luke. Glad to be back. Tim Sanders here. I am head of AI research at G2. Many of you might know us as the marketplace where you go for software, a place where Grammarly rules the grids. I'm also an executive fellow at the Digital Data Design Institute at Harvard, which is the AI research and future-of-work lab on campus. I'm the author of several books, including Love Is the Killer App (and you live that ethos, Luke), and I am arguably one of the biggest techno-optimists for agentic AI and the effective accelerationism movement. Awesome. Tim, great to have you. We need more optimists these days, so I totally appreciate you. And great to have you here. So, you know, Tim, this is something we talked about last time, which we got a lot of positive response to. 
But this is sort of your grid, or gradient, for how we can think about this big question that everybody asks when they're trying to unpack agents: what's an agent? Can you just walk the audience quickly through this gradient? This is a really important insight. Agents are complex systems that can perform actions and have access to tools on your behalf to get things done in the real world. That being said, Luke, these days every company says, we have agents. Well, why not? It's the hype word of the last year and a half. Some people believe there's some threshold, that either you are or are not an agent, but I believe it actually is a gradient. And it looks like this. It starts with the lowest-performance agent, which means the most basic agent, a chatbot or copilot, and moves all the way to system. But let me give everyone an analogy for how to think about this. I want you to think of the continuum you're seeing on the screen as Waze, the app, to Waymo, self-driving cars. Okay? With Waze, you get behind the wheel in your car, you put the Waze app in place, and what does it do? It suggests turn-by-turn directions using tools like real-time maps to get to the airport on time, even in traffic. It's pretty amazing. Waymo is a different animal. You get in the back seat and, Jesus, take the wheel. Right? It is a transformative technology because the action is taken entirely by the agent. It's no longer a suggestion. And the reason it's so important to understand where we're going with this is that chatbots and copilots, think of them as weak agents, still add a lot of value. Many of you use things like ChatGPT all the time, and copilots, like Grammarly, your writing assistant, move you from A to B to C to D very effectively. In fact, I would argue that Grammarly has moved into the realm of intelligent assistants, meaning they are more dynamic. They are not brittle. They can actually handle edge cases, and they remember you and preserve memory. 
But what we've seen come to light over the last year are more task agents that actually perform a real-world action requiring access beyond their own tools to your tools. We're seeing the emergence of process agents that move across the lines in an organization, doing an end-to-end, multi-turn conversation or performing that type of action. But what we'll see in the next three to five years that will change everything are systems of agents. I want you to think of it like a company with departments. These will be tightly orchestrated by a master or senior agent. They will all operate autonomously but also connect with each other so as to swarm against your goals. In the future, you won't have to build an agent or improve the prompting. What you'll do is very clearly share your goal, and the agents will swarm with brute force to perform that goal. Now, the reason I'm showing you all of this is that agents have become a big deal for one very fundamental reason: they solve Parkinson's law. You can Google this. C. Northcote Parkinson, a naval historian, wrote a humorous essay in 1955 called Parkinson's Law. It later turned into a best-selling book. His theory, which has turned out to be true based on research at Microsoft and Stanford and a bunch of other places, is that human beings will always expand the work to fill the time they're given to turn in the work. And what that means is that even though a ChatGPT might make you 30% more productive writing code, it doesn't mean you're gonna write 30% more code. That's what we're beginning to see in some of these reports, like the ones McKinsey released: personal productivity does not necessarily translate to organizational value. However, agents are different. They don't take that extra time to work. They don't watch TikTok videos. They don't take a day off. They grind. You can actually predict that efficiency equals velocity. 
That's why the expected compound annual growth rate these days, I would say, is about 60% year over year for agents between now and 2032. Pretty amazing growth ahead. Wow. Thank you for walking us through that. I love the reference to Parkinson's law there. That is super true, although, you know, I will confess that maybe I can really feel that. But yes, I think that's really interesting, and I agree with you. I think there are some things, of course, that humans will continue to be uniquely good at, but there is this ability to, and you used the word, just grind through a task. Just, you know, read a million pages of text in a few seconds and come up with answers. No human has that ability today, and so to use these tools like superpowers is gonna be so important. And I also love that you sort of think about it as a spectrum. I hear so much these days about, oh, that's not officially an agent, that's not technically an agent. Yeah. But I like looking at it this way, because it doesn't really matter if it's an agent or not. If it is making you better at your work, making your company better at getting things done, I think it's a great sort of all-inclusive definition here. And I would say, Luke, that the problem with saying this is the agent, this is a fake agent, is you create change management issues. So now nobody trusts anybody. Okay? We should just say weak agent, strong agent. I like to call chatbots the Rob Lowe of agentic AI. Looks ripped, can't bench 80. Okay? So very impressive, but they can't actually perform an action in the real world. Doesn't mean they're not useful. I've used ChatGPT three hours already today. It's indispensable. So I think weak to strong, and they all count, because we're just trying to get people to change the way they work. Yep. Awesome. Did not have Rob Lowe agent on my bingo card for this webinar, but I love it, Tim. 
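[Editor's aside] The 60% CAGR figure above compounds quickly. A minimal sketch of the implied growth multiple, assuming seven compounding years from 2025 through 2032 (the rate is the one quoted in the talk; the math is just (1 + r)^n):

```python
# Implied cumulative growth from a compound annual growth rate (CAGR).
# The 60% rate is the figure quoted in the webinar; seven years is
# an assumption covering 2025 through 2032.

def growth_multiple(cagr: float, years: int) -> float:
    return (1 + cagr) ** years

print(round(growth_multiple(0.60, 7), 1))  # -> 26.8, roughly a 27x market
```

In other words, at 60% a year the agent market would be nearly 27 times its current size by 2032, which is why the growth is called "pretty amazing."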
Thank you for that. Alright, let's jump in to our first agentic trend this year in the second half of 2025. And this one's a really critical one, I think, which is an evolution in how we're thinking about how systems of record are going to change: not only how those systems themselves are gonna change, but how we have to rethink how we use these systems of record within our enterprise. So just to be clear, you can think of a system of record, and you can see all these three- and four-letter acronyms here, as where critical business data is housed. You can think of it like a database, of course, but there's usually a tool built on top of it. If I work in sales, I'm in my CRM every second of every day. If I work in HR, I'm in my HRIS. But of course, everybody in the enterprise is also in these systems from time to time, jumping between them to try to find critical data. And the way of the past, still the state of the art, really, was that we connected some BI layer, a business intelligence layer, across these tools to try to glean insights across all of these different systems of record. And that's in contrast to systems of engagement, where employees are actually spending their time every day, the places where they're actually engaging with this data. So clearly, as we think about what AI agents are really, really good at, systems of record as we use them today are going to change. Whether it's your CRM or your ERP, the past way of interacting with these is manually searching data, maybe porting data or creating data connectors between these systems, then trying to parse your own insights out of all of these systems. And it's, I think, pretty clear to all of us. And, Tim, I'd love your perspective on this. 
Like, how are AI agents actually gonna transform these traditionally static repositories of data, with all of these insights that are basically buried somewhere deep inside of the database? How can we turn these systems of record into systems of action? Yeah. Good question. Another cultural reference, not to play Professor Proton here, but when I think of a system of record, I think about it as the ultimate repository of accurate data that allows AI, up to and including agents, to hallucinate less and make perfect decisions. I heard a great example of how this works, Luke. Say you're getting ready to write a check, but you're not sure whether it's gonna bounce. You either go check your balance at the bank, or you ask your friends, how much money do you think I have in the bank? And they run inference. Now, some of them know you pretty well, but you may bounce the check unless you just check your own balance. That's sort of what a system of record is. But here's the problem. Right now, those letters you see on the screen are little prisons, little walled gardens that create switching costs for you to change from one provider to another, in a world of agentic orchestration where you wanna be agnostic to any particular CRM or any particular ERP. In fact, there are some people who believe that the system of agents we just showed on the last slide, Luke, will use Model Context Protocol, known as MCP, and just go scrape everywhere you have access to and compile your own proprietary system of record, which actually will have more data than any one of these squares on the screen. So I think with breakthroughs like MCP, you don't have to write an integration anymore. You don't even have to write a custom API. You express a goal, and like a heat-seeking missile, it organizes the data. That's the first thing we're gonna see. Generative AI continues to get better at structuring unstructured data. 
Now, some of the unstructured data is nested inside the systems of record, and some isn't even in a system of record yet. And as it organizes it, we'll actually have better repositories of data in the future. So everything you're reading now about hallucinations is only going to get better over time. But here's the final thing. Agents are gonna be very good at creating synthetic data from small seeds. Meaning, they're going to be able to create a sample of, say, customer activities in a CRM and scale that sample to the point where you can run optimization pre-launch, where you can run system-wide fixes and ship code without having to do a small rollout. I think the idea of only requiring a fraction of perfect data to create a large panel of actionable data is something we'll see for sure by 2030, if not the year before. I think all of these are fascinating developments. But the TL;DR here for everyone watching is: if you think you have to buy your agents from any one of these squares because they own your data, you will soon be free to roam about and organize. Yeah, I love that. And by the way, that doesn't mean these systems of record are going away. I actually think they probably become even more critical. And there is a big tension right now, I think, for a lot of these companies that own these acronyms: how much of our data should we make available to these MCPs or these other agents, and how much should we lock down? In fact, we've seen some vendors recently change their pricing model, for instance, around access to this data. And so I do think that's something to watch. Of course, they have the right to monetize this data, and maybe the seat-based model of the past isn't going to scale for them. But it is something to watch as a buyer. Is your CRM vendor, is your HRIS vendor locking down the data because they want you to use their proprietary AI? Or are they opening it up? 
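[Editor's aside] The "synthetic data from small seeds" idea Tim describes can be sketched very simply: take a handful of real records and expand them into a larger panel by resampling with noise. This is a toy bootstrap, not the generative approach an agent would actually use, and the field names and values are made up for illustration:

```python
# Toy sketch of scaling a small seed sample of CRM records into a
# larger synthetic panel via resampling with multiplicative noise.
# Field names and numbers are hypothetical.

import random

seed_records = [
    {"segment": "smb", "deal_size": 12_000},
    {"segment": "mid", "deal_size": 48_000},
    {"segment": "ent", "deal_size": 150_000},
]

def synthesize(seeds, n, jitter=0.2, rng=None):
    # Fixed RNG seed keeps the panel reproducible across runs.
    rng = rng or random.Random(0)
    out = []
    for _ in range(n):
        base = rng.choice(seeds)
        noise = 1 + rng.uniform(-jitter, jitter)
        out.append({"segment": base["segment"],
                    "deal_size": round(base["deal_size"] * noise)})
    return out

panel = synthesize(seed_records, 500)
print(len(panel))  # -> 500 records grown from 3 seeds
```

The point is the shape of the workflow, not the statistics: a fraction of trusted data becomes a large panel you can run pre-launch experiments against.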
And again, they have the right to monetize that in a way that's fair. I totally believe that, but I think it's really something for us to watch as we move forward. I think so. And in the recent G2 buyer behavior report, we surveyed 1,500 people just like you who buy software all the time. And over a third said, we wanna pay as we go, either based on consumption or based on outputs. We don't wanna pay a subscription anymore. We don't want one size fits all. It's not fair to us. And what's gonna happen, Luke, is as agents start to write software, we're gonna see agents writing CRMs on their own that are super competitive with some of the best CRMs in the world, and that's where leverage is gonna change from the seller to the buyer, and then the game's gonna change. And they're gonna have to let the data go, or they're gonna end up like AOL. Yep. Nobody wants to be there. Alright. Let's dig into, yeah, exactly, let's dig into a couple of action items then on this first trend. So I think something that, Tim, you and I talked about when we were preparing for this session, which I loved, is this idea of choke points. Really identifying, as you look at who's accessing various tools, various systems of record, what they're trying to do with them today, and where they might be hitting bottlenecks. Can you talk a little bit about how you think about identifying these choke points, such that you can then decide where you have zones of opportunity? I want you to think about all the tasks, or think about all the programs you deliver as a professional. Right? So I even think about myself as a researcher. I've got choke points. These are things that I can't seem to find enough time to do; if I could, I'd hire more people to help me with them. So I have a choke point in assimilating all the research going on in the world into a single document so I can have an informed point of view. That's a choke point. 
I have a choke point with social media production, around choosing a topic and having a rough script that stays in my voice. I have another choke point in the actual production editing of that content. All of those are situations where an agent can come in, solve the choke point, and increase my velocity without me having to hire headcount or expensive freelancer services. So in everybody's workflow, there are those things, and many of them are repetitive, that you just seem to have debt in. Think of those as choke points, and think of the agent as the thing that undoes the choke point through action. So that's how I think about it. It's a great way to kinda start with your goals, work backwards to your deliverables, taskify each deliverable, and then circle all the tasks that you just can't seem to keep up with. Yep. Love it. Alright. So the second action that I see here is starting with what I'm calling agent-ready workflows. So what exactly is an agent-ready workflow? Well, obviously, Tim and I are talking about some of the big opportunities that agents will be able to accomplish in the future, or near future, or maybe even today. Seeing what I've seen from talking to a number of our customers, and seeing how we're using things internally here at Grammarly: it's when you can define a repeatable workflow where there are clear steps. So one example of an agent we just spun up here at Grammarly is one that's helping us write our customer-facing release notes for the product. And that was something that we were doing with a bunch of humans. Slowly but surely, we were going through all of our commits in GitHub, we were going through all of our Jira tickets, we were then deciding which ones seemed customer-worthy, and then we were writing them up in customer-facing language. 
It was a very predictable set of steps with very predictable integration points, and that was one that just jumped off the page. Now we can just have the LLM decide which ones seem important enough to show customers. We did a lot of prompt iteration to make sure that works. And then the LLMs are great at reading all of the information on that given commit or ticket and then generating, in a customer-friendly way, a quick summary. Right? And so this was what I would call an agent-friendly, agent-ready workflow: repeatable, definable steps, one by one, that we slowly but surely converted into an agent. So I think that's where we can start. And of course, then we can go and think bigger and bigger and bigger from there. But I think that's one example of an agent-ready workflow. Yeah. I like that. I like that a lot, Luke. That's really clear. Let me give you another way to think about it. Sometimes there's a deliverable that the customer just appreciates you providing versus not providing, and they're willing to take something that might have a little jaggedness to it. So let's imagine that you've got an agent-ready workflow, like system cards for new models or, in this case, release notes, and there's a chance that an LLM might get a couple of details slightly wrong. Well, the customer will give you some feedback, and you'll make almost a real-time correction. So the blast radius is very small, but the retraction time is instant. It's milliseconds to fix the thing. But they're okay with that. When you see OpenAI release a model and put out a system card, which is the equivalent, filled with errors, and they fix it in real time, everybody's happy. If they don't put out a system card because they're too busy, everybody's up in arms. This is why, Luke, you see so many agents now in customer service: these companies weren't even responding to people. That's okay. 
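[Editor's aside] The release-notes workflow Luke describes (collect commits and tickets, filter the customer-worthy ones, summarize each) can be sketched as a three-step pipeline. Everything here is hypothetical: the data shapes, and the keyword check standing in for the two LLM calls a real pipeline would make:

```python
# Sketch of an "agent-ready" release-notes pipeline: fetch changes,
# filter the customer-worthy ones, then summarize each.
# Data shapes and the keyword stand-ins for LLM calls are hypothetical.

from dataclasses import dataclass

@dataclass
class Change:
    source: str   # "github" commit or "jira" ticket
    title: str
    detail: str

def is_customer_worthy(change: Change) -> bool:
    # Stand-in for the LLM relevance check ("important enough to show
    # customers?"); a real pipeline would prompt a model instead.
    internal_markers = ("refactor", "chore", "internal")
    return not any(m in change.title.lower() for m in internal_markers)

def summarize(change: Change) -> str:
    # Stand-in for the LLM rewrite into customer-friendly language.
    return f"{change.title}: {change.detail}"

def build_release_notes(changes: list[Change]) -> list[str]:
    return [summarize(c) for c in changes if is_customer_worthy(c)]

changes = [
    Change("github", "Add dark mode", "Dark mode is now in settings."),
    Change("github", "chore: bump build image", "Internal build update."),
    Change("jira", "Faster sync", "Document sync is now twice as fast."),
]
notes = build_release_notes(changes)
print(notes)  # the "chore" commit is filtered out; two notes remain
```

The structure is the point: because each step is definable and repeatable, the human steps could be swapped for model calls one at a time, which is exactly what makes the workflow "agent-ready."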
People are glad to get a response. So let me tell you where you wouldn't use an agent, and that's billing. That's different. Okay? If the agent gets billing wrong, people go crazy sauce. So you'll see that being one of the things we do last, when we are absolutely confident that the data is perfect and that the hallucination rate is less than one-tenth of one percent. Because remember, you multiply the hallucination rate for every step the agent has to take. So some workflows are ready merely because the pain is so great that the recipient is happy for it to be a little jagged. And I don't know if you've heard this phrase, the jagged edge of intelligence, that I've been thinking about, Luke. Last week, I was on the road, and we were talking to one of our Brazilian customers, and they wanted me on the call to explain string theory. So I got ChatGPT o3-pro to write a perfect Portuguese explanation of string theory. It was technically right. It was perfect Portuguese. But then later, I was home and I was gonna boil an egg. I hadn't done it before, to be honest. And I asked the same model, how long do you boil an egg? And it said, until it's nice and pink in the center. So it's not always right. Some things may be wrong; you just have to learn where errors are acceptable. The Brazilian customer was so impressed, but if I had boiled the egg until it was pink, dinner would not have been served. So there you go. Wow. Interesting. Yeah. Exactly. Those are great examples. Alright, I'm gonna move us along here. We kind of already talked about the last actionable item, but really: pay attention to what your system-of-record vendors are doing. Are they closing down data? Are they monetizing it in different ways? If they are deciding to monetize data, are they still charging you for seats? 
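[Editor's aside] Tim's point about multiplying the hallucination rate per step is easy to quantify: if each step succeeds with probability (1 - e), an n-step workflow succeeds with roughly (1 - e)^n, assuming independent failures. The rates and step counts below are illustrative, not measured:

```python
# End-to-end reliability of a multi-step agent, assuming each step
# fails (hallucinates) independently at rate `per_step_error`.
# Rates and step counts are illustrative.

def end_to_end_success(per_step_error: float, steps: int) -> float:
    return (1 - per_step_error) ** steps

# A 2% per-step error rate looks small but compounds quickly.
for steps in (1, 10, 20):
    print(steps, round(end_to_end_success(0.02, steps), 3))

# At Tim's billing threshold of one-tenth of one percent, even a
# 20-step workflow stays around 98% reliable end to end.
print(round(end_to_end_success(0.001, 20), 3))
```

This is why a workflow like billing waits for a near-zero per-step rate while jagged-tolerant workflows like release notes can ship today.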
Anyway, my personal belief is that the companies that play best with this open ecosystem, the ones who jump on this trend, will be the system-of-record companies that compete with those new ones that are coming to try to disrupt them. So pay attention to that. Alright. Let's jump ahead now to our second big trend for the second half here in agentic AI, which is moving from isolated agents, agents that are good at one particular job, to an orchestrated ecosystem. And this is very real, though there's still a long way to go to make sure that these agents can actually talk to one another. But we are now two years, I guess, into the modern AI boom, over two years in, and I would say that most of our use is still very tool-specific. It's still very isolated. Maybe it's within one single vendor's AI. Maybe it's a chatbot where we do a lot of context switching, copying, and pasting. And what I'm really excited about as we look forward is AI agents that could be connective tissue: the ability to actually string together a number of these, like we have done for many years with workflow automation tools. But now these will be agents that, a, have the ability to do a lot more than our basic workflow automation tools, and, b, will be able to spend more time reasoning about how to solve a problem. You'll have to spend a lot less time setting up a perfect flowchart that works every time, and it'll be a different paradigm: really giving this super agent context about what you wanna accomplish, and then trusting that it will have the right resources to go get that solved. So, Tim, I'd love your explanation of how we should start thinking about this, and maybe just share more of how you're thinking about agent orchestration. 
I think orchestration is one of the most important themes of 2025, because when you think about it metaphorically, today's agents are kinda spread out across different businesses that bought them for different reasons. I think of it like buskers in a subway. You've got an electronic organ guy, you've got a couple of guitar guys, you've got a violin person. This is about the LA Philharmonic coming together to produce customer value as one, because we know that as we orchestrate these agents together around a common data architecture or common model paradigm, there are emergent properties that come out that none of us ever expect. Sort of like how GPT-3.5 shocked the world when it started writing code, Luke. That's what happens when you get this kind of orchestration. So it's an exciting future, but it is a very important time for information technology in particular to take the reins and not be the department of no, but be the department of better together. And I think that's what needs to happen this year. Yeah, I agree. Great topic. Let's talk about turning it into real action here. So how do you think about really making sense of this? Is this orchestration idea real, or is it still futuristic? And how would you break it down, Tim? I think it's real, and I think you orchestrate at the point of having more than a trio. So if you've got three different use cases for agents, you already need orchestration. It's one of those things you wanna get ahead of, not try to catch up with, because if you try to catch up, you might have eliminated trust. Trust is important, Luke. If people lose their trust in agents, either on the usage side, say, at the enterprise, or on the customer side, it's hard to get it back. In technology, we say that for agents, trust arrives by mule, but it leaves by Maserati. So get it right, and get it right early. Context is important. 
What is the goal, and what is all the data that is relevant to the goal? The main thing here is to be very rigorous about context, because you don't wanna use the general Internet that most of the chatbots use, because agents will be autonomous. I think we've talked about workflows a lot: really start with the deliverable and work backwards. And then, to your point, we need to think about the tools that connect everything together, as well as having access to the key tools we've already been using for a long time, to actually put things into the real world. Yep. Yeah, that's great. And on the number-two action item here, I really think about this layering approach. Right? Starting with one agent that solves some very specific problem, and then starting to think about how they can communicate with one another, how you can make the task more multifaceted, and really start to string these along into more complex decision making. So that's kind of the second trend here. And then, last but not least, similar to the trend before, is really thinking about interoperability. Right? Model Context Protocol, MCP, is starting to emerge as a bit of a standard. Google launched A2A. We at Grammarly have some thoughts about how all of this will come together in the future. But really thinking about not getting walled into a garden, and really thinking about tools that play nicely with others: that's what's gonna unlock the ability for developers, or individual citizen developers, to start stringing these agents together. That's right. And if they're all interoperable, you'll build your system of agents in the future. And, Luke, it'll sit on your balance sheet as an asset, not in your P&L as a cost. So the value of putting this all together as a puzzle is paramount. Yep. Alright. 
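[Editor's aside] The layering approach Luke describes, one specialist agent per problem, chained by a coordinator, can be sketched minimally. The "agents" here are plain functions standing in for real agents, and all names are hypothetical:

```python
# Minimal sketch of agent orchestration: a coordinator routes each
# step of a plan to a specialist "agent" and chains their outputs.
# The agents are stub functions; names and behavior are hypothetical.

from typing import Callable

Agent = Callable[[str], str]

def research_agent(task: str) -> str:
    return f"notes on {task}"

def writing_agent(task: str) -> str:
    return f"draft based on {task}"

class Orchestrator:
    def __init__(self) -> None:
        self.agents: dict[str, Agent] = {}

    def register(self, name: str, agent: Agent) -> None:
        self.agents[name] = agent

    def run(self, goal: str, plan: list[str]) -> str:
        # Each step's output becomes the next step's input:
        # the "layering" described above.
        result = goal
        for step in plan:
            result = self.agents[step](result)
        return result

orch = Orchestrator()
orch.register("research", research_agent)
orch.register("write", writing_agent)
print(orch.run("Q3 release notes", ["research", "write"]))
# -> "draft based on notes on Q3 release notes"
```

In a real orchestrated system the plan itself would come from a reasoning model and the agents would communicate over a standard like MCP or A2A rather than direct function calls; the chaining pattern is the same.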
Let's move on to our last trend here in the second half of 2025, and that's really an evolution of the ROI of AI. We at Grammarly have been talking about this for a very long time. As people deploy a tool like Grammarly, they ask us all the time, well, what's the ROI of this tool? And as a vendor, we've really had to prove that there is real ROI when you deploy Grammarly. So we've been thinking about this quite a bit. But I think even now, given all of this capability that is emerging, how we think about ROI is changing. Specifically, historically, Grammarly has said, here's how much time we save your employees, or here's how much more productive we make them. And then, to your earlier point about Parkinson's law, the question back is, well, what are my employees actually doing with that time? And so I think there is a big shift underway, Tim, and I know you have a perspective on this. I'd love to hear you explain your thoughts on how the ROI calculation is shifting in AI. McKinsey researched this, and they said that the opportunity for cost cutting is very small compared to the opportunity for accelerating an organization's velocity: putting out more products, acquiring more customers, being faster with GTM motions. You'll never be happy one-to-one with a substitute, so cost cutting is the attempt to shrink your way to greatness. The other problem, Luke, is if you approach AI, and specifically agents, saying we're just trying to save money, you're gonna have a lot of people internally who slow-roll adoption, because they're gonna worry it's coming for their job. The better way to sell this is the true way, and that is: you can do three x more in a year with agents than you've ever dreamed possible. Because when a company can grow without having to add tons of headcount, guess what? Everybody gets more bonus. 
Everybody gets more equity. No one's having to compete with an extra 5,000 people for the next promotion. The idea of increased personal scale is exciting, and I think it's a better way to look at it. Some people in academia that I talk to these days honestly think it's very premature to put ROI metrics on something as new as generative AI, and instead look at it as something where you're willing to invest to make the company better. If somebody asked me, what's the ROI of Grammarly? I'd ask, what's the ROI of writing? Well, you say, we get more leads with email. We convert more customers with marketing. It's like, yeah. And they boost that 30%, three x, 10 x, take your pick. I think that's how we should look at AI. Yeah. That's great. That was actually one of the most critical general business school 101 learnings I remember from early in my education: reducing cost is always going to have some fixed, marginal benefit on the business, but there is an unlimited ability to increase revenue. Right? And I think that's true about AI agents as well. Mhmm. And I actually really love your point that shifting the conversation away from cost cutting makes employees more willing to adopt a tool, when they don't necessarily think, oh, this is going to replace me over time. Which, by the way, let's call it out: it's a very real concern for a lot of people. Whether you're deep in this world or whether you've been hands off, I think all of us have a lot of questions about this moving forward. So I think that's a super great call out. Two other things quickly, and then we'll get on to our last slide and hopefully some time for Q&A. But I do wanna call out something that we have seen very clearly.
We've talked to thousands of customers about their use of AI tools, not just Grammarly but other AI tools. And what they all tell us is: are my employees using the tool? That is one of the most critical starting points. If you buy something, it looks great in a demo, and then it sits on the shelf, nobody's trying it, nobody's using it, that's usually a sign that it's missed the mark in some way. Maybe you haven't done enough internal enablement of your employees, but mostly it's an indication that that tool is not going to deliver ROI for you. And so we really believe that utilization of a tool is such a critical first step. Mhmm. Yeah. From there, obviously, you can start to measure the bigger picture, like, what's the impact on the business metrics that I care most about. But I really see utilization as one of the first critical upstream metrics, and from there the ROI usually comes out of it. I agree. And, you know, when you look at ROI in general, a lot of it's fuzzy math. You gotta know this and this and this and this, and that's how you get your money back. Utilization ain't fuzzy math. It's dollars and cents. It's like, 15% of your people are using it six hours a week. What's so fuzzy with that? So I think utilization is the crispest metric that we could be using right now to analyze our investments in AI. Yep. Agreed. Cool. Alright. Let's jump into some action items around measuring ROI. The first one that I think is super critical goes right after what we were just talking about: monitoring adoption hotspots. Where are you seeing uptake? Who is taking it up? Which departments? Which groups? What are they doing? Look for those trends and then double down. I think that is really important, just to find those spikes and understand where the value is already being felt. That's right. That's right.
And over-reward the early adopters who are showing the others the way. They don't need to just be called out in a meeting. Buy them a Corvette. I just can't tell you how important it is for enthusiastic power users to create a tipping point in their own working groups. Right now, a lot of them operate in the underground, because we act like they're lazy or something. So we as leaders have to play a part in this through the reward system. Agreed. Agreed. And I think that really gets at these last two points here, which are all about making sure you're not just focused on some small goal, but really raising the bar for employees. If you say, hey, the goal is to cut 10% of time off of this task, you might actually get the wrong results, versus if you say, we should really rethink how we're doing this completely, where you may see massive upside. It's just gonna take a little while to get there. And then the last point: you have to treat this like an employee education, learning and development opportunity. We have to train employees on how to use AI, how to become literate. We need our users within the organization, the early adopters, to become champions. This isn't just going to happen on its own. There's a lot written out there saying, well, AI is not gonna replace everyone. It's gonna replace those people not using AI. We have to help those folks get across the line. It's not just gonna magically happen. And, by the way, not everybody has the hours in their week to learn all of these tools, to completely upend all the things that they've maybe done for the last thirty years. It takes time and it takes effort, and it's on a manager and on leadership to bring people along for that ride. And, obviously, it's gonna have ROI for the company. I'll lay one little nugget on you here.
It's a new concept I've been working on, Luke. It's called learning effects. You've heard of network effects, right? Yep. Learning effects, and this is based on the study of complex math or foreign languages. Learning effects for AI says every new AI skill that you pack and stack on is worth more than the last one, but you learn it twice as fast as time goes by, because your brain is gonna adapt to it. Prompt engineering might feel hard. Later, you're gonna be vibe coding. Someday, you're gonna build your own agents with their own repos of data. You figure that out as a person, and research says you could double or triple your economic opportunity. So the value is there. We just have to ignite it in people. Amazing. Alright. Let's jump into some Q&A. Before we do, I would be remiss not to give a little bit of a pitch here for Grammarly. We have been leaders in AI for fifteen years. Yes, that's true. While generative AI is still new, we've been working with natural language models and machine learning for many years, and we really see ourselves as one of the first AI agents. You download Grammarly, you install it in your browser or on your desktop, and it just starts sitting right next to you, helping you write better, helping you create words from scratch, helping you improve the outcomes of your goals when communicating. And so we work kind of everywhere: 500,000 web and desktop applications, where, as I said, we follow you around and just assist you as you're writing. And we're currently trusted by over a third of the Fortune 500, for use cases all the way from helping teams with English as an additional language, like customer support teams, respond to customers faster, to marketing teams who generate content that sounds like them personally and sounds like their organization, and a number of other use cases as well.
So, if you are interested in connecting with someone from Grammarly to learn more about how Grammarly can improve your productivity and business outcomes with responsibly built AI tools, select yes right now on the poll you will see on your screen, and we would love to connect and see if we can help you achieve your business goals. Alright. Let's jump into the Q&A. Digging through the Q&A here, the first one that I think, Tim, it would be great to hear your take on is: how do you ensure data integrity and security when agents are accessing all of these sensitive systems of record that we've been talking about? You know, it's no different than robotic process automation, better known as RPA. It's about who your vendors are. Right? Your vendors are going to go through a vetting process, and the vetting process is gonna give you the comfort with giving them direct access to both your most sensitive data and your tools. So it's important for not only your IT leadership, but your info security leadership, to have a seat at the table at the beginning of the process. They need to coauthor it with the business as they implement agents. They should not be the last step once the business has conviction that they are in love with that vendor. Yep. Great. We have a question here, actually a number of questions, about what to do when something goes wrong. How are we actually gonna be monitoring these agents moving forward? I'll jump in on this one, because I think this is a super interesting topic and super interesting question. In the last decade of SaaS software, audit logging has been so important: the ability for someone in IT to go in and say, hey, something went wrong, or maybe there was an employee who went rogue, or we just had to check in on some activity. And SaaS vendors built out all this audit logging.
I am really excited, actually, and I think there are probably billion dollar businesses that will be built off of this, about really being able to understand what these agents are doing, where they're getting stuck, just like real employees get stuck today. You have a lot of rituals that you put in place to help unstick employees. Maybe those are one on ones or project reviews. We'll be doing the same thing with agents. We'll be looking at automations of these logs: when they're having trouble, when they're getting stuck, when they're frequently going awry on a certain task, and then helping to put guardrails around that, helping to unstick them. Or maybe a good old fashioned human's gonna have to go in and help actually solve the problem. So I do think it's a really interesting space to watch. I think for now, individual vendors like Grammarly, for instance, will be giving people ways to inspect their agents. But really, I'm excited to think about this agent orchestration ecosystem and how we'll be looking at what's happening across the business. And that's gonna help us feel more confident rolling these agents out on more mission critical tasks within the organization. Mhmm. On that note, actually, Tim, there are a couple of questions about people being uncertain about hallucinations. You mentioned this a little bit upfront, but there are a bunch of questions about how we'll deal with agentic AI hallucinations. Will agentic AI hallucinate in different ways than maybe we're used to from traditional GenAI chatbots? At least for right now, agents sit on top of large language models. The hallucination happens at the model layer, in what's called inference. To be technical, it's called test time. Right?
And hallucinations generally occur for two reasons. By the way, a hallucination is a confident mistake. Okay? That's what it is. And by the way, humans hallucinate a lot. We've hallucinated more since COVID, while the machines hallucinate 90% less year over year, but I digress. So the reason that hallucinations occur is because, one, they don't have access to the right data, so they have to look at general data or, quite frankly, fill in the blanks. And that's where your biggest gaps occur. Or two, they confabulate. And what that means is that the reasoning clashes internally, and that's why they might make something up, sort of like your grandmother says that so and so was not allowed on a plane because they were wearing a cross. And you say, that didn't happen, grandma. She says, I read it on Facebook. That can happen to an AI too. So if you solve data and you have reasoning in place, you're gonna dramatically reduce hallucinations. I find that if you're choosing the right use cases where there's a little forgiveness, you'll find that very, very soon, with the right data, the hallucination rate will be so low you'll hardly ever hear about it, but you'll hear a lot of happy customers saying they finally got services they weren't getting. Yep. And that's a good point. We like to think as humans that we're infallible or perfect. Yeah. But it reminds me of the criticism of Wikipedia many years ago, which was: oh, look, there are gonna be so many mistakes in there. And then they actually did a lot of independent studies that found the encyclopedia had a lot of mistakes as well. Right? We weren't perfect in the days of smaller groups of editors reviewing this content. In fact, Wikipedia might be more accurate than some of the encyclopedias that we grew up with. So it's a really good take.
And it also gets to this other question here, Tim, which I'd love you to answer: should we hold off? You mentioned a little bit earlier that a use case like, say, billing systems is probably not a great place to start, but are there good reasons why an organization should not adopt agentic tools at all? Of course, there are so many of them now available, but is there some reason why it wouldn't make sense for an organization to adopt an agentic AI tool? There are, actually. Number one, the C-suite's never used AI. They just talk about it. Very dangerous, because they're not gonna support you if anything goes wrong. So we gotta get the C-suite to buy in. They are the choke point for agentic AI adoption at most organizations. So that's number one. Number two is the company doesn't have its data act together. It doesn't even have a clean system of record, so it's gonna be very difficult for an agent to accurately operate. Now the good news, Luke, is that that's not the case at most organizations, and generative AI is getting better and better at cleaning up the data, but that would be the second reason. The third reason has to do with the use cases. Again, follow the pain, where making a little mistake is not nearly as big of a deal as not being able to provide the service that the agent's gonna provide. So let that be your guide. Oftentimes, we don't start at the business problem and work backwards. We start at what we're in love with and try to go find a solution for it. So be very careful about, as Jeff Bezos calls it, the one way doors of customer and market relationships, the things you can't walk back through. Use a two way door analysis: I can make a little mistake there, but, hey, we weren't getting to it before agents, and I think you'll be okay. Great. Alright. That wraps it up. We are out of time.
Thank you, Tim, so much for all of your insights today, and thanks to everybody who came. You can connect with us here on LinkedIn. We'd love to hear from you all directly, so you can scan these little QR codes to connect with us. But, Tim, thank you so much for coming today. Thank you, everybody, for joining. Any final words of advice, Tim, before we sign off? Stay tuned to Grammarly. You guys put on a heck of a party. Great talking to you, Luke. Awesome. Alright, Tim. Thanks so much for being here. Take care.