Video: What High-Performing CX Teams Automate (and What They Never Will) | Duration: 3072s | Summary: What High-Performing CX Teams Automate (and What They Never Will) | Chapters: Welcome and Introduction (5.12s), Fathom's AI Capabilities (142.225s), AI-Assisted Customer Support (204.3s), Automation and Trust (464.015s), AI Automation Strategy (553.98s), AI in Customer Support (828.63s), AI Adoption Journey (1319s), AI Adoption Strategy (1609.815s), AI Integration Practices (2085.745s), AI Tool Selection (2395.485s), Staying Informed on AI (2637.18s), Data Privacy Considerations (2849.285s), Conclusion and Farewell (3023.215s)
Transcript for "What High-Performing CX Teams Automate (and What They Never Will)":
Hi, everyone. Welcome. I'm Kevin, head of AI here at Front. Just a little bit of background: sixteen years ago, at my first startup, I worked frontline support and built that team, and I've spent the last ten years building AI for support teams. So this intersection of support and AI is one of my favorite topics. I'm thrilled to be talking to Alyssa today. She's the VP of support at Fathom and a support leader I deeply admire. Today we're going to be talking about how high-performing CX teams automate and, probably more interestingly, what they don't automate. I'm sure many of you are dealing with this automation dilemma: there's AI hype, and you're probably feeling pressure or expectations to do more with less. Our hope is that today's conversation about how Alyssa thinks about this will help you develop a sharper AI strategy that properly balances efficiency and customer experience. So with that, Alyssa, can you introduce yourself and share your journey to support leadership at Fathom? Absolutely. Thank you so much for having me, by the way. Really excited to chat on this topic today. As Kevin mentioned, my name is Alyssa Medina. I'm VP of support here at Fathom AI. I've been in the customer-facing world for about fifteen years. I started as a customer success manager, then went on to lead scaled customer success teams and customer implementation teams. Through that journey, I really fell in love with helping people, helping people see real results from whatever software I was supporting, and, of course, building teams that bring that same ethos into their work. What led me toward support, and what's kept me in support all these years, is really the breadth of opportunity that each conversation brings and the tangible impact that insights from my department can have on the wider organization.
I'm one of those folks who really nerds out about data, and support typically has the most data, which lends itself to representing the voice of the customer at that broad company level. A lot of companies see support as a cost center; I see it as a center of intelligence. And to bring us back to the main topic, AI has become an invaluable tool in making that magic happen. And can you share what Fathom does, and the trajectory of Fathom as a business and the support team over the last few years? Of course. Fathom is the industry's best AI notetaker. We record, transcribe, and summarize your virtual meetings so you can be as present as possible in your conversations. But the real power of our tool is in conversational intelligence: using AI to analyze information across all of your content, driving workflows that empower you to do your best work. Ultimately, we want to sit in the middle of your AI tech stack, powering any process you can dream of using your company's conversational data. What I've really noticed over the past year and a half that I've been with Fathom is how sophisticated the use cases have been getting, and how much our team has had to evolve their understanding of the AI space in general, how our users want to use this data, and the signals that tell us which product features are going to be most impactful for us to work on next. And so, kind of getting into our topic: when you hear automation and customer experience right now, what feels the most real versus overhyped? Oh, that's a great question. To me, automation and implementing AI is really about finding the balance between where AI is best positioned to empower people and where human effort is still going to drive the best outcomes. We're not trying to automate people out of jobs. We're trying to use AI to help them make better decisions. I think the whole "replace your team with AI" idea is what's overhyped.
AI should be the tool that helps your agents navigate support and augments their capabilities, but it shouldn't be replacing them fully. Yeah. So one of the things I think is most interesting is that support work is really complex. Right? I used to be in frontline support, and all the time we had to be looking at different systems and making decisions. I'm curious: at Fathom, how do you all bucket the types of work that support does? Yeah. So there are two buckets. There's what is safe to fully automate: things that typically are more structured, low-emotion, more deterministic workflows. Things with predictable inputs, predictable outputs, low emotional stakes. That's really where we're automating for speed. But on the other hand, we want to keep humans in the loop for things that require judgment. I consider that AI-assisted versus AI-driven. So yes, we are optimizing for speed, but also for human accountability. An example of that is in Front: we have AI help us draft outgoing support responses. AI will pull from our help center articles, draft the replies, etcetera. But I also rely on humans to take a look at that, make sure that it's accurate, that the tone is in the spirit of our company voice, validating edge cases that AI might not understand the nuance of. So AI gets us to the place where we're not starting with a blank slate, but it doesn't replace ownership. Another good example of where AI is a great assist or a great partner, but not completely replacing humans, is in voice-of-the-customer clustering. AI can help group themes like "my Fathom bot didn't join this call" or "the transcript from my call was inaccurate." But humans are in the best place to decide what happens next. Is this an area of product friction? Is this a customer education opportunity?
Is there a misalignment of expectations for our users around expected behavior versus actual behavior? So we're looking at AI as the tool that helps us cluster and flag signals, but then there's still the human layer on top of that to say: here's what's actually happening, and here's the opportunity for us to capitalize on to reduce friction for our end users. I think you're drawing some really interesting distinctions between assistance and automation. What do teams get wrong when they try to automate too early? Yeah. Three things. One is automating before understanding the true shape of the problem. I like to think that if you haven't had your team manually handle a ticket category long enough to understand the emotional layer, the edge cases, and the true root cause or root causes of that problem, you're probably not ready to automate it yet. And you might not actually get to a place where automation makes sense for particular use cases. For us, for example, being a meeting recorder, we'll get support tickets that say, "Hey, this meeting didn't record," or "Something went wrong." Underneath that, though, there's a plethora of things that could be happening on our side, on the user side, or on the meeting platform side. AI can help us dig into that and understand the paths for troubleshooting. But I don't think we'll get to a place where AI will be able to understand every single scenario in a way where we are confident enough that a human doesn't need to be in the loop. And that's okay. That's why we have a team of fantastic support agents. The second place, which I mentioned a little earlier, is those high-emotion moments: moments where a customer feels anxious or confused, where automation can read as indifference, versus a place where we really want to connect with users.
So we prioritize fast human ownership in cases where there can be that friction, because it's about trust at the end of the day. Over-automating those moments can protect efficiency and drive down overall time to resolution, but it erodes trust and retention, and that's something we do not want. And the last thing I'll say there is automating on top of messy knowledge. At the end of the day, AI is a computer; it's that classic garbage-in, garbage-out scenario. For us, when we first implemented things like a chatbot, it showed us opportunities where we really needed to tighten up our documentation and standardize terminology across our system so that the AI could learn from it and produce better outcomes for our users. And we look at things like bot success rate and resolution rate to know that we're headed in the right direction with that documentation. So you alluded to this. First of all, that was really helpful. And I think some folks feel pressure. Right? There's some crazy stat about how the majority of CEOs on public earnings calls will talk about how AI is making them efficient, and then they'll turn around and tell their teams, "Hey, about that, please get that going." So it's really useful to hear what you all have had to do. I'm curious how you think about what's safe to fully automate, what should be AI-assisted, and what should be purely human-led. Do you have a framework for this? Yeah. Coming back to some of my earlier points, what's fully safe to automate is, again... oh, I think we might have lost Kevin. Or we have a slide. I'm here. I'm here. Oh, fantastic.
So: things that are deterministic, have low emotion, and are publicly documented, while also keeping a path to a human. Our goal, especially with our chatbots and other user-facing AI interfaces, is to meet people where they're at. If you want to try to resolve something through our chatbot, fantastic. If you want to resolve something with a human, also fantastic. Our goal with the chatbot isn't necessarily bulk deflection; it's meeting users where they're at, in the ways they want to interface with our team and our content. So again: automating for speed, keeping humans for judgment. In terms of partial automation, we want to make humans faster and accountable to those outcomes. For instance, when someone comes in through our chatbot and says, "Hey, this meeting didn't record," or something with a similar keyword, we'll have the bot follow up and ask, "What was the name, date, and time of that meeting?" so that once it reaches the inbox, our reps can hit the ground running and reduce the replies to resolution overall. Yeah, that makes sense. And then what are the types of things where you say, we will just not automate this? You alluded earlier to emotionally charged events, but is there anything more there you want to talk about? Yeah. We have a few specific buckets that I'll talk through. One is escalations that carry risk toward revenue or our company's reputation. The second is apology and recovery moments, and the third is opportunities to build brand affinity and differentiate. So I guess I had three, not four. And how do you tell? In order to do this, practically speaking, you have to know that these things are happening, right, to make sure that they go to humans. How do you tell whether these conditions are being met? Yeah.
For things like revenue and reputational risk, we can look at the language and terminology that people use to share their feedback with our team. We've actually set up a rule in Front that uses AI to parse for words related to sentiment escalations. We worked with our own in-house AI team to come up with a prompt that helps us flag things where the user seems frustrated, or they say they want to churn, or other signals that tell us they are not having the best time they could possibly have with our product. We flag those proactively using that AI tag. We have a Zap set up in Zapier so that anytime that tag is applied to a message in Front, it sends that message to a Slack channel, and our team leads can proactively intervene and try to get ahead of any downstream issues. As we look into the future, I would love to be able to train our chatbot to recognize some of these signals earlier. And I know that's the direction Front is headed in, so I'm really excited to test that out. But for now, we're understanding a broad scope of potential sentiment escalations and trying to intervene before things get to a point where the user is even more frustrated than they need to be. Yeah, that makes sense. Thanks for sharing that example. For us, we're trying to keep this general, but I do appreciate the nice comments up front. One of the interesting things we talked about before was that you also see customer support as a way of driving brand affinity for Fathom, and part of that is keeping it human. So can you talk a little bit about what Fathom does in that respect? Absolutely. We've set up our support team to be on the receiving end of any replies to some of our automated emails.
As an example, for all of our new users, after they complete their first call with Fathom, we send them an automated email from me that just asks: how'd it go? Any questions or feedback? Is there anything we can help with? And our support team receives all of those replies and manually responds to each one individually. What I love about that is it's something we could so easily automate. Usually people are saying, "We loved it." So it would be very simple to set up an automation that just replies back, "We're glad to hear it." But when that email comes from a human, it creates this moment for people who have been conditioned to have really low expectations, and shows them what our philosophy is on what good support looks like, which is timely, effective, and empathetic. We get a lot of "I didn't think someone would actually reply to this" emails, which shows me that it's going a long way not only toward building brand affinity on the surface, but toward differentiating Fathom in the wider market. Because, again, people have been conditioned not to expect much from software support teams, and we want to show them how much we care. That's one meaningful but easy way we make that happen. Yeah, I really love that you're freeing up time with some of these automations so that your team and your company can actually build relationships with these end users. The other thing you've told me about, which I was honestly kind of shocked by, is that you all have the same SLAs for free users and paid users. Yeah. Can you talk a little bit about that and the rationale? Yeah. So we do have two different support queues, but really the only differentiation is the level of training that the agents in those queues have toward the different types of products that we support.
Just for context, we have a free tier of our product, a paid tier for individuals, and then two paid tiers for companies and organizations. And we want to make sure that everyone gets that white-glove experience, whether they're on a free account or a paid account. We have the same internal expectations for first reply time, ongoing reply time, and then lagging metrics like customer satisfaction. That holds us accountable to a really high standard, helps make sure that we're recognizing themes and patterns across every single user in our customer base, and, again, exceeds the market expectation when it comes to support. Yeah. And clearly, your choice to do that reflects a belief that it's going to have a positive impact on loyalty and on sentiment toward Fathom. Right? Absolutely. I see it in CSAT responses all the time. People will commend us on how fast we are even though they might be a free user. They commend us on how friendly our agents are. And my favorite is when people get called out by name. I make sure to comment on all of those individually so our agents know: I see you, this is amazing, keep up the great work. And it's so obvious to me, having interacted with them, that you and your team care so much about the users. I'm sure that when you actually use AI, you care a lot about what that experience is like. So I'm curious: what do you think are the most important guardrails when you actually deploy AI automation? Yeah. A line I've used in a few other talks is "AI can't suck." What I really mean by that is, when you implement AI, whether it's user-facing, meant to empower your agents, or meant to help you analyze mass amounts of data, it can't be worse than what a human is capable of, or else it's not doing its job.
So when I think about specific guardrails, it's making sure that we are QAing the system and not just our human agents; it's clear escalation triggers; and it's transparent disclosure and seamless handoff. What I mean by QAing the system is thinking about AI like a junior hire. It doesn't automatically get autonomy over talking to every single one of your users and providing every answer. It earns that autonomy over time, as you see the outcomes come to life and as it builds trust not just with your users, but with your team. If you're QAing humans but not AI outputs, you've just created a blind spot for your organization. Correct. Around clear escalation triggers, that comes back to some of those things that build trust. AI should route to a human automatically when detecting things like frustration language, billing disputes, data loss, or churn intent. If it's not set up to escalate those things, that's not intelligence; that's just introducing risk into your operation. And then around transparent disclosure: humans should know when they're talking with AI. It is so frustrating when you're on a chat and you can't tell if it's a human or not. So we try to make it really clear to our users, but also decrease the barrier to entry to talk to a human if that's the preference. Again, the goal of AI in support isn't fewer humans. It's higher-leverage humans and meeting people where they're at. Yeah. I really liked two of your points. One, making it super clear when you're talking to AI versus a human; people's frustration when that expectation isn't well met or clear is totally avoidable. And the other thing I really love about your approach is that you're able to care a lot about the customer experience and, by freeing up capacity through smart automation, actually invest in it. Yeah.
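The escalation triggers described here (route to a human on frustration language, billing disputes, data loss, or churn intent) can be sketched in a few lines of Python. To be clear, this is not Fathom's actual implementation; they use an AI prompt inside Front plus a Zapier-to-Slack handoff. The keyword lists and function name below are hypothetical stand-ins that just show the shape of the routing logic:

```python
# Hypothetical sketch of an escalation router -- NOT Fathom's real setup.
# Their pipeline uses an LLM prompt in Front plus Zapier -> Slack; this
# stand-in uses simple keyword heuristics to illustrate the routing shape.

ESCALATION_TRIGGERS = {
    "churn_intent": ["cancel", "churn", "switching to", "refund"],
    "billing_dispute": ["charged twice", "overcharged", "billing error"],
    "data_loss": ["lost my recording", "transcript missing", "deleted"],
    "frustration": ["unacceptable", "furious", "worst", "terrible"],
}

def route_message(text: str) -> dict:
    """Return a routing decision: escalate to a human with reasons, or bot-eligible."""
    lowered = text.lower()
    reasons = [
        category
        for category, keywords in ESCALATION_TRIGGERS.items()
        if any(kw in lowered for kw in keywords)
    ]
    # Any matched trigger means a human takes over; otherwise the bot may try.
    return {"route": "human" if reasons else "bot", "reasons": reasons}

if __name__ == "__main__":
    print(route_message("I was charged twice and I want to cancel."))
    print(route_message("How do I change my summary template?"))
```

A real version would swap the keyword lists for an LLM classification call and, per the "transparent disclosure" guardrail, tell the user explicitly when the handoff to a human happens.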
So if we rewind in time to before you started actually deploying AI, how did you lay out that roadmap? How do you decide where to start? Yeah. I'd say start with the safe areas that don't have external repercussions: helping your agents research solutions faster, analyzing support data for voice of the customer, things that users will feel the results of but won't be interacting with directly. I think those are the kinds of programs that help build that initial trust in the power of AI and in the guardrails you've set in place for it, and get people more comfortable with: okay, now how do we extend this to users in a way that is thoughtful, but also keeps in mind that not every user is going to love this? AI won't be able to solve everything. And our goal is, again, to meet people where they're at in their support journey. So that's really where I would start: places where you can prove the value of adding AI into your stack, and then iterating from there. And so, from a sequencing perspective, at Fathom, how did you actually sequence it? I know you all started working on voice of customer years ago, right? And since then you've adopted more and more. So can you talk about the sequencing? Yeah. I definitely see AI adoption as a maturity curve: initially dipping your toes in, starting to mature those systems, and then optimizing those systems. And when I think about it here at Fathom, and really at any company, it exists in three different arenas that can all be in different places in that journey. The first arena is user-facing AI: chatbots and any other systems that your users are directly interacting with. When I got to Fathom, we didn't have that.
So implementing Front's bot, especially after releasing it to a small group of users and getting positive initial returns, felt like an easy thing to commit to. The second arena I think about is: how do we empower our agents with AI? And we've made strides there as well. When I first got to Fathom, there was no AI tool specific to our support team. So we've done things like implement Notion AI as a source of research. Our entire internal knowledge base lives in Notion AI, and we've also hooked it up to a few of our Slack channels, like product releases and support questions, so that our agents can drive down that time to respond by finding the latest and greatest, understanding when things changed, and really the nuanced things that are captured by that internal documentation. In addition to that, we've been really excited about Front's AI features. Not sponsored; you did not tell me to say this. But things like suggested replies have been such an efficiency multiplier for our team. They're not just starting from scratch. They're not just starting with a macro. They're starting with a dynamic source of information that they can build on top of to drive down that time to respond. And then the third arena is voice of the customer. How are we using AI tooling to take all of the data that we're getting through Front, and through a few other channels where customers are sharing feedback, like G2 Crowd reviews, and use an AI tool to detect anomalies in real time, report on larger time horizons, share that with the teams around us, and also empower ourselves and other departments to do research within that user data? For that, we use a tool we love called Unwrap, which does all of that in one system. Yeah. I am a giant voice-of-customer nerd. Support talks to customers more than anyone else within the organization. Support talks to customers.
You know, at the moments they're trying to buy, use, and troubleshoot. So this should be gold for the organization. I'm curious: who is consuming this voice of the customer within the organization, and what types of decisions does it actually influence? Yeah. Everyone, ideally. Something I shared in my intro is that I really see support as a center of intelligence for the business. And I didn't come up with that; I heard it said by someone many years ago, and it really stuck with me. Because, as you said, support has the most data. And my job is to show the rest of the organization that this data should help drive decisions, from how we market certain features, to where our product invests next, to where people are craving more education content; the applications are really limitless. I feel lucky to work at a company that mutually sees the value in that data, but I also see my position as being an arbiter of that data: making sure that the right departments have visibility at the right times, that they're thinking about that data cohesively, and that the data we have in support becomes a nonnegotiable for prioritization and decision-making within the organization. Yeah, I love that. I think it probably also, selfishly for support professionals and leaders, makes support a lot more visible within the organization. Right? You start becoming strategic and, like you said, not a call center. Now, when you adopt AI and go through this transformation, there are other stakeholders. Right? The two biggest stakeholders that I think would be helpful to talk about are, one, your team and how they feel about the adoption of AI, and also leadership, right, your peers. So maybe let's start with your team.
How do you make sure that this adoption of AI feels like empowerment or assistance and not like an existential risk to their careers? Sure. I think we are in the midst of an AI revolution. I feel really corny saying that, but it's true. Not only do I work in AI, but I just see the practical applications of AI in the workplace and beyond. From my perspective, AI is no longer a buzzword or a nice-to-have. It is a table-stakes skill, alongside things like empathy and troubleshooting, that makes people better placed to be successful in the modern world of work. So my enabling my team to use AI is about empowering them to have the best outcomes and do their best work, but also giving them the skills they need to be successful today and beyond, at companies like Fathom and wherever they might land next. I really see it as us helping them continue to learn and continue to invest in their development as well-rounded people in the software space. Again, it's not coming for their job; it's helping them do their best job. Yeah, that makes sense. And then how do you actually get buy-in? How do you get feedback from your team? What role did they play in the actual adoption and choice of the tools you all use? Yeah. So we like to move fast and iterate fast as well. Whenever there's a new AI feature we have access to, whenever there's a new tool we're considering, I like to make sure that my agents' feedback is part of that evaluation process and part of that feedback loop. When it comes to change management, one of my core philosophies is that change should happen with people, not to people. And bringing them along when we're implementing new areas where we're using AI is not just about saying, "Hey, we have this tool now." It's about saying: we have this tool; here's how you can use it; help me understand where you're getting value from it.
Help me teach others around you where you're getting the most value out of this tool, and help create that working list of best practices. An AI tool that's not being used is not a helpful tool. And, again, the AI can't suck if people are going to adopt it. So having that really open feedback loop, being collaborative in the expectations and application of AI, but also understanding the edge cases where maybe AI isn't as helpful as we imagined it would be, and having a really open conversation about that, has helped drive adoption and has helped our team really understand the power of the tools they have access to. Yeah. There was this MIT study that made the rounds a few months ago about how ninety-five percent of AI pilots fail and don't make it into production. And one of the core learnings there was that when the team that actually does the work isn't involved in evaluating and deploying it, and it's more of a top-down thing, it just never works. Right? So I think what you're saying about having the team actually give feedback and learn where it's valuable matters. And what you were saying about this being a career accelerant for them, with adeptness at AI tools being basic literacy in the coming years, is also something we're seeing in our support team: more and more of the support agents are heavy adopters of AI, because they realize that whether they stay in support or move on to other parts of the organization, this is what it takes to be a successful professional. Right? Yeah. Not to be too hyperbolic, but I almost feel like AI literacy is at the same level as knowing how to type, where being able to use AI tools to your advantage, to help you be as well-rounded as possible in the ways you approach your work, is going to make you the most successful. Yeah.
And so we talked about your team. How about on the exec side? How did you lay out what you're planning to do? And any tips on helping your fellow leaders understand what your strategy is and how it's going? Yeah. I think I'm in a bit of a privileged position working at a company that exists in the AI space. There's almost an expectation that every department at our company is going to be using AI in one way or another to help elevate our work. So I'm starting a little bit ahead there. But I think the methodology I shared a little earlier, starting with things that will impact internal outcomes that customers will feel but might not interact with, is a great place to begin. Once you start seeing success there, that's when you can say: okay, we're ready to use AI to work with our customers more directly. Those are the baby steps that get you to a place where you feel like you're maturing in the three areas I mentioned: the customer-facing work, the agent-facing interfaces, and where that data can help through a voice-of-the-customer program. It's going to be different at companies where there's more fear or hesitation around AI. But I think starting small, proving value, and building from there is always a great place to start. Great. Cool. Alright. So just for folks: if you have questions, please leave them. I think there's a Q&A tab, so just leave questions there, and then we'll actually have a fair amount of time to address them. But I guess the last thing, to wrap up: you heard Alyssa talk about a bunch of things.
First of all, map out what your various workflows are and understand them. Then decide which ones are safe to automate, which ones AI can assist and accelerate, and which ones really should be escalated to humans, and build the system so that that routing is clean and nothing gets dropped. And then the guardrails were really interesting: making sure you have good disclosure, and making sure that you QA the AI experience the same way you would coach and train a junior agent. And from a roadmap perspective, start from places that are more internal-facing, and then build up to the point where you are actually automating more of the work with confidence. I couldn't have said it better myself. Yeah. I was just trying to replay our conversation. So, Alyssa, thank you so much. We will be here to answer some of the questions coming in, but you just have such clarity around how you make these decisions that I thought this would be a great conversation. Alright. So, some of the questions coming in: there's one question around how you handle shadow IT, or shadow AI. What the person asking means is that within organizations, people will just use AI tools. They'll literally have a ChatGPT window open, and your team members may just be creating these workflows on their own. So, a, did you see that happen? And, b, what did you all do to standardize it so that you don't have wildly different experiences if someone talks to one agent versus another? That's a great question.
I haven't thought about that before, but I do know that across our organization, yeah, we give our folks access to tools like ChatGPT. We don't necessarily have insight into how they're using them, but they do have access through our company. We do have guardrails around security, privacy, PII, things that could expose risk at our company, where we draw a hard line: don't put this through AI. But we don't have any distinct policies around how AI can and can't be applied to our work that I'm aware of. I do know there's a current effort across our business to understand how AI is being used in each department. This was actually one of our primary topics in a meeting last week. Our homework from that was to come up with a list of tools and how we're using them, and I believe the outcome is going to be consolidation to the extent that it makes sense. So, are there overlapping processes or tooling that we could be sharing across departments that we just didn't know about? But also helping each other identify opportunities to use AI differently, or in ways we hadn't thought of yet. So more to come on that. Yeah. Those are kind of my thoughts on it. I think in a previous conversation, you also talked about how y'all really share best practices. Right? So I think there's a way of getting ahead of that. You talked about Notion AI being connected to both your internal workflows as well as your Slack channels, right, your product update Slack channel. And if there's just clarity that, hey, this is the best way to handle this, and that's propagated across the team, that's probably why you're not seeing it as much of a problem on your team. Exactly. But, yeah, I definitely hear... oh, go ahead. Another example of that is we recently invested in Slack AI, so Slackbot's new AI features. Yeah.
And one of the first things our chief people officer did was post a thread where people can share the prompts they've been enjoying using with Slackbot. So we got everything under the sun, from "what were my wins this past month and quarter" to "based on what you know about me, what kind of animal would I be and why?" That's a silly example, but these are just ways we're trying to share best practices and insights across our organization so that everyone is getting the most out of the AI tools we have access to. So there's actually a question related to this, which is: how much of your AI tooling at Fathom is purchased versus built in house? And what's the rhyme and reason behind that? And if there's anything in particular you think has really been helpful, either purchased or built, folks would appreciate that, especially in a startup context. Yeah. So our internal AI team is more focused on the AI in our software versus building a suite of AI tooling for internal use cases. That doesn't mean I haven't gone to them with requests; they helped me with that sentiment escalation prompt and things like that. But when I think about tooling, it's the trade-off between: what sort of investment would it take from our AI team to bring this to life, what sort of maintenance would it require, how feasible is it to build this internally, and also, what is the cost? Like, what would that take away from the time they're able to invest in making our actual product better? So when I think about tools like Front and Unwrap that have built their whole model around being excellent at these things, it's the same reason people choose Fathom. It's not because we want to build an all-in-one, does-everything tool. It's because we want to be really darn good in the space we exist in. Right. And that's where I would see that differentiation.
I think building a proof of concept in house can be helpful for proving initial value, especially if you have the tooling, time, and commitment to make that happen. But otherwise, a lot of other companies think about this all day and can probably drive better results for you if you were to invest in them. For us, it's Notion for our internal knowledge base, along with Notion AI; Slack as our internal communication system; Front, of course, as our support ticketing system; and then Unwrap as our voice-of-the-customer consolidation and analysis tool. Great. I really like what you said, especially with Fathom being an AI-native business, about what the opportunity cost is. Right? Yeah. These days, especially with vibe coding, you can proof-of-concept anything very rapidly. But for something to work consistently, there's a lot of engineering that goes into making that happen. So the proof of concept is fast, but the dedicated work isn't. For us, building Front is our obsession. We will invest way more in our AI features than it would make sense for you all to do. Right? So, yeah. There was another question: if you were to start from scratch today, suppose you didn't have any AI deployed, what would be the first workflow you would start with? You've answered this in terms of which strategic areas you would start with, but are there specific types of problems your team spends a lot of time on that you would want to start with, like specific customer questions or specific support processes? Yeah. The question was just workflows, so interpret that as you will. I think the agent research use case is the one I would start with.
What I see at early-stage companies, companies that haven't invested in AI tools to help consolidate information, is that information is everywhere: not just in people's brains, but in Slack, in Google Docs, in Notion, in Front. Every system holds internal knowledge. And there isn't an easy way to consolidate that knowledge so agents have a single source of truth to understand what the most current information is, what's changed, what hasn't changed, and even use AI to say, how do I frame this externally in a way that is understanding and empathetic? That's where I would start: let's make sense of the potential chaos and help our agents decrease time to resolution by making that research step a lot more foolproof than it used to be. Yeah. That makes sense. Here's another question for folks starting off. There's obviously a lot of noise about agentic AI. How do you actually stay abreast, be educated about what's happening, so you can evaluate the noise appropriately and make good decisions? Oh gosh. There's so much noise. And I think the fear of a lot of support leaders is, am I in an echo chamber? Are the decisions I'm making at my company the same decisions that support leaders are making at companies like mine? Does my thought process translate outside of this context? It can be tough to keep up with every single potential breakthrough in the space. For me, I really like leveraging my network. One that I'm a big fan of is Support Driven, which is a pretty large community for support and support-adjacent practitioners, as I like to say.
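The "single source of truth" research step described here could be sketched as consolidating snippets scattered across systems and surfacing only the most recently updated one per topic. The source names, data shapes, and dates below are hypothetical, purely for illustration:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical consolidation of internal knowledge scattered across
# systems (Slack, Google Docs, Notion, Front). All data here is made up.

@dataclass
class Snippet:
    topic: str
    source: str      # which system the note lives in
    updated: date    # when it was last edited
    text: str

def latest_per_topic(snippets: list[Snippet]) -> dict[str, Snippet]:
    """Keep only the most recently updated snippet per topic, so agents
    see the current answer instead of stale duplicates."""
    best: dict[str, Snippet] = {}
    for s in snippets:
        if s.topic not in best or s.updated > best[s.topic].updated:
            best[s.topic] = s
    return best

notes = [
    Snippet("export", "google_docs", date(2024, 1, 5), "Old export steps"),
    Snippet("export", "notion", date(2024, 6, 2), "New export flow in Settings"),
    Snippet("billing", "slack", date(2024, 5, 1), "Invoices ship monthly"),
]
current = latest_per_topic(notes)
print(current["export"].source)  # notion
```

In practice a real tool would do retrieval and summarization on top, but the core value is the same: one current answer per question, regardless of which system it lives in.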
And because that community is so large, people are talking about these sorts of things all the time: validating, understanding, getting reviews of what's out there and real-life experiences with it, understanding the latest and greatest coming from all of the different tooling people might be using. I think just validating through your network is so invaluable. I'm also subscribed to a few different newsletters and try to stay on top of what's new in AI. But similar to when I used to work in Bitcoin a very long time ago, things seem to be changing every day. I think keeping a pulse on the AI-driven programs at your business, making sure through the results, data, and qualitative feedback that those loops are still working for you and the people using that AI, and also taking the time to validate what's happening in the market, are all ways you can feel like you're staying on top of what's going to be most impactful for you, given the scope of what AI is capable of and will continue to be capable of in the future. Yeah. Plus one on Support Driven. I think I joined their Slack group, like, ten years ago. And Alyssa's speaking at the leadership summit. When is that? Next month? Oh, it's later this month, March, in LA. I think there are still tickets left if folks wanna go. Alright. So, very last question: does the company have any sensitive or protected data that you'd need to shield from AI? And have you all thought about that? So at a product level, we allow every single user to opt out of AI training.
And all of the training we do stays siloed in house. That's the advantage of having an in-house AI team: the models and all of the back-end work are limited to us and not shared with external parties. So at a product level, that's something our users really care about, we really care about, and we make it really easy for folks to opt in or out of that particular option. We also make sure all of our agents go through security training, have a mutual understanding of what personal and private information means, and get basic training on things like GDPR, CCPA, and other privacy statutes that impact the work we do. And then there's also defining general guidelines and best practices. We let our team know what's acceptable to plug into AI systems and where things get into more of a gray area. All of it is in the interest of protecting our users' information and privacy, and making sure we're not putting ourselves or our users in a position to erode trust in the work we do and how we're implementing AI internally. Yeah. And I think those are great answers for how to deal with it in product and also from a human perspective, from a suggested-reply type of perspective. Actually, yes, defining what the data sources are, which knowledge sources you explicitly want to pull from, is important, and that's the type of control you'll want. That's a great point; it's not just about how we train humans. It's how we configure the AI in the first place and where we draw a line on what it does or doesn't have access to. And that lends well to one of my previous points: AI is helpful to get that initial draft in place, but there's always going to be nuance that needs to be human-led, and that's a great example of that. Yeah. Yeah.
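Drawing a line on what the AI does or doesn't have access to often comes down to an explicit allowlist of knowledge sources plus basic screening for PII before anything reaches a model. A minimal sketch, where the source names and PII patterns are hypothetical assumptions rather than any vendor's real configuration:

```python
import re

# Hypothetical allowlist of knowledge sources an AI assistant may pull
# from, plus rough patterns for PII that should never be sent to a model.
# Both lists are illustrative assumptions, not a real product's config.
ALLOWED_SOURCES = {"help_center", "internal_notion", "product_updates"}

PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),      # US SSN-style numbers
    re.compile(r"\b\d{13,16}\b"),              # long card-like digit runs
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),    # email addresses
]

def may_send_to_ai(source: str, text: str) -> bool:
    """Allow a snippet only if it comes from an approved source
    and contains no obviously sensitive data."""
    if source not in ALLOWED_SOURCES:
        return False
    return not any(p.search(text) for p in PII_PATTERNS)

print(may_send_to_ai("help_center", "Export runs from Settings > Data."))  # True
print(may_send_to_ai("crm_notes", "Customer phone call summary"))          # False
```

Regex screening like this is a blunt instrument; real deployments would layer it with the human training and opt-out controls described above rather than rely on it alone.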
I mean, I guess when you have AI generating drafts, the risk is lower. Right? And I think depending on the business, that may be the right place to land as opposed to full automation; it just depends on the problem space and the business. Well, Alyssa, thank you so much for spending time with us and answering all of these questions. And if y'all are interested in a lightning-fast meeting recorder, Fathom is incredible. I was not paid to say this, but I am always impressed. Okay. Alright. Well, thank you, everyone. Hope this was helpful, and we'll see you next time. Thank you, Kevin. I had a blast. Bye. Bye bye.