• The Hidden Problem in Support: Incomplete Cases

    By John Ragsdale, SVP Marketing, Kahuna Labs

    Every support leader will acknowledge the same issue: cases are not well documented.

    Despite clear guidelines, quality reviews, and ongoing coaching, most support tickets still fail to capture everything that actually happened during the troubleshooting process. This is not a new problem, nor is it one that organizations have ignored. In fact, many have invested significant time and effort attempting to improve documentation quality. Yet the problem persists.

    The reason is simple. It is not a discipline issue. It is a structural one.

    Where Troubleshooting Actually Happens

    A support ticket rarely reflects the full reality of how an issue was diagnosed and resolved.

    In practice, troubleshooting extends well beyond what is recorded in the case management system. Engineers collaborate in Slack channels, conduct Zoom calls and remote sessions, analyze logs in backend systems, and often rely on real-time experimentation within the customer’s environment. Critical decisions are made dynamically, based on experience, context, and evolving signals.

    What ultimately gets captured in the ticket is typically a partial summary. It may include references to actions taken—such as collecting logs or scheduling a call—but often lacks the detailed narrative of what was discovered, why certain steps were taken, and how those decisions led to resolution.

    From an organizational perspective, this creates a fundamental problem. The knowledge exists, but it is not captured in a form that can be reused.

    Why Case Completeness Matters

    Case documentation is not simply an administrative requirement. It is the foundation of institutional knowledge within a support organization.

    When similar issues arise, engineers rely on historical cases to guide their approach. A well-documented case can provide insight into which questions to ask, which diagnostics to run, and which signals are most relevant in narrowing down the problem space. It can significantly accelerate time to resolution.

    However, when key steps are missing, each new case becomes a largely independent exercise. Engineers are forced to reconstruct the diagnostic process from scratch, even when similar issues have been encountered before. This leads to longer resolution times, duplicated effort, and inconsistent outcomes.

    More importantly, it prevents organizations from scaling knowledge effectively. Instead of building a cumulative understanding of how problems are solved, knowledge remains fragmented and largely dependent on individual experience.

    What the Data Shows

    At Kahuna, we analyze historical support data as part of every proof of value engagement and generate a Data Quality Report. One of the key metrics we evaluate is the Completeness Score™, which measures how thoroughly a case documents the troubleshooting process.
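
    To make the metric concrete, here is a minimal sketch of how a rubric-style completeness score could be computed. The five elements, their equal weighting, and the scoring rule are illustrative assumptions for this article, not a description of our production methodology:

      REQUIRED_ELEMENTS = [
          "problem_statement",  # what the customer reported
          "diagnostics",        # logs and data collected along the way
          "steps_taken",        # actions attempted, in order
          "root_cause",         # what was actually wrong
          "resolution",         # the fix, and confirmation that it worked
      ]

      def completeness_score(case: dict) -> int:
          """Score 1-5: one point per documented element present in the case."""
          present = sum(1 for element in REQUIRED_ELEMENTS if case.get(element))
          return max(1, present)  # floor at 1 so every case receives a score

      # A case that records only the problem and the fix scores 2 out of 5.
      print(completeness_score({"problem_statement": "...", "resolution": "..."}))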

    Across millions of cases, the results are remarkably consistent. Only 15–20% of cases achieve a completeness score of 5 out of 5, indicating that they contain a fully documented, step-by-step resolution narrative. Nearly half of all cases score 3 out of 5 or lower, meaning they are missing critical elements required to understand how the issue was actually resolved.

    These findings highlight a significant gap between how support knowledge is assumed to be captured and how it actually exists in practice.

    Why Traditional Approaches Fall Short

    Most organizations attempt to address this issue through process improvements. They introduce stricter documentation standards, implement quality review programs, and provide coaching to support engineers.

    While these efforts are well intentioned, they do not address the underlying reality of how support work is performed. Engineers are primarily focused on resolving customer issues efficiently. Reconstructing every step of the troubleshooting process after the fact is time-consuming and often impractical, particularly in high-volume or high-pressure environments.

    Additionally, retrospective quality reviews are inherently limited. They typically cover only a small percentage of cases, and when gaps are identified, the information required to fill them may no longer be readily available. As a result, documentation remains incomplete, and the problem persists.

    From Individual Tickets to Reconstructed Knowledge

    If individual cases cannot be relied upon as complete records, then knowledge cannot be built from any single ticket. It must be reconstructed across multiple cases.

    This is where AI introduces a fundamentally different approach.

    Rather than treating support tickets as complete sources of truth, Kahuna AI augments them by reconstructing the full troubleshooting process. This involves integrating information from multiple sources, including Zoom transcripts, Slack conversations, backend diagnostic activity, Jira tickets, knowledge articles, and relevant product documentation.

    By assembling these elements, the system creates a comprehensive, step-by-step narrative of how each issue was actually diagnosed and resolved. This enriched dataset provides a far more accurate representation of real-world troubleshooting.
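
    As a simplified illustration of what reconstruction means in practice, the sketch below merges per-source event streams into a single chronological narrative. The source names, fields, and events are assumptions made for the example:

      from dataclasses import dataclass

      @dataclass
      class Event:
          timestamp: float  # epoch seconds
          source: str       # "ticket", "slack", "zoom", "backend", "jira"
          summary: str      # what happened at this point in the journey

      def reconstruct_narrative(*streams: list) -> list:
          """Interleave per-source event streams into one ordered story."""
          merged = [event for stream in streams for event in stream]
          return sorted(merged, key=lambda e: e.timestamp)

      ticket = [Event(100, "ticket", "Customer reports intermittent failures")]
      slack = [Event(160, "slack", "Engineer suspects config drift after the upgrade")]
      zoom = [Event(220, "zoom", "Remote session confirms a stale cache on node 3")]

      for event in reconstruct_narrative(ticket, slack, zoom):
          print(f"[{event.source}] {event.summary}")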

    Replicating Tribal Knowledge

    When augmented cases are analyzed collectively, patterns begin to emerge. Missing steps become visible, and the structure of effective troubleshooting can be identified.

    What emerges is not simply a collection of documented fixes, but a representation of how experienced engineers think. It captures the sequence of decisions, the conditions under which specific actions are taken, and the signals that guide the diagnostic process.

    This is often referred to as tribal knowledge—the implicit understanding that exists within experienced teams but is rarely documented in a structured way.

    By reconstructing and analyzing this knowledge, Kahuna AI transforms it into a dynamic Troubleshooting Map™. This map provides dynamic contextual guidance to support engineers, helping them determine the next best action based on the specific conditions of the case.

    Improving Completeness in Real Time

    In addition to reconstructing historical knowledge, Kahuna AI also improves documentation quality moving forward.

    The system calculates a Completeness Score™ in real time as engineers work on a case. When gaps are identified—such as missing details from a log analysis or an undocumented conversation—the system prompts the engineer to capture the necessary information before the case is closed.

    Organizations can also establish minimum completeness thresholds, requiring that cases meet a defined standard before closure. This approach not only improves documentation quality but also ensures that new cases continuously enhance the underlying knowledge system.
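
    A minimal sketch of such a closure gate, building on the hypothetical rubric above (the threshold of 4 is an assumption; each organization would set its own):

      MIN_SCORE_TO_CLOSE = 4  # hypothetical organizational threshold

      def can_close(case: dict) -> tuple:
          """Return (allowed, missing elements) for a closure attempt."""
          missing = [e for e in REQUIRED_ELEMENTS if not case.get(e)]
          return completeness_score(case) >= MIN_SCORE_TO_CLOSE, missing

      allowed, missing = can_close({"problem_statement": "...", "steps_taken": "..."})
      if not allowed:
          print("Cannot close yet; please document:", ", ".join(missing))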

    A Shift in How Knowledge Is Defined

    For many years, support organizations have treated the ticket as the system of record. However, tickets were never designed to fully capture the complexity of real-world troubleshooting.

    They provide fragments of the process, but not the complete picture.

    The implication is significant. Support knowledge does not reside in individual tickets. It emerges from patterns across them.

    As technology environments become more complex and customer contexts more variable, the limitations of document-based knowledge systems become increasingly apparent. The future of support knowledge lies in systems that can reconstruct and interpret the full diagnostic process, rather than relying solely on static documentation.

    Conclusion

    Incomplete case documentation is not simply a quality issue. It is a structural limitation of how support knowledge has traditionally been captured.

    Attempts to improve documentation through process alone have delivered limited results because they do not align with how support work actually happens.

    The opportunity lies in a different approach: using AI to reconstruct the full troubleshooting process, capture implicit knowledge, and provide contextual guidance for future cases.

    In doing so, support organizations can move beyond fragmented documentation and toward a more scalable, accurate, and actionable model of knowledge.

  • The Knowledgebase is Dead

    By Judith Platz, Field CCO, Kahuna Labs

    “The knowledge base is dead.”

    Not “evolving.”
    Not “due for optimization.”

    Dead.

    And if that makes you uncomfortable, good. It should.

    Because most support orgs are still pouring time, budget, and headcount into a model that’s actively slowing them down.

    Let’s be honest about what’s really happening 

    You’re producing more content than ever and resolving complex issues no faster.
    That’s not a scaling problem. That’s a broken model.

    We trained engineers to search… instead of think.
    Real troubleshooting isn’t “find the right article.”
    It’s: eliminate variables, test hypotheses, adapt in real time.

    Your “single source of truth” is lying to you.
    That beautifully written article?
    It works for one version, one config, one moment in time.

    Everything else? Edge cases.

    Static knowledge has a half-life measured in hours.
    But we still run it through weeks (or months) of reviews like it’s permanent infrastructure.

    You’re ignoring your most valuable asset.
    The real knowledge isn’t in your KB.
    It’s buried in thousands of resolved cases you’re not leveraging.

    Deflection became a vanity metric.
    Congrats, you automated the easy stuff.
    Now 80% of what’s left is complex… and your model wasn’t built for that.

    AI on top of a broken process just breaks faster.
    If your foundation is static documents, all you’ve done is speed up the wrong thing.

    Here’s the uncomfortable truth:

    • Support no longer has a content problem
    • It’s a context + diagnosis problem

    And the winners right now?

    They’re not writing better articles.
    They’re building systems that:

    • Learn from every case
    • Adapt in real time
    • Deliver the next best action (not a link)

    So before you greenlight another “knowledge base refresh” initiative…

    Ask yourself:

    Are we improving documentation…or actually improving resolution?

    Because those are no longer the same thing.

    Who’s ready to challenge this?

    To dig deeper into this topic, view the OnDemand version of a recent webinar I did with John Ragsdale, as part of our “Frontline Unleashed” series. Here’s a link:

    Frontline Unleashed: Death to the Knowledgebase

  • The [Excruciating] Need for an Ensemble of Agents

    By Chaitanya Potluri, Co-Founder, Kahuna Labs, and Gurmeet Singh Manku, Co-Founder and Chief AI Officer, Kahuna Labs

    A complex ticket arrives. The customer’s production environment is failing intermittently after a routine upgrade. On the surface it looks like a configuration issue, but the symptoms are ambiguous.

    What makes this case hard is that the answer depends on the full context: the customer’s specific implementation, their environment, the patches they’ve applied, the product version they’re running, and the history of prior issues on this account.

    The relevant knowledge is scattered across several places, for example:

    • A knowledge base article covers similar symptoms but was written for an older product version.
    • A resolved ticket from eight months ago followed a completely different diagnostic path and arrived at a fix that may or may not apply here.
    • An internal note documents a compatibility edge case for this product line that was never published to the KB.
    • Configuration-specific behavior that only surfaces under certain environmental conditions was discussed in an engineering thread but never shared.

    The engineer finds the KB article in twenty minutes. The rest stays buried.

    This plays out thousands of times a day across enterprise support organizations. Every instance means a longer resolution cycle, another frustrated customer, and another senior engineer spending time rediscovering what the organization already knew.

    Troubleshooting Requires More Than “Search, Then Generate”

    As enterprises have looked to AI for help with technical support, most have reached for RAG (Retrieval-Augmented Generation), the conventional approach for grounding language models in company data. The concept is straightforward: take a question, retrieve the most relevant documents, and feed them to a language model to generate an answer.

    RAG was designed for scenarios where the question is clear and the answer lives in a well-written document. But in enterprise technical support for complex products, KB coverage is structurally thin. The combinatorial space of product versions, customer configurations, and deployment environments is too large to document comprehensively. Products evolve faster than documentation, engineers resolve cases and move on, and the knowledge accumulates in ticket histories and tribal memory. In cases where RAG has been force-fitted into technical product support, the results have been consistently disappointing. The cases that drive escalations, consume senior engineering time, and frustrate customers stubbornly remain out of reach. Enterprise troubleshooting demands something fundamentally different.

    Support tickets are noisy, multi-turn conversations where the problem statement itself evolves as the engineer and customer exchange information. The initial description of “the system is slow” could point to a network issue, a misconfiguration, a version-specific bug, resource contention, or something entirely novel. A single retrieval pass commits to one interpretation of the problem before the problem is even fully understood.

    Beyond the query problem, a standard RAG pipeline produces one ranked list of results. A compatibility note competes for the same retrieval slots as a past ticket and a KB article. The ranking is based on semantic similarity to the query, not on whether the source type is appropriate for the kind of problem being investigated. And when retrieved documents disagree, as they often do across product versions and customer configurations, the language model is left to reconcile contradictions on its own, frequently producing responses too vague to act on.

    A Glimpse into How Experts Actually Solve Problems

    When we watched senior support engineers work through hard cases, patterns emerged. They do not follow a single retrieval path. They consult multiple knowledge sources, weigh them differently depending on the situation, and synthesize a recommendation that no single source contains on its own.

    They often start with official product documentation to establish expected behavior, then check past cases to see how similar symptoms were resolved in practice. They know past cases are noisy and incomplete, but they also know those cases contain real-world signals that documentation often lacks. They might cross-reference a compatibility matrix to rule out a known conflict, or recall an informal pattern from experience where a cluster of similar issues pointed to a root cause that was never formally documented.

    Each knowledge source has different structure, different reliability, and different relevance depending on the case at hand. The expert’s skill lies in knowing which sources to consult, how much to trust each one, and how to combine their insights into a coherent recommendation. This process can be studied, decomposed, and systematized.

    How Multi-Agent Architecture Works

    The architectural insight follows directly from how experts operate. If troubleshooting requires reasoning across multiple knowledge sources with different characteristics, the system that automates it should be designed the same way.

    A multi-agent architecture deploys a collection of specialized agents, each designed to extract troubleshooting guidance from a different type of knowledge, using techniques tuned to that knowledge type’s unique structure and noise profile. These agents work in parallel on the same problem. Each contributes its best recommendation along with a confidence signal that reflects how certain it is in its output.

    A coordination layer evaluates these parallel recommendations, selects the highest-confidence outputs, resolves conflicts between them, and synthesizes a unified response that draws on multiple perspectives.
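
    The sketch below illustrates this pattern under stated assumptions: three hypothetical agents (knowledge base, past cases, telemetry) run in parallel, each returning a recommendation with a confidence signal, and a simple coordinator keeps the confident outputs, best first. Real synthesis and conflict resolution are far richer than this filter-and-sort:

      from concurrent.futures import ThreadPoolExecutor
      from dataclasses import dataclass

      @dataclass
      class Recommendation:
          agent: str
          next_step: str
          confidence: float  # 0.0-1.0, the agent's certainty in its own output

      def kb_agent(case: str) -> Recommendation:
          return Recommendation("kb", "Check the v4.2 upgrade compatibility note", 0.55)

      def past_cases_agent(case: str) -> Recommendation:
          return Recommendation("past_cases", "Collect GC logs; a prior case with these symptoms was heap pressure", 0.80)

      def telemetry_agent(case: str) -> Recommendation:
          return Recommendation("telemetry", "Node restarts correlate with the failure window", 0.70)

      def coordinate(case: str, threshold: float = 0.6) -> list:
          """Fan out to every agent in parallel, then keep confident outputs, best first."""
          agents = [kb_agent, past_cases_agent, telemetry_agent]
          with ThreadPoolExecutor() as pool:
              results = list(pool.map(lambda agent: agent(case), agents))
          confident = [r for r in results if r.confidence >= threshold]
          return sorted(confident, key=lambda r: r.confidence, reverse=True)

      for rec in coordinate("intermittent failures after a routine upgrade"):
          print(f"{rec.agent} ({rec.confidence:.2f}): {rec.next_step}")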

    The key properties of this design matter in production:

    • Modular. Each agent can be improved or replaced without disrupting the rest of the system.
    • Debuggable. When a recommendation is wrong, the trace shows exactly which agent contributed it and the reasoning behind it.
    • Extensible. When new knowledge sources become available, whether a new documentation set, a new class of historical data, or a new product line, a new agent can be added without re-architecting the core system.

    Extensibility, however, introduces a real engineering risk: agent sprawl. Our philosophy at Kahuna is that every agent must earn its place, because each one requires dedicated engineering to build, evaluate, and maintain, and each adds surface area and technical debt to the system.

    Each agent needs a justification grounded in two questions: does the knowledge this agent handles have fundamentally different structure or noise characteristics from what others already cover, and what measurable fraction of problem scenarios will it address? This discipline keeps the number of agents in the system aligned with the true structure of the collective knowledge rather than letting them proliferate unchecked.

    Learning from Traces of Past Troubleshooting and Expertise

    One of the most valuable capabilities this architecture enables is learning from how experts have solved problems in the past. Resolved cases contain decision traces: the diagnostic paths engineers chose, the reasoning behind their steps, and the outcomes of those choices.

    It is worth noting that these traces vary enormously in quality. Some tickets are thoroughly documented with clear reasoning and confirmed resolutions. Others are incomplete, outdated, or low-credibility because the customer never responded to confirm the fix, or because the workaround was applied under time pressure and never validated.

    A case from two years ago on a deprecated version carries different weight than a well-documented resolution from last quarter on the current release.

    A multi-agent architecture can assess the quality of these decision traces and weight them accordingly. Rather than treating all past cases as equally valid retrieval candidates, the system can evaluate credibility, completeness, and relevance before incorporating historical expertise into its recommendations. The result is that the accumulated judgment of an organization’s best engineers becomes a structured, quality-weighted asset rather than an unfiltered archive.
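
    As an illustration, a trace weight could combine these factors multiplicatively with an exponential recency decay. The formula and the one-year half-life are assumptions for the sketch, not our production scoring:

      import math
      import time

      def trace_weight(credibility: float, completeness: float,
                       resolved_at: float, half_life_days: float = 365.0) -> float:
          """Combine per-trace quality signals into a single weight in [0, 1]."""
          age_days = (time.time() - resolved_at) / 86400
          recency = math.exp(-math.log(2) * age_days / half_life_days)  # halves each year
          return credibility * completeness * recency

      # A confirmed fix from last quarter outweighs an unvalidated two-year-old one.
      recent = trace_weight(0.9, 1.0, time.time() - 90 * 86400)
      stale = trace_weight(0.5, 0.6, time.time() - 730 * 86400)
      print(f"recent={recent:.2f}  stale={stale:.2f}")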

    Implications for Support Organizations

    The practical implications of this architecture center on the quality and trustworthiness of AI-generated recommendations:

    • Measurable accuracy. Because each agent specializes and produces confidence signals, the overall system’s accuracy can be measured and improved at the level of individual agents. When a recommendation misses, the organization can identify where and why, and improve that specific agent rather than trying to tune a monolithic pipeline where failures are opaque.
    • Independent calibration. If a particular knowledge type is producing lower-quality outputs for a class of cases, it can be adjusted without destabilizing the system’s performance on other case types.
    • Transparent reasoning. Support engineers receiving AI-generated troubleshooting steps can see the reasoning behind them. This changes the dynamic from an opaque suggestion to a transparent recommendation that an engineer can evaluate, trust, and act on with confidence.
    • Pluggable growth. New product lines, new documentation sources, and new types of historical data can each be onboarded by adding specialized agents rather than re-engineering the existing system. The architecture adapts to the complexity of the knowledge landscape rather than forcing that landscape into a single pipeline.

    At Kahuna, we are building our platform around this multi-agent architecture because these properties are what it takes for AI to earn a real place in enterprise support. The quality of recommendations, the ability to trace and debug them, and the confidence that the system can be extended over time: these are the requirements that the prevailing single-pipeline approach cannot meet.

  • Kai: Redefining Case Deflection with an AI Support Engineer

    By John Ragsdale, SVP Marketing, Kahuna Labs

    For the past two years, Kahuna AI has focused on helping support engineers solve the hardest, most complex issues faster. We built a Troubleshooting Map™ from real ticket journeys. We enabled structured diagnostics instead of guesswork. We focused on orchestration rather than reactive firefighting.

    But along the way, we discovered something equally important.

    A meaningful percentage of support volume never required a human engineer in the first place.

    Our Data Quality Report, based on two years of historical ticket analysis, consistently shows that up to 25% of support cases are fully self-serviceable—meaning every required action was performed by the customer, and the support engineer only played a consultative role.

    Yet those cases still enter the queue. They still consume time. They still create backlog.

    Why?

    Because traditional self-service tools were never designed to troubleshoot. They were designed to retrieve information.

    Most of today’s “AI-powered” self-service solutions rely on RAG search. They synthesize help articles, surface documentation, and return links. That works reasonably well for answering questions. But resolving a technical issue—especially in a complex, configurable enterprise product—is not the same as answering a question.

    Troubleshooting requires structure. It requires context. It requires asking the right next question based on the previous answer. It requires understanding which actions are customer-doable and which require an engineer. It requires continuously evaluating whether the issue is still self-serviceable.

    Search alone cannot do that.

    And that’s why deflection rates plateau.

    What We Learned from Auto Resolve

    We’ve already proven this internally with Auto Resolve.

    When a ticket enters the system, Kahuna AI evaluates the issue against the product’s Troubleshooting Map™. If the forward-looking path shows that every required step can be performed by the customer, Kahuna AI takes ownership of the case.

    It doesn’t simply send documentation. It runs the diagnostic journey.

    It asks probing questions. It gathers required information. It recommends structured next steps. It collects inputs that an engineer would otherwise have to chase down. And if the issue evolves into something that is no longer self-serviceable, it routes the case transparently to the correct engineer.

    No wandering. No blind escalation. No wasted Level 1 cycles collecting data that should have been gathered up front.
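
    To make the gating decision concrete, here is a minimal sketch under the assumption that each step in the forward-looking path is labeled as customer-doable or not:

      from dataclasses import dataclass

      @dataclass
      class Step:
          action: str
          customer_doable: bool  # can the customer perform this without an engineer?

      def owner_for(forward_path: list) -> str:
          """Hand the case to AI self-service only when the whole path is customer-doable."""
          if forward_path and all(step.customer_doable for step in forward_path):
              return "auto_resolve"
          return "support_engineer"  # route transparently the moment a step needs one

      path = [Step("Verify connector version", True), Step("Regenerate the API token", True)]
      print(owner_for(path))  # -> auto_resolve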

    Auto Resolve demonstrated something critical: a significant portion of support volume doesn’t require human expertise—it requires disciplined execution of a known diagnostic path.

    That insight led directly to Kai.

    Introducing Kai: The AI Support Engineer

    Kai brings the intelligence of Auto Resolve directly to your customers.

    Instead of interacting with a chatbot that retrieves knowledge articles, your customers engage with an AI support engineer that understands your entire Troubleshooting Map™—the same intelligence that powers Kahuna Navigator and Auto Resolve.

    Kai actively troubleshoots.

    It asks multi-turn diagnostic questions in context. It evaluates the customer’s configuration, environment, product version, and history. It gathers structured inputs from your product backend using APIs and MCP servers. It determines whether the issue remains self-serviceable or requires escalation. And when human intervention is needed, it makes that transition explicit rather than letting the customer drift in frustration or waste time in self-service on a problem they can never solve on their own.

    Kai also remembers every issue in your historical data where customers performed all the required steps themselves. That full universe of self-serviceable scenarios becomes live, contextual intelligence—not static documentation.

    This is not another answer engine.

    It is a structured diagnostic system delivered through a conversational interface.

    Redefining What “Deflection” Means

    For years, the industry has measured deflection by how many questions can be answered without creating a ticket.

    We believe that definition is incomplete.

    Deflection should be measured by how many issues are actually resolved without engineer involvement.

    There is a meaningful difference between giving someone information and guiding them through resolution. The former reduces friction slightly. The latter eliminates work entirely.

    Kai addresses the biggest failure mode of traditional self-service: customer misdirection. Instead of sending customers into a maze of links and hoping they follow the correct path, Kai continuously evaluates:

    • Is this issue still self-serviceable?
    • Do I have enough diagnostic confidence?
    • Should a human be brought in now?

    Kai is self-aware of its confidence level and defaults to a human when appropriate. That transparency protects the customer experience and prevents the trust erosion that can happen when automation overreaches.
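
    A minimal sketch of that per-turn evaluation, with hypothetical thresholds:

      def next_move(still_self_serviceable: bool, confidence: float,
                    min_confidence: float = 0.7) -> str:
          """Re-evaluated after every customer reply, not once at intake."""
          if not still_self_serviceable:
              return "handoff: the path now requires an engineer"
          if confidence < min_confidence:
              return "handoff: diagnostic confidence too low to continue"
          return "continue: ask the next probing question"

      print(next_move(True, 0.85))  # keep troubleshooting
      print(next_move(True, 0.40))  # default to a human, transparently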

    The result is not just incremental improvement.

    It is a higher ceiling on deflection, lower repetitive ticket volume, and more capacity for engineers to focus on genuinely complex issues—the problems that truly require human judgment.

    Why This Matters Now

    Until now, much of our messaging has focused on solving complex tickets faster—and that remains a core strength of Kahuna AI.

    But solving complexity is only half the equation.

    The other half is eliminating the work that never should have reached an engineer in the first place.

    As AI capabilities mature, customers will expect more than search. They will expect diagnosis. They will expect guided resolution. They will expect support systems that understand context—configuration, environment, telemetry, version—not just keywords.

    Kai is the next evolution of Kahuna’s platform and the first step toward a future where AI is embedded directly into the product experience, providing proactive and reactive guidance as customers navigate your software.

    Not just answering questions.

    Driving outcomes.

    The Bottom Line

    Up to 25% of your historical support cases were already self-serviceable. Traditional chatbots cannot capture that opportunity because they were never designed to execute diagnostic journeys.

    Kai can.

    This is not about marginal improvements in search accuracy. It is about redefining what autonomous support actually looks like.

    For more information about Kai and how it can transform your self-service strategy, contact us at info@kahunalabs.ai.

  • Support Services in 2026: Finally Moving From Reactive Function to Strategic Engine

    By Judith Platz, Field CCO, Kahuna Labs

    For the last decade, support organizations have been telling themselves the same story:

    “If we just answer faster, deflect more, and close tickets cheaper, we’ll win.”

    That story is tired. And worse, it’s misleading.

    Not because speed, efficiency, and automation don’t matter. They do. But because in 2026, they no longer differentiate. Everyone can automate the obvious. Everyone can deflect the easy work. Everyone can shave seconds off response times.

    What separates average support teams from elite ones is no longer how fast they respond; it’s how intelligently they operate under complexity.

    As we head into 2026, support organizations face a stark reality:

    They will either be overwhelmed by rising complexity
    or
    They will become one of the most strategic capabilities in the enterprise.

    There is no middle ground.

    Here are my Six in ’26 predictions for where support is headed and why the shift is already underway.

    Prediction #1: Tickets Stop Being the Unit of Work

    In 2026, high-performing support organizations will stop managing work through tickets.

    Tickets are artifacts, not the work itself.

    The real work of modern support is:

    • Understanding customer context
    • Diagnosing across products, configurations, and environments
    • Coordinating expertise and action
    • Making high-confidence decisions under pressure

    The ticket is just the container. The work happens elsewhere.

    In 2026, the true unit of work becomes the decision flow, not the ticket.

    AI systems will assemble context before a human ever engages:

    • Product state and versioning
    • Customer journey and history
    • Configuration and telemetry
    • Prior resolution paths
    • Known risks and confidence levels

    Support engineers won’t start from zero. They’ll start mid-decision.

    Implication:
    Organizations still running support through queues, handoffs, and SLA dashboards will move materially slower than competitors operating with orchestration layers designed around decisions, not tickets.

    Prediction #2: The Support Engineer Role Splits in Two

    The traditional Tier 1 / Tier 2 / Tier 3 model collapses under modern complexity.

    In its place, support engineering bifurcates into two distinct roles:

    Orchestrators

    • Navigate systems, AI insights, and decision paths
    • Manage ambiguity and customer trust
    • Decide when automation applies and when it doesn’t
    • Act as force multipliers, not task executors

    Exception Experts

    • Deep domain and product specialists
    • Handle truly novel, high-risk, high-impact cases
    • Feed learnings back into the orchestration layer

    This is not de-skilling. It’s value migration.

    Most engineers move up the value chain away from repetitive diagnostics and toward judgment, coordination, and system-level thinking.

    Implication:
    Hiring shifts away from pure technical depth toward judgment, systems thinking, and orchestration capability. Teams that keep hiring for yesterday’s role will struggle to scale.

    Prediction #3: Knowledge Bases Die; Intelligence Systems Replace Them

    Static knowledge articles cannot keep pace with:

    • Rapid product velocity
    • Massive configuration variance
    • Customer-specific environments

    In 2026, knowledge is no longer:

    • Document-based
    • Generic
    • Manually curated

    Instead, knowledge becomes:

    • Path-based, not article-based
    • Contextual, not one-size-fits-all
    • Continuously learned, not updated quarterly

    AI systems observe how problems are actually solved in the real world and then cluster, optimize, and standardize those paths.

    The most valuable knowledge is no longer what to do.

    It’s when, why, and under what conditions to do it.

    Implication:
    Organizations still asking SMEs to “write more docs” will lose institutional knowledge faster than they can capture it.

    Prediction #4: Escalations Become a Design Smell

    In 2026, frequent escalations won’t signal unavoidable complexity.

    They’ll signal poor orchestration.

    High-performing teams will treat escalations the way engineering teams treat defects:

    • Why wasn’t this path recognized earlier?
    • Why wasn’t the right expertise surfaced?
    • Why did the customer have to repeat themselves?
    • Why did confidence degrade?

    AI enables early detection of escalation patterns, long before customers feel the pain.

    Escalations don’t disappear. But they become intentional, rare, and high-value, not the default.

    Implication:
    Support leaders will no longer be praised for “handling escalations well.”
    They’ll be evaluated on how effectively they design escalations out of the system.

    Prediction #5: Support Metrics Finally Grow Up

    By 2026, serious organizations abandon vanity metrics.

    Instead of obsessing over:

    • Tickets closed
    • Average handle time
    • First response SLA

    They’ll focus on metrics that reflect real business impact:

    • Time to first meaningful action
    • Decision confidence at each step
    • Escalation avoidance rate
    • Customer effort in complex cases
    • Impact on renewals, expansion, and product quality

    Support leaders will sit at revenue and product tables with data that actually changes decisions.

    Implication:
    If your metrics don’t map to business outcomes, your function won’t be treated as strategic…no matter how efficient it is.

    Prediction #6: Support Becomes the Most Honest Signal in the Company

    Support already knows:

    • Which features don’t work in the real world
    • Where customers struggle to adopt
    • Which configurations are fragile
    • Where product intent breaks down in practice

    In 2026, winning organizations will:

    • Treat support data as strategic intelligence
    • Feed insights directly into product, success, and GTM
    • Use support as an early warning system for churn and expansion

    Support becomes the sensor network of the enterprise by detecting issues before they surface in revenue or retention metrics.

    Implication:
    Companies that ignore support insights will continue to be surprised by churn, dissatisfaction, and competitive losses even when the signals were there all along.

    The Real Choice Ahead

    The future of support isn’t about replacing people with AI.

    It’s about:

    • Replacing chaos with orchestration
    • Replacing tribal knowledge with intelligence
    • Replacing reaction with intent

    By 2026, the gap between average and elite support organizations will be enormous.

    One group will still be closing tickets.
    The other will be quietly winning customers for life.

    That’s the decade-defining shift now underway.

  • Five Predictions for Technical Support in 2026: The Gap Is About to Widen

    By John Ragsdale, SVP Marketing, Kahuna Labs

    For the past few years, many B2B support organizations have found themselves in a holding pattern with AI.

    They invested early. They piloted chatbots, summarization tools, routing automation. The results were… fine. Some incremental gains. Some lessons learned. But not enough to justify bold follow-on investment. And certainly not enough to inspire confidence when asking for more budget.

    So they paused.

    Meanwhile, a smaller set of organizations quietly kept going — not by adding more AI tools, but by rethinking what AI should actually change. They shifted focus from productivity gains to decision-making, from automation experiments to system redesign.

    As we move into 2026, the distance between these two groups is becoming impossible to ignore.

    This is not a story of “AI haves and have-nots.” Most companies have AI. The real divide is between organizations that are redesigning support around AI — and those that are still trying to bolt AI onto yesterday’s operating model.

    That gap is about to widen.

    1. AI Becomes a Competitive Weapon — Not a Tool

    For years, AI in support was framed as an efficiency project: deflect a few tickets, shorten response times, reduce cost at the margins. That framing is no longer sufficient.

    In 2026, AI increasingly becomes a source of competitive advantage.

    Support organizations that restart their AI efforts with a focus on decision-making and scale will consistently out-execute peers who stalled after early pilots failed to deliver strategic value. Over time, this advantage compounds. Better decisions lead to better outcomes. Better outcomes generate better data. Better data further improves AI performance.

    The result won’t be dramatic or sudden. It will be subtle — and relentless. Quarter after quarter, pacesetters will resolve issues faster, prevent escalations earlier, and protect customer confidence in ways others simply can’t replicate.

    2. Support Emerges as the Most Leverageable AI Surface in the Enterprise

    Among all enterprise functions, technical support is uniquely suited for advanced AI adoption.

    It combines high interaction volume, rich historical data, clear success and failure signals, and immediate business consequences when things go wrong. Every support case represents a sequence of decisions: what to ask, what to collect, what to try next, when to escalate.

    In 2026, leading organizations will increasingly focus AI investment here — not because support is simple, but because it is dense with repeatable decision patterns.

    Most support work follows known paths. When AI can reliably execute those paths, the remaining work is no longer “more tickets.” It is true exceptions: novel failures, ambiguous environments, and high-stakes customer moments.

    This is where the support engineer role fundamentally changes — from primary troubleshooter to orchestrator of an autonomous system, overseeing flows, intervening when confidence is low, and applying human judgment where it truly matters.

    3. Context Becomes the Defining Advantage — Driving In-Network AI Adoption

    One of the clearest lessons from first-generation AI efforts is that context matters more than models.

    Generic AI, loosely connected to enterprise systems, struggles with the realities of complex B2B support: unique configurations, versions, integrations, telemetry, and customer history. Without context, even accurate answers feel wrong — or risky.

    In 2026, more organizations will deploy AI inside their own networks, close to their data and systems of record. This allows AI to reason with full situational awareness — making decisions that are not just plausible, but appropriate.

    Orchestration only works when AI understands the environment in which it is operating. Without context, automation stalls and humans are forced back into every step. With it, AI can act confidently on known paths and escalate only when uncertainty truly exists.

    4. Support Becomes a Strategic Input to Product — or a Missed Opportunity

    Support has always known where products break, where customers struggle, and which issues threaten adoption. What changes in 2026 is the ability to systematically surface and operationalize that insight.

    AI can aggregate patterns across thousands of cases to identify recurring friction points, configuration risks, version-specific failures, and adoption blockers that no single team could see in isolation.

    Organizations that recognize support as a strategic intelligence function will move faster on product improvements, reduce future support demand, and improve customer outcomes. When AI handles the repeatable work, support engineers finally have the time — and the signal — to shape product priorities instead of reacting to downstream failures.

    Companies that ignore this input won’t just miss insights. They’ll continue investing in features while unknowingly shipping friction.

    5. Decision Velocity Becomes the Engine of Frontline Productivity

    For decades, support optimization has focused on productivity metrics like tickets closed, handle time, and response SLAs. Those measures still matter — but on their own, they don’t explain why customers lose confidence or why teams stall under complexity.

    In 2026, leading organizations focus on decision velocity: how quickly the system identifies the right next step and moves work forward with confidence.

    When decisions happen faster — and with greater clarity — productivity follows naturally. Backlogs shrink, escalations drop, and engineers spend less time navigating uncertainty and more time resolving real problems.

    Decision velocity doesn’t replace productivity. It’s how productivity finally scales in complex environments.

    A Final Thought

    The most important change in 2026 won’t be which AI tools companies buy.

    It will be whether leaders are willing to rethink support as a system — one designed for autonomy, context, and exception handling — or whether they continue optimizing workflows that assume humans must touch everything.

    The technology is ready.
    The data already exists.

    What’s increasingly scarce is the organizational courage to start over.

    Those who do will move faster than their competitors expect.
    Those who don’t may not realize how far behind they’ve fallen — until the gap is no longer bridgeable.

  • Proof of Value Readout

    By Hitesh Sharma, VP of Engineering, Kahuna Labs, and Chaitanya Potluri, Co-Founder, Kahuna Labs

    Kahuna Labs just completed a Proof of Value (PoV) to assess the Kahuna Platform’s potential impact on technical support cases for a recent customer. The goal was straightforward: provide AI recommendations that support engineers (and customers) actually find useful.

    Methodology & Feedback

    We trained Kahuna AI on two years of historical data to build a Troubleshooting Map™. The system was refined through “friendly feedback” sessions with the customer’s support SMEs. These sessions enabled customer-specific tuning, ensuring the AI recommendations aligned with and benefited from the customer’s internal best practices and domain knowledge.

    Customer Profile

    • Multi-product global leader in a specific class of infrastructure products
    • ~3,500 customers globally
    • Support team: 250 engineers across three tiers
    • Cost per case: $120–$500
    • Documentation: ~30–40K documents, ~80% believed to be stale (identifying which specific documents are stale is difficult)
    • Existing state:
      • Case deflection maximized using chatbots and self-service
      • Homegrown advanced AI-based knowledge search tool in place

    Limited Scope

    The PoV was limited to a single product line and trained only on past cases and centrally maintained troubleshooting documents.

    Excluded from training:

    • Case attachments
    • Zoom call transcripts (32% of cases had Zoom calls)
    • Microsoft Teams conversations
    • Jira tickets
    • Fragmented troubleshooting documents maintained locally by Support Engineers

    These data gaps will be addressed prior to a production rollout.

    Product in scope for PoV: Infrastructure product with both on-prem and cloud deployments. Case volume for the product in scope: 1,900 cases per month.

    Data Quality

    As part of the PoV, we first evaluated the quality of historical case data. Key findings by Kahuna AI:

    • 52% of cases had a Completeness Score™ of 3, 4, or 5; the rest lacked meaningful documented steps
    • 26% of cases had more than one grammar or spelling error in outbound customer communications
    • 21% of outbound messages showed minimal or no empathy
    • 15% of customer messages had moderate-to-high negative sentiment
    • 32% of cases required Zoom calls
    • 20% of outbound messages contained customer-sensitive information
    • In 25% of cases, every troubleshooting step was performed by the customer; these cases were fully self-serviceable and could have been deflected using multi-turn troubleshooting driven by the Troubleshooting Map

    Evaluation

    After training, the customer selected a representative set of cases that Kahuna had not seen before, at various stages of troubleshooting, and asked Kahuna AI to generate next-step recommendations.

    Findings:

    • ~70% of Kahuna recommendations matched the Support Engineer’s eventual actions (fully or partially)
      • Low confidence recommendations would be suppressed in production
      • Expected accuracy is significantly higher after ingesting Zoom transcripts (~32% of cases), case attachments, and enabling reinforcement learning in production
    • 25% self-service potential: One-quarter of cases were identified as candidates for Kahuna Auto-Resolve, requiring no support engineer involvement
      • For remaining cases, the customer observed potential for left-shift from L3 to L2 and L2 to L1
    • Faster diagnostics: AI reduced manual back-and-forth for diagnostic data collection
      • Zoom call predictions were highly accurate and could eliminate 3–4 message exchanges for ~32% of cases
      • Meaningful first responses enabled contextual probing questions before case assignment
    • Quality communication: Suggested responses improved professionalism and empathy while reducing engineer time spent drafting emails

    Conclusion

    The readout demonstrated that Kahuna can materially improve support operations for this customer by providing troubleshooting recommendations based on a comprehensive Troubleshooting Map. The PoV showed potential to reduce resolution time, increase self-service, enable support tier left-shift, and improve consistency and quality of customer communications.

  • Why Escalations Happen—and How Predictive AI Can Prevent Them

    By John Ragsdale, SVP Marketing, Kahuna Labs

    A customer reports intermittent failures. There’s a vague error string. A screenshot. A “started happening last week.” Your team replies quickly, asks for logs, shares a few standard steps. The customer responds… slowly. They’re busy. They’re not sure where to find the right file. The thread stretches into a day, then two.

    Meanwhile, the engineer assigned to the case is doing what good engineers do: trying to reconstruct context from fragments. Skimming old cases. Checking release notes. Asking a senior teammate, “Have you seen this before?” The case isn’t “hard” yet—but it’s already drifting. Progress is measured in messages, not evidence.

    Then the calendar pressure hits. A renewal is near. A launch is blocked. Someone senior gets looped in on the customer side. And suddenly the temperature changes: “Can you escalate this?” Not because the issue is impossible (a code bug is indicated) but because confidence is gone. The customer doesn’t feel momentum. Your team doesn’t feel leverage. Everyone is operating with partial information and rising stakes.

    This is what makes escalations so maddening: they often feel like a moment, but they’re really a trajectory—set in motion early, by missed signals that were present long before the escalation email arrived.

    This is where predictive AI becomes less about “responding faster” and more about preventing the conditions that create escalations in the first place.

    The Real Reasons Escalations Happen (Beyond “It’s A Hard Issue”)

    In complex product support, escalations tend to come from a few recurring root causes:

    1) The first few steps are wrong—or delayed

    Most escalations start quietly. The initial triage misses a key diagnostic. The first response is generic. The engineer spends hours recreating context. Customers don’t escalate because the issue is complex—they escalate because they feel uncertainty and slow progress.

    2) Tribal knowledge is inaccessible when it matters

    Your best engineers carry “pattern memory” in their heads: which symptoms imply which root causes, what to ask next, what diagnostic will quickly collapse the search space. When that knowledge stays trapped in people—or buried in raw tickets—other engineers take longer, take more loops, and escalations happen more often.

    3) Customer reality is unique, and documentation can’t keep up

    Even strong knowledge bases cover only a fraction of real-world scenarios because customer environments vary significantly (configs, integrations, versions, constraints). Escalations spike when the support model assumes “one canonical flow” but reality is “a thousand variations.”

    4) Support is operating without a map

    Many teams are effectively navigating a maze without a current floor plan: fragmented tools, inconsistent ticket narratives, missing context, and no shared visibility into how problems typically evolve from first symptom to resolution.

    Signals That Predict Escalation (Often Hours or Days Earlier)

    Escalation risk usually telegraphs itself through patterns like:

    • Stalled progression: multiple back-and-forth cycles without net-new evidence (no new diagnostics, no narrowing hypotheses).
    • High “research load”: long time spent gathering context, searching past tickets, or asking internal SMEs what to do next.
    • Mismatch of actions: customer-doable steps sent as engineer-only tasks (or vice versa), creating delays and frustration.
    • Low-quality precedent: the “similar past tickets” exist, but they’re noisy, incomplete, or not aligned to the current stage of troubleshooting.
    • Version/config sensitivity: the same symptom behaves differently across versions or specific configurations—so generic “best practices” fail.

    Individually, these signals feel like normal variance. Together, they’re a pattern: this case is drifting toward escalation.
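
    One simple way to operationalize this is to score the signals together rather than in isolation. The weights and alert threshold below are illustrative assumptions; in practice they would be learned from historical escalations:

      DRIFT_WEIGHTS = {
          "loops_without_new_evidence": 0.30,
          "high_research_load": 0.20,
          "action_owner_mismatch": 0.15,
          "low_quality_precedent": 0.15,
          "version_config_sensitivity": 0.20,
      }

      def escalation_risk(signals: dict) -> float:
          """Sum the weights of active signals; values near 1.0 mean the case is drifting."""
          return sum(weight for name, weight in DRIFT_WEIGHTS.items() if signals.get(name))

      risk = escalation_risk({"loops_without_new_evidence": True, "high_research_load": True})
      if risk >= 0.45:  # hypothetical alert threshold
          print(f"drift detected (risk={risk:.2f}): intervene before the customer escalates")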

    How Predictive AI Prevents Escalations: From “Ticket Handling” to “Path Guidance”

    The most impactful shift is moving from AI that answers questions to AI that understands the troubleshooting journey.

    At Kahuna, the foundation is a Troubleshooting Map™ built from historical ticket journeys—where tickets are reconstructed into step-by-step “snapshots” and clustered into repeatable paths. That means the AI can recognize not only “what this issue is,” but what stage you’re in and what paths usually succeed from here.
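
    As a simplified illustration, a Troubleshooting Map can be pictured as stages mapped to candidate next steps, each carrying the historical success rate of similar journeys. The stage names, steps, and rates below are invented for the example:

      # Stage -> candidate next steps, each with how often that step led to
      # resolution across similar historical journeys.
      TROUBLESHOOTING_MAP = {
          "symptom:intermittent_failures_after_upgrade": [
              ("collect_upgrade_logs", 0.74),
              ("check_config_drift", 0.61),
              ("schedule_zoom_call", 0.35),
          ],
      }

      def next_best_actions(stage: str) -> list:
          """Rank candidate next steps for the current stage by historical success rate."""
          return sorted(TROUBLESHOOTING_MAP.get(stage, []), key=lambda s: s[1], reverse=True)

      print(next_best_actions("symptom:intermittent_failures_after_upgrade"))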

    Three preventative strategies become possible:

    1) Predict escalation risk by detecting “drift” early

    When a case starts to diverge from successful historical paths—too many loops, missing diagnostics, delayed next steps—predictive alerts can trigger intervention before the customer forces it. (This is very different from simply routing “angry customers” faster.)

    2) Recommend the next best step with confidence, not guesswork

    Not all guidance deserves automation. Kahuna-style approaches use scoring—Credibility Score™, Completeness Score™, and a Complexity Score for recommended paths—so engineers can see when the system is drawing from dense, high-quality precedent versus thin, ambiguous signals.

    3) Prevent escalation by removing effort, not adding process

    When confidence is high, preventative automation can do the work that typically causes delays: auto-collect diagnostics, propose probing questions, and standardize decision flows—so the case moves forward with momentum and clarity.
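
    A minimal sketch of score-gated automation along these lines, with hypothetical thresholds:

      def automation_mode(credibility: float, completeness: float, complexity: float) -> str:
          """Decide how autonomously to act based on the quality of the precedent."""
          if credibility >= 0.8 and completeness >= 0.8 and complexity <= 0.3:
              return "automate"  # auto-collect diagnostics, send probing questions
          if credibility >= 0.5:
              return "suggest"   # show the step, let the engineer decide
          return "human_only"    # thin or ambiguous precedent: stay hands-off

      print(automation_mode(0.9, 0.85, 0.2))  # -> automate
      print(automation_mode(0.6, 0.4, 0.7))   # -> suggest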

    The Preventative Mindset Shift

    Escalation prevention isn’t a “new policy.” It’s a capability. The support models that win in the next era will act less like reactive firefighters and more like orchestrators—using AI to make the invisible visible: the patterns, the paths, the signals, and the next steps that keep cases from ever becoming escalations.

    The goal isn’t to eliminate every escalation. It’s to ensure escalations happen for the right reasons—true novelty and exceptions—not because the system couldn’t see what was coming.

    Escalations correlate strongly with customer satisfaction, and high escalation rates can reduce the likelihood of renewal. Leveraging AI to prevent escalations eliminates a major source of friction in the customer experience and positions support as a relationship builder, not a driver of difficult conversations (and potential cost concessions) come renewal time.

  • Frontline Productivity and the Right AI: Why Context Is the New Competitive Edge

    By John Ragsdale, SVP Marketing, Kahuna Labs

    Every so often, a piece of research captures a shift you can feel happening in the market.
    Constellation Research’s new paper, Augmenting and Accelerating Frontline Productivity, by industry veteran R “Ray” Wang, does exactly that. It’s one of those “Big Idea” moments that crystallizes what many of us have been sensing: the next leap in enterprise performance won’t come from automating the back office — it will come from empowering the frontline.

    The report frames “frontline productivity” as an emerging market category focused on increasing decision velocity—equipping frontline teams with AI that can augment judgment, accelerate actions, and improve consistency and quality at scale. Kahuna Labs is the perfect example of this category.

    The Frontline Is the New Growth Engine

    For decades, innovation has flowed from the top down. Executives got dashboards, managers got analytics, and operations got automation. But the people at the edge of the business — the service engineers, technicians, and customer-facing teams — have too often been left behind.

    That’s starting to change. Ray’s analysis makes a compelling case that AI is reshaping the structure of work itself. The old command-and-control pyramid is collapsing into what he calls the “diamond organization” — smaller teams, more autonomy, and more leverage from digital labor.

    It’s not about replacing people. It’s about giving them decision velocity: the ability to make faster, smarter, and more contextual choices at the moment of truth. And nowhere is that more critical than on the front lines — where a single decision can make or break a customer relationship.

    From Automation to Advice

    Ray’s framework for AI maturity really resonated with me: from augmentation, to acceleration, to automation, to agents, and finally to advisors.

    Most organizations are still stuck somewhere in the middle. We’ve built tools that do more — but we haven’t yet built systems that understand more. The leap from automation to advice is where the real transformation begins.

    That’s when AI stops being a tool for efficiency and starts becoming a partner in judgment. It’s when the machine isn’t just executing instructions but anticipating what a skilled human would do next — using context, history, and intent to guide decisions.

    That’s what frontline productivity in the AI era really means.

    Why Context Is Everything

    Here’s the hard truth: not all AI is capable of delivering a productivity leap.

    Legacy SaaS systems were never designed for frontline work. They live outside the organization’s network, disconnected from the data that makes decisions meaningful — things like customer configuration, product version, or the subtle differences between one client environment and another.

    As Ray puts it, “Legacy SaaS AI lacks contextual relevancy.” Without that, AI can’t deliver precision or trust.

    The future belongs to in-network AI — systems deployed inside the enterprise environment, trained on its own tribal knowledge, and fluent in its unique operating reality. These systems don’t generalize. They personalize. They reason in context.

    That’s what enables what Ray calls decision automation — AI that doesn’t just analyze, but acts, learns, and improves with every interaction.

    From Insight to Impact

    This shift has enormous implications for how we think about productivity. The goal isn’t just “doing more with less.” It’s about achieving what Constellation calls exponential efficiency — breakthroughs that are ten times faster, better, and cheaper, simultaneously.

    And it’s not a theory — we’re seeing it play out in real organizations. When frontline teams gain AI that’s context-aware, predictive, and prescriptive, they stop firefighting and start foresighting. They move from reacting to issues to preventing them.

    Most importantly, they’re free to focus on what humans do best: empathy, creativity, and problem-solving.

    The Human-AI Partnership

    In my conversations with business leaders, I often remind them that AI isn’t the end of human work — it’s the beginning of better human work.

    The question isn’t what can we automate? It’s where do we want humans to shine?

    Ray’s seven-factor model for balancing “machine scale” and “human touch” should be required reading for every executive designing next-generation services. It reminds us that the point isn’t to eliminate people from the process — it’s to elevate them within it.

    AI can manage repetition, volume, and complexity. But humans still own creativity, empathy, and trust. The organizations that thrive in this new era will be the ones that know how to orchestrate both.

    A Moment of Alignment

    Having known Ray for over twenty years, I can tell you he’s rarely wrong about where the industry is heading. This paper is another example of his ability to see the future a few years early.

    For those of us working to bring AI to the front lines, it’s both validation and motivation.

    The message is clear: the future of productivity belongs to frontline workers. The companies that get there first — with AI that’s deployed in-network, context-aware, and human-centric — will define the next generation of enterprise performance.

    That’s a challenge worth rallying around. And it’s one I’m proud to be part of.

    Here’s a link to access the full report, “Augmenting and Accelerating Frontline Productivity.”

  • How to Reduce Support Backlogs Without Hiring More Engineers

    Eliminating the Quiet Tax of Backlog Management

    By John Ragsdale, SVP Marketing, Kahuna Labs

    Every support leader knows the feeling: you make real progress—then the backlog creeps right back.

    Not because ticket volume exploded overnight. Not because your support engineers suddenly got slower. But because the queue is full of a specific kind of work: long-running, context-heavy cases where the “next step” isn’t obvious, and forward motion depends on reconstructing a story from fragments.

    Those tickets carry a quiet tax. Every time someone picks one up, they start by paying it again: re-read the thread, re-assemble context, re-search for precedent, re-ask the same foundational questions, re-check whether anything has changed since the last touch. It’s responsible work, but it’s also compounding work—effort spent rediscovering what the organization already knows.

    That tax rarely shows up in productivity dashboards. But it’s why backlogs persist even when teams are working at full capacity.

    Reducing backlog without hiring more engineers starts with one mindset shift: stop treating every ticket like a standalone investigation. Instead, treat your ticket history as a living body of troubleshooting journeys—so that whenever your organization learns something new, you can apply it to every similar open case and move the queue forward with less effort per ticket.

    1) Start by Attacking the Biggest Hidden Time Sink: Re-Reading and Re-Research

    In complex support environments, the real time drag isn’t writing customer communications or documenting the case. It’s the time engineers spend reconstructing context:

    • parsing long case histories,
    • hunting for “similar” tickets,
    • figuring out what stage they’re actually in,
    • and deciding what to do next.

    This is why even strong teams can feel stuck. The backlog becomes a knowledge swamp: the answers may exist somewhere in history, but they aren’t easy to find, trust, or apply quickly—especially for engineers who didn’t live the original case.

    The fastest backlog wins come from making progress cheaper: less time spent rebuilding context, more time spent advancing the case.

    2) Break Work Into “Snapshots,” Not Tickets

    Similarity search often fails because tickets are messy: multiple phases, resets, tangents, missing diagnostics, and a lot of human conversation. Comparing entire tickets to entire tickets produces “close” matches that still don’t tell you what to do next.

    A more useful unit is the snapshot: a point-in-time representation of the case—what’s known, what’s been tried, what the customer environment looks like, and what evidence is available right now.

    When you can identify the current snapshot, you can stop asking, “Have we seen this issue before?” and instead ask:

    “Have we seen this state before—and what reliably moves the case forward from here?”

    That shift alone reduces the time spent reinventing the wheel across the queue.
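
    For readers who like to see ideas in code, here is a minimal sketch of what a snapshot could look like as a data structure, written in Python. Everything in it (the Snapshot class, its fields, the toy Jaccard similarity) is an illustrative assumption for this post, not Kahuna's actual implementation.

        from dataclasses import dataclass

        # Illustrative sketch only; field names and matching logic are invented.
        # A snapshot captures the *state* of a case at a point in time,
        # independent of how the conversation meandered to get there.
        @dataclass(frozen=True)  # frozen makes snapshots hashable, so they can key a map
        class Snapshot:
            symptoms: frozenset      # what's known, e.g. "login timeout"
            steps_tried: frozenset   # what's been attempted, e.g. "restarted gateway"
            environment: frozenset   # customer context, e.g. "v4.2", "SSO enabled"
            evidence: frozenset      # artifacts in hand, e.g. "gateway logs"

        def similarity(a: Snapshot, b: Snapshot) -> float:
            """Jaccard overlap across all four dimensions: a deliberately
            naive stand-in for whatever matching a production system uses."""
            a_all = a.symptoms | a.steps_tried | a.environment | a.evidence
            b_all = b.symptoms | b.steps_tried | b.environment | b.evidence
            union = a_all | b_all
            return len(a_all & b_all) / len(union) if union else 0.0

    The point of the structure is that two tickets with very different conversations can still land on the same state, and the state is what you match on.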

    3) Build Paths, Not Just a Library of Articles

    Traditional knowledge bases struggle with enterprise support because real environments vary constantly. Configurations differ. Integrations behave differently. Versions and edge cases compound. The result: static documentation covers the generic paths, while the backlog fills with everything else.

    What scales better is a Troubleshooting Map™: a set of clustered, repeatable paths that describe how issues actually progress from symptom → diagnostics → resolution.

    Instead of “here’s an article,” you get something closer to:

    • “Tickets that look like this tend to follow these paths,” and
    • “From this snapshot, these next steps usually narrow the journey fastest.”

    This is the difference between a library and a navigation system.
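
    Continuing the sketch from the previous section, a troubleshooting map can be modeled as exactly that kind of navigation structure: each recognized state points to the next steps that most often advanced cases from there. Again, this is a hypothetical illustration built on the Snapshot example above, not the product's internals.

        from collections import Counter, defaultdict

        # Hypothetical sketch, reusing Snapshot and similarity() from above.
        class TroubleshootingMap:
            def __init__(self) -> None:
                # state -> how often each next step advanced a case from there
                self._paths = defaultdict(Counter)

            def record_journey(self, states, steps) -> None:
                """Distill one resolved ticket into (state, next step) edges."""
                for state, step in zip(states, steps):
                    self._paths[state][step] += 1

            def next_steps(self, current: Snapshot, k: int = 3) -> list:
                """Find the nearest known state, then return its most-traveled steps."""
                if not self._paths:
                    return []
                nearest = max(self._paths, key=lambda s: similarity(current, s))
                return [step for step, _ in self._paths[nearest].most_common(k)]

    Notice what this is not: it is not a pile of articles. It is a graph you can navigate from wherever a case currently stands.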

    4) Use Confidence Gating So AI Reduces Effort (Instead of Adding a Review Step)

    AI can help—but only if it doesn’t create extra verification work.

    A practical pattern is confidence-based guidance. Recommendations should be accompanied by clear signals about why they’re being suggested and how reliable they are—based on the quality and density of the underlying precedent.

    When confidence is high, the system can do more: propose the next best diagnostic, standardize customer questions, or automate routine evidence collection. When confidence is low, it should behave differently: flag uncertainty, suggest options, and invite human judgment. This is what Kahuna’s Confidence Score™ automates.

    Backlog reduction depends on this discipline. “AI suggestions” that engineers must validate from scratch don’t save time. Confidence-gated guidance does.
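
    Here is what the gating logic might look like, continuing the running sketch. The thresholds and the toy confidence formula are assumptions made up for illustration; Kahuna's Confidence Score™ is a product capability, not this function.

        import math

        # Toy confidence model: more similar precedent cases, and more agreement
        # among them about the next step, yield a higher score. Invented formula.
        def confidence(n_similar: int, agreement: float) -> float:
            volume = 1 - math.exp(-n_similar / 10)  # saturates as precedent accumulates
            return volume * agreement               # agreement is in [0, 1]

        # Confidence gating: the score changes what the system *does*,
        # not just what it displays. Thresholds here are placeholders.
        def act_on(step: str, score: float) -> str:
            if score >= 0.8:
                return f"AUTO: run '{step}' and attach the results to the case"
            if score >= 0.5:
                return f"SUGGEST: '{step}', with the supporting precedent cited"
            return f"DEFER: '{step}' is one option; flag uncertainty for the engineer"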

    5) The Backlog Multiplier: Apply New Paths to Every Open Ticket

    Here’s the compounding move that changes the economics of backlog work:

    Every time Kahuna identifies new snapshots and new paths on the Troubleshooting Map, that knowledge can be applied to the current queue. Instead of helping only the next ticket that comes in, new insight helps all open tickets that resemble the newly learned snapshot.

    That means you can automate the continuous scan of open/backlogged cases and:

    • identify which tickets match a newly discovered snapshot,
    • recommend the next best step,
    • auto-collect missing diagnostics where appropriate,
    • and prompt the right customer action to unblock progress.

    The impact is immediate: hundreds of hours saved that would otherwise be spent re-reading old threads and re-searching the past to see whether anything new has occurred.

    This is how backlog work stops being linear. Learning becomes a force multiplier.
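
    In sketch form, the multiplier is a sweep over the open queue whenever a new snapshot and path are learned, again using the hypothetical pieces defined above:

        # Hypothetical sweep: when a new state/path is learned, re-check every
        # open ticket against it instead of waiting for the next new ticket.
        def apply_new_learning(open_tickets, learned_state: Snapshot,
                               tmap: TroubleshootingMap, threshold: float = 0.7):
            for ticket_id, state in open_tickets:
                if similarity(state, learned_state) < threshold:
                    continue  # this ticket doesn't resemble the new snapshot
                for step in tmap.next_steps(state):
                    # In practice this might queue a diagnostic job or draft a
                    # customer request; printing stands in for that here.
                    print(f"{ticket_id}: recommended next step -> {step}")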

    6) Make It a Closed Loop, Not a One-Time Cleanup

    “Backlog blitzes” feel good, but the impact is short-lived. The queue comes back because the dysfunctional dynamics that created it haven’t changed.

    The durable approach is an automated, continual process:

    1. Observe how tickets are actually solved (journeys, not just outcomes).
    2. Distill those journeys into snapshots + paths.
    3. Apply new learning to the open queue automatically.
    4. Improve as outcomes confirm (or correct) the recommended paths.

    Over time, you reduce the percentage of cases that require full ground-up investigation. Engineers spend less time rediscovering known routes and more time handling true exceptions—the work only humans can do.
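
    Tying the sketches together, one iteration of the loop might look like this; treat it as a schematic, not a blueprint:

        # One pass of the closed loop; a real system runs this continuously.
        def closed_loop_iteration(newly_resolved, open_tickets,
                                  tmap: TroubleshootingMap) -> None:
            for states, steps in newly_resolved:      # 1. observe the journey
                tmap.record_journey(states, steps)    # 2. distill into snapshots + paths
                for state in states:                  # 3. apply to the open queue
                    apply_new_learning(open_tickets, state, tmap)
            # 4. improve: tickets resolved with this guidance become the next
            #    batch of journeys, confirming or correcting the paths.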

    Backlogs don’t shrink sustainably by pushing teams harder. They shrink when you remove the quiet tax that keeps expensive work repeating—by turning historical troubleshooting into reusable paths, and applying every new insight across the queue the moment it appears.