
This package was my first Umbraco community package, and I built it at the Umbraco Spark Hackathon. I actually built the core functionality first in a standalone Umbraco website project using Claude Code - Anthropic's CLI coding agent - which let me iterate quickly and get the AI analysis working end-to-end without worrying about packaging. Once I was happy with how it worked, I scaffolded a proper package project using Lotte Pitcher's Opinionated Package Starter template, which gave me the project structure, build pipeline, NuGet packaging, and a test site already wired up. Then it was just a case of moving the code across into the package structure.

The result is AI.LogAnalyser: a package that adds AI-powered analysis to the Umbraco backoffice log viewer. You click a button, and AI tells you what went wrong, why, and what to do about it.

AI Log Analysis modal showing a summary, cause and recommended action for an error log entry

The Problem

If you've ever stared at a wall of log entries in the Umbraco backoffice trying to figure out why something broke, you know the pain. Stack traces are long, error messages are cryptic, and the answer often requires cross-referencing Umbraco documentation, GitHub issues, or forum posts. I wanted to shortcut that process by bringing AI directly into the log viewer.

What It Does

The package adds an AI column to the log viewer search results table. Each row gets a small icon button - click it, and a modal dialog opens with an AI-generated analysis of that specific log entry. The analysis is structured into three sections:

  • Summary - A plain-language explanation of what happened

  • Cause - The likely root cause, identifying the originating method if a stack trace is present

  • Recommended action - A concrete next step: config changes, code fixes, or documentation references

For informational or debug entries, the response is kept brief. For errors and exceptions, it digs deeper into common Umbraco issues like composition failures, Examine indexing problems, dependency injection errors, and database connectivity.

How the Frontend Works

The Umbraco v17 backoffice is built with Lit web components and a shadow DOM architecture. That makes enhancing the existing UI a bit tricky - you can't just querySelector from the document root and expect to find elements nested inside shadow roots.

The package registers a backofficeEntryPoint that runs a LogViewerEnhancer class. This class polls every second, checking if the user is on the log viewer page. When it finds the umb-log-viewer-messages-list element, it traverses into its shadow DOM to:

  1. Add an "AI" header to the table

  2. Inject an AI icon button into each log row's <summary> element

When a user clicks the button, the enhancer extracts the log data directly from the element's properties - timestamp, level, rendered message, message template, exception, and structured properties - and opens a modal dialog.

I used polling rather than MutationObserver because a MutationObserver attached to the document doesn't see mutations inside shadow roots - you'd have to attach a separate observer to every shadow root of interest. A WeakSet tracks which rows have already been enhanced to avoid duplicating buttons, and the messages list element is cached to keep the per-tick cost minimal.
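
The polling-plus-WeakSet bookkeeping can be sketched in isolation. This is a simplified, DOM-free model of the idea, not the package's actual code - `Row` stands in for a log row element, and "enhancing" just sets a flag where the real enhancer injects a button:

```typescript
// Simplified model of the enhancer's per-tick bookkeeping.
// `Row` stands in for a DOM node found inside the log viewer's shadow DOM.
type Row = { enhanced?: boolean };

class RowTracker {
  // A WeakSet holds rows without preventing garbage collection,
  // so rows removed from the DOM between ticks don't leak.
  private seen = new WeakSet<Row>();

  /** Enhance any rows not seen before; returns how many were new. */
  tick(rows: Row[]): number {
    let added = 0;
    for (const row of rows) {
      if (this.seen.has(row)) continue; // already has an AI button
      row.enhanced = true;              // stand-in for injecting the button
      this.seen.add(row);
      added++;
    }
    return added;
  }
}

const tracker = new RowTracker();
const rows: Row[] = [{}, {}];
tracker.tick(rows); // enhances both rows
tracker.tick(rows); // second pass is a no-op
rows.push({});
tracker.tick(rows); // only the new row is enhanced
```

Because membership in the WeakSet is checked per row rather than per tick, re-running the tick every second stays cheap once the visible rows have been processed.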

What Gets Sent to the AI

When the modal opens, it makes a POST request to the package's backoffice API endpoint. The request body contains everything we know about the log entry:

{
  "level": "Error",
  "timestamp": "2026-03-20T10:15:32.123+00:00",
  "message": "Failed to resolve content by route '/about'",
  "messageTemplate": "Failed to resolve content by route '{Route}'",
  "exception": "System.InvalidOperationException: ...",
  "properties": "Route: /about\nContentId: 1234"
}

On the server side, the controller builds a structured prompt from this data. But it doesn't just send the log message - it includes a lot of additional context to help the AI produce better analysis.

System Diagnostics

A SystemDiagnosticsProvider service collects environment information that doesn't change during the application's lifetime:

  • Umbraco version

  • .NET runtime version

  • Operating system

  • Environment name (Development, Staging, Production)

  • Runtime mode

  • Database provider (SQL Server or SQLite, inferred from the connection string or explicit config)

  • ModelsBuilder mode (e.g. InMemoryAuto, SourceCodeManual)

  • Hosting model (Azure App Service, Docker/Container, IIS, IIS Express, or Kestrel - detected via environment variables and process name)

  • Application start time

  • A filtered list of loaded assemblies and their versions (excluding standard .NET/Microsoft framework assemblies)

This information is built once and cached using Lazy<string>, so there's no overhead on subsequent requests.
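
The once-only caching is the same pattern as a simple lazy initialiser. A TypeScript equivalent of the `Lazy<string>` idea (illustrative only - the package does this in C#, and the factory body here is a placeholder):

```typescript
// Build an expensive value once, on first access, and reuse it thereafter.
function lazy<T>(factory: () => T): () => T {
  let computed = false;
  let value: T;
  return () => {
    if (!computed) {
      value = factory();
      computed = true;
    }
    return value;
  };
}

let buildCount = 0;
const getDiagnostics = lazy(() => {
  buildCount++; // in the package this would gather versions, OS, hosting model...
  return "system diagnostics";
});

getDiagnostics(); // builds the string
getDiagnostics(); // cached; factory not called again
```

Since none of the collected values change during the application's lifetime, computing them on the first request and never again is safe.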

Surrounding Log Context

A single log entry in isolation often isn't enough to diagnose an issue. The LogContextProvider service fetches surrounding log entries using Umbraco's ILogViewerService. It runs both "before" and "after" queries in parallel for speed, filters out the selected entry itself, and deduplicates consecutive identical messages (collapsing them into a single entry with a count like (x47)). This gives the AI a clear timeline of what led up to the error and what happened afterwards.
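
The deduplication step reduces to collapsing runs of consecutive identical messages. A sketch of the idea (not the package's actual code):

```typescript
// Collapse runs of consecutive identical messages into one entry with a
// repeat count, e.g. ["a", "a", "a", "b"] -> ["a (x3)", "b"].
function collapseConsecutive(messages: string[]): string[] {
  const result: string[] = [];
  let i = 0;
  while (i < messages.length) {
    let j = i;
    while (j < messages.length && messages[j] === messages[i]) j++;
    const count = j - i;
    result.push(count > 1 ? `${messages[i]} (x${count})` : messages[i]);
    i = j;
  }
  return result;
}
```

Only consecutive repeats are merged, so the same message recurring at different points in the timeline still shows up at each point - which is exactly the ordering information the AI needs.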

Error Frequency

The controller also counts how many times the exact same log message appeared in a configurable time window (default: the last hour). If the message appeared more than once, the AI is told something like "This exact message appeared 47 times in the last hour. This is a recurring/systemic issue." This helps the AI distinguish between a one-off glitch and a systemic problem, and adjust its recommendations accordingly.
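
The frequency check amounts to counting exact-match messages inside the window and, past a count of one, emitting a note for the prompt. A rough sketch with hypothetical names (the package's own implementation differs in detail):

```typescript
interface LogLine { timestamp: number; message: string }

// Count exact-match occurrences of `message` within the last `windowMinutes`
// and produce a prompt note if the message is recurring.
function frequencyNote(
  logs: LogLine[], message: string, now: number, windowMinutes = 60
): string {
  const cutoff = now - windowMinutes * 60_000;
  const count = logs.filter(
    l => l.timestamp >= cutoff && l.message === message
  ).length;
  return count > 1
    ? `This exact message appeared ${count} times in the last ` +
      `${windowMinutes} minutes. This is a recurring/systemic issue.`
    : "";
}
```

An empty note for one-off messages keeps the prompt from steering the AI towards systemic explanations when there's no evidence for one.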

Both the surrounding log context and frequency checks run in parallel with a single Task.WhenAll to minimise latency.

Safety: Field Truncation

Large stack traces or serialised property bags could produce oversized prompts. The controller truncates the Exception and Properties fields to 8 KB each before including them in the prompt, appending a [truncated] marker so the AI knows the full text was cut.
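
The truncation itself is a one-liner. A TypeScript sketch of the same guard (the package applies it in C# to the Exception and Properties fields):

```typescript
const MAX_FIELD_LENGTH = 8192; // 8 KB per field

// Cut oversized fields and flag the cut so the model knows text is missing.
function truncateField(value: string, max = MAX_FIELD_LENGTH): string {
  return value.length <= max ? value : value.slice(0, max) + " [truncated]";
}
```

Flagging the cut matters: without the `[truncated]` marker, the model might treat an abruptly ending stack trace as the actual end of the exception.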

Performance Diagnostics

The controller logs timing information to the Umbraco log at Information level so you can monitor how the package is performing:

  • Context gathering: How long it took to fetch surrounding logs and error frequency, plus how many entries were found

  • AI response: How long the AI provider took to respond, the total end-to-end request time, and the prompt length in characters

  • Failures: Logged at Error level with the elapsed time before failure

These entries appear in the log viewer under the AI.LogAnalyser.Controllers.AILogAnalyserApiController source, so you can easily filter for them.

The AI Prompt

The prompt is carefully structured to produce consistent, useful output. There's a system message that sets the persona:

You are an expert Umbraco CMS diagnostics assistant. You have deep knowledge of Umbraco's architecture, common error patterns, Serilog structured logging, .NET middleware pipelines, and dependency injection. Analyse log entries for Umbraco developers. Be concise, technical, and actionable. Use the message template (if provided) to understand the structured logging intent behind the rendered message.

The user message then includes the formatted log entry, error frequency note, surrounding log context, system diagnostics, and explicit instructions to structure the response with Summary, Cause, and Recommended action headings. The response is constrained to 2-4 short paragraphs to keep things digestible.

Mentioning the message template in the system prompt is a deliberate choice - Serilog message templates like "Failed to resolve content by route '{Route}'" carry semantic meaning that the rendered message alone doesn't convey. The AI can use this to understand that Route is a variable, not a hardcoded path.
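
For illustration, the parameter names in a Serilog-style template can be pulled out with a simple regex. This is a sketch of the idea only - Serilog's real template parser also handles format specifiers, alignment, and escaped braces:

```typescript
// Extract property names from a Serilog-style message template, e.g.
// "Failed to resolve content by route '{Route}'" -> ["Route"].
// The optional @ / $ prefixes are Serilog's destructuring hints.
function templateParams(template: string): string[] {
  const names: string[] = [];
  for (const m of template.matchAll(/\{[@$]?(\w+)/g)) {
    names.push(m[1]);
  }
  return names;
}

templateParams("Failed to resolve content by route '{Route}'"); // ["Route"]
```

Knowing that `Route` is a variable rather than literal text lets the AI reason about the class of failures ("any route can hit this") instead of fixating on `/about`.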

Using Umbraco.AI

One of the things that made this package possible in a hackathon timeframe is Umbraco.AI. Rather than coupling to a specific AI provider, the package depends on the IAIChatService abstraction from Umbraco.AI.Core. This means it works with any provider the site has configured - OpenAI, Anthropic, Google, Amazon Bedrock, or Microsoft AI Foundry.

The actual AI call is straightforward. The controller takes the log entry, fetches surrounding log entries and error frequency in parallel, and builds a structured prompt with system context:

[ApiVersion("1.0")]
[ApiExplorerSettings(GroupName = "AI.LogAnalyser")]
public class AILogAnalyserApiController : AILogAnalyserApiControllerBase
{
    private const int MaxFieldLength = 8192;

    private readonly IAIChatService _chatService;
    private readonly ISystemDiagnosticsProvider _diagnostics;
    private readonly ILogContextProvider _logContext;
    private readonly ILogger<AILogAnalyserApiController> _logger;

    // ...constructor omitted for brevity...

    [HttpPost("analyse")]
    [MapToApiVersion("1.0")]
    public async Task<IActionResult> Analyse(
        [FromBody] LogAnalyserRequest request, CancellationToken ct)
    {
        var totalStopwatch = Stopwatch.StartNew();

        // ...build logEntry from request fields,
        //    truncating Exception & Properties to MaxFieldLength...

        // Fetch surrounding log entries and error frequency in parallel
        var contextStopwatch = Stopwatch.StartNew();
        var contextTask = _logContext
            .GetSurroundingLogsAsync(timestamp, request.Message, ct);
        var frequencyTask = _logContext
            .GetErrorFrequencyAsync(timestamp, request.Message, ct);
        await Task.WhenAll(contextTask, frequencyTask);
        var contextMs = contextStopwatch.ElapsedMilliseconds;

        var context = await contextTask;
        var surroundingContext = FormatSurroundingLogs(context);
        var contextEntryCount = context.Before.Count + context.After.Count;

        var frequencyCount = await frequencyTask;
        // count > 1 → include frequency note in prompt

        if (contextMs > 0 || contextEntryCount > 0 || frequencyCount > 0)
        {
            _logger.LogInformation(
                "AI Log Analyser: Context gathered in {ContextMs}ms "
                + "({ContextEntryCount} surrounding entries, "
                + "frequency count: {FrequencyCount})",
                contextMs, contextEntryCount, frequencyCount);
        }

        var prompt = $"""
            ...structured prompt with headings...

            Log entry to analyse:
            ```
            {logEntry}
            ```
            {frequencyNote}
            {surroundingContext}
            System context:
            ```
            {_diagnostics.GetContext()}
            ```
            """;

        // ...build ChatMessage list with system + user prompt...

        try
        {
            var aiStopwatch = Stopwatch.StartNew();
            var response = await _chatService
                .GetChatResponseAsync(messages, cancellationToken: ct);
            var aiMs = aiStopwatch.ElapsedMilliseconds;
            var totalMs = totalStopwatch.ElapsedMilliseconds;

            _logger.LogInformation(
                "AI Log Analyser: AI response received in {AiMs}ms "
                + "(total request: {TotalMs}ms, "
                + "prompt length: {PromptLength} chars)",
                aiMs, totalMs, prompt.Length);

            return Ok(new LogAnalyserResponse
            {
                Summary = response.Text ?? "No summary available."
            });
        }
        catch (Exception ex)
        {
            _logger.LogError(ex,
                "AI Log Analyser: AI analysis failed after {TotalMs}ms",
                totalStopwatch.ElapsedMilliseconds);
            return StatusCode(StatusCodes.Status502BadGateway,
                "AI provider unavailable.");
        }
    }
}

The base controller handles routing and authorisation - the endpoint sits at /umbraco/ailoganalyser/api/v1.0/analyse and requires backoffice settings section access. The IAIChatService does all the heavy lifting of communicating with whichever AI provider is configured.

If the AI provider is unavailable or returns an error, the controller catches the exception and returns a 502 Bad Gateway with a helpful message rather than letting the error propagate.

What the AI Returns

The response comes back as markdown, which the frontend parses with the marked library and renders inside the modal dialog. A typical response for an error might look like:

Summary

The content routing pipeline failed to resolve a published content node for the URL path /about. This typically occurs when the content exists in the backoffice but is either unpublished, in the recycle bin, or has a different URL segment than expected.

Cause

The InvalidOperationException originates from PublishedRouter.FindContent(). Given the message template uses {Route} as a parameter, this is a dynamic route lookup failure - not a hardcoded path. The most common cause is content that was unpublished or moved after the URL was cached or referenced.

Recommended action

Check the content tree for a node with URL segment about. Verify it is published and not in the recycle bin. If the content was recently moved, rebuild the published cache via Settings > Published Status > Rebuild Database Cache. If the issue persists, check for custom IContentFinder implementations that may be interfering with route resolution.

The modal also displays the log level as a colour-coded badge (red for errors, orange for warnings, green for info), the timestamp, and the original log message for reference. There's a "Re-analyse" button if you want a fresh take.
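
The badge colouring is a straightforward level-to-colour lookup; something like the following (colours per the description above - the Fatal mapping and the neutral fallback for other levels are assumptions, not confirmed package behaviour):

```typescript
// Map a log level to the badge colour used in the modal.
function badgeColour(level: string): string {
  switch (level) {
    case "Fatal":       // assumption: fatal shares the error styling
    case "Error":       return "red";
    case "Warning":     return "orange";
    case "Information": return "green";
    default:            return "grey"; // assumption: neutral fallback for Debug/Verbose
  }
}
```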

Wiring It All Up

The package uses Umbraco's IComposer pattern for dependency injection. The composer binds the configuration settings, registers the services, and configures Swagger documentation for the API:

public class AILogAnalyserApiComposer : IComposer
{
    public void Compose(IUmbracoBuilder builder)
    {
        builder.Services.Configure<LogContextSettings>(
            builder.Config.GetSection(LogContextSettings.SectionName));

        builder.Services.AddSingleton<ISystemDiagnosticsProvider,
            SystemDiagnosticsProvider>();
        builder.Services.AddTransient<ILogContextProvider,
            LogContextProvider>();

        builder.Services.Configure<SwaggerGenOptions>(opt =>
        {
            opt.SwaggerDoc(Constants.ApiName, new OpenApiInfo
            {
                Title = "AI Log Analyser Backoffice API",
                Version = "1.0",
            });
        });
    }
}

The LogContextSettings class defines the configurable options with their defaults:

public class LogContextSettings
{
    public const string SectionName = "AILogAnalyser:LogContext";

    public int MaxSurroundingEntries { get; set; } = 10;
    public int SurroundingWindowMinutes { get; set; } = 5;
    public int FrequencyMaxScan { get; set; } = 500;
    public int FrequencyWindowMinutes { get; set; } = 60;
}

These are bound from appsettings.json using the standard IOptions<T> pattern. If the section is missing, the defaults are used. You can customise them like this:

{
  "AILogAnalyser": {
    "LogContext": {
      "MaxSurroundingEntries": 20,
      "SurroundingWindowMinutes": 10,
      "FrequencyMaxScan": 1000,
      "FrequencyWindowMinutes": 120
    }
  }
}

On the frontend, the entry point registers the modal manifest with Umbraco's extension registry and starts the log viewer enhancer. The modal component itself is lazy-loaded - it only downloads when a user actually clicks the AI button.

Getting Started

Install the package:

dotnet add package Umbraco.Community.AI.LogAnalyser

Add an AI provider (e.g. OpenAI):

dotnet add package Umbraco.AI.OpenAI

Configure the provider in appsettings.json per the Umbraco.AI documentation, then navigate to Settings > Log Viewer > Search and click the AI icon on any log entry.

The package is MIT licensed and open source on GitHub. If you'd like to scaffold your own Umbraco package, I'd highly recommend Lotte's starter template - it takes a lot of the friction out of getting started. Contributions are welcome!