Umbraco has taken a thoughtful approach to AI. Rather than baking it into the core CMS where everyone gets it whether they want it or not, they've made it entirely modular and opt-in. You choose which AI features you want, which provider powers them, and you keep full control over how your data is handled. No vendor lock-in, no hidden costs, no surprises.
In this post, I'll walk through what Umbraco AI offers out of the box, how to get it up and running quickly, and then show how you can build your own custom AI features on top of the same foundation. We'll finish with a practical example: an AI-powered log analyser that hooks into the Umbraco backoffice log viewer.
What is Umbraco AI?
Umbraco AI is a set of modular NuGet packages that bring AI capabilities into the Umbraco backoffice. It's built on top of Microsoft.Extensions.AI, which means it slots into the standard .NET dependency injection pipeline and works with any AI provider that has a compatible integration.
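To give a feel for that foundation, here's a minimal sketch of the Microsoft.Extensions.AI abstraction that Umbraco AI builds on (assuming the Microsoft.Extensions.AI 9.x API; the service class here is purely illustrative, not part of Umbraco AI):

```csharp
using System.Threading.Tasks;
using Microsoft.Extensions.AI;

// Illustrative only: a provider package supplies an IChatClient
// implementation via DI; consuming code only sees the interface.
public class GreetingService
{
    private readonly IChatClient _chatClient;

    public GreetingService(IChatClient chatClient)
        => _chatClient = chatClient;

    public async Task<string> GenerateGreetingAsync(string audience)
    {
        ChatResponse response = await _chatClient.GetResponseAsync(
        [
            new ChatMessage(ChatRole.System, "You write short, friendly greetings."),
            new ChatMessage(ChatRole.User, $"Greet {audience} in one sentence.")
        ]);

        return response.Text;
    }
}
```

Because `GreetingService` depends only on `IChatClient`, swapping the registered provider changes nothing in the consuming code - which is exactly the property Umbraco AI inherits.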
The architecture breaks down into a few key layers:
Umbraco.AI - the core package that everything else depends on. This handles connections, profiles, and the chat service abstraction.
Umbraco.AI.Prompt - a prompt template system that lets editors run pre-defined AI actions on content fields.
Umbraco.AI.Agent - the agent runtime that powers conversational AI assistants.
Umbraco.AI.Agent.Copilot - a chat sidebar UI that editors can use in the content and media sections.
Provider packages - one for each supported AI provider (OpenAI, Anthropic, Google Gemini, Amazon Bedrock, and Microsoft AI Foundry).
You only install what you need. Want prompts but not the copilot? Just install the prompt package. Want to use Anthropic instead of OpenAI? Swap the provider package. The rest of your configuration stays the same.
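For example, a minimal setup that only adds prompts backed by Anthropic would be just three packages (package names as used by the kitchen sink install below):

```shell
dotnet add package Umbraco.AI
dotnet add package Umbraco.AI.Prompt
dotnet add package Umbraco.AI.Anthropic
```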
Getting Started: The Kitchen Sink Install
The fastest way to see everything Umbraco AI can do is Matt Brailsford's kitchen sink install script. It sets up a fresh Umbraco site with every AI package installed and pre-configured seed data so you can start experimenting immediately.
The script installs:
dotnet add package Umbraco.AI
dotnet add package Umbraco.AI.Prompt
dotnet add package Umbraco.AI.Agent --prerelease
dotnet add package Umbraco.AI.Agent.Copilot --prerelease
dotnet add package Umbraco.AI.OpenAI
dotnet add package Umbraco.AI.Anthropic
dotnet add package Umbraco.AI.Google
dotnet add package Umbraco.AI.Amazon
dotnet add package Umbraco.AI.MicrosoftFoundry
It also installs the Clean starter kit so you have some content to work with straight away.
Once installed and running, you'll find a new AI section in the Umbraco backoffice settings where you can configure everything. The seed data gives you a head start with a pre-configured connection, profile, prompts, and agents - you just need to add your API key.
Core Concepts
Before diving into features, it's worth understanding the three foundational concepts that underpin everything in Umbraco AI: Connections, Context, and Profiles.
Connections (Providers)
A connection is your link to an AI provider. Each provider package you install (OpenAI, Anthropic, Google, etc.) registers itself as an available connection type. When you create a connection, you choose the provider and supply your API credentials - typically an API key.
You can set up multiple connections. Perhaps you want OpenAI for general content work and Anthropic for something else. Each connection is independent, and you pay the AI providers directly - Umbraco doesn't sit in the middle or add any markup.
Context
Context resources are pieces of background information that get injected into every AI request that uses the profile they're attached to. This is where governance really shines. You can define your brand voice, tone guidelines, target audience, terminology to use (or avoid), and style rules - all in one place.
When an editor runs a prompt or chats with the copilot, these context rules are automatically applied. This means your AI outputs stay consistent with your brand, regardless of which editor is using it or what they're asking for.
Profiles
A profile brings together a connection with specific model settings. This is where you choose which model to use (e.g. GPT-4o, Claude, Gemini), set the temperature, and attach any context resources. Think of a profile as a reusable configuration that defines how AI should behave for a particular purpose.
The kitchen sink install creates a default profile that uses OpenAI with GPT-4o and links it to a brand voice context. You can create as many profiles as you need - perhaps a creative one with higher temperature for marketing copy, and a precise one with lower temperature for technical documentation.
Prompts in Action
The prompt package adds AI-powered actions directly into the content editor. When editing a text field, editors get access to prompt actions that can generate or transform content in context.
The kitchen sink install seeds two prompts to get you started:
SEO Description
This prompt generates meta descriptions for your content. It takes the content of a page and produces a concise 150-160 character description optimised for search engines. Editors can run it as a property action on any short text field, review the suggestions, and pick the one that works best.
Summarise
The summarise prompt condenses longer content into a concise paragraph. It generates multiple options so the editor can choose the summary that best captures the key points. This is useful for creating excerpts, social media posts, or newsletter teasers from existing content.
Both prompts respect whatever context and profile settings you've configured, so the output automatically aligns with your brand voice and style guidelines.
Agents and the Copilot
Agents take things a step further. While prompts are one-shot actions (you run them, get a result, done), agents are conversational. They can have an ongoing dialogue with the editor, understand the context of what's being worked on, and help with more complex tasks.
The kitchen sink install creates three agents:
Content Assistant - a general-purpose helper for content creation and editing tasks
Media Assistant - focused on generating alt text, captions, and media descriptions
Legal Specialist - helps draft legal and compliance-related content
These agents are accessed through the Copilot - a chat sidebar that appears in the content and media sections of the backoffice. Editors can open the copilot, choose an agent, and have a back-and-forth conversation while they work. The agent is aware of the content being edited, so it can make suggestions that are relevant to the current page.
The copilot keeps the conversation contextual. If you're editing a product page and ask the content assistant to "write an introduction", it already knows what product you're working on. This keeps the interaction natural: you don't have to re-explain the context every time.
Building Custom Features: AI Log Analysis
Everything I've covered so far is what you get out of the box. But the real power of Umbraco AI's architecture is that the same services are available to you as a developer. You can build your own AI-powered features using the exact same abstraction layer.
To demonstrate this, I built an AI log analyser that adds a button to each row in the Umbraco backoffice log viewer. When you click it, a modal dialog opens and sends the log entry to an AI model for analysis. The AI returns a plain-language summary of what happened, the likely cause (for warnings and errors), and a suggested next step.
The Key: IAIChatService
At the heart of this is IAIChatService - a service provided by the core Umbraco.AI package. This is the same service that powers prompts, agents, and the copilot under the hood. It handles the communication with whichever AI provider you've configured, so your code doesn't need to know or care whether it's talking to OpenAI, Anthropic, Google, or anything else.
Here's the controller that handles the log analysis:
[ApiVersion("1.0")]
[VersionedApiBackOfficeRoute("log-ai")]
[Authorize(Policy = AuthorizationPolicies.BackOfficeAccess)]
public class LogAiSummaryController : ManagementApiControllerBase
{
    private readonly IAIChatService _chatService;
    private readonly ISystemDiagnosticsProvider _diagnostics;

    public LogAiSummaryController(
        IAIChatService chatService,
        ISystemDiagnosticsProvider diagnostics)
    {
        _chatService = chatService;
        _diagnostics = diagnostics;
    }

    [HttpPost("summarise")]
    public async Task<IActionResult> Summarise(
        [FromBody] LogSummaryRequest request,
        CancellationToken ct)
    {
        if (string.IsNullOrWhiteSpace(request.Message))
            return BadRequest("Message is required.");

        var prompt = $"""
            Please analyse this application log entry and provide:
            1. A plain-language summary of what happened
            2. The potential cause (if it's a warning or error)
            3. A suggested action or next step (if applicable)

            Keep your response concise and actionable. Use markdown formatting.

            --- System context ---
            {_diagnostics.GetContext()}

            --- Log entry ---
            Level: {request.Level}
            Timestamp: {request.Timestamp}
            Message: {request.Message}
            {(string.IsNullOrWhiteSpace(request.Exception)
                ? "" : $"Exception: {request.Exception}")}
            {(string.IsNullOrWhiteSpace(request.Properties)
                ? "" : $"Properties: {request.Properties}")}
            """;

        var messages = new List<ChatMessage>
        {
            new(ChatRole.System,
                "You are a helpful assistant that analyses application " +
                "log messages for Umbraco CMS developers. Be concise, " +
                "technical where appropriate, and focus on actionable insights."),
            new(ChatRole.User, prompt)
        };

        var response = await _chatService.GetChatResponseAsync(
            messages, cancellationToken: ct);

        return Ok(new LogSummaryResponse
        {
            Summary = response.Text ?? "No summary available."
        });
    }
}

The important thing to notice here is what's not in this code. There's no reference to OpenAI, no Anthropic SDK, no provider-specific configuration. The IAIChatService is injected through standard .NET dependency injection, and it uses whatever connection and profile you've configured in the Umbraco backoffice. If you switch from OpenAI to Anthropic tomorrow, this code doesn't change at all.
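The `LogSummaryRequest` and `LogSummaryResponse` DTOs aren't shown above. Inferred from how the controller uses them, they might look something like this (the property types are my assumptions; the actual demo may use records or different types):

```csharp
// DTO shapes inferred from the controller's usage; illustrative only.
public class LogSummaryRequest
{
    public string? Level { get; set; }
    public string? Timestamp { get; set; }
    public string Message { get; set; } = string.Empty;
    public string? Exception { get; set; }
    public string? Properties { get; set; }
}

public class LogSummaryResponse
{
    public string Summary { get; set; } = string.Empty;
}
```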
Enriching Context with System Diagnostics
To give the AI the best chance of providing useful analysis, we also inject system context into every request. A SystemDiagnosticsProvider service gathers information about the running environment once at startup and caches it:
public class SystemDiagnosticsProvider : ISystemDiagnosticsProvider
{
    private readonly Lazy<string> _context;

    public SystemDiagnosticsProvider(
        IUmbracoVersion umbracoVersion,
        IRuntimeState runtimeState,
        IHostEnvironment hostEnvironment)
    {
        _context = new Lazy<string>(() =>
            BuildContext(umbracoVersion, runtimeState, hostEnvironment));
    }

    public string GetContext() => _context.Value;

    private static string BuildContext(
        IUmbracoVersion umbracoVersion,
        IRuntimeState runtimeState,
        IHostEnvironment hostEnvironment)
    {
        var sb = new StringBuilder();
        sb.AppendLine($"Umbraco: {umbracoVersion.SemanticVersion}");
        sb.AppendLine($".NET: {RuntimeInformation.FrameworkDescription}");
        sb.AppendLine($"OS: {RuntimeInformation.OSDescription}");
        sb.AppendLine($"Environment: {hostEnvironment.EnvironmentName}");
        sb.AppendLine($"Runtime mode: {runtimeState.Level}");
        sb.AppendLine("Assemblies:");

        var assemblies = AppDomain.CurrentDomain.GetAssemblies()
            .Where(a => !a.IsDynamic)
            // ... filtered to exclude framework assemblies
            .Select(a => (Name: a.GetName().Name ?? "unknown",
                          Version: a.GetName().Version?.ToString() ?? "unknown"))
            .OrderBy(x => x.Name);

        foreach (var (name, version) in assemblies)
            sb.AppendLine($"  {name} {version}");

        return sb.ToString();
    }
}

This means when the AI analyses a log entry, it knows the Umbraco version, .NET runtime, operating system, environment name, runtime mode, and every non-framework assembly loaded in the application. This gives it far better context for diagnosing version-specific issues, package conflicts, or environment-related problems.
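For the controller to resolve `ISystemDiagnosticsProvider`, the implementation needs to be registered with the container. One way to do that is a standard Umbraco composer (the composer name and singleton lifetime here are my assumptions; `IComposer` and `IUmbracoBuilder` are standard Umbraco APIs):

```csharp
using Microsoft.Extensions.DependencyInjection;
using Umbraco.Cms.Core.Composing;
using Umbraco.Cms.Core.DependencyInjection;

// Hypothetical composer registering the diagnostics provider.
// A singleton fits here because the context is built once and cached.
public class LogAnalyserComposer : IComposer
{
    public void Compose(IUmbracoBuilder builder)
        => builder.Services
            .AddSingleton<ISystemDiagnosticsProvider, SystemDiagnosticsProvider>();
}
```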
Provider Agnostic by Design
This is the real takeaway. Because Umbraco AI is built on Microsoft.Extensions.AI and exposes its functionality through clean service abstractions like IAIChatService, any custom feature you build automatically inherits the same provider flexibility as the built-in features. Your log analyser, your custom content generator, your automated tagging system - whatever you build - works with any AI provider. The administrator configures the provider once in the backoffice, and everything just works.
You're not writing OpenAI code or Anthropic code. You're writing Umbraco AI code, and the provider is just a configuration choice.
Wrapping Up
Umbraco AI strikes a good balance between power and control. The built-in prompts and copilot cover the most common editorial use cases, while the underlying service layer makes it straightforward to build your own AI-powered features. The provider-agnostic architecture means you're never locked in, and the governance features (context, profiles, connections) ensure that AI outputs stay consistent and on-brand.
If you want to try it out, the kitchen sink install is the fastest path. Get it running, add your API key, and start exploring. Once you've seen what the built-in features do, you'll have a good sense of how to extend it for your own needs.
The source code for the log analyser demo will be available on GitHub soon.