ChatCompletionService for Semantic Kernel

Edit (2/26/2026): Please note that the IChatCompletionService interface works only up to Semantic Kernel version 1.70.0. For later versions, you will have to use the IChatClient interface from the Microsoft.Extensions.AI package. This blog is focused only on the IChatCompletionService interface.
When working with Microsoft Semantic Kernel (SK), IChatCompletionService is the core component that enables conversational AI capabilities. It acts as the bridge between the application and large language models (LLMs) such as Azure OpenAI or OpenAI.
This article builds on the previous one, where we explored how to leverage Dependency Injection in Semantic Kernel. You can read that article here.
How does it work conceptually?
The flow is simple:
Create a Kernel
Register a Chat Completion model (Azure OpenAI / OpenAI)
Retrieve IChatCompletionService
Send a ChatHistory object
Receive the model response
Semantic Kernel manages orchestration while the model focuses on generation.
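The flow above can be sketched as a minimal snippet. This is illustrative only: the deployment name, endpoint and API key are placeholders, and running it requires the Semantic Kernel NuGet packages plus a live Azure OpenAI deployment.

```csharp
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;

// 1. Create a Kernel and register a chat completion model.
//    Deployment name, endpoint and key below are placeholders.
var builder = Kernel.CreateBuilder();
builder.AddAzureOpenAIChatCompletion(
    deploymentName: "my-deployment",
    endpoint: "https://xxxx.openai.azure.com",
    apiKey: "<your-api-key>");
Kernel kernel = builder.Build();

// 2. Retrieve IChatCompletionService from the built kernel.
var chat = kernel.GetRequiredService<IChatCompletionService>();

// 3. Send a ChatHistory and receive the model response.
var history = new ChatHistory();
history.AddUserMessage("Hello!");
var response = await chat.GetChatMessageContentAsync(history);
Console.WriteLine(response.Content);
```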
We will use the same example from the earlier article and extend it further to demonstrate the functionality of IChatCompletionService. In the previous article we used kernel.InvokeAsync to invoke the kernel functions, while here we will use IChatCompletionService to achieve the same functionality.
As in the earlier article, we will make extensive use of https://spectreconsole.net/ to build an interactive console application for interacting with LLMs.
To get started, we create a new C# Console application and add the following NuGet packages through these .NET CLI commands.
dotnet add package Microsoft.Extensions.DependencyInjection --version 10.0.2
dotnet add package Microsoft.SemanticKernel --version 1.70.0
dotnet add package Microsoft.SemanticKernel.Connectors.OpenAI --version 1.70.0
dotnet add package Spectre.Console --version 0.54.0
Next, we create a simple interface called ILightService that defines four methods: TurnOnAsync, TurnOffAsync, IsOnAsync, and UpdateRoomLights.
ILightService.cs
namespace SemanticKernelDependencyInjection
{
    public interface ILightService
    {
        void TurnOnAsync(string room);
        void TurnOffAsync(string room);
        Task<bool> IsOnAsync(string room);
        void UpdateRoomLights(string room, string state);
    }
}
Next, we define a class called LightService that implements this interface.
LightService.cs
using Spectre.Console;

namespace SemanticKernelDependencyInjection
{
    public class LightService : ILightService
    {
        public void TurnOnAsync(string room)
        {
            AnsiConsole.WriteLine("");
            AnsiConsole.MarkupLine($"Turning [green]ON[/] lights in {room.Split(new[] { "::" }, StringSplitOptions.None)[0]}");
            UpdateRoomLights(room, "ON");
            AnsiConsole.WriteLine("");
            Thread.Sleep(1500);
            AnsiConsole.MarkupLine($"Lights turned [green]ON[/] in {room.Split(new[] { "::" }, StringSplitOptions.None)[0]}");
        }

        public void TurnOffAsync(string room)
        {
            AnsiConsole.WriteLine("");
            AnsiConsole.MarkupLine($"Turning [red]OFF[/] lights in {room.Split(new[] { "::" }, StringSplitOptions.None)[0]}");
            UpdateRoomLights(room, "OFF");
            AnsiConsole.WriteLine("");
            Thread.Sleep(1500);
            AnsiConsole.MarkupLine($"Lights turned [red]OFF[/] in {room.Split(new[] { "::" }, StringSplitOptions.None)[0]}");
        }

        public Task<bool> IsOnAsync(string room)
        {
            // Default is OFF unless the room entry ends with "ON".
            Task<bool> isOn = Task.FromResult(
                Program.rooms.Any(r => r.StartsWith(room) && r.EndsWith("ON")));
            AnsiConsole.WriteLine("");
            if (isOn.Result)
                AnsiConsole.MarkupLine($"[green]✓[/] The lights are [green]ON[/] for room {room.Split(new[] { "::" }, StringSplitOptions.None)[0]}");
            else
                AnsiConsole.MarkupLine($"[red]x[/] The lights are [red]OFF[/] for room {room.Split(new[] { "::" }, StringSplitOptions.None)[0]}");
            Thread.Sleep(2000);
            return isOn;
        }

        public void UpdateRoomLights(string room, string state)
        {
            List<string> rooms = Program.rooms;
            int index = rooms.FindIndex(r => r.StartsWith(room));
            if (index >= 0)
                Program.rooms[index] = $"{room.Split(new[] { "::" }, StringSplitOptions.None)[0].Trim()} :: " + state;
        }
    }
}
In the service class above, the methods TurnOnAsync and TurnOffAsync change the status of the lights through the method UpdateRoomLights, while the method IsOnAsync checks whether the lights are ON. The default returned by that method is OFF.
Next, we create a Semantic Kernel plugin that consists of the kernel functions the LLM can choose to call through the Semantic Kernel orchestration layer.
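The room state is kept as a simple "Name :: STATE" string, so the update logic can be demonstrated in isolation. The sketch below is a self-contained illustration of that string convention (no Semantic Kernel dependencies); the class name RoomStateDemo is purely for the example.

```csharp
using System;
using System.Collections.Generic;

class RoomStateDemo
{
    // Sketch of the "Name :: STATE" convention: find the room entry
    // by its name prefix and rewrite the entry with the new state.
    static void UpdateRoomLights(List<string> rooms, string room, string state)
    {
        int index = rooms.FindIndex(r => r.StartsWith(room));
        if (index >= 0)
            rooms[index] = $"{room} :: {state}";
    }

    static void Main()
    {
        var rooms = new List<string> { "Kitchen :: OFF", "Bedroom :: OFF" };
        UpdateRoomLights(rooms, "Kitchen", "ON");
        Console.WriteLine(rooms[0]); // prints "Kitchen :: ON"
        Console.WriteLine(rooms[1]); // prints "Bedroom :: OFF"
    }
}
```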
LightsPlugin.cs
using System.ComponentModel;
using Microsoft.SemanticKernel;

namespace SemanticKernelDependencyInjection
{
    public class LightsPlugin
    {
        public readonly ILightService lightservice;

        public LightsPlugin(ILightService _lightservice)
        {
            this.lightservice = _lightservice;
        }

        [KernelFunction("lights_on")]
        [Description("Turns on the lights in the specified room. Room name, e.g., 'Kitchen' or 'Bedroom'")]
        public void LightsOn(string room)
        {
            lightservice.TurnOnAsync(room);
        }

        [KernelFunction("lights_off")]
        [Description("Turns off the lights in the specified room. Room name, e.g., 'Kitchen' or 'Bedroom'")]
        public void LightsOff(string room)
        {
            lightservice.TurnOffAsync(room);
        }

        [KernelFunction("get_light_state")]
        [Description("Get the state of light in a room")]
        public async Task<string> GetLightState(string room)
        {
            bool isOn = await lightservice.IsOnAsync(room);
            return isOn ? "ON" : "OFF";
        }
    }
}
Now that we have the plugin built, our next step is to build the kernel. We will have to deploy a model to Azure AI Foundry.
I created one and named it “room-lights”.
You need to configure the model’s API key when registering the chat completion service (IChatCompletionService) that the Kernel will use, before the kernel is built.
builder.AddAzureOpenAIChatCompletion(
    deploymentName: "room-lights",
    endpoint: "https://xxxx.openai.azure.com",
    apiKey: "XXXX-Your model API key-XXXX"
);
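Rather than hardcoding the API key in source, you can read it from an environment variable. The sketch below is self-contained for demonstration; the variable name AZURE_OPENAI_API_KEY is an assumption, not something mandated by Semantic Kernel.

```csharp
using System;

class ApiKeyDemo
{
    // Sketch: resolve the API key from an environment variable instead of
    // hardcoding it. AZURE_OPENAI_API_KEY is an assumed variable name.
    static string GetApiKey() =>
        Environment.GetEnvironmentVariable("AZURE_OPENAI_API_KEY")
        ?? throw new InvalidOperationException("AZURE_OPENAI_API_KEY is not set.");

    static void Main()
    {
        // Set the variable in-process purely for this demo.
        Environment.SetEnvironmentVariable("AZURE_OPENAI_API_KEY", "demo-key");
        Console.WriteLine(GetApiKey()); // prints "demo-key"
    }
}
```

In the real application you would pass the resolved value as the apiKey argument of AddAzureOpenAIChatCompletion.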
We will then retrieve IChatCompletionService from the kernel after the kernel is built.
var chat = kernel.GetRequiredService<IChatCompletionService>();
The setup for building the kernel is almost the same as in the previous article, wherein we used dependency injection to register LightsPlugin as a singleton service and later imported it into the kernel.
The major difference is that here we use IChatCompletionService, which wasn’t the case in the earlier article; there we explicitly invoked the relevant kernel functions.
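For comparison, here is a sketch of both styles side by side, assuming the kernel, plugin, chat service and history from this article are already set up (the room name "Kitchen" is just an example):

```csharp
// Style from the previous article: explicitly invoke a named kernel function.
var result = await kernel.InvokeAsync("Lights", "lights_on",
    new KernelArguments { ["room"] = "Kitchen" });

// Style used here: let the model decide which function to call, and have
// Semantic Kernel auto-invoke it on the model's behalf.
var settings = new OpenAIPromptExecutionSettings
{
    ToolCallBehavior = ToolCallBehavior.AutoInvokeKernelFunctions
};
var reply = await chat.GetChatMessageContentAsync(history, settings, kernel);
```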
Now that we have the dependency injection, ChatCompletionService, plugins and functions in place, let’s move to the functional side and see everything working together.
By default, the state of the lights is OFF for all rooms.
Program.cs
using Microsoft.Extensions.DependencyInjection;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;
using Microsoft.SemanticKernel.Connectors.OpenAI;
using SemanticKernelDependencyInjection;
using Spectre.Console;

internal class Program
{
    public static List<string> rooms = new();

    private async static Task Main(string[] args)
    {
        rooms.Add("Living Room :: OFF");
        rooms.Add("Kitchen :: OFF");
        rooms.Add("Bedroom :: OFF");
        rooms.Add("Bathroom :: OFF");
        rooms.Add("Dining Room :: OFF");
        rooms.Add("Study :: OFF");
        rooms.Add("Garage :: OFF");
        rooms.Add("Balcony :: OFF");
        rooms.Add("Guest Room :: OFF");
        rooms.Add("Store Room :: OFF");

        var builder = Kernel.CreateBuilder();
        builder.Services.AddSingleton<ILightService, LightService>();
        builder.Services.AddSingleton<LightsPlugin>();
        builder.AddAzureOpenAIChatCompletion(
            deploymentName: "room-lights",
            endpoint: "https://xxxx.openai.azure.com",
            apiKey: "XXXX-Your model API key-XXXX"
        );

        Kernel kernel = builder.Build();
        kernel.Plugins.AddFromObject(kernel.Services.GetRequiredService<LightsPlugin>(), "Lights");

        await SetMenu(rooms);

        var chat = kernel.GetRequiredService<IChatCompletionService>();
        var history = new ChatHistory();
        // Standing instructions belong in a system message, not a user message.
        history.AddSystemMessage("You are responsible to turn lights on or off based on the user input and also return the state of lights of a room");

        var settings = new OpenAIPromptExecutionSettings
        {
            ToolCallBehavior = ToolCallBehavior.AutoInvokeKernelFunctions
        };

        while (true)
        {
            AnsiConsole.Write("> ");
            var userInput = Console.ReadLine() ?? "";
            history.AddUserMessage(userInput);

            ChatMessageContent response = await chat.GetChatMessageContentAsync(
                history,
                executionSettings: settings,
                kernel: kernel
            );
            history.AddAssistantMessage(response?.Content ?? "");

            await SetMenu(rooms);
        }
    }

    public static Task SetMenu(List<string> rooms)
    {
        Console.Clear();
        var table = new Table();
        table.AddColumn("Room");
        table.AddColumn("State");
        foreach (var room in rooms)
        {
            var parts = room.Split(new[] { "::" }, StringSplitOptions.None);
            if (parts[1] == " ON")
                table.AddRow(parts[0], $"[green]{parts[1]}[/]");
            else
                table.AddRow(parts[0], $"[red]{parts[1]}[/]");
        }
        AnsiConsole.Write(table);
        return Task.CompletedTask;
    }
}
That’s all. Now go ahead and test the app.
Below is a screencast demonstrating how you can use an LLM to change or retrieve the state of the lights in a room.
Rooms whose lights are OFF have their state highlighted in red, while rooms whose lights are ON are highlighted in green.
Let me know your thoughts and comments.
Thanks for reading !!!




