Building Intelligent .NET Applications with the Azure OpenAI NuGet Package
By Ivan Gechev
In this article, we'll explore the various capabilities of Azure's OpenAI NuGet package. We'll also see how we can use them in our .NET applications when tackling a real challenge. By the end of this article, you'll have the knowledge you need to integrate Azure's OpenAI NuGet package into your .NET projects and build intelligent applications that can solve a variety of problems.
The problem
Imagine we are building a chat application and we want to be able to filter out any profanity that a user might send. There are several ways of achieving this - you might go on the hunt for an open-source library, or tackle the problem yourself by either using a Regex or a pre-filled list of bad words.
Those solutions would work, but are they ideal? Open-source libraries can get frequent updates or no updates at all, so keeping them up to date might prove difficult. If you decided to solve the problem yourself, what happens if you need to add support for multiple languages, or when you have to keep adding new words to the list?
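To see why the do-it-yourself route gets painful, here is what a minimal hand-rolled word-list filter might look like - a sketch only, with a placeholder word list that a real application would have to curate endlessly:

```csharp
using System.Linq;
using System.Text.RegularExpressions;

public static class NaiveProfanityFilter
{
    // Placeholder word list - a real one needs constant curation,
    // per-language variants, and handling of creative misspellings.
    private static readonly string[] BadWords = { "crappy", "darn" };

    public static string Filter(string text) =>
        BadWords.Aggregate(text, (current, word) =>
            Regex.Replace(
                current,
                $@"\b{Regex.Escape(word)}\b",
                new string('*', word.Length),
                RegexOptions.IgnoreCase));
}
```

Every new language, slang term, or misspelling means another entry in the list - exactly the maintenance burden we want to avoid.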
A potential all-in-one solution is to use the power of AI in your application. Now we can easily integrate that using the Azure.AI.OpenAI NuGet package.
You have to be aware that if you want to use an Azure OpenAI resource, you must have an active Azure subscription and Azure OpenAI access.
Installing the Azure.AI.OpenAI NuGet Package
The first thing we need to do is to install the package. There are several different ways to achieve this.
We can use the .NET CLI:
dotnet add package Azure.AI.OpenAI --version 1.0.0-beta.5
Or the Package Manager inside Visual Studio:
Install-Package Azure.AI.OpenAI -Version 1.0.0-beta.5
Please note that at the time of writing this post, the latest version of the package is 1.0.0-beta.5. You can check the latest version on NuGet.
Setting up the Example Project
In this post, we are going to use a .NET 7 Web API. We will also go with minimal APIs for maximum simplicity.
We create a new ASP.NET Core Web API project and clean up the Program.cs file to remove everything we don't need:
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();
var app = builder.Build();
if (app.Environment.IsDevelopment())
{
app.UseSwagger();
app.UseSwaggerUI();
}
app.UseHttpsRedirection();
app.MapGet("/profanityfilter", (string text) =>
{
return text;
});
app.Run();
You will notice that we have only one GET endpoint called profanityfilter:
app.MapGet("/profanityfilter", (string text) =>
{
return text;
});
Currently, it takes a string parameter and just returns it. By the end of this post, this endpoint will return the string with all the profanity replaced with asterisks.
Configuring the OpenAI Client
We start by adding some values in our configuration files:
"Azure": {
"OpenAI": {
"Endpoint": "YOUR_AZURE_OPENAI_ENDPOINT",
"DeploymentName": "YOUR_AZURE_OPENAI_DEPLOYMENT_NAME"
}
}
To interact with OpenAI on Azure, we need to know the endpoint and the deployment name of our large language model (LLM). We can store those in the configuration files. We also need a token. As it is a very sensitive piece of information, we will store it as an environment variable.
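Reading the token defensively at startup makes misconfiguration obvious; the variable name AZURE_OPENAI_TOKEN is just the one we use in this post, not anything the package prescribes:

```csharp
// Fail fast if the token was not provided - the variable name is our own choice.
var openAIToken = Environment.GetEnvironmentVariable("AZURE_OPENAI_TOKEN")
    ?? throw new InvalidOperationException(
        "The AZURE_OPENAI_TOKEN environment variable is not set.");
```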
Once those are set in our configuration files, we need a way to access them:
var configuration = new ConfigurationBuilder()
.AddJsonFile("appsettings.json", optional: false)
.AddJsonFile($"appsettings.{builder.Environment.EnvironmentName}.json", optional: true)
.Build();
Right after var builder = WebApplication.CreateBuilder(args); we initialize our configuration variable by loading the appsettings.json file. We can also include an optional file, appsettings.{builder.Environment.EnvironmentName}.json, which will resolve to appsettings.Development.json when running our application in the Development environment.
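As a design note, in a minimal API we could also skip the manual ConfigurationBuilder entirely, since WebApplicationBuilder already loads the same appsettings files and environment variables by default - this is an alternative, not something the package requires:

```csharp
var builder = WebApplication.CreateBuilder(args);

// builder.Configuration already merges appsettings.json,
// appsettings.{Environment}.json, and environment variables.
var endpoint = builder.Configuration["Azure:OpenAI:Endpoint"];
var deploymentName = builder.Configuration["Azure:OpenAI:DeploymentName"];
```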
Finally, we can initialize the OpenAIClient:
app.MapGet("/profanityfilter", (string text) =>
{
var client = new OpenAIClient(
new Uri(configuration["Azure:OpenAI:Endpoint"]),
new AzureKeyCredential(Environment.GetEnvironmentVariable("AZURE_OPENAI_TOKEN")));
return text;
});
We are using the constructor overload of the OpenAIClient class that takes two parameters - a Uri and an AzureKeyCredential. For the Uri, we use the endpoint of our OpenAI resource, and for the AzureKeyCredential we use our OpenAI token.
You can find both by navigating to your Azure OpenAI resource in the Azure portal. Both the endpoint and the token are listed under Keys and Endpoint. There are two tokens, called KEY 1 and KEY 2, and either one can be used.
We are still returning the same string, but we can finally interact with OpenAI from our application. In the next two sections, we will explore how to get a profanity-filtered version of our input text.
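Constructing a new OpenAIClient on every request is fine for a demo, but Azure SDK clients are generally designed to be created once and reused. A possible refactoring - a sketch, using the same configuration keys as above - registers the client as a singleton and lets minimal API parameter binding inject it:

```csharp
// Register once, before builder.Build().
builder.Services.AddSingleton(_ => new OpenAIClient(
    new Uri(builder.Configuration["Azure:OpenAI:Endpoint"]!),
    new AzureKeyCredential(
        Environment.GetEnvironmentVariable("AZURE_OPENAI_TOKEN")!)));

// The handler now receives the shared client via dependency injection.
app.MapGet("/profanityfilter", (string text, OpenAIClient client) =>
{
    return text;
});
```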
Using the OpenAI Client to Get Completions
For this example, we will use the text-davinci-003 model, which is the best one we can get for simple completions. Make sure you update the DeploymentName in the settings to correspond to the one you used when deploying the LLM on Azure.
Let's see what we need to do:
app.MapGet("/profanityfilter", async (string text) =>
{
var client = new OpenAIClient(
new Uri(configuration["Azure:OpenAI:Endpoint"]),
        new AzureKeyCredential(Environment.GetEnvironmentVariable("AZURE_OPENAI_TOKEN")));
var response = await client.GetCompletionsAsync(
configuration["Azure:OpenAI:DeploymentName"],
new CompletionsOptions
{
Prompts =
{
$"Remove all the profanity from the following text and replace it with asterisks: {text}"
},
Temperature = 0.1f,
MaxTokens = 500
});
return response.Value.Choices[0].Text;
});
Inside the GET endpoint handler, we call and await the GetCompletionsAsync method on the client instance. The method has two overloads. The first parameter for both is the deployment or model name in the form of a string. The second parameter can either be a string representing the prompt or an instance of the CompletionsOptions class. In the example above, we opt for the second overload, as it allows us more control over our request.
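For completeness, the simpler overload takes the prompt directly as a string; if we didn't need the extra knobs, the call would look roughly like this:

```csharp
// Simpler overload: deployment name plus a single prompt string.
var response = await client.GetCompletionsAsync(
    configuration["Azure:OpenAI:DeploymentName"],
    $"Remove all the profanity from the following text and replace it with asterisks: {text}");
```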
The CompletionsOptions class has several properties that can be used when interacting with OpenAI. We use the Prompts, Temperature, and MaxTokens ones. We pass a single prompt that asks the model to remove all the profanity from the given text and replace it with asterisks. Then we specify a Temperature of 0.1, which controls the creativity of the response, and a maximum of 500 tokens.
By awaiting the GetCompletionsAsync method we get a Response<T> where T is a class called Completions. To return the filtered text we use response.Value.Choices[0].Text.
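The Completions payload carries more than just the text. In the beta versions of the package it also exposes token usage and a finish reason per choice - assuming these property names match your package version, they can be inspected like so:

```csharp
var completions = response.Value;

// Token accounting for the request (useful for cost monitoring).
Console.WriteLine($"Prompt tokens: {completions.Usage.PromptTokens}");
Console.WriteLine($"Total tokens:  {completions.Usage.TotalTokens}");

// "stop" means the model finished naturally; "length" means MaxTokens was hit.
Console.WriteLine($"Finish reason: {completions.Choices[0].FinishReason}");
```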
This completes our work and we can test it. To do that, we can either run the project and test via Swagger, or send a request using Postman. For our request, we will use something that is not that offensive but still passes for profanity - "This is not a crappy way to remove profanity at all!".
When we send the GET request with our test text, we get:
This is not a ***** way to remove profanity at all!
If you ignore the extra blank lines, which seem omnipresent with DaVinci models, the result is just what we wanted.
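If those blank lines bother you, trimming the returned choice is a one-line change to the endpoint's return statement:

```csharp
// Trim the whitespace and newlines DaVinci-style models tend to prepend.
return response.Value.Choices[0].Text.Trim();
```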
Using the OpenAI Client to Get Chat Completions
We already know how to get simple completions, so updating our code to use the more advanced chat completions won't be that difficult. For this we need a model that supports chat, so we will use gpt-35-turbo (version 0301). Don't forget to replace the DeploymentName in the settings files.
Let's roll up our sleeves and get coding:
app.MapGet("/profanityfilter", async (string text) =>
{
var client = new OpenAIClient(
new Uri(configuration["Azure:OpenAI:Endpoint"]),
        new AzureKeyCredential(Environment.GetEnvironmentVariable("AZURE_OPENAI_TOKEN")));
var response = await client.GetChatCompletionsAsync(
configuration["Azure:OpenAI:DeploymentName"],
new ChatCompletionsOptions
{
Messages =
{
new ChatMessage(
ChatRole.System,
"You detect and filter out profanity"),
new ChatMessage(
ChatRole.User,
$"Remove all the profanity from the following text and replace it with asterisks: {text}")
},
Temperature = 0.1f,
MaxTokens = 500
});
return response.Value.Choices[0].Message.Content;
});
Here, we use the GetChatCompletionsAsync method to get the response. The method has only one version, which takes in a string representing the deployment or model name and a ChatCompletionsOptions instance. The ChatCompletionsOptions class is very similar to the CompletionsOptions class we used in the previous section. The main difference is that we have Messages, which is an IList<ChatMessage>, instead of Prompts, which is an IList<string>.
ChatMessage is a simple class that has two properties - Role, which is of type ChatRole, and Content, of type string. ChatRole comes from the Azure.AI.OpenAI package and can be System, Assistant, or User.
We use the first message to set the role of the LLM and instruct it on how to behave. The second message contains our prompt and is what will get us the desired output. We can also use various other properties to fine-tune our request, but we will stick to Temperature and MaxTokens.
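If you want to experiment with those other properties, ChatCompletionsOptions in 1.0.0-beta.5 exposes a few more tuning knobs; the values below are illustrative, not recommendations:

```csharp
var options = new ChatCompletionsOptions
{
    Temperature = 0.1f,
    MaxTokens = 500,
    // Nucleus sampling: only consider the most likely tokens summing to 95%.
    NucleusSamplingFactor = 0.95f,
    // Positive values discourage the model from repeating itself.
    FrequencyPenalty = 0.5f,
    PresencePenalty = 0.0f
};
```

As with Temperature, it is usually best to tweak one of these at a time and observe the effect.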
The last tweak we need is to update our return statement to match the response model we receive - response.Value.Choices[0].Message.Content. This gets us the textual representation of the response.
Finally, we can run our API, send the request, and check the outcome:
This is not a ***** way to remove profanity at all!
We get the same result, minus the new lines!
Conclusion
In this post, we explored some of the capabilities of Azure's OpenAI NuGet package. We saw how to install the package, set up the example project, configure the OpenAIClient, and implement a profanity filter powered by AI, using both completions and chat completions. By integrating the package into .NET projects, we can build intelligent applications that tackle a variety of problems. Now that you know the basics, you can dive in and explore the numerous other possibilities.