Using Key Vault secrets locally

Have your application connect to a key vault while developing locally

About a year ago I wrote a blog article explaining how you can use Key Vault secrets when developing locally. Microsoft’s documentation suggests that you just use a secrets file locally and use Key Vault when running the app in the cloud. That seems sensible, but I kinda like to know everything is working properly, including the connection to the key vault, before I try to run the app directly from Azure.

The solution, which I cannot take credit for, worked well. But there’s a simpler method, one that I stumbled upon while researching how to make this work with Docker containers and Kubernetes.

In some situations you can use the built-in authentication capability in Visual Studio. There are situations where that never worked properly, which is where I fell back to this solution.

The solution

Enter the DefaultAzureCredential, which comes with the Azure.Identity library.

Consider the following scenario: during bootstrapping, my app tries to connect to Key Vault in order to get its secrets. So, inside the CreateHostBuilder method of the Program class, I create a secret client and add it to the configuration (via ConfigureAppConfiguration on the webBuilder):

var secretClient = new SecretClient(new Uri("https://<your-key-vault>.vault.azure.net/"), credential);
config.AddAzureKeyVault(secretClient, new KeyVaultSecretManager());

Obviously, I need a credential, and with the Azure.Identity SDK this is very easy:

var credential = new DefaultAzureCredential();

And this is pretty much it. This will work in an App Service (with a managed identity) and in a Kubernetes pod (provided there’s an identity to use; that’s a different topic).

In fact, the DefaultAzureCredential will try several credential sources in turn and use the first one that works (there’s a small sketch after the list showing one way to steer which source wins). When working locally, you have a couple of options to provide a credential that will work:

  • grant your Azure login the right permissions on the key vault and then select that account in Visual Studio (Options > Azure Service Authentication)
  • alternatively, use those same credentials to sign in to an Azure CLI session (az login); DefaultAzureCredential will pick them up when running locally.
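
If you want to influence which of those sources wins, DefaultAzureCredential accepts a DefaultAzureCredentialOptions instance. The following is only a minimal sketch: the tenant id is a placeholder, it assumes a recent Azure.Identity version, and whether any of it is needed depends entirely on your setup.

// sketch only; "<your-tenant-id>" is a placeholder
var credential = new DefaultAzureCredential(new DefaultAzureCredentialOptions
{
    // point the Visual Studio and shared token cache credentials at the right directory
    VisualStudioTenantId = "<your-tenant-id>",
    SharedTokenCacheTenantId = "<your-tenant-id>",
    // or skip the cached developer credential entirely and rely on an Azure CLI login
    ExcludeSharedTokenCacheCredential = true
});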

Nice, this works well when using Kestrel and when your PC and your development account belong to the same Azure environment. That is not the case for me.

Using Docker

When using Docker, your code won’t have access to the Azure CLI or the credentials configured in Visual Studio.

Using Visual Studio credentials never worked for me because my PC is joined to my employer’s Entra ID (and enrolled in Intune, etc.), while my private-time coding uses my own MSDN account and login. This seems to confuse things, and my environment always ended up using only my corporate account.

When using Docker and docker compose, you can feed environment variables to the container and use these to get a working credential.

Thus, my new DefaultAzureCredential() is changed to:

var credential = new ChainedTokenCredential(new DefaultAzureCredential(), new EnvironmentCredential());

This creates an Azure credential, but if the DefaultAzureCredential doesn’t succeed, it will fall back to an EnvironmentCredential. The EnvironmentCredential requires three environment variables to exist and be populated:

  • AZURE_CLIENT_ID
  • AZURE_CLIENT_SECRET
  • AZURE_TENANT_ID

So, in Visual Studio, update the docker-compose.override.yml and add the variables:
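
A minimal sketch of what that override could look like; the service name webapp is an assumption, and the values are substituted from environment variables on the host so nothing secret is written into the file:

# docker-compose.override.yml (sketch; "webapp" is whatever your service is called)
version: "3.4"
services:
  webapp:
    environment:
      - AZURE_CLIENT_ID=${AZURE_CLIENT_ID}
      - AZURE_CLIENT_SECRET=${AZURE_CLIENT_SECRET}
      - AZURE_TENANT_ID=${AZURE_TENANT_ID}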

You can hard-code a value for each directly in this file, or reference environment variables defined on your machine, as in the sketch above. Which method you use is up to you, but keep in mind that hard-coding a value here means the value ends up in your git repo as well, which isn’t a very good practice.

My Program.cs
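
What follows is a condensed sketch of how the pieces above fit together in CreateHostBuilder. The vault URL is a placeholder, Startup is the standard template class, and the logging and exact shape of the try-catch are my assumptions rather than the original code:

using System;
using Azure.Extensions.AspNetCore.Configuration.Secrets;
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Hosting;

public class Program
{
    public static void Main(string[] args) => CreateHostBuilder(args).Build().Run();

    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureWebHostDefaults(webBuilder =>
            {
                webBuilder.ConfigureAppConfiguration((context, config) =>
                {
                    try
                    {
                        // DefaultAzureCredential covers managed identity, Visual Studio and the Azure CLI;
                        // EnvironmentCredential picks up AZURE_CLIENT_ID/SECRET/TENANT_ID inside the container.
                        var credential = new ChainedTokenCredential(
                            new DefaultAzureCredential(),
                            new EnvironmentCredential());

                        var secretClient = new SecretClient(
                            new Uri("https://<your-key-vault>.vault.azure.net/"), credential);
                        config.AddAzureKeyVault(secretClient, new KeyVaultSecretManager());
                    }
                    catch (Exception ex)
                    {
                        // swallow the failure so the pod can still start when Key Vault is unreachable
                        Console.WriteLine($"Key Vault configuration skipped: {ex.Message}");
                    }
                });

                webBuilder.UseStartup<Startup>();
            });
}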

Note: this code runs fine without the try-catch block locally. However, I am running this in a K8S cluster, and without the try-catch the service never starts properly.