This edition of ReadingNotes covers new AI tools, WSL updates, .NET 9 features, and debugging tips with GitHub Copilot. Plus, insightful podcasts on .NET Aspire, productivity tools, and frontend engineering.
Sharing my Reading Notes is a habit I started a long time ago, where I share a list of all the articles, blog posts, and books that catch my interest during the week.
It's reading notes time! It is a habit I started a long time ago, close to 600 weeks ago in fact, where I share a list of all the articles, blog posts, and books that catch my interest during the week.
Syncing a git branch between Windows and WSL filesystems (Andrew Lock) - It's true that by habit or reflex most people will think of Git as centralized... But it's not! Once you understand that, so many possibilities become available. This post both summarizes that and shows an example.
Low Code
Testing Your Native AOT Applications - .NET Blog (Marco Rossignoli, Jakub Jareš, Jakub Chocholowicz) - With ahead-of-time (AOT) compilation we still need tests, but how do we do that with a stripped-down version of .NET? This short tutorial shows how.
How to automate tasks in Windows (David Nield) - Seeing things getting done by some automation you built is a nice feeling. This short post introduces 4 tools that will get you started with automation.
Get Started with GitHub Actions (Mandy Hubbard) - This is a great post to get started with CI/CD. GitHub is free and accessible; give it a try and then automate some tests or a deployment.
Why tuples in C# are not always a code smell (Dennis Frühauff) - I haven't used tuples yet, but it's not hard to see the potential and how they could become ugly. This post shows and explains a few simple rules to stay in the clear.
Lazy and once-only C# async initialization (Ian Griffiths) - An amazing post that explains so much and that has all the code snippets to be clear about it. I will have to copy-paste those into some test applications to really try them, but it is clear what I should expect.
Good Monday, already time to share new reading notes. Here is a list of all the articles, blog posts, and podcast episodes that caught my interest during the week.
If you think you may have interesting content, share it!
ASP.NET Core CRUD with NoSQL: Part 2 (Matthew Groves) - In this second post of the series, we learn how to create an index and how to prepare our SELECT query to read our data.
An Introduction to JSON (Jack Wallen) - I thought it was a bit older than that. Nice post to learn more about something we all use.
Time to share my reading notes. This is a curated list of all the articles, blog posts, podcast episodes, and books that caught my interest during the week and that I found interesting. It's a mix of current news and what I consumed.
If you think you may have interesting content, share it!
Cloud
Automate your work day with workflows in Teams (Harysh Menon) - I'm a big fan (and user) of automation with Power Automate or Logic Apps; there is so much we can do! This short post will give you a few ideas for sure.
Why you should ship your silly side projects (Salma Alam-Naylor) - Refreshing post… And go see that website if Geocities means something to you. I totally agree about shipping projects; it is an excellent source of joy and you always learn new stuff.
Miscellaneous
New data: What makes developers happy at work (David Gibson) - Interesting post. So many details… Are developers really that different from other workers? I feel most people want the same things…
Good Monday, already time to share new reading notes.
It is a habit I started a long time ago where I share a list of all the articles, blog posts, podcast episodes, and books that catch my interest during the week.
If you think you may have interesting content, share it!
Azure Cosmos DB SQL API – Document Analytics (Modhana) - Finding a good partition for our data is crucial to get the best performance. This post explains how with Celebrata we can visualize them and make smarter choices.
Miscellaneous
No one cares that you’re right (George Stocker) - Interesting post. The comparison of a compiler to humans made me smile, but it works...
I'm not a Docker master, but I understand that it's very useful and I like to use it from time to time in some projects. Another thing I like is DevOps and automation, and in a project I have, that was missing. In the previous setup, the container was built and published to Docker Hub with the date as a tag. Nice, but not very easy to know which versions are "stable" and which ones are "in progress".
This post is about how I built a continuous integration and continuous deployment (CI/CD) solution for my Docker project. All the code is on GitHub and Docker Hub. I'm sharing my journey so others can enjoy that automation and not spend a weekend building it.
The Goal
By the end of this build, there will be two GitHub Actions workflows to build and publish different versions of the application on Docker Hub.
The release version: every time a release is published on GitHub, a container tagged with the matching version number will be built and published (ex: myapp:v1).
The beta version: at every push to my branch on GitHub, a container will be published with a specific tag. The tag will match the draft release version number with -beta appended (ex: myapp:v2-beta).
In this post, the application is a Node.js Twitch chatbot. The type of application is not important; the post focuses on the delivery.
Publishing the release version
Every time a release is published on GitHub, the workflow will be triggered. It will first retrieve the "release version", then build and tag the container with it, and finally publish (aka push) it to Docker Hub. Because a "release" is also a "stable" version, it will also update the container tag latest.
Let's look at the full YAML definition of the GitHub Action and I will break it down after.
name: Release Docker Image CI
on:
  release:
    types: [published]
jobs:
  update:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set outputs
        id: vars
        run: echo ::set-output name=RELEASE_VERSION::$(echo ${GITHUB_REF:10})
      - name: Publish to Registry
        uses: elgohr/Publish-Docker-Github-Action@master
        with:
          name: ${{ secrets.DOCKER_USER }}/cloudbot
          username: ${{ secrets.DOCKER_USER }}
          password: ${{ secrets.DOCKER_PASSWORD }}
          tags: "latest,${{ steps.vars.outputs.RELEASE_VERSION }}"
To limit how many times the workflow is triggered, I used on: release and the type published; adjust as you like.
The next interesting part is the lines in the step vars.
- name: Set outputs
  id: vars
  run: echo ::set-output name=RELEASE_VERSION::$(echo ${GITHUB_REF:10})
Here I use the environment variable GITHUB_REF (stripped of its first 10 characters, which contain "refs/tags/") to initialize a local variable RELEASE_VERSION. The value is available from the outputs of that step, as on the last line of the YAML.
From the step identified by the id vars, I retrieve the value of RELEASE_VERSION from its outputs.
In this GitHub Action, I used elgohr/Publish-Docker-Github-Action@master because it was simple and did what I needed. You can execute docker commands directly if you prefer, or use docker/github-actions.
Every time a push is done on GitHub, the workflow will be triggered. It will first retrieve the "release version" of the most recent release in draft mode. The workflow will append -beta to the retrieved version and use this to tag the container. Finally, it will publish (aka push) it to Docker Hub.
Once more, here is the full YAML; I will break it down after.
Here the difficulty was that I wanted to create a tag from a "future" version. I decided to use Draft Releases because those are not visible to everyone; therefore, they can represent the future version.
If your last release is version 1 (v1.0), to make this workflow possible you will need to create a new release and save it as a Draft.
Like in the Release workflow, I need to retrieve the version. Because drafts are only visible to some people, we will need access. This is easily done by using github.token; those tokens are created automatically when the GitHub Action starts.
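To make this concrete, here is a minimal sketch of what such a beta workflow could look like. The way of querying the draft release (curl plus jq against the GitHub API), the branch name, and the token permissions are assumptions for illustration; adapt them to your repository.

name: Beta Docker Image CI
on:
  push:
    branches: [ main ]
jobs:
  update:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Get draft release version
        id: vars
        run: |
          # Ask the GitHub API for the most recent draft release, using github.token for access
          VERSION=$(curl -s -H "Authorization: token ${{ github.token }}" \
            "https://api.github.com/repos/${{ github.repository }}/releases" \
            | jq -r '[.[] | select(.draft == true)][0].tag_name')
          # Append -beta to the draft version and expose it as a step output
          echo ::set-output name=RELEASE_VERSION::${VERSION}-beta
      - name: Publish to Registry
        uses: elgohr/Publish-Docker-Github-Action@master
        with:
          name: ${{ secrets.DOCKER_USER }}/cloudbot
          username: ${{ secrets.DOCKER_USER }}
          password: ${{ secrets.DOCKER_PASSWORD }}
          tags: "${{ steps.vars.outputs.RELEASE_VERSION }}"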
And voila, a very simple and easy-to-implement CI/CD for a container project. There are many different options; I'm looking forward to learning how you did yours!
Serverless Deployment Best Practices (Fernando Medina Corey) - A nice post that shows some of the best practices for serverless and how AWS implements them.
Why jQuery is Obsolete and Time to Stop Using It (Chris Love) - Great post. I was a big user of jQuery, and these days I do less front end stuff, so it is nice to see how things have evolved and to understand the impact jQuery had.
We all do it. We create resources in the cloud for a demo or a presentation and forget about them. Then at the end of the month, we receive a bigger invoice than expected and it's panic.
This is why I thought about AzSubscriptionCleaner. It's an open-source project that can be deployed in your subscription very easily. The goal is to have it deployed with one click directly from GitHub.
The tool can be deployed in two versions, using Azure Automation or Azure Functions. Based on a schedule, it will execute a query to find all resources with a tag expireOn whose value is older than now(), and delete them.
I wrote two blog posts, paired with YouTube videos, that explain how the tools were built.
This is an open-source project, github.com/FBoucher/AzSubcriptionCleaner; you are welcome to look at the code, clone the repository, ask for more features, or submit a pull request to add a new one!
It's so nice to be able to add some serverless components in our solution to make them better in a snap. But how do we manage them? In this post, I will explain how to create an Azure resource manager (ARM) template to deploy any Azure Function and show how I used this structure to deploy an open-source project I've been working on these days.
Part 1 - The ARM template
An ARM template is a JSON file that describes our architecture. To deploy an Azure Function we need at least three resources: a functionApp, a service plan, and a storage account.
The FunctionApp is, of course, our function. The service plan can be set as dynamic or describe the type of resources that will be used by your function. The storage account is where our code lives.
In the previous image, you can see how those components interact with each other. Inside the Function, we will have a list of properties. One of those properties will be the runtime; for example, in the AZUnzipEverything demo, it will be dotnet. Another property will be the connection string to our storage account, which is also part of our ARM template. Since that resource doesn't exist yet, we will need to build it dynamically.
The Function node will contain a sub-resource of type sourcecontrol. This is where we will specify where our code is, so it can be cloned to Azure.
Building ARM for a Simple Function
Let's see a template for a simple Azure Function that doesn't require any dependency, and we will examine it after.
The first resource listed in the template is the storage account. There is nothing specific about it.
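For reference, a minimal version of that resource could look like the sketch below; the variable name, SKU, kind, and apiVersion are assumptions.

{
    "type": "Microsoft.Storage/storageAccounts",
    "apiVersion": "2018-07-01",
    "name": "[variables('storageAccountName')]",
    "location": "[resourceGroup().location]",
    "kind": "StorageV2",
    "sku": {
        "name": "Standard_LRS"
    }
}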
The Service Plan
The service plan is the second resource in the list. It's important to note that to be able to use the Dynamic SKU, the apiVersion needs to be at least "2018-02-01". Then you specify the SKU.
"sku": {
"name": "Y1",
"tier": "Dynamic"
}
Of course, you can use the other SKU if you prefer.
The Function App
The final resource added to the mix, and this is where all the pieces come together. It's important to note that the order in which the resources are listed is not considered by Azure while deploying (it's only for us ;) ). To let Azure know the order, you need to add dependencies.
This way the Azure Function will be created after the service plan and the storage account are available. Then in the properties we will be able to build the ConnectionString to the blob storage using a reference.
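Here is a hedged sketch of how that could look in the Function App resource; the variable names and apiVersions are assumptions, and the important parts are dependsOn and the listKeys() reference used to build the connection string.

{
    "type": "Microsoft.Web/sites",
    "apiVersion": "2018-11-01",
    "name": "[variables('functionAppName')]",
    "location": "[resourceGroup().location]",
    "kind": "functionapp",
    "dependsOn": [
        "[resourceId('Microsoft.Web/serverfarms', variables('servicePlanName'))]",
        "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]"
    ],
    "properties": {
        "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', variables('servicePlanName'))]",
        "siteConfig": {
            "appSettings": [
                {
                    "name": "AzureWebJobsStorage",
                    "value": "[concat('DefaultEndpointsProtocol=https;AccountName=', variables('storageAccountName'), ';AccountKey=', listKeys(resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName')), '2018-07-01').keys[0].value)]"
                },
                {
                    "name": "FUNCTIONS_EXTENSION_VERSION",
                    "value": "~2"
                },
                {
                    "name": "FUNCTIONS_WORKER_RUNTIME",
                    "value": "dotnet"
                }
            ]
        }
    }
}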
The last piece of the puzzle is the sub-resource sourcecontrol inside the FunctionApp. This will define where Azure should clone the code from and in which branch.
To be sure that everything is fully automatic, the properties publishRunbook and IsManualIntegration must be set to true. Otherwise, you will need to do a synchronization between your Git (in this case on GitHub) and the Git in Azure.
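As an illustration, the sourcecontrol sub-resource could be sketched like this; the repository URL and branch are placeholders, and the property names follow the ones mentioned above.

{
    "type": "sourcecontrols",
    "apiVersion": "2018-11-01",
    "name": "web",
    "dependsOn": [
        "[resourceId('Microsoft.Web/sites', variables('functionAppName'))]"
    ],
    "properties": {
        "repoUrl": "https://github.com/<owner>/<repository>",
        "branch": "master",
        "publishRunbook": true,
        "IsManualIntegration": true
    }
}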
Of course, all the source code of both the Azure Function and the ARM template is available on GitHub, but let me highlight how the containers are defined from an ARM template.
Just like with sourcecontrol, we will need to add a list of sub-resources to our storage account. The name MUST start with 'default/'.
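For example, a container sub-resource under the storage account could be sketched like this; the container name (after the mandatory 'default/' prefix) is a placeholder.

"resources": [
    {
        "type": "blobServices/containers",
        "apiVersion": "2018-07-01",
        "name": "default/mycontainer",
        "dependsOn": [
            "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]"
        ]
    }
]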
Part 2 - Four Deployment Options
Now that we have a template that describes our needs, we just need to deploy it. There are multiple ways it can be done, but let's look at four of them.
Deploy from the Azure Portal
Navigate to the Azure Portal (https://portal.azure.com) from your favorite browser and search for "deploy a custom template" directly in the search bar located at the top of the screen (in the middle), or go to https://portal.azure.com/#create/Microsoft.Template. Once in the Custom deployment page, click on the link Build your own template in the editor. From there, you can copy-paste or upload your ARM template. You need to save it to see the real deployment form.
Deploy with a script
Whether it be in PowerShell or in Azure CLI, you can easily deploy your template with these two commands.
In Azure CLI
# create resource group
az group create -n AzUnzipEverything -l eastus
# deploy it
az group deployment create -n cloud5mins -g AzUnzipEverything --template-file "deployment\deployAzure.json" --parameters "deployment\deployAzure.parameters.json"
In PowerShell
# create resource group
New-AzResourceGroup -Name AzUnzipEverything -Location eastus
# deploy it
New-AzResourceGroupDeployment -ResourceGroupName AzUnzipEverything -TemplateFile deployment\deployAzure.json
Deploy to Azure Button
One of the best ways to help people deploy your solution in their Azure subscription is the Deploy to Azure button.
You need to create an image link (in HTML or Markdown) pointing to a special destination URL built in two parts: the Azure Portal custom template page, followed by the URL of your ARM template.
However, this URL needs to be encoded. There are plenty of encoders online, but you can also do it from the terminal with the following command (a big thanks to @BrettMiller_IT who showed me this trick during one of my live streams).
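To illustrate (the exact command shared on the stream may differ), the button and the encoding could look like this, assuming the raw URL of your ARM template on GitHub.

In Markdown:

[![Deploy to Azure](https://azuredeploy.net/deploybutton.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/<encoded-template-url>)

In PowerShell, one possible way to encode the template URL:

# Encode the raw URL of the ARM template so it can be appended after /uri/
[uri]::EscapeDataString("https://raw.githubusercontent.com/<owner>/<repo>/master/deployment/deployAzure.json")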
Clicking the button will bring the user to the same page on the Azure Portal, but in their own subscription.
Azure DevOps Pipeline
From the Azure DevOps portal (https://dev.azure.com), select your project and create a new Release Pipeline. Click on the + Add an artifact button to connect your Git repository.
Once it's added, you need to add a task to the current job. Click on the link 1 job, 0 task. Now you just need to specify your Azure subscription, the name of the resource group, and select the location of your ARM template inside your repository. To make the deployment automatic with each push in the repository, click the little lightning bolt and enable the Continuous deployment trigger.
Wrapping-up
Voila, you now have four different ways to deploy your Azure Function automatically. But don't take my word for it, try it yourself! If you need more details you can visit the project on GitHub or watch this video where I demo the content of this post.
Static websites are lightning fast, and running them inside an Azure Blob Storage instead of a WebApp is incredibly economical (less than $1/month). Does that mean you need to do everything manually? Absolutely not! In a previous post I explained how to automatically generate your static website using a Build Pipeline inside Azure DevOps. In this post, let's complete the CI-CD by creating a Release Pipeline to deploy it.
The Azure Resource Manager (ARM) Template
First things first. If we want our release pipeline to deploy our website in Azure, we first need to be sure our resources are available "up there." The best way to do this is by using an Azure Resource Manager (ARM) template. I will use the same project started in the previous post; feel free to adapt it to your structure or copy it.
Create a new file named deploy.json in the deployment folder. We need a simple storage account.
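A minimal sketch of what deploy.json could contain is below; the SKU, kind, and apiVersion are assumptions, and only the StorageName parameter matters for the rest of the post.

{
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "StorageName": {
            "type": "string",
            "maxLength": 24
        }
    },
    "resources": [
        {
            "type": "Microsoft.Storage/storageAccounts",
            "apiVersion": "2018-07-01",
            "name": "[parameters('StorageName')]",
            "location": "[resourceGroup().location]",
            "kind": "StorageV2",
            "sku": {
                "name": "Standard_LRS"
            }
        }
    ]
}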
I used a parameter (StorageName) to define the name of the storage account. This way I could have multiple pipelines deploying in different storages.
Now, to make the ARM template accessible to the release pipeline, we also need to publish it. The easiest way to do it is to add another CopyFiles task in our azure-pipeline. Add this task just before the PublishBuildArtifacts.
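A sketch of that task, assuming the ARM template lives in a deployment folder, could look like this:

- task: CopyFiles@2
  displayName: 'Copy ARM template'
  inputs:
    SourceFolder: 'deployment'
    Contents: '*.json'
    TargetFolder: '$(Build.ArtifactStagingDirectory)'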
Once you commit and push these changes, it will trigger a build. When done, the ARM template will be available, and we will be able to start working on the release pipeline.
The Release Pipeline
Navigate to the DevOps project created in the previous post. This time, create a new Release Pipeline. When asked, select an empty template; we will pick the tasks we need manually.
First, we need to define the trigger and where our artifacts are. Click on Add an artifact on the left of the screen. Select the build project and let's use the latest version of the artifact for our deployment.
To get a continuous deployment, you need to enable it by clicking on the lightning bolt and selecting the enabled button.
Now let's select our tasks. Click on the "+" sign to add new tasks. We need three of these: Azure Resource Group Deployment, Azure CLI, and Azure File Copy.
Task 1 - Azure Resource Group Deployment
The first one will be an Azure Resource Group Deployment. It will be used to deploy our ARM template and be sure that the resources are available in Azure.
To configure the ARM deployment we need to select the Azure subscription and authorize the pipeline to have access. Then you will need to specify the name of the resource group you will be deploying into, its location, and finally point to where the linked ARM template is.
Task 2 - Azure CLI
The second one is an Azure CLI task. As I am writing this post, it's not possible to enable the static website property of a storage account from an ARM template. Therefore we will execute an Azure CLI command to change that configuration. Once you have picked the Azure subscription, select inline script and enter this Azure CLI command:
az storage blob service-properties update --account-name wyamfrankdemo --static-website --index-document index.html
This will enable the static website property of the storage account named wyamfrankdemo, and set the default document to index.html.
Task 3 - Azure File Copy
The last task is an Azure File Copy to copy all our files from $(System.DefaultWorkingDirectory)/drop/drop/output to the $web container (in our Azure Blob storage). The container must be named $web; that's the name used by Azure for the static website.
Wrapping up
Once you are done configuring the Release Pipeline, it's time to save and run it. After only a minute or two (this demo is pretty small), the blog should be available in Azure. To find your endpoint (aka URL), you can go into portal.azure.com and look at the static website property of the blob storage that we just created.
I have that little website, a blog that doesn't consume much bandwidth, and I was looking to optimize it. Since Azure Blob Storage is such an inexpensive resource, I thought it would be the perfect fit. I could use a static website generator to transform my markdown files into a nice-looking blog and publish that in Azure! Using an Azure DevOps pipeline, I could do all of that automatically at every git push, without having anything installed on my machine... meaning I could write a new blog post from anywhere and still be able to update my blog.
In this post, I will explain all the steps required to create a continuous integration and continuous deployment process to deploy a static website into Azure.
The Goal
The idea here is to have a folder tracked by git on a local machine. At every push, we want that change to trigger our CI-CD process. The Build Pipeline will generate the static website. The Release Pipeline will create our Azure resources and publish those artifacts.
The Static Website
In this post, I'm using Wyam.io as the static website generator. However, it doesn't matter; there are a ton of excellent generators available: Jekyll, Hugo, Hexo, etc. I selected Wyam because it is written in .NET and if, eventually, I want to dig deeper, it will be easier for me.
For all those generators, it's the same pattern. You have an input folder where you have all your posts and images, and an output folder that contains the generated result. You don't need to track the content of your output folder, so it is a good practice to modify the .gitignore file accordingly. As an example, here is how mine looks.
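It boils down to ignoring the generated and tooling folders; assuming Wyam's default folder names, something like:

# generated site, recreated at every build
output/
# Cake / Wyam tooling downloaded by build.ps1
tools/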
The build pipeline will generate our website for us. Therefore, it needs to have the generator installed. A great tool to do this kind of task is Cake. What I like about that tool is that it is cross-platform, so I can use it without worrying about which OS it will run on.
The Azure pipeline is defined in an azure-pipeline.yml file. Installing Cake should definitely be one of our first steps. To know how to do that, navigate to the Get started page of the Cake website; it's explained that we need to execute a build.ps1 or build.sh (depending on your build setup). That will install Cake and execute the file build.cake. Those files can be found on the GitHub repository, as mentioned on the website.
On the Wyam website, in the deployment section of the documentation, you will find a sample for our required build.cake file. It looks like this:
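Roughly, assuming Wyam 2.x and the Cake.Wyam addin (pin the versions you actually use):

#tool nuget:?package=Wyam&version=2.2.9
#addin nuget:?package=Cake.Wyam&version=2.2.9

var target = Argument("target", "Build");

// Generate the static website into the output folder
Task("Build")
    .Does(() =>
    {
        Wyam();
    });

// Serve the site locally and rebuild on changes (handy while writing)
Task("Preview")
    .Does(() =>
    {
        Wyam(new WyamSettings
        {
            Preview = true,
            Watch = true
        });
    });

RunTarget(target);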
On the first lines, it installs the required NuGet packages (you should definitely specify the versions). Then it defines some tasks and runs the generation command. Create that file at the root of the website folder.
Now let's have a look at the azure-pipeline.yml file.
The first line is to specify the pipeline trigger. In our case, we will look at the master branch. Then I declare a variable to keep the .Net Core version. That way, it will be easier to maintain the script in the future.
The pool command is to specify what kind of server is created. Here I'm using a Windows one, yet I could have used Linux too (all components are cross-platform).
Then comes the list of steps. The first one installs .NET Core. The second step is a PowerShell command to execute our build.ps1 file. At this stage, the static website should be generated in a subfolder output. The last two steps copy the content of the output folder into the ArtifactStagingDirectory and then publish it. This way the Release Pipeline can access the artifacts.
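A hedged sketch of that azure-pipeline.yml is below; the exact .NET Core version, VM image, and folder names are assumptions to adapt to your setup.

trigger:
- master

variables:
  # .NET Core version used by the Cake/Wyam build
  DOTNET_VERSION: '2.2.x'

pool:
  vmImage: 'windows-latest'

steps:
- task: UseDotNet@2
  displayName: 'Install .NET Core'
  inputs:
    version: $(DOTNET_VERSION)

- powershell: ./build.ps1
  displayName: 'Generate the static website with Cake/Wyam'

- task: CopyFiles@2
  displayName: 'Copy the generated site'
  inputs:
    SourceFolder: 'output'
    TargetFolder: '$(Build.ArtifactStagingDirectory)'

- task: PublishBuildArtifacts@1
  displayName: 'Publish the artifacts'
  inputs:
    PathtoPublish: '$(Build.ArtifactStagingDirectory)'
    ArtifactName: 'drop'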
There is detailed information about all the commands for a YAML Azure Pipeline file in the documentation. Create your own or copy-paste this one in a new azure-pipeline.yml file under a subfolder named deployment. Once your file is created, commit and push them to GitHub or any repository.
Navigate to Azure DevOps (dev.azure.com). Open your project, or create a new one. Now from the left menu click on the Pipeline (the rocket icon), to create a new one. If you are using an external repository, like me, you will need to authorize Azure DevOps to your repo.
To configure the pipeline, since we already have created the azure-pipeline.yml file, select the Existing Azure Pipeline YAML file option and point it to our file in the deployment folder.
It will open our YAML file. If you wish, you can update it. Run it by clicking the blue Run button in the top-right corner. Your build pipeline is done. Now every time you push changes to your repository, that build will get triggered and generate the static website.
Virtual machines (VM) are used in most solutions nowadays as a [ProcessName] server, a temporary machine to run tests or make demos, and sometimes even as a development machine. One of the great benefits of the cloud is that you only pay for what you use. So unlike the old server that you keep paying for, you won't pay for the virtual machine's CPU while it's turned off! In this post, I explain how to do it with your existing machines and also what to do with all the future ones that you will be creating.
From the Azure portal (portal.azure.com), select the Virtual Machine (VM) that you wish to edit. Then look in the option panel, on the left, for Auto-Shutdown in the Operations section. You should have something that looks like this:
At any time you can enable and disable that functionality, it won’t affect the running VM.
Now, to activate it, click on Enabled. Then select the time you would like the VM to shut down. Be sure to select the correct time zone; by default it's UTC. You can either adjust the time for UTC or change the time zone; both options are valid.
Now you could decide to enable the notification. That could be useful if you may want to postpone the shutdown for one or two hours, or integrate the shutdown into another process like backup, cleaning…
To activate the notification option, just click on Enabled and enter the email address. If you want to attach the shutdown to a Logic App or an Azure Function, use the webhook. Here is an example of the notification email; see the Postpone options link.
What if you have many VMs running
Let's say you already have twenty (or more) VMs running; you could have executed a PowerShell script like:
Update - 2018-03-14
Well, today this is only possible for VMs that are part of a DevTest Labs, not for "regular" VMs. However, I'm sure that day will come pretty quickly. Does that mean that you need to go into all your VMs and set it manually? No. You can use Azure Automation to stop a list of VMs on a regular schedule. A big advantage of this solution is that you can be more creative, since it offers a lot more flexibility. You could identify the VMs to shut down based on some tags, or you could have a different schedule for the week vs the weekend. You could even have a task to start VMs in the morning... More to come on that topic in a future post... If you want to read about how to get started with Azure Automation, click here.
Multiple VMs that already exist, no problem
Obviously, if you have multiple virtual machines that already exist, it is not very efficient to change their configuration one by one via the portal. Here is a small script to change the configuration of a large number of VMs in one shot.
The variable $selectedVMs contains all the VMs that we wish to edit. In this sample, I only get VMs contained in the resource group cloud5mins, but there are no limits to what you can do. You could select all VMs with a specific OS, tag, location, name, etc.
The variable $ScheduledShutdownResourceId will be the identity of the auto-shutdown configuration we wish to inject. Note that the provider is microsoft.devtestlab.
Next, we create a collection of properties in $Properties. status is the one that activates or deactivates the auto-shutdown. targetResourceId is the resourceID of the VM we target.
The only thing left is to specify the time and time zone.
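Putting those pieces together, a sketch of the script could look like this, assuming the Az PowerShell module; the 21:00 shutdown time and the time zone are placeholder values.

# Get all the VMs from the resource group used in this post
$rgName = "cloud5mins"
$subscriptionId = (Get-AzContext).Subscription.Id
$selectedVMs = Get-AzVM -ResourceGroupName $rgName

foreach ($vm in $selectedVMs) {

    # The auto-shutdown configuration is a microsoft.devtestlab/schedules resource
    $ScheduledShutdownResourceId = "/subscriptions/$subscriptionId/resourceGroups/$rgName/providers/microsoft.devtestlab/schedules/shutdown-computevm-$($vm.Name)"

    $Properties = @{
        status           = "Enabled"                 # activates the auto-shutdown
        taskType         = "ComputeVmShutdownTask"
        dailyRecurrence  = @{ time = "2100" }        # shutdown time (HHmm)
        timeZoneId       = "Eastern Standard Time"
        targetResourceId = $vm.Id                    # the VM we target
    }

    # Create (or overwrite) the auto-shutdown schedule for that VM
    New-AzResource -Location $vm.Location -ResourceId $ScheduledShutdownResourceId -Properties $Properties -Force
}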
If you prefer, I also have a video version that explains all the steps.
How to shutdown automatically all your existing VMs
End Update
Let's create a VM with the auto-shutdown pre-configured with ARM
Of course, a much more efficient way to set the auto-shutdown is at creation time, by adding a new resource of type Microsoft.DevTestLab/schedules to your template. This option was previously only accessible for DevTestLab, but recently it was made available to any VM.
Here is an example of the variables that could be added to your template.
And here is an example of the Microsoft.DevTestLab/schedules resource. One of these should be added for every VM you wish to auto-shutdown. Because this template is for one server, however, only one instance is required.
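To give an idea, here is a hedged sketch of those two pieces; the vmName parameter, the time, and the time zone are assumptions.

"variables": {
    "shutdownTime": "2100",
    "shutdownTimeZone": "Eastern Standard Time"
},

"resources": [
    {
        "type": "Microsoft.DevTestLab/schedules",
        "apiVersion": "2018-09-15",
        "name": "[concat('shutdown-computevm-', parameters('vmName'))]",
        "location": "[resourceGroup().location]",
        "dependsOn": [
            "[resourceId('Microsoft.Compute/virtualMachines', parameters('vmName'))]"
        ],
        "properties": {
            "status": "Enabled",
            "taskType": "ComputeVmShutdownTask",
            "dailyRecurrence": {
                "time": "[variables('shutdownTime')]"
            },
            "timeZoneId": "[variables('shutdownTimeZone')]",
            "targetResourceId": "[resourceId('Microsoft.Compute/virtualMachines', parameters('vmName'))]"
        }
    }
]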
Azure Logic Apps - Retry Policy (Sriram Hariharan) - This post explains how to customize a key feature to build successful integration solutions in cloud programming.
Submitting your first Pull Request (David Paquette) - Nice post that demystifies a lot of things if Git or open-source projects are new to you.
Task runners in Visual Studio 2015 (Mads Kristensen) - This post regroups many Visual Studio extensions that will definitely speed up repetitive processes.
What’s New in Fiddler 4.6.2 (Eric Law) - In today's world of connectivity, a tool like Fiddler is a must. This post shares the features of the latest release, and what needs to be done once you upgrade.
.NET Rocks! vNext - Really interesting podcast episode. I think it's the clearest and most comprehensible explanation of Git I have ever heard.
Miscellaneous
No, you can’t join my wifi network (Troy Hunt) - Just after the holidays, after a lot of people visited others' houses... This is a scary but really interesting post to read.
A Project Guide to UX Design And Content Strategy - Very nice document that tells us how a team organizes their work to create a great user experience. Step by step, we go through what looks like a real case.