Every Friday, starting now, I will share a recap of the week along with a summary of my streams. Those videos are at least two hours long, so I thought a short summary could be useful to help you decide whether a topic interests you.
Useful Links:
Learning how to use Blazor WebAssembly with an Azure AD token: c5m.ca/stream-ep109
Most solutions, if not all, are composed of multiple parts: backend, frontend, services, APIs, etc. Because all parts could have a different life cycle, it's important to be able to deploy them individually. However, sometimes we would like to deploy everything at once. That's exactly the scenario I had in a project I'm working on, which has one backend and one frontend.
In this post, I will explain how I use nested Azure Resource Manager (ARM) templates and conditions to let users decide whether they want to deploy only the backend or the backend with a frontend of their choice. All the code is available on GitHub and, if you prefer, a video version is available below. (This post is also available in French)
The Context
The project used in this post is my open-source, budget-friendly Azure URL Shortener. As mentioned previously, the project is composed of two parts. The backend leverages Microsoft's serverless Azure Functions, a perfect match in this case because it only runs when someone clicks a link. The second part is a frontend, and it's totally optional. Because the Azure Functions are HTTP triggered, they act as an API and can therefore be called by anything able to make an HTTP call. Both are very easy to deploy using an ARM template, either through a PowerShell or CLI command or with a one-click button directly from GitHub.
The Goal
At the end of this post, we will be able, in one click, to deploy just the Azure Functions or to deploy them with a frontend of our choice (I only have one right now, but more will come). To do this, we will modify the "backend" ARM template to use a condition and nest the ARM template responsible for the frontend deployment.
The ARM templates are available here in their [initial](https://github.com/FBoucher/AzUrlShortener/tree/master/tutorials/optional-arm/before) and [final](https://github.com/FBoucher/AzUrlShortener/tree/master/tutorials/optional-arm/before/after) versions.
Adding New Inputs
We will nest the ARM templates; this means that our backend template (azureDeploy.json) will call the frontend template (adminBlazorWebsite-deployAzure.json). Therefore, we need to add all the required information to azureDeploy.json to make sure it's able to deploy adminBlazorWebsite-deployAzure.json successfully. Looking at the parameters required by the second template, we only need two values: AdminEMail and AdminPassword. All the others can be generated, or we already have them.
We will also need another parameter that will act as our selection option. So let's add a parameter named frontend and allow only two values: none and adminBlazorWebsite. If the value is none, we only deploy the Azure Functions. When the value is adminBlazorWebsite, we will deploy the Azure Functions, of course, but we will also deploy an admin website to go with them.
Following best practices, we add clear descriptions and put those three parameters in the parameters section of the ARM template:

"frontend": {
    "type": "string",
    "allowedValues": [
        "none",
        "adminBlazorWebsite"
    ],
    "defaultValue": "adminBlazorWebsite",
    "metadata": {
        "description": "Select the frontend that will be deployed. Select 'none' if you don't want any. Frontends available: adminBlazorWebsite, none."
    }
},
"frontend-AdminEMail": {
    "type": "string",
    "defaultValue": "",
    "metadata": {
        "description": "(Required only if frontend = adminBlazorWebsite) The email used to connect to the admin Blazor website."
    }
},
"frontend-AdminPassword": {
    "type": "securestring",
    "defaultValue": "",
    "metadata": {
        "description": "(Required only if frontend = adminBlazorWebsite) The password used to connect to the admin Blazor website."
    }
}
Nested Templates
Let's assume for now, to keep things simple, that we always deploy the website when we deploy the Azure Functions. What we need is a nested ARM template: that's when you deploy an ARM template from inside another ARM template. This is done with a Microsoft.Resources/deployments node. Let's look at the code:
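Here is a minimal sketch of what that node can look like (simplified; the deployment name and the variables funcAppName and frontendTemplateURL are placeholders, not necessarily the names used in the real project):

{
    "name": "deployFrontendAdminBlazorWebsite",
    "type": "Microsoft.Resources/deployments",
    "apiVersion": "2019-10-01",
    "resourceGroup": "[resourceGroup().name]",
    "dependsOn": [
        "[resourceId('Microsoft.Web/sites', variables('funcAppName'))]",
        "[resourceId('Microsoft.Web/sites/sourcecontrols', variables('funcAppName'), 'web')]"
    ],
    "properties": {
        "mode": "Incremental",
        "templateLink": {
            "uri": "[variables('frontendTemplateURL')]"
        },
        "parameters": {
            "AdminEMail": { "value": "[parameters('frontend-AdminEMail')]" },
            "AdminPassword": { "value": "[parameters('frontend-AdminPassword')]" }
        }
    }
}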
If we examine this node, we have the classics: name, type, dependsOn, resourceGroup, apiVersion. Here we really want the Azure Functions to be fully deployed, so we need the FunctionApp to be created AND the GitHub sync to be completed; this is why there is also a dependency on Microsoft.Web/sites/sourcecontrols.
In properties, we pass the mode as Incremental, which leaves unchanged any resources that exist in the resource group but aren't specified in the template.
The second property is templateLink. This is really important, as it's the URL to the other ARM template. That URI must not be a local file or a file that is only available on your local network. You must provide a URI value that is downloadable over HTTP or HTTPS. In this case, it's a variable that contains the GitHub URL where the template is available.
Finally, we have the parameters, and this is how we pass the values to the second template. Let's skip those where I just pass the parameter value from the caller to the callee, and focus on basename, AzureFunctionUrlListUrl, and AzureFunctionUrlShortenerUrl.
For basename, I just add a prefix to the basename parameter received; this way the resource names will be different, but we can still see the "connection". That's purely optional; you could have added this value as a parameter to azureDeploy.json, but I prefer keeping the parameters to a minimum, as I think it simplifies the deployment for the users.
Finally, for AzureFunctionUrlListUrl and AzureFunctionUrlShortenerUrl, I needed to retrieve the URLs of the Azure Functions with their security token, because they are secured. I do that by concatenating different parts:
| Component | Value |
| --- | --- |
| Beginning of the URL | 'https://' |
| Reference to the Function App, returning the value of its hostname | |
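As a rough sketch (assuming a variable funcAppName holds the Function App name; the part that appends the function's security code is retrieved with a list* function and is omitted here), the hostname portion can be built like this:

"AzureFunctionUrlListUrl": {
    "value": "[concat('https://', reference(resourceId('Microsoft.Web/sites', variables('funcAppName')), '2018-02-01').defaultHostName, '/api/UrlList')]"
}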
Now that the second ARM template can be deployed, let's add a condition so it gets deployed only when we want it. Doing this is very simple: we add a condition property to the nested deployment resource.
In this case, if the value of the parameter is different from none, the nested template will be deployed. When a condition ends up being false, the entire resource is ignored during the deployment. How simple or complex your conditions are... that's your choice!
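For example, on the nested deployment resource, the condition can be written like this:

"condition": "[not(equals(parameters('frontend'), 'none'))]"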
I don't know about you, but I share links/URLs very often. And a lot of the time it's from videos, so they need to be short and easy to remember. Something like https://c5m.ca/project is better than a random string (aka a GUID). And this is how I started a project to build a URL shortener. I wanted it to be budget-friendly, easy to deploy, and customizable.
In this post, I will share how I built it, how you can use it, and how you can help!
How I built it, with the community
This tool was built during live coding sessions streamed on Twitch (all videos are available in my YouTube archive). It's composed of two parts: a serverless backend leveraging Azure Functions & Azure Storage, and a frontend of your choice.
The backend is composed of a few Azure Functions that act as an on-demand HTTP API. They only consume resources when they are called. They are written in .NET Core, C# to be specific. At the time of publishing this post, there are four functions:
UrlShortener: To create a short URL.
UrlRedirect: That's the one called when a short link is used. An Azure Function Proxy forwards all calls made to the root.
UrlClickStats: Returns the statistics for a specific URL.
UrlList: Returns the list of all URLs created.
All the information, like the long URL, short URL, and click count, is saved in an Azure Storage table.
And that's it. Super light, very cost-efficient. If you are curious about the price, I put references in the footnotes.
The frontend could be anything that can make HTTP requests. Right now in the project, I explain how to use a tool called Postman, and there is also a very simple interface already done that you can easily deploy.
This simple interface is of course protected and gives you the options to see all URLs and create new ones.
How YOU can use it
All the code is available on GitHub, and it's deployable with a one-click button!
This will deploy the backend in your Azure subscription in a few minutes. If you don't own an Azure subscription already, you can create your free Azure account today.
Then you will probably want an interface to create your precious URLs. Once more, in the GitHub repository there is a list of available admin interfaces ready to be used. The Admin Blazor Website is currently the most user-friendly and can also be deployed in one click.
How You can help and participate
Right now, there is really only one interface (and some instructions on how to use Postman to do the HTTP calls). But AzUrlShortener is an open-source project, meaning you can participate. Here are some suggestions:
Build a new interface (in the language of your choice)
Improve current interface(s) with
logos
designs
Better UI 🙂
Register bugs in GitHub
Make feature requests
Help with documentation/translation
What's Next
Definitely come see the GitHub repo https://github.com/FBoucher/AzUrlShortener, click those deploy buttons. On my side, I will continue to add more features and make it better. See you there!
Are you worried about your billing when deploying a new application? Or afraid that you will receive an invoice from your cloud provider at the end of the month that puts you in a bad situation? You are not alone, and this is why you should definitely look at Azure Cost Management.
In the video below, I explain how to use Azure Cost Management to see the actual cost and forecast for a specific application in the cloud, and how to build a dashboard with your favorite information. Note that I said "cloud" because Azure Cost Management can monitor both Azure and AWS resources. That's perfect for when we are "in" the portal, and that's also why I will show how you can create budgets and set alerts, so you can track your resources automatically.
We all do it. We create resources in the cloud for a demo or a presentation and forget about them. Then at the end of the month, we receive a bigger invoice than expected, and panic sets in.
This is why I thought about AzSubscriptionCleaner. It's an open-source project that can be deployed in your subscription very easily. The goal is to have it deployed with one click directly from GitHub.
The tool can be deployed in two versions, using Azure Automation or Azure Functions. Based on a schedule, it executes a query to find all resources with a tag expireOn whose value is older than now(), and deletes them.
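As an illustration only, not the project's actual code, the core logic could look like this in PowerShell with the Az module:

# Find every resource tagged expireOn with a date in the past, then delete it.
$expired = Get-AzResource -TagName "expireOn" |
    Where-Object { [datetime]($_.Tags["expireOn"]) -lt (Get-Date) }

foreach ($resource in $expired) {
    Remove-AzResource -ResourceId $resource.ResourceId -Force
}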
I wrote two blog posts, paired with YouTube videos, that explain how the tools were built.
This is an open-source project, github.com/FBoucher/AzSubcriptionCleaner; you are welcome to look at the code, clone the repository, ask for more features, or do a pull request to add a new one!
It's so nice to be able to add some serverless components to our solutions to make them better in a snap. But how do we manage them? In this post, I will explain how to create an Azure Resource Manager (ARM) template to deploy any Azure Function, and show how I used this structure to deploy an open-source project I've been working on these days.
Part 1 - The ARM template
An ARM template is a JSON file that describes our architecture. To deploy an Azure Function we need at least three resources: a FunctionApp, a service plan, and a storage account.
The FunctionApp is, of course, our function. The service plan could be set as dynamic or describe the type of resources that will be used by your function. The storage account is where our code is stored.
In the previous image, you can see how those components interact with each other. Inside the Function, we will have a list of properties. One of those properties will be the runtime; for example, in the AzUnzipEverything demo, it will be dotnet. Another property will be the connection string to our storage account, which is also part of our ARM template. Since that resource doesn't exist yet, we will need to build it dynamically.
The Function node will contain a sub-resource of type sourcecontrol. This is where we will specify where our code is, so it can be cloned to Azure.
Building ARM for a Simple Function
Let's see a template for a simple Azure Function that doesn't require any dependency, and we will examine it after.
The first resource listed in the template is the storage account. There is nothing specific about it.
The Service Plan
The service plan is the second resource in the list. It's important to note that to be able to use the Dynamic SKU, you will need the apiVersion to be at least "2018-02-01". Then you specify the SKU:
"sku": {
"name": "Y1",
"tier": "Dynamic"
}
Of course, you can use another SKU if you prefer.
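Put together, the service plan resource can look roughly like this (the variable name is a placeholder):

{
    "type": "Microsoft.Web/serverfarms",
    "apiVersion": "2018-02-01",
    "name": "[variables('servicePlanName')]",
    "location": "[resourceGroup().location]",
    "sku": {
        "name": "Y1",
        "tier": "Dynamic"
    }
}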
The Function App
This is the final resource added to the mix, and this is where all the pieces come together. It's important to note that the order in which the resources are listed is not considered by Azure while deploying (it's only for us ;) ). To let Azure know the deployment order, you need to add dependencies.
This way the Azure Function will be created after the service plan and the storage account are available. Then in the properties we will be able to build the ConnectionString to the blob storage using a reference.
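Here is a minimal sketch of those two pieces inside the Function App resource (dependsOn plus a connection string built with listKeys); the variable names are placeholders, and the app setting shown is the standard AzureWebJobsStorage one:

"dependsOn": [
    "[resourceId('Microsoft.Web/serverfarms', variables('servicePlanName'))]",
    "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]"
],
"properties": {
    "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', variables('servicePlanName'))]",
    "siteConfig": {
        "appSettings": [
            {
                "name": "AzureWebJobsStorage",
                "value": "[concat('DefaultEndpointsProtocol=https;AccountName=', variables('storageAccountName'), ';AccountKey=', listKeys(resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName')), '2019-06-01').keys[0].value)]"
            }
        ]
    }
}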
The last piece of the puzzle is the sub-resource sourcecontrol inside the FunctionApp. This will define where Azure should clone the code from and in which branch.
To be sure that everything is fully automatic, the properties publishRunbook and IsManualIntegration must be set to true. Otherwise, you will need to do a synchronization between your Git (in this case on GitHub) and the Git in Azure.
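In the template, it's a sub-resource of the Function App; a simplified sketch (the repository URL and branch are examples) looks like this:

{
    "type": "sourcecontrols",
    "apiVersion": "2018-11-01",
    "name": "web",
    "dependsOn": [
        "[resourceId('Microsoft.Web/sites', variables('funcAppName'))]"
    ],
    "properties": {
        "repoUrl": "https://github.com/FBoucher/AzUnzipEverything.git",
        "branch": "master",
        "IsManualIntegration": true
    }
}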
Of course, all the source code of both the Azure Function and the ARM template is available on GitHub, but let me highlight how the containers are defined in an ARM template.
Just like with sourcecontrol, we will need to add a list of sub-resources to our storage account. The name MUST start with 'default/'.
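For example, to create a container named input-files inside the storage account, the sub-resource (placed in the storage account's resources array) looks roughly like this:

{
    "type": "blobServices/containers",
    "apiVersion": "2019-06-01",
    "name": "default/input-files",
    "dependsOn": [
        "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]"
    ]
}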
Part 2 - Four Deployment Options
Now that we have a template that describes our needs, we just need to deploy it. There are multiple ways it could be done, but let's see four of them.
Deploy from the Azure Portal
Navigate to the Azure portal (https://portal.azure.com) from your favorite browser and search for "deploy a custom template" directly in the search bar located at the top of the screen (in the middle), or go to https://portal.azure.com/#create/Microsoft.Template. Once in the Custom deployment page, click on the link Build your own template in the editor. From there, you can copy-paste or upload your ARM template. You need to save it to see the real deployment form.
Deploy with a script
Whether in PowerShell or in Azure CLI, you can easily deploy your template with these commands.
In Azure CLI
# create resource group
az group create -n AzUnzipEverything -l eastus
# deploy it
az group deployment create -n cloud5mins -g AzUnzipEverything --template-file "deployment\deployAzure.json" --parameters "deployment\deployAzure.parameters.json"
In PowerShell
# create resource group
New-AzResourceGroup -Name AzUnzipEverything -Location eastus
# deploy it
New-AzResourceGroupDeployment -ResourceGroupName AzUnzipEverything -TemplateFile deployment\deployAzure.json
Deploy to Azure Button
One of the best ways to help people deploy your solution in their Azure subscription is the Deploy to Azure button.
You need to create an image link (in HTML or Markdown) pointing to a special destination URL built in two parts: the portal's custom deployment page and the URL of your ARM template.
However, this URL needs to be encoded. There are plenty of encoders online, but you can also do it from the terminal (a big thanks to @BrettMiller_IT who showed me this trick during one of my live streams).
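As a sketch (not necessarily the exact command from the stream), the URL can be encoded in PowerShell with [uri]::EscapeDataString('https://raw.githubusercontent.com/<your-repo>/deployAzure.json'), and the button itself, in Markdown, looks like this:

[![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/<URL-encoded-template-link>)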
Clicking the button will bring users to the same page on the Azure portal, but in their own subscription.
Azure DevOps Pipeline
From the Azure DevOps portal (https://dev.azure.com), select your project and create a new Release Pipeline. Click on the + Add an artifact button to connect your Git repository.
Once it's added, you need to add a task to the current job. Click on the link 1 job, 0 task. Now you just need to specify your Azure subscription, the name of the resource group, and select the location of your ARM template inside your repository. To make the deployment automatic with each push to the repository, click the little lightning bolt and enable the Continuous deployment trigger.
Wrapping-up
Voila, you now have four different ways to deploy your Azure Function automatically. But don't take my word for it, try it yourself! If you need more details, you can visit the project on GitHub or watch this video where I demo the content of this post.
I recently presented a workshop at TOHack to get started with Azure. The goal was to try different Azure services and see how we could augment an existing website using a serverless function and artificial intelligence. (Also available in French)
During this workshop, a website is deployed automatically from GitHub. Then, by adding an Azure Function and using the Vision API of Azure Cognitive Services, the final solution is able to detect whether uploaded pictures are dogs or not and keep our image folder "clean". We call that application: the automatic Not a Dog application.
The step-by-step instructions with the code can be found on GitHub - Not-a-Dog-Workshop. The workshop can be done in about 45-60 minutes.
I also did a video that is available on my YouTube channel:
If you have questions or you are blocked, it will be a pleasure to help you.
In a project using Azure Logic Apps that I am working on, I needed to manipulate strings. I could create APIs or Azure Functions, but the code is very simple and doesn't use any external libraries. In this post, I will show you how to use the new Inline Code action to execute your code snippet directly inside your Logic App.
Quick Context
The Logic App will read a file from my OneDrive (it will also work with Dropbox, Box, etc.). Here is an example of the file:
Nice tutorial that explains how to build, using postman, an efficient API.[cloud.azure.postman.tools]
The goal is to extract tags, contained between the square brackets, from the text.
Logic App: Get File Content
From the Azure Portal, create a new Logic App by clicking the big green "+" button in the top left corner and searching for Logic App.
For this demo, I will use the Interval as a trigger because I will execute the Logic App manually.
The first step will be a Get file content action from the OneDrive connector. Once you have authorized Azure to access your OneDrive folder, select the file you want to read. For me, it's /dev/simpleNote.txt.
Integration Account
To access the workflowContext, the Logic App requires an Integration Account. The next step is to create one. Save the current Logic App, and click on the big "+" button in the top right corner. This time search for integration. Select Integration Account, and complete the form to create it.
We now need to assign it to our Logic App. From the Logic App blade, select Workflow Settings in the options list. Then select your Integration Account, and don't forget to save!
Logic App: Inline Code
To add the action at the end of your workflow, click the New step button. Search for Inline Code, and select the action Execute JavaScript Code.
Before copy-pasting the code into the new Inline Code action, let's have a quick look at it.
var note = "" + workflowContext.actions.Get_file_content.outputs.body;
var posTag = note.lastIndexOf("[") + 1;
var cleanNote = {};

if (posTag > 0) {
    cleanNote.tags = note.substring(posTag, note.length - 1);
    cleanNote.msg = note.substring(0, posTag - 1);
}

return cleanNote;
On the first line, we assign to a variable note the content of the Get_file_content outputs. We access it using the workflowContext. This context has access to the trigger and the actions. To find the name of an action, replace the spaces with the underscore character "_".
You can also switch to Code View, and see the name of all components from the JSON code.
Logic App: Use Inline Code Result
Of course, you can use the output of your Inline Code with other steps. You just need to use the Result from the dynamic content menu.
If for some reason the dynamic content list doesn't contain your Inline Code, you can always add the code directly @body('Cleaning_Note')?['body'].
Your Logic App should now look like this:
Verdict
The Inline Code action is very promising. Right now it's limited to JavaScript and cannot access variables or loops. However, for simple code that doesn't require any references, it's easier to maintain and deploy. You can learn more about what is exactly covered or not here.
And it works as this result shows.
You are done with your code and you are ready to deploy it in Azure. You execute the PowerShell or Bash script you have and BOOM! You get the error message saying that this name is already taken. In this post, I will show you a simple way to look like a boss and make your deployment work every time.
____ with given name ____ already exists.
The tricks other use
You could try to add a digit at the end of the resource name (ex: demo-app1, demo-app2, demo-app123...), but that's not really professional. You could create a random string and append it to the name. Yes, that will work, once. If you then try to redeploy your resources, that value will change, and therefore it will never be the same.
The solution would be to have a unique string that is constant in our environment.
The solution
The solution is to use the uniqueString() function, part of the Azure Resource Manager (ARM) template language. If we look in the documentation, uniqueString creates a deterministic hash string based on the values provided as parameters. Let's see a quick example of an ARM template to deploy a website named demo-app.
If you try to deploy this template, you will have an error because the name demo-app is already taken... no surprise here.
Let’s create a new variable suffix and we will use the Resource Group Id and Location as values. Then we just need to append this value to our name using the function concat().
It’s that simple! Now every time you will deploy a unique string will be added to your resource name. That string will always be the same for a Resource Group-Location deployment.
Because some resource types are more restrictive than others you may need adapt your new name. Maybe the name of your resource plus those thirteen characters hash will be too long... No problem, you can easily make it shorter and all lower case just by using substring() and toLower().
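For example, to keep only the first five characters of the hash and force everything to lowercase (again, the variable name is illustrative):

"storageName": "[toLower(concat('demoapp', substring(variables('suffix'), 0, 5)))]"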
Voila! Now, by using an ARM template, you can deploy and redeploy without any problem, reproducing the same solution you built. To learn more about ARM templates, you can jump into the documentation, where you will find samples, step-by-step tutorials, and more.
If you have a specific question about ARM templates or if you would like to see more tips like this one, don't hesitate to ask in the comments section or reach out on social media!
Static websites are lightning fast, and running them inside an Azure Blob Storage instead of a WebApp is incredibly economical (less than $1/month). Does it mean you need to do everything manually? Absolutely not! In a previous post I explained how to automatically generate your static website using a Build Pipeline inside Azure DevOps. In this post, let's complete the CI-CD by creating a Release Pipeline to deploy it.
The Azure Resource Manager (ARM) Template
First things first. If we want our release pipeline to deploy our website in Azure, we first need to be sure our resources are available "up there." The best way to do this is by using an Azure Resource Manager (ARM) template. I will use the same project started in the previous post; feel free to adapt it to your structure or copy it.
Create a new file named deploy.json in the deployment folder. We need a simple storage account.
I used a parameter (StorageName) to define the name of the storage account. This way I could have multiple pipelines deploying to different storage accounts.
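A minimal deploy.json for this could look like the following sketch (the API version and SKU are reasonable defaults, not necessarily what the original file uses):

{
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "StorageName": {
            "type": "string"
        }
    },
    "resources": [
        {
            "type": "Microsoft.Storage/storageAccounts",
            "apiVersion": "2019-06-01",
            "name": "[parameters('StorageName')]",
            "location": "[resourceGroup().location]",
            "kind": "StorageV2",
            "sku": {
                "name": "Standard_LRS"
            }
        }
    ]
}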
Now, to make the ARM template accessible to the release pipeline, we also need to publish it. The easiest way to do that is to add another CopyFiles task in our azure-pipeline. Add this task just before the PublishBuildArtifacts.
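That extra task could look like this (the folder names assume the structure from the previous post):

- task: CopyFiles@2
  displayName: 'Copy ARM template'
  inputs:
    SourceFolder: 'deployment'
    Contents: '*.json'
    TargetFolder: '$(Build.ArtifactStagingDirectory)'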
Once you commit and push these changes, it will trigger a build. When done, the ARM template will be available, and we will be able to start working on the release pipeline.
The Release Pipeline
Navigate to the DevOps project created in the previous post. This time, create a new Release Pipeline. When asked, select an empty template, we will pick manually the tasks we need.
First, we need to define the trigger and where our artifacts are. Click on the Add an artifact box on the left of the screen. Select the build pipeline and let's use the latest version of the artifact for our deployment.
To get a continuous deployment, you need to enable it by clicking on the lightning bolt and selecting the enabled button.
Now let's select our tasks. Click on the "+" sign to add new tasks. We need three of these: Azure Resource Group Deployment, Azure CLI, and Azure File Copy.
Task 1 - Azure Resource Group Deployment
The first one will be an Azure Resource Group Deployment. It will be used to deploy our ARM template and be sure that the resources are available in Azure.
To configure the ARM deployment, we need to select the Azure subscription and authorize the pipeline to have access. Then you will need to specify the name of the resource group you will be deploying into, its location, and finally point to where the linked ARM template is.
Task 2 - Azure CLI
The second one is an Azure CLI task. As I am writing this post, it's not possible to enable the static website property of a storage account from an ARM template. Therefore, we will execute an Azure CLI command to change that configuration. Once you have picked the Azure subscription, select Inline script and enter this Azure CLI command:
az storage blob service-properties update --account-name wyamfrankdemo --static-website --index-document index.html
This will enable the static website property of the storage account named wyamfrankdemo, and set the default document to index.html.
Task 3 - Azure File Copy
The last task is an Azure File Copy to copy all our files from $(System.DefaultWorkingDirectory)/drop/drop/outpout to the $web container (in our Azure Blob storage). The container must be named $web, that's the name used by Azure for the static website.
Wrapping up
Once you are done configuring the Release Pipeline, it's time to save it and run it. After only a minute or two (this demo is pretty small), the blog should be available in Azure. To find your endpoint (aka URL), you can go into portal.azure.com and look at the static website property of the blob storage that we just created.
I have that little website, a blog that doesn't consume much bandwidth, and I was looking to optimize it. Since Azure Blob Storage is such an inexpensive resource, I thought it would be the perfect fit. I could use a static website generator to transform my markdown files into a nice looking blog and publish that to Azure! Using an Azure DevOps pipeline, I could do all that automatically at every "git push", without having anything installed on my machine... meaning I could write a new blog post from anywhere and still be able to update my blog.
In this post, I will explain all the steps required to create a continuous integration and continuous deployment process to deploy a static website into Azure.
The Goal
The idea here is to have a folder on a local machine tracked by git. At every push, we want that change to trigger our CI-CD process. The Build Pipeline will generate the static website. The Release Pipeline will create our Azure resources and publish those artifacts.
The Static Website
In this post, I'm using Wyam.io as the static website generator. However, it doesn't matter; there is a ton of excellent generators available: Jekyll, Hugo, Hexo, etc. I selected Wyam because it is written in .NET, and if eventually I want to dig deeper, it will be easier for me.
For all those generated websites, it's the same pattern. You have an input folder where you have all your posts and images, and an output folder that contains the generated result. You don't need to track the content of your output folder, so it is a good practice to modify the .gitignore file accordingly. As an example, here is roughly how mine looks.
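A minimal sketch, assuming the generator writes to an output folder and the Cake bootstrapper downloads its tools into a tools folder:

# generated site
output/

# Cake bootstrapper downloads
tools/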
The build pipeline will generate our website for us. Therefore, it needs to have the generator installed. A great tool to do this kind of task is Cake. What I like about that tool is that it is cross-platform, so I can use it without worrying about which OS it will run on.
The Azure pipeline is defined in an azure-pipeline.yml file. Installing Cake should definitely be one of our first steps. To know how to do that, navigate to the Get Started page of the Cake website; it's explained that we need to execute a build.ps1 or build.sh (depending on your build setup). That will install Cake and execute the file build.cake. Those files can be found on the GitHub repository, as mentioned on the website.
On the Wyam website, in the deployment section of the documentation, you will find a sample for our required build.cake file. It looks like this:
On the first line, it will install the required NuGet package (you should definitely specify the version). Then it defines some tasks and runs the generation command. Create that file at the root of the website folder.
Now let's have a look at the azure-pipeline.yml file.
The first line is to specify the pipeline trigger. In our case, we will look at the master branch. Then I declare a variable to keep the .Net Core version. That way, it will be easier to maintain the script in the future.
The pool command is to specify what kind of server is created. Here I'm using a Windows one, yet I could have used Linux too (all components are cross-platform).
Then comes the list of steps. The first one installs .NET Core. The second step is a PowerShell command to execute our build.ps1 file. At this stage, the static website should be generated in a subfolder output. The last two steps copy the content of the output folder into the ArtifactStagingDirectory and then publish it. This way the Release Pipeline can access the artifacts.
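Here is a simplified sketch of such an azure-pipeline.yml following the steps just described (the .NET Core version and folder names are assumptions, not necessarily the exact values from my file):

trigger:
- master

variables:
  DOTNET_VERSION: '2.2.x'

pool:
  vmImage: 'windows-latest'

steps:
- task: UseDotNet@2
  displayName: 'Install .NET Core SDK'
  inputs:
    packageType: 'sdk'
    version: '$(DOTNET_VERSION)'

- task: PowerShell@2
  displayName: 'Generate the static website'
  inputs:
    filePath: 'build.ps1'

- task: CopyFiles@2
  displayName: 'Copy generated site to staging'
  inputs:
    SourceFolder: 'output'
    TargetFolder: '$(Build.ArtifactStagingDirectory)'

- task: PublishBuildArtifacts@1
  displayName: 'Publish artifacts'
  inputs:
    PathtoPublish: '$(Build.ArtifactStagingDirectory)'
    ArtifactName: 'drop'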
There is detailed information about all the commands for a YAML Azure Pipeline file in the documentation. Create your own or copy-paste this one in a new azure-pipeline.yml file under a subfolder named deployment. Once your file is created, commit and push them to GitHub or any repository.
Navigate to Azure DevOps (dev.azure.com). Open your project, or create a new one. Now from the left menu click on the Pipeline (the rocket icon), to create a new one. If you are using an external repository, like me, you will need to authorize Azure DevOps to your repo.
To configure the pipeline, since we already have created the azure-pipeline.yml file, select the Existing Azure Pipeline YAML file option and point it to our file in the deployment folder.
It will open our YAML file. If you wish, you can update it. Run it by clicking the blue Run button in the top-right corner. Your build pipeline is done. Now every time you push changes to your repository, the build will get triggered and generate the static website.
I published a video that explains how to unzip files without any code by using Logic Apps. However, that solution didn't work for bigger files or different archive types. This post explains how you can use an Azure Function to cover those situations. This first iteration supports "zip" files; all the code is available on my GitHub.
Prerequisites
To create the Azure Function, I will use the excellent Azure Functions extension for Visual Studio Code. You don't "need" it; however, it makes things very easy.
You can easily install the extension from inside Visual Studio Code by clicking on the extension button in the left menu. You will also need to install the Azure Functions Core Tools.
Creating the Function
Once the extension is installed, you will find a new button in the left menu. It opens a new section with four new options: Create New Project, Create Function, Deploy to Function App, and Refresh.
Click on the first option, Create New Project. Select a local folder and a language; for this demo, I will use C#. This will create a few files and folders. Now let's create our Function. From the extension menu, select the second option, Create Function. Create a Blob Trigger named UnzipThis into the folder we just created, and select (or create) a Resource Group, Storage Account, and location in your subscription. After a few seconds, another question will pop up asking for the name of the container that our blob trigger monitors. For this demo, input-files is used.
Once the function is created you will see this warning message.
What that means is that to be able to debug locally, we will need to set the setting AzureWebJobsStorage to UseDevelopmentStorage=true in the local.settings.json file. It will look like this:
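Something like this; the last three entries are used later in the post and their values are placeholders here:

{
    "IsEncrypted": false,
    "Values": {
        "AzureWebJobsStorage": "UseDevelopmentStorage=true",
        "FUNCTIONS_WORKER_RUNTIME": "dotnet",
        "cloud5mins_storage": "<connection string of the source storage account>",
        "destinationStorage": "<connection string of the destination storage account>",
        "destinationContainer": "<name of the destination container>"
    }
}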
The binding is attached to the container named input-files, from the storage account reachable by the connection "cloud5mins_storage". The real connectionString is in the local.settings.json file.
Now, let's put the code we need for our demo:
using System;
using System.IO;
using System.IO.Compression;
using System.Linq;
using System.Text.RegularExpressions;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

[FunctionName("Unzipthis")]
public static async Task Run([BlobTrigger("input-files/{name}", Connection = "cloud5mins_storage")]CloudBlockBlob myBlob, string name, ILogger log)
{
    log.LogInformation($"C# Blob trigger function Processed blob\n Name:{name}");

    string destinationStorage = Environment.GetEnvironmentVariable("destinationStorage");
    string destinationContainer = Environment.GetEnvironmentVariable("destinationContainer");

    try{
        if(name.Split('.').Last().ToLower() == "zip"){

            CloudStorageAccount storageAccount = CloudStorageAccount.Parse(destinationStorage);
            CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
            CloudBlobContainer container = blobClient.GetContainerReference(destinationContainer);

            using(MemoryStream blobMemStream = new MemoryStream()){

                await myBlob.DownloadToStreamAsync(blobMemStream);

                using(ZipArchive archive = new ZipArchive(blobMemStream))
                {
                    foreach (ZipArchiveEntry entry in archive.Entries)
                    {
                        log.LogInformation($"Now processing {entry.FullName}");

                        //Replace all NO digits, letters, or "-" by a "-" Azure storage is specific on valid characters
                        string valideName = Regex.Replace(entry.Name,@"[^a-zA-Z0-9\-]","-").ToLower();

                        CloudBlockBlob blockBlob = container.GetBlockBlobReference(valideName);
                        using (var fileStream = entry.Open())
                        {
                            await blockBlob.UploadFromStreamAsync(fileStream);
                        }
                    }
                }
            }
        }
    }
    catch(Exception ex){
        log.LogInformation($"Error! Something went wrong: {ex.Message}");
    }
}
UPDATED: Thanks to Stefano Tedeschi who found a bug and suggested a fix.
The source of our compressed file is defined in the trigger. To define the destination, destinationStorage and destinationContainer are used. Their values are saved in local.settings.json. Then, because this function only supports .zip files, a little validation was required.
Next, we create an archive instance using the System.IO.Compression library. We then create references to the destination storage account, blob client, and container. It's not possible to use a second binding here, because one archive file can contain a variable number of files to extract. The bindings are static; therefore, we need to use the regular storage API.
Then, for every file (aka entry) in the archive, the code uploads it to the destination storage.
Deploying
To deploy the function, from the Azure Function extension click on the third option: Deploy to Function App. Select your subscription and Function App name.
Now we need to configure our settings in Azure. By default, the local.settings are NOT used in Azure. Once again, the extension is handy.
Under the subscription expand the freshly deployed Function App AzUnzipEverything, and right-click on Application Settings. Use Add New Setting to create cloud5mins_storage, destinationStorage and destinationContainer.
The function is now deployed and the settings are set, now we only need to create the blob storage containers, and we will be able to test the function. You can easily do that directly from the Azure portal (portal.azure.com).
You are now ready to upload a file into the input-files container.
Let's Code Together
This first iteration only supports "zip" files. All the code is available on GitHub. Feel free to use it. If you would like to see or add support for other archive types, join me on GitHub!
In a video, please!
I also have a video version of this post if you prefer.
I also have an extended version where I introduce the Visual Studio Code extension for working with Azure Functions in more detail, and explain more about Azure Functions V2.
Sixty-five videos and one year later
It's already been one year since I started sharing videos on YouTube. I've been blogging for many years and wanted to try something new, and creating videos seemed like the next logical step. And this is how I decided to start sharing short videos answering technical questions about Microsoft Azure on YouTube. In this post, I will explain what I learned during the first year of my journey.
The Beginning
I started my YouTube channel in French. During the first two months, I published one video per week. I learned a lot through this period: how to prepare my code snippets, my files, and my screen. The biggest takeaway was this: preparation is the key. The more prepared you are, the more efficient you will be in front of the camera. I also discovered that, compared to writing a blog post, you need to pay more attention, way earlier in the process, to the tiny details.
One thing I was looking forward to was being able to share my videos with all my clients/friends/community members. However, I couldn't, because while some are perfectly comfortable in French (and only French), others only understand English. And this is why I decided to record all my videos in both French and English. I knew that would mean doubling the workload, so I adapted my schedule to publish every second week.
Doing all those videos, I found my beat, my style, the things I like and dislike. I also spent a lot of time learning the YouTube platform through books and by watching other YouTubers. It's such an incredibly resourceful community!
A New Camera
The software (Camtasia from TechSmith) I was using, and that I'm still using today, can record both your screen and your webcam. It keeps all the tracks separated, and this is great because you can change the size and position of the webcam inset in post-production.
My challenge was that I wanted to record both in 1080P (full HD) because during my intros and conclusion when the webcam input fill the screen I want good image quality. However, the software (or more likely my PC), did not let me record in 1080P (full HD) the screen and the webcam input.
To upgrade the quality of my recordings, I decided to use my DSLR instead of my webcam. Of course, it represents more work in post-production because now I need to synchronize the two videos sources, but now everything is 1080.
The famous algorithm
As I studied my statistics and read comments (thank you all for your constructive comments, by the way), I started to understand that having all my videos in French and English on the same channel was not the best idea.
First, it was very confusing for the YouTube algorithm because it couldn't identify the video's language, so it was harder for it to do its job and make recommendations.
Secondly, it didn't cross my mind at the beginning, because I understand both languages, but for a subscriber who understands only one of the languages, it was very confusing and not very pleasant to have all those "other" videos, useless to them, in the feed.
After a very long (and hard) reflection, I decided to create a second channel and "move" all my French content to that channel. I say "move", but you cannot "move" content on YouTube, and that was the principal reason why I was hesitating to split my channel. By creating a new channel and re-uploading my older (aka French) videos, I was losing all my stats (views, subscribers) and references.
The effect was immediate on the English channel, as the views and subscribers exploded a week after. On the other side, my brand new French channel didn't really catch up. Nevertheless, I intend to continue creating French content. ;)
Today's Status
As this post gets published, I have a cumulative total of sixty-five videos online and more than a thousand subscribers. I publish every second week, usually on Thursday: two videos on the same topic, one in French on fr.cloud5mins.com and one in English on cloud5mins.com. I have a fantastic community, super supportive, asking questions and suggesting topics.
I'm very thankful for all that positive feedback. To help me, you can subscribe to my YouTube channel, of course, or become one of my patrons at patreon.com/fboucheros.
What's Next
I'm definitely continuing to create videos. I also plan to start doing some streaming on Twitch, where I will be building cloud solutions.
For a project I just started, I need to create Azure resources from code. In fact, I want to create an Azure Container Instance. I already know how to create a container from Logic Apps and Azure CLI/PowerShell, but I was looking to create it inside an Azure Function. After a quick search online, I found the Azure Management Libraries for .NET (aka the Fluent API), a project available on GitHub that does just that (and so much more)!
In this post, I will share with you how this library works and the result of my test.
The Goal
For this demo, I will create a .NET Core console application that creates an Azure Container Instance (ACI). Afterward, it should be easy to take this code and migrate it to an Azure Function or anywhere else.
The Console Application
Let's create a simple console application with the following commands:

dotnet new console -o AzFluentDemo
cd AzFluentDemo
dotnet add package microsoft.azure.management.fluent

The last command will use the NuGet package available online and add it to our solution. Now we need a service principal so our application can access the Azure subscription. A simple way to create one is to use the Azure CLI:

az ad sp create-for-rbac --sdk-auth > my.azureauth

This will create an Active Directory (AD) Service Principal (SP) and write the content into the file my.azureauth. Perfect, now open the solution; for this kind of project, I like to use Visual Studio Code, so code . will do the work for me. Replace the content of the Program.cs file with the following code.
using System;
using Microsoft.Azure.Management.Fluent;
using Microsoft.Azure.Management.ResourceManager.Fluent;
using Microsoft.Azure.Management.ResourceManager.Fluent.Core;
namespace AzFluentDemo
{
class Program
{
static void Main(string[] args)
{
string authFilePath = "/home/frank/Dev/AzFluentDemo/my.azureauth";
string resourceGroupName = "cloud5mins";
string containerGroupName = "frank-containers";
string containerImage = "microsoft/aci-helloworld";
// Set Context
IAzure azure = Azure.Authenticate(authFilePath).WithDefaultSubscription();
ISubscription sub;
sub = azure.GetCurrentSubscription();
Console.WriteLine($"Authenticated with subscription '{sub.DisplayName}' (ID: {sub.SubscriptionId})");
// Create ResoureGroup
azure.ResourceGroups.Define(resourceGroupName)
.WithRegion(Region.USEast)
.Create();
// Create Container instance
IResourceGroup resGroup = azure.ResourceGroups.GetByName(resourceGroupName);
Region azureRegion = resGroup.Region;
// Create the container group
var containerGroup = azure.ContainerGroups.Define(containerGroupName)
.WithRegion(azureRegion)
.WithExistingResourceGroup(resourceGroupName)
.WithLinux()
.WithPublicImageRegistryOnly()
.WithoutVolume()
.DefineContainerInstance(containerGroupName + "-1")
.WithImage(containerImage)
.WithExternalTcpPort(80)
.WithCpuCoreCount(1.0)
.WithMemorySizeInGB(1)
.Attach()
.WithDnsPrefix(containerGroupName)
.Create();
Console.WriteLine($"Soon Available at http://{containerGroup.Fqdn}");
}
}
}
In the first rows, I declare a few constants: the path of the service principal file created earlier, the resource group name, the container group name, and the image I will use; for this demo, aci-helloworld. Then we get access with Azure.Authenticate. Once we have access, it's very easy, and the IntelliSense is fantastic! I don't think I need to explain the rest of the code, as it is already self-explanatory.
Got an Error?
While running it, you may encounter an error message complaining about the namespace not being registered or something like that (I'm sorry, I did not note the exact error message). You only need to register it with the command:
az provider register --namespace Microsoft.ContainerInstance
It will take a few minutes. To see if it's done you can execute this command:
az provider show -n Microsoft.ContainerInstance --query "registrationState"
Wrap it up
And voila! If you do a dotnet run, after a minute or two you will have a new web application running inside a container, available from http://frank-containers.eastus.azurecontainer.io. It's now very easy to take that code and bring it to an Azure Function or any .NET Core application that runs anywhere (Linux, Windows, macOS, web, containers, etc.)!