In this post I will demonstrate how to test and debug an Angular library. One scenario where this is helpful is when you have developed a library and want to see how it behaves before publishing it to a remote package registry like npm or one of its alternatives.
Another scenario is when a library is already used by one of your applications and you need to debug some code that depends on the library code.
What is required
Visual Studio code or any editor of your choice
Angular setup (to install Angular you need Node.js and npm package manager)
Open VS Code and navigate to the root folder where you want to keep the code for this application.
In a new VS Code terminal type ‘cd destination-of-your-repo’, for example ‘cd C:\Users\dmitr\source\frontend\countries-ui’
Then type the command ‘ng new name-of-your-repo’. If you created the folder beforehand, you can instead run
ng new name-of-your-repo --directory=./ --skip-install
This puts the application files directly into your folder without creating a nested folder structure.
Type ‘code .’ and another VS Code window should open with the application code.
Next, identify the folder where your package.json lives. You can run the command ‘dir’ to check whether you are in the correct folder.
Once you have found it, run ‘npm install’
Let’s create a library app
We will place our library app inside the app that we created in the steps above. This is how Angular works: you create a workspace that acts like a parent app for the other projects, in our case an Angular library.
At the same location as the app above, run the library generation command in the terminal, i.e. ‘ng generate library dg-library’
The Angular CLI should generate all the necessary files for the library.
Let’s create a consumer app
Using the same approach and following the same steps let’s generate a basic app that would be using our library app.
Once you are done, we need to add the selector of our library component into the html file of the consumer app.
Next, in the workspace location, you need to run a build command for the library. The workspace is like a container for all the projects, including the library. This is the way Angular organizes files.
Use the command below to build the library. This generates the build files in the dist folder. *also make sure you have run the npm install command first
ng build dg-library --prod
This creates dist folder with build files.
Let’s link these apps
After we have built the library app with the prod flag we are ready to link the two apps.
For this we need to go to the location of the library dist folder and run ‘npm link’ there. For me this would be:
What happens is that npm places the library project files into the global node_modules location. You can run this command to find out that location.
npm root -g
The output location for me is
C:\Users\dmitr\AppData\Roaming\npm\node_modules
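Under the hood, npm link is built on symbolic links. A tiny Node sketch of the same mechanism, independent of npm (the file names here are illustrative only): we write a file, create a symlink pointing at it, and read the content back through the link.

```typescript
import * as fs from 'fs';
import * as os from 'os';
import * as path from 'path';

// Work in a throwaway temp directory
const dir = fs.mkdtempSync(path.join(os.tmpdir(), 'link-demo-'));
const target = path.join(dir, 'library.txt');       // the "library"
const link = path.join(dir, 'consumer-link.txt');   // the "consumer's" view of it

fs.writeFileSync(target, 'library code');
fs.symlinkSync(target, link); // the "npm link" step in miniature

// Reading through the link yields the library's content: both paths
// point at the same file object, just like a linked package.
const throughLink = fs.readFileSync(link, 'utf8');
```

Editing the target file would immediately be visible through the link, which is exactly why npm link is useful for debugging a library in place.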
To complete the link of these apps we need to perform a similar step in the consumer app.
In the consumer app use this command to complete the link.
npm link dg-library
This is the same command as the one we used for the library project, but this time we specify the package name, i.e. dg-library.
This adds the library files under the node_modules folder of the consumer app.
To test simply run the consumer app.
ng serve -o
This should be the output in the browser.
Before we finish off let’s tidy up
In the consumer app run this command.
npm unlink dg-library
This removes the link from the consumer to the library app, and also removes the library files from node_modules in the consumer app.
And in the workspace where you have the library app you need to perform similar steps.
Go into the dist folder of the library app.
cd C:\Users\dmitr\source\frontend\symlink-library\dist\dg-library
And run
npm unlink
like so
This should remove the build files of the library app from the global node_modules. You can check by running this command to get the location of the global node_modules folder.
npm root -g
Conclusion
Hope these steps were clear for you.
In summary, we created one basic app and one library, and linked them locally.
By using the command npm link we create a symbolic link between two file system objects. More details here and here.
This is handy if you have several projects and one of them consumes a remote library. If you have the repository of that library you can manually hook the two together and debug the files on your local machine.
This could also be part of your routine before publishing your library to package manager of your choice.
The code of library is here and the code for consumer app is here.
In this post I will demonstrate how to implement state management in a simple Angular app using NgRx store and NgRx effects.
What is state management
First of all let’s talk about state. State, in simple terms, is like a memory of a particular moment in time. Whenever some action happens the state changes. In a software engineering context you can think of state as the related data that describes a particular moment. We normally store this data in a database, so whenever we need to act on something that happened previously we retrieve that state from the database. In addition, we can keep some state in the application cache. There are various ways to work with state, depending on the type of application, for example web or desktop.
What we will focus on in this post is state management in web applications. Nowadays web applications are becoming more advanced, which means more functionality, faster response times, busier pages etc.
To cope with this extra load of information we can use a state management framework like NgRx. This framework is based on the Redux pattern which is essentially a one way dataflow.
The concept of this pattern is that you replace the state object rather than modify it. This way the state stays immutable. Redux adheres to three principles: a single source of truth, read-only state, and changes made with pure functions.
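The replace-rather-than-modify idea can be shown in a few lines of plain TypeScript, without any NgRx imports. The reducer below is a pure function: it never writes to the incoming state, it returns a fresh object built with the spread operator.

```typescript
interface CounterState {
  count: number;
}

type CounterAction = { type: 'increment' } | { type: 'reset' };

// Pure reducer: (current state, action) -> new state
function counterReducer(state: CounterState, action: CounterAction): CounterState {
  switch (action.type) {
    case 'increment':
      // Spread the old state into a new object instead of mutating it
      return { ...state, count: state.count + 1 };
    case 'reset':
      return { ...state, count: 0 };
    default:
      return state;
  }
}

const before: CounterState = { count: 1 };
const after = counterReducer(before, { type: 'increment' });
// `before` is untouched; `after` is a brand new object
```

Because every transition produces a new object, time-travel debugging and change detection by reference comparison become cheap.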
What is required
Visual Studio code or any editor of your choice
Let’s create an Angular app
Open VS code and navigate to the root folder of the destination where you want to keep the code for this application.
In VS code new terminal type ‘cd destination-of-your-repo’ for example ‘cd C:\Users\dmitr\source\frontend\countries-ui’
Then type command ‘ng new name-of-your-repo’. In case you created the folder beforehand then you can run a command
ng new name-of-your-repo --directory=./ --skip-install
This would put application files into your folder without creating a nested folder structure.
Type ‘code .’ and it should open another VS code window with application code.
Next identify the folder where you package.json lives. You can run the command ‘dir’ to check if you are in the correct folder.
When you found it then run ‘npm install’
Let’s configure our app to use NgRx
First things first, we need to install it as an npm package. Use this command ‘npm i @ngrx/store’
This package gives us the ability to create a store where we will keep the state. The store is treated as a single source of truth for the application and the functionality it covers. The package also contains the code and types for reducers, actions and selectors. These are part of the NgRx state management lifecycle.
Actions describe unique events that are dispatched from components and services.
State changes are handled by pure functions called reducers that take the current state and the latest action to compute a new state.
Selectors are pure functions used to select, derive and compose pieces of state.
State is accessed with the Store, an observable of state and an observer of actions.
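Selectors in particular are easy to demonstrate without NgRx: they are just pure functions over the state object, and NgRx's createSelector adds memoization on top of the same idea. A sketch with illustrative state and selector names:

```typescript
interface AppState {
  countries: { list: string[]; selected: string | null };
  regions: { list: string[] };
}

// Selecting a piece of state
const selectCountryList = (state: AppState) => state.countries.list;
const selectSelectedCountry = (state: AppState) => state.countries.selected;

// Composing selectors to derive new state without storing it
const selectIsSelectionValid = (state: AppState) => {
  const selected = selectSelectedCountry(state);
  return selected !== null && selectCountryList(state).includes(selected);
};

const state: AppState = {
  countries: { list: ['France', 'Japan'], selected: 'Japan' },
  regions: { list: ['Europe', 'Asia'] },
};
```

Because selectors are pure, derived values like the validity check never go stale: they are recomputed from whatever the current state object is.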
We also need the NgRx effects npm package installed. Use this command ‘npm i @ngrx/effects’
The effects package gives us the ability to handle side effects correctly in NgRx. Some actions in our application have side effects: anything in the outside world that our application consumes or handles, such as external devices or APIs that represent external state.
When downloading these packages take into account the version of Angular installed on your machine. In other words, make sure the version of NgRx you are downloading is compatible with your version of Angular. To install a particular version of an npm package the format is ‘npm install package-name@version-number’
Let’s configure app module to use NgRx
To initialize the store we need to add StoreModule.forRoot({}, {}). We use forRoot since AppModule is our root application module.
Using forRoot means we are defining the main store of the application. The two arguments are the reducers object and the store config.
Then we register root-level effects. EffectsModule.forRoot takes an array of effects.
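Put together, the root module registration looks roughly like the sketch below, assuming the default CLI project layout; the empty object and array are the places where your reducers and effects classes go.

```typescript
// app.module.ts (sketch)
import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { StoreModule } from '@ngrx/store';
import { EffectsModule } from '@ngrx/effects';
import { AppComponent } from './app.component';

@NgModule({
  declarations: [AppComponent],
  imports: [
    BrowserModule,
    StoreModule.forRoot({}, {}),  // reducers object, then store config
    EffectsModule.forRoot([]),    // array of root-level effects classes
  ],
  bootstrap: [AppComponent],
})
export class AppModule {}
```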
I also added ngrx store dev tools, additionally you would need to install a browser plugin. More details here.
There are various options that you can configure for dev tools. The one you can see in the image is maxAge. This is the number of actions that are allowed to be stored in the history tree of the dev tools. In practice this is the number of state changes that you can replay; in other words, you can retrospectively check and investigate what was affected by a particular action.
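As a sketch, the dev tools registration with a maxAge option might look like this (25 is just an illustrative value; it requires the @ngrx/store-devtools package and the browser extension mentioned above):

```typescript
import { StoreDevtoolsModule } from '@ngrx/store-devtools';

// added to the AppModule imports array:
StoreDevtoolsModule.instrument({
  maxAge: 25, // number of state changes kept in the dev tools history tree
})
```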
The app concept
The concept of the app is to have two dropdowns. In the first you will be able to select a region, either Asia or Europe. In the second dropdown the user will be able to select the actual country. This will display some detailed information about the country.
There will be a component GeographicNavigatorComponent which will have the role of a container/smart component. It will host two presenter/dumb components, CountryListComponent and RegionListComponent. It will also host a CountryDetailsComponent that only displays data like a presenter/dumb component, but at the same time has access to the store. This is just a basic configuration and you are more than welcome to modify it so it conforms to best practices.
Additionally there will be a shared wrapper component for dropdown.
Let’s move on to adding some actual code
First of all we will add app state interface. It will represent the structure of our main state object.
It will comprise countries and regions object properties, each representing their piece of the main state object.
Each piece of the state will have an initial state object. It provides a starting point when the application first runs and does not yet have any modified state. To make it predictable we define the required values explicitly. To make it immutable we declare it as a constant. Having an initial state declared as a constant is standard practice. All of this helps keep the state in order and makes it possible to quickly detect what happened in case of an error.
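A sketch of what this shape could look like; the interface and property names here are illustrative, not the exact ones from the repo. Note the explicit, constant initial state for the countries slice.

```typescript
interface Country {
  name: string;
  capital: string;
}

interface CountriesState {
  countries: Country[];
  selectedCountry: Country | null;
}

interface RegionsState {
  regions: string[];
  selectedRegion: string | null;
}

// The main state object: one property per feature slice
interface AppState {
  countries: CountriesState;
  regions: RegionsState;
}

// Explicit values, declared as a constant: the predictable starting point
const initialCountriesState: CountriesState = {
  countries: [],
  selectedCountry: null,
};
```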
The way we will structure our app in terms of state is that we will create a state folder for each section of the state. This is in addition to countries and regions having their own modules.
We will also need to add some models. These will be used to construct our binding model, and will be reused in the stores and all sorts of data manipulation.
To get our data we will be using an online countries api available at https://restcountries.com which is provided under Mozilla Public License. This means it’s free and open-source software.
And for that we need a service, where we make the calls and format and map the data as we need it.
Let’s have a look at the main component structure of the app
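The mapping part of such a service can be kept as a pure function, which makes it easy to test. Below is a sketch assuming a simplified shape of the restcountries.com response (the raw interface here is an assumption, not the full API contract).

```typescript
// Simplified assumption of one item in the restcountries.com response
interface RawCountry {
  name: { common: string };
  capital?: string[]; // the API returns capitals as an array; may be absent
  region: string;
}

// Our own binding model
interface Country {
  name: string;
  capital: string;
  region: string;
}

// Pure mapping step: raw API shape -> app model
function toCountry(raw: RawCountry): Country {
  return {
    name: raw.name.common,
    capital: raw.capital?.[0] ?? 'n/a', // some territories have no capital
    region: raw.region,
  };
}

const sample: RawCountry = { name: { common: 'France' }, capital: ['Paris'], region: 'Europe' };
const mapped = toCountry(sample);
```

Keeping the mapping separate from the HTTP call means the service can change its transport (HttpClient, fetch, a mock) without touching the model logic.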
As I mentioned previously there will be a host or parent component called GeographicNavigatorComponent.
It will be the main point of communication with the store, so whenever the data is updated or refreshed for any reason it communicates this to all the other components in its chain. Conversely, any action triggered in a presenter/child component should be output to the parent so the parent can trigger the required action to complete it.
This is the flow of the container-presenter pattern. It can be handy for dividing growing stateful logic into a more organized form, so it’s easier to track where problems occur.
To note the important bits of using NgRx: whenever we want to get a particular piece of state we use selectors; whenever we need to do something we dispatch an action. Actions can be triggered by the user, or by external systems such as api requests and other devices. These actions are then processed in reducers, which perform the transition from one state to another.
What you need to make sure is that your component is subscribed to the correct selector. Whenever some other component dispatches an action and it finishes processing in the reducer, that state transition is reflected in the store, and the component with the selector automatically picks it up.
Let’s have a look at how things work with NgRx
What we will do is we will review the scenario where all the pieces of ngrx pattern are used.
We will start with the countries effects. Effects are the way our system communicates with external sources.
In our case this effect method is called by GeographicNavigatorComponent to get a list of countries. When the call to the http api is successful we call a success action. A common approach for a single operation is to have three actions: a load action, a success action and a failure action.
As you can see, we call the getCountryListSuccess action with countries as an argument, or props method as it’s called in the NgRx docs.
For each action we have an associated state change function in the reducer. In that function we either assign a part of the state or apply changes to that piece of state. To use the correct words, we process a transition of the state rather than a change, since you shouldn’t mutate the state; it should be immutable. We achieve this by replacing the whole state object: you can see it where we return a new object, apply the spread operator to the existing state object, and reassign any state property to the required value. I promise it’s easier than it sounds once you start working with it.
Another point I want to mention: the way we configured NgRx in this application’s modules is that we have a main app.module that uses StoreModule and EffectsModule forRoot. It’s pretty self-explanatory, and you might have already seen something similar in Angular routing. forRoot is invoked when the application first loads and provides the initial configuration with reducers, actions and selectors.
Then there is also the forFeature option. This can be used with lazy loading of modules, and it gives a feature the ability to define its own piece of state, so you can focus on a particular feature of the app and organize your code. It doesn’t create any new store context; it uses the same store that was created with forRoot, and loads that piece of state whenever it is required by that application area. In our example we have a countries feature state. It’s loaded once we activate that part of the application by using the component in the GeographicNavigatorComponent template. An example that gives a better picture, and more justification for the forFeature option, is when an app redirects from a list component to a details or form component. The form component doesn’t need to load its state until the user’s actions redirect to it.
Conclusion
The main concept of NgRx is to have a single source of truth in the form of an object, called the store. The store can contain different feature stores, or parts, that construct the main store object.
The only way to change the store is by dispatching an action, which is then processed, or reduced, in the reducer. Having this predefined unidirectional flow makes it easier to predict what will happen or what caused a particular transition in the state. Although the correct way of saying it is that an action produced new results: when the data was processed in the reducer it effectively created a new state. This way the immutability of the state is maintained.
This tutorial demonstrated a beginner-level application with standard NgRx functionality. Overall it should be a good start for someone new, or a reference point for future developments.
What is required
Visual Studio, the Community version will be absolutely fine
The dotnet CLI, which is normally installed as part of Visual Studio
*If you have a Microsoft account this should be very easy since NuGet and Visual Studio use the same account.
Let’s get straight into action by creating the .NET Core application
You can either have a NuGet package as a dedicated application for packages, or as a project within an application, for instance a class library containing shared contracts or models within an api application.
We can start by either creating a Blank Solution and then manually adding a class library, or choosing Class Library straight away. With the latter the solution is created automatically; however, the name of the solution will be the same as the class library.
I will do the Blank Solution option and manually add the class library.
The Class Library should target .NET Standard, as it provides compatibility with projects across the various .NET implementations.
Let’s get our package ready to be published
I think it’s a good time to mention several points regarding versioning of NuGet packages. The recommended approach is SemVer, short for Semantic Versioning. It follows the format Major.Minor.Patch[-Suffix]
For example 1.2.3 or 1.5.9-prerelease. In general, whatever suffix you put, the NuGet package manager will treat the version as a prerelease. This means you have to tick ‘Include prerelease’ in Visual Studio when searching for NuGet packages.
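The Major.Minor.Patch[-Suffix] split can be sketched as a small parser, written here in TypeScript for brevity since this is not C#-specific logic; it mirrors the NuGet rule that any suffix marks the version as a prerelease.

```typescript
interface SemVer {
  major: number;
  minor: number;
  patch: number;
  suffix: string | null;
  isPrerelease: boolean;
}

function parseSemVer(version: string): SemVer {
  // Split off the optional suffix first: "1.5.9-prerelease" -> "1.5.9" + "prerelease"
  const [core, ...suffixParts] = version.split('-');
  const suffix = suffixParts.length > 0 ? suffixParts.join('-') : null;
  const [major, minor, patch] = core.split('.').map(Number);
  return { major, minor, patch, suffix, isPrerelease: suffix !== null };
}

const stable = parseSemVer('1.2.3');
const pre = parseSemVer('1.5.9-prerelease');
```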
When searching, make sure you have the correct source selected. You can check the settings by clicking the cogwheel icon or going to Tools > Options > NuGet Package Manager > Package Sources
Now open the properties for the Class Library you created (Right Click > Properties). There you can set the required version number, under the Package option.
Let’s pack it
In the solution explorer of Visual Studio right click on your class library and select Pack.
This should produce a nupkg file in the Debug folder (or Release, depending on the build configuration) inside the bin folder, which in turn sits in the class library root folder. For example C:\Users\dmitr\source\backend\DgVsNuget\DgLoggers\bin\Debug
This also generates a nuspec file, but in the obj folder instead of bin. For example C:\Users\dmitr\source\backend\DgVsNuget\DgLoggers\obj\Debug. This file contains package metadata like id, package version, authors etc. Some of these details we set when we edited the values in the class library properties under the Package option.
Let’s publish it
First, let’s check that the .NET Core CLI is installed. For this, open Command Prompt and type the ‘dotnet’ command.
Then we need to navigate to the folder where our nupkg file lives. For me it’s C:\Users\dmitr\source\backend\DgVsNuget\DgLoggers\bin\Debug
The complete command should look like this ‘cd C:\Users\dmitr\source\backend\DgVsNuget\DgLoggers\bin\Debug’
Then we need to run the push command, which contains the name of your package file as it appears in the bin/Debug or bin/Release folder, the api key that you generate in your account with NuGet, and the source url.
For me the command would look like so ‘dotnet nuget push DgLoggers.1.0.1-prerelease.nupkg --api-key qz2jga8pl3dvn2akksyquwcs9ygggg4exypy3bhxy6w6x6 --source https://api.nuget.org/v3/index.json’
The api key can be generated in your account at nuget.org. For Glob Pattern enter a star *
For the source we put https://api.nuget.org/v3/index.json because we use the NuGet server. The url follows the same format, with possible version differences.
Once you have assembled the command, type it into the command prompt/.NET Core CLI and hit enter.
Once the push succeeds you can view the package in your NuGet account. For a brief moment it will be in the Unlisted state; once NuGet approves it, it goes into the Published state.
I would recommend setting it as Unlisted. This way it’s not searchable, but you can still install it by specifying the version you need. To do this go to your NuGet account and select Manage Packages. There you will see Published Packages and an option to go to the package details. Once you are on the manage page for the package you can deselect the Listed option.
Let’s use this package
Since we set this package as Unlisted, we need to use the Package Manager Console and the install command where you specify the version.
First, create a new ASP.NET Core application as a web api.
Once the application is created, go to the Package Manager Console and type Install-Package Your-package-name -Version the-version-you-need, for example Install-Package DgLoggers -Version 1.0.1-prerelease
After that, instantiate the package’s TextLogger class and call its Log method with a supplied message in the controller of your choice.
[HttpGet]
public IEnumerable<WeatherForecast> Get()
{
    // here we instantiate the package logger
    var pckgLogger = new TextLogger();
    pckgLogger.Log("test logger");

    var rng = new Random();
    return Enumerable.Range(1, 5).Select(index => new WeatherForecast
    {
        Date = DateTime.Now.AddDays(index),
        TemperatureC = rng.Next(-20, 55),
        Summary = Summaries[rng.Next(Summaries.Length)]
    })
    .ToArray();
}
When you run the application check the Output window for this message.
Let’s talk about deleting the package
As stated on the Microsoft page, nuget.org does not support permanent deletion. There are exceptions, but not many, and you have to contact the NuGet team.
Despite that, you can unlist the package, which essentially means it’s still active and can be installed, but is not searchable. So you have to know that such a package exists and which versions are available.
Even though the NuGet CLI provides a delete method, what it actually does is simply unlist the package. The package has to be active/published.
Just to give you an idea of the process, let’s go back to nuget.org and activate/list our package.
The required details are the same as when you publish the package. We need to generate an api key at nuget.org; when generating the api key make sure the Unlist package scope is ticked. We also need to include the source https://api.nuget.org/v3/index.json in the delete command.
The complete command should look like so dotnet nuget delete DgLoggers 1.0.1-prerelease --api-key qz2jga8pl3dvn2akksyquwcs9ygggg4exypy3bhxy6w6x6 --source https://api.nuget.org/v3/index.json (note that delete takes the package name and version, not the nupkg file name)
Let’s go back to Visual Studio and run it in the Package Manager Console. Alternatively you can use the command prompt/.NET Core CLI.
So if you go back to your package on nuget.org it should be set as unlisted.
Let’s talk about updating the package too
As you can imagine, the process is the same as when we published the package. We need to change the version in the project properties’ Package section, then pack it. Note that in the push command we need to use the newly generated nupkg file name, for instance DgLoggers.1.0.2-beta.nupkg
Then run the command in net core cli/command prompt.
If it’s listed at nuget.org then the NuGet Package Manager in Visual Studio will pick up that a new version is available; simply update it from there.
If it’s not listed then you need to update manually, using the command from nuget.org for your package with the specific version.
Conclusion
Having NuGet packages for code that will be reused is a great way to prevent code duplication.
Possibly you just want to isolate a particular piece of code from the main source code. That works too.
You can think of the code in the package as following the object-oriented principle of abstraction: as long as we know what the package does and what methods are available, there is no need to know the details of the package code.
It still depends on what you put in there; it might simply be some contracts or models that you want to share between projects. It’s up to you. Hopefully something reasonably sized.
You can find a code for the package here and the consumer application here.
In this post I will explain how to set up continuous deployment of an Angular library as an npm package with Azure DevOps.
This post also contains instructions on how to configure the build pipeline, and how to configure your Angular application both as the developer and as a consumer of this package.
I will build two Angular applications for this purpose, so it will be clear to distinguish between the library and consumer applications.
If you are using npm packages you need to host them somewhere, and there are several options available. Normally public hosting is free, while private or corporate hosting costs money; that is their business model.
If you already have an Azure subscription then you have an option to store and host them there. This might be more cost effective but it depends on your needs and availability.
For me it was handy to have an Azure DevOps repo with a continuous integration trigger enabled, so that when I commit a change it builds and publishes the package for me.
Once the build pipeline is set up it does the main publishing work, so the only thing left to do is set the correct version from my local machine.
Let’s start with initiating a repository for an Angular library on Azure DevOps
Once the project is created select Repos from the left side menu. Clone this repo to your computer by copying the url link and using it with your choice of Git GUI.
If you haven’t configured your environment with Angular yet then please do so.
Install Visual Studio Code on your machine. Please use the link below.
The name of this app should match the folder name. This way it will use the root folder and won’t nest it within an extra folder.
As a result you get minimum files created plus it is created straight away in the repo root folder. Pretty neat.
Open the project code files in a new window by typing ‘code .’
Then type ‘ng g library your-library-name’ for example ng g library azure-npm-ng-lib
Once you have done this and you are happy with the output, commit and push to the Azure DevOps repo.
Before we head on to create a pipeline we need to create the feed that we will be using to store and manage our packages
Select Artifacts from the left side menu of the project on Azure DevOps. Click on Create feed and type in the name for your feed. For Visibility it should be Members of name-of-your-organisation. Make sure the Upstream sources option ‘Include packages from common public sources’ is ticked. As for the Scope, select Organization.
All of these settings are applicable to our current needs. If you want to limit who can use the feed then you need to select the appropriate Visibility and Scope.
We are ready to create a build pipeline
Go to the Pipelines section and click Create Pipeline. Then select Classic Editor.
Since we created our repo with Azure DevOps we need to select Azure Repos Git. As for the template, Empty Job will fit our needs.
Because we will use Angular Cli we need Node.js. So the first task will be Node.js Installer.
We also need angular cli install, npm install, npm build and npm publish tasks.
For the angular cli install task we need to set Command to custom, and in ‘Command and arguments’ enter ‘install @angular/cli -g’
For the npm install task we will leave it as it is.
For the npm build task we need to set ‘Working folder that contains package.json’ to the location of the library root folder within the Angular project, e.g. ‘projects/azure-npm-ng-lib’. Also, in ‘Command and arguments’ we need to type ‘run build your-library-name --prod’, e.g. run build azure-npm-ng-lib --prod
For the npm publish task we need to set ‘Working folder that contains package.json’ to the destination after the application is built. Build files are placed by default into the ‘dist/project-name’ folder (you can change this outputPath in angular.json), hence the location of this library would be ‘dist/azure-npm-ng-lib’ and the full path ‘$(System.DefaultWorkingDirectory)/dist/azure-npm-ng-lib’. For ‘Registry Location’ we want ‘Registry’, and for ‘Target registry’ we pick a registry that is preferably organization scoped.
Additionally we need to set CI trigger and Agent Specification for this pipeline.
Different agent specifications have different build processes and different paths. For our purpose we will use the ubuntu-18.04 agent specification.
Click on the Pipeline section and set the appropriate agent specification for your needs.
Select dropdown of Save & Queue and select Save.
Before we can run it we need to allow this pipeline to make changes to the Artifacts feed-registry that we selected as the Target registry in the npm publish task.
Once this is done you need to go back to Pipelines and click Run Pipeline. Check the images for more info.
Click on the dropdown of Save & Queue.
If the build is successful we can find our package under the Artifacts side menu option.
*make sure that at the library level tsconfig.lib.json has enableIvy set to false, i.e. "angularCompilerOptions": { "enableIvy": false }
*and that package.json contains a scripts section with ng build, i.e. "scripts": { "ng": "ng", "build": "ng build" }
Next we need to configure our machine to use artifacts feed-registry for our npm package
Since we will be storing our npm packages in Azure DevOps feed we need to configure our client machine to use this feed.
There is a helpful section of how to do this under Artifacts. Click on Connect feed and then select npm. On the newly opened section click ‘Get the tools’. This should open a slide menu with description of how to do it.
Since we already configured Node.js and npm at the beginning, we can skip to Step 2. Copy the command ‘npm install -g vsts-npm-auth --registry https://registry.npmjs.com --always-auth false’
Open a new terminal in VS Code and run it. As an extra check that it was installed, you can run ‘npm list -g --depth 0’ and see if it’s in the list.
Let’s create another Angular app that would consume our npm package
Before that we need to initiate a repository at Azure DevOps. Select Repos in left side menu. Click a dropdown next to the name of your current repository. Select New Repository.
Clone it to your machine and keep in mind the location.
Open VS code and navigate to that location. Run this command
ng new AzureArtifactsNpmConsumer --directory=./ --skip-install
Make sure the name matches the name of the root folder. This way your app won’t be nested within an extra folder.
Next we need to set up our Angular project according to the provided instructions at Azure DevOps Connect to feed section
Add npmrc file to your root project. In my case this would be ‘C:\Users\dmitr\source\frontend\AzureArtifactsNpmConsumer\.npmrc’
Paste in the first command from Connect to feed description into project level npmrc file.
If you open a user level npmrc file which is located at ‘C:\Users\yourName\.npmrc’ and either take a screenshot or copy it somewhere as a backup before it is be modified.
Then run this command in the same project ‘vsts-npm-auth -config .npmrc’. You will be prompted with a login window. Follow the onscreen instructions to complete the authentication process.
Now if you close and open again the user level npmrc file you would be able to see the difference.
There’s another thing you can do to assist the process of the authentication. Open package.json of the project you are working on and add this bit “refreshVSToken” : “vsts-npm-auth -config .npmrc” into the scripts section. So it would look like so “scripts”: {“refreshVSToken” : “vsts-npm-auth -config .npmrc”}
So in case you are required to refresh the token you can write in terminal ‘npm run refreshVSToken’ and it should reauthenticate you/refresh the token at user level npmrc file.
Basically what happens behind the scenes is that every time we run npm install the authentication process kicks in. The token in the user-level .npmrc file is exchanged with the registry feed so that the user's machine gets authenticated.
Let’s test if we can install our library package to our Angular consumer app
Go to Artifacts at Azure DevOps, open the package details page and copy the full command to install the package. For example 'npm install azure-npm-ng-lib@0.0.1'
Use this command in the Angular app where you want to access this npm package.
Next let's add some code to the Angular app and test whether we can use the npm package code
So it would look like the image below. Then run 'ng serve -o' and open the developer tools. Check whether the console/debugger tab contains our ng library npm package message.
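As a rough sketch of that step: the consumer component gets the library service and logs its message. The service and method names follow the azure-npm-ng-lib service shown later in this post, the message text is made up, and the Angular decorators are omitted so the snippet stays self-contained.

```typescript
// Minimal stand-in for the library service; the real one comes from the
// installed azure-npm-ng-lib package.
class AzureNpmNgLibService {
  getMessage(): string {
    return "hello from azure-npm-ng-lib"; // illustrative message text
  }
}

// In the real consumer app this would be an Angular component with the
// service injected through the constructor; here we instantiate it directly.
class AppComponent {
  constructor(private libService: AzureNpmNgLibService) {}

  ngOnInit(): void {
    // This is the message to look for in the browser dev tools console.
    console.log(this.libService.getMessage());
  }
}

new AppComponent(new AzureNpmNgLibService()).ngOnInit();
```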
Next let’s apply a change and increase the version of lib package
In the terminal navigate into the folder where the library files live and type 'npm version 0.0.2', e.g. C:\Users\dmitr\source\frontend\AzureArtifactsNpmPublish\projects\azure-npm-ng-lib> npm version 0.0.2. This way the package.json version of the library gets updated.
Optionally change the message.
export class AzureNpmNgLibService {
  constructor() { }

  getMessage() {
    return "wow what an update! is it the font or a colour :-D";
  }
}
Then commit and push these changes. This should trigger a new build.
This should update our existing package to 0.0.2. You can check this at the Artifacts section.
Next let’s update our Angular consumer app to the latest version of our npm package
In the terminal type in 'npm outdated'. This will list all the packages that have newer versions available.
To update the package to the latest version we need to use 'npm i azure-npm-ng-lib@latest'. You can also target a particular version by specifying the version number, like so 'npm i azure-npm-ng-lib@0.0.1'
Once the package is updated to the latest version it disappears from the outdated list. If it's not the latest it will still be listed; for instance, if there are three versions and your app uses the second one.
As you may have noticed, there are a lot of tips available on Azure DevOps that can guide you through the whole process. As long as you are aware of how npm packages are published to the npm registry, all of this should make sense.
If you want an original guide from Microsoft you can find it here.
In my opinion this combination of npm packages and build pipeline with Azure DevOps can be extremely helpful.
It's one of those things you set up once and enjoy the effectiveness of from then on. In essence you have moved the main heavy lifting (building and publishing) to a cloud-based server, so it's doing it for you.
Moreover you can see the history and related description of changes in a more familiar format.
You can find code for ng library package here and consumer app here.
In this post I will demonstrate how to set up everything you need in order to be able to publish an npm package and use it as a JS module in another application.
When you are typing the password it won't be shown, but it's there… magic
*npm config file for the user – .npmrc, can be found at this location C:\Users\YourName. Make sure the registry is set to npm like so registry=https://registry.npmjs.org/
Step 4: Create npm package
navigate into the root folder using > cd folderDestination e.g. cd C:\Users\dmitr\source\frontend
create new directory by typing > mkdir name-of-your-package
navigate into newly created directory using > cd name-of-your-package
initiate the package with package.json by typing > npm init
use this command to open the project in VS code > code .
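After npm init finishes its prompts it generates a package.json. A typical result looks something like this; the name and entry point will be whatever you answered during the prompts:

```json
{
  "name": "name-of-your-package",
  "version": "1.0.0",
  "description": "A demo npm package",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "license": "ISC"
}
```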
Step 5: Create Angular app
It is a straightforward task once your local environment and workspace are set up.
*by workspace here I mean the area on your computer rather than an Angular workspace
Open new window in VS code
Go to menu option Terminal > New Terminal
Optional: Install the latest Angular version globally with this command > npm install -g @angular/cli
navigate into the root folder using > cd folderDestination e.g. cd C:\Users\dmitr\source\frontend
type in > ng new your-app-name
go to the root folder where app was created by typing > cd your-app-name
then by typing > code .
new window should open with the app
*ng new creates a new app that is the default project for the new ng workspace *ng serve -o serves the application; the -o flag opens it in the browser
Ok we finished setting things up.
Next we need to add some code to our package.
In the package application add a new file index.js; it's going to be our entry point, as we previously configured in package.json.
exports.printMsg = function () {
  console.log("This is a message from the demo package");
}
The whole project should look like this.
We are ready to publish.
Since we logged in in one of the previous steps we will go straight to publishing.
Unless you have a paid npm account you can only publish public packages. In other words, packages that can be seen and used by everyone.
Let’s go to npm and check whether our package is there.
Ok let’s get back to the Angular app and let’s use our newly published npm package.
We need to modify the main app component’s ts file. Then we install our npm package using > npm i your-package-name
You can find a complete command on your npm account on your package details page.
In order to consume our package module with Angular we need to use the Node.js require function. We also need the type definitions for Node. These should be installed once you run the 'npm install' command in the terminal. To be sure you can run > npm i @types/node
For these definitions to be found we need to reference them in the tsconfig.app.json file. Note that it's tsconfig.app.json and not tsconfig.json.
Check how to do it for your Angular version. Mine is 11. To check the Angular version either run 'ng --version' or 'npm list --global --depth 0'
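To make the require pattern concrete, here is a self-contained sketch of the export/consume round trip. In the real apps the two halves live in separate packages and you would write const demoPackage = require('your-package-name') after installing it; the function here returns the string instead of logging it directly, so the sketch is easy to check.

```typescript
// Package side: same shape as the index.js entry point
// (exports.printMsg = function () { ... }), inlined here so it runs standalone.
const demoPackage = {
  printMsg: function (): string {
    return "This is a message from the demo package";
  },
};

// Consumer side: call the exported function and log its result,
// which is what shows up in the browser console when the Angular app runs.
console.log(demoPackage.printMsg());
```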
Let’s run the Angular app and see if it uses our npm package.
In the terminal run ‘ng serve -o’ command. This should open the app in the browser. Let’s hit F12/open developer tools and check in the console for our package message.
Next thing is let’s update our npm package and re-publish it.
For this we need to change the version by using the 'npm version 2.0.0' command. npm follows semantic versioning, more details here.
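As a quick illustration of semantic versioning, the sketch below shows how a caret range such as ^1.2.3 is resolved for versions at or above 1.0.0 (npm special-cases 0.x versions, which this sketch deliberately ignores):

```typescript
type Version = [number, number, number]; // [major, minor, patch]

// Does `version` satisfy the caret range ^base? For base >= 1.0.0 a caret
// range accepts any version with the same major number that is not older.
function caretSatisfies(version: Version, base: Version): boolean {
  const [major, minor, patch] = version;
  const [baseMajor, baseMinor, basePatch] = base;
  if (major !== baseMajor) return false; // a major bump signals breaking changes
  if (minor !== baseMinor) return minor > baseMinor; // later minor is fine
  return patch >= basePatch; // same minor: only equal or newer patches
}
```

So after 'npm version 2.0.0' an app depending on ^1.x would not pick up the new release until its range is widened.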
Let’s change the console.log message as well.
Then we need to use the command ‘npm publish’ again. You can check on your npm account if the package version was changed.
Next let’s update our package in the Angular app.
We can check if there are any outdated packages by using the 'npm outdated' command in the terminal of our Angular app.
There are multiple but for now we keep the focus on our npm package.
To update we will use 'npm i your-package-name@latest', or 'npm i your-package-name@1.0.0' to target a particular version.
Alternatively if you want to update all the outdated packages then use the 'npm update' command. If nothing gets updated then try 'npm update -dd', which will give you more details. Possibly you would need to adjust the maximum allowed version in the package.json file.
Let’s run the Angular app again and see if it uses the latest version of our npm package.
Same as previously ‘ng serve -o’, F12/developer tools and check the Console for the message.
Conclusion
As you can see, creating an npm package is straightforward. The npm documentation is clear and concise.
It is very handy to have regularly used pieces of code placed into a package. You can reuse it anywhere you want and you can share it. Plus it is easy to locate in case you need to modify it. Finally, you avoid code duplication and hence have less code. This is especially valuable for front-end apps.
Source code can be found here for npm package and here for Angular app.
In this post I will explain how to create a service bus topic, subscribe to it and consume it. This is quite similar to a post about Azure Service Bus Queue with MassTransit.
The case study would be around the POS(point of sale) system. Here is a Microsoft example that I have based this tutorial on.
The main difference is that a message in a Queue is consumed by one of possibly many competing consumers. It is always one message to one consumer though, a one-to-one relationship. Whereas with Topics the message is available to one or many subscriptions, and each subscriber can receive a copy of the same message. In a real world example these could be different users, different systems and so on.
As with the point of sale system example: you can have an Inventory management system that tracks when stock needs to be replenished, and Management can have a Dashboard to view details of their sales.
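The difference can be sketched in a few lines of toy code (illustrative only, not the MassTransit or Service Bus API): a queue hands each message to exactly one competing consumer, while a topic delivers a copy to every subscription.

```typescript
type Handler = (message: string) => void;

// Queue: competing consumers, each message goes to exactly one of them.
class ToyQueue {
  private consumers: Handler[] = [];
  private next = 0;
  addConsumer(handler: Handler) { this.consumers.push(handler); }
  send(message: string) {
    // round-robin pick of a single consumer
    this.consumers[this.next++ % this.consumers.length](message);
  }
}

// Topic: every subscription receives its own copy of the message.
class ToyTopic {
  private subscriptions: Handler[] = [];
  subscribe(handler: Handler) { this.subscriptions.push(handler); }
  publish(message: string) {
    this.subscriptions.forEach(handler => handler(message));
  }
}

// One purchase message: the queue delivers it once, the topic fans it out.
let queueDeliveries = 0;
const queue = new ToyQueue();
queue.addConsumer(() => queueDeliveries++); // inventory system
queue.addConsumer(() => queueDeliveries++); // sales dashboard
queue.send("new purchase");

let topicDeliveries = 0;
const topic = new ToyTopic();
topic.subscribe(() => topicDeliveries++); // inventory system
topic.subscribe(() => topicDeliveries++); // sales dashboard
topic.publish("new purchase");

console.log(queueDeliveries, topicDeliveries); // queue: 1 delivery, topic: 2
```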
What is required
Azure Subscription
Visual Studio
Net Core Web App
Let’s start by creating a Net Core App
Open Visual Studio and choose ASP.NET Core Web Application. Choose an API project template.
This project is going to be our Sender according to Microsoft resources or Producers according to Masstransit documentation.
In the same solution add a New Project and select Class Library (.NET Standard). This project will contain our Contracts. You can read a definition for it here or here.
The reason we add a .NET Standard class library is that it provides uniformity in the .NET ecosystem. In other words, if you have two projects on different .NET platform versions (.NET Framework 4.5 and .NET Core 3.1), thanks to .NET Standard they can share a class library without any issues.
Now we need to add multiple Consumers or Receivers. In essence this is where our message ends up. Or you can process it and send it on wherever you want.
Add multiple new web app projects to this solution and select Empty template. For me it’s easier to track Consumers with name to contain a number e.g. AsbMassNetCoreTopic.Consumer, AsbMassNetCoreTopic.Consumer1 etc
Next thing is to add several Nuget packages
We need to install several MassTransit related packages in addition to Azure Service bus.
Right click on the project and select Manage Nuget Packages. In that tab search for each of these packages and install them to Sender and Consumer projects.
Installed Nugets should look like this.
Next we'll add the code to publish and consume the message
In the Controllers folder of the Sender project add a new controller. I will call it PurchasesController. Next add the HttpPost method NewPurchase.
The action method should look like this.
// _publishEndpoint (IPublishEndpoint) and _random (Random) are injected/initialized in the controller
[HttpPost("new")]
public async Task<IActionResult> NewPurchase()
{
    var purchaseItems = new List<PurchaseItem>
    {
        new PurchaseItem
        {
            PurchaseItemId = Guid.NewGuid(),
            Timestamp = DateTime.UtcNow,
            Name = "Bus Mass Transformer High Spec",
            Amount = 1,
            Price = 100.00m
        }
    };

    await _publishEndpoint
        .Publish(
            new Purchase
            {
                PurchaseId = Guid.NewGuid(),
                PublicPurchaseId = $"Id_{_random.Next(1, 999)}",
                Timestamp = DateTime.UtcNow,
                PurchaseItems = purchaseItems
            }
        );

    return Ok();
}
We need to create a contract for Purchase. In the Contracts project add a new class and call it Purchase. Add all the properties that you require. I have also included a PurchaseItem contract. It's up to you how detailed you want to make it.
The result should look like so:
public class Purchase
{
    public Guid PurchaseId { get; set; }
    public string PublicPurchaseId { get; set; }
    public DateTime Timestamp { get; set; }
    public IEnumerable<PurchaseItem> PurchaseItems { get; set; }
}
And for the PurchaseItem
public class PurchaseItem
{
    public Guid PurchaseItemId { get; set; }
    public DateTime Timestamp { get; set; }
    public string Name { get; set; }
    public int Amount { get; set; }
    public decimal Price { get; set; }
}
Now it's time to modify the Startup file of the Sender project. This is the place where we will add the service bus and MassTransit, and register and define endpoints. An option to consider for production is a separate Startup extension specific to the Service bus and MassTransit setup. Anyway, let's keep things simple for now.
This is how it should look:
public void ConfigureServices(IServiceCollection services)
{
    var connectionString =
        "Endpoint=sb://servicebustestnetcore.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=blablasharedaccesskey";
    var newPurchaseTopic = "new-purchase-topic";

    // create the bus using Azure Service bus
    var azureServiceBus = Bus.Factory.CreateUsingAzureServiceBus(busFactoryConfig =>
    {
        busFactoryConfig.Host(connectionString);

        // specify the message Purchase to be sent to a specific topic
        busFactoryConfig.Message<Purchase>(configTopology =>
        {
            configTopology.SetEntityName(newPurchaseTopic);
        });
    });

    services.AddMassTransit
    (
        config =>
        {
            config.AddBus(provider => azureServiceBus);
        }
    );

    services.AddSingleton<IPublishEndpoint>(azureServiceBus);
    services.AddSingleton<IBus>(azureServiceBus);
    services.AddControllers();
}
To note, because we are using topics we need to use IPublishEndpoint rather than ISendEndpointProvider like you would for Queues.
We’re done with Sender for now, so let’s move to one of the Consumer projects.
Add a new class and call it PurchaseConsumer. Inside add this code.
public class PurchaseConsumer
    : IConsumer<Purchase>
{
    public Task Consume(ConsumeContext<Purchase> context)
    {
        System.Threading.Thread.Sleep(60000); // 60000 ms = one minute
        return Task.CompletedTask;
    }
}
The next important part is to add a HostedService that will start and stop our service bus.
Add a new class and call it BusHostedService.
public class BusHostedService
    : IHostedService
{
    readonly IBusControl _busControl;

    public BusHostedService(
        IBusControl busControl)
    {
        _busControl = busControl;
    }

    public async Task StartAsync(CancellationToken cancellationToken)
    {
        await _busControl.StartAsync(cancellationToken);
    }

    public async Task StopAsync(CancellationToken cancellationToken)
    {
        await _busControl.StopAsync(cancellationToken);
    }
}
As with the Sender project, the Consumer project's Startup also requires some code for Service Bus and MassTransit to work.
Note: Topic name should match between Sender and Consumer.
public void ConfigureServices(IServiceCollection services)
{
    var connectionString = "Endpoint=sb://servicebustestnetcore.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=blablasharedaccesskey";
    var newPurchaseTopic = "new-purchase-topic"; // need to make sure the topic name is written correctly
    var subscriptionName = "new-purchase-topic-subscriber";

    services.AddMassTransit(serviceCollectionConfigurator =>
    {
        serviceCollectionConfigurator.AddConsumer<PurchaseConsumer>();

        // Consumers - Receivers
        // Message Creators - Senders
        // would normally be in different applications
        serviceCollectionConfigurator.AddBus
        (registrationContext => Bus.Factory.CreateUsingAzureServiceBus
            (configurator =>
            {
                var host = configurator.Host(connectionString);
                configurator.Message<Purchase>(m => { m.SetEntityName(newPurchaseTopic); });

                /*
                For a consumer to receive messages, the consumer must be connected to a receive endpoint.
                This is done during bus configuration, particularly within the configuration of a receive endpoint.
                https://masstransit-project.com/usage/consumers.html#consumer */
                configurator.SubscriptionEndpoint<Purchase>(subscriptionName, endpointConfigurator =>
                {
                    endpointConfigurator.ConfigureConsumer<PurchaseConsumer>(registrationContext);
                });
            }
        ));
    });

    // need to always start the bus, so it behaves correctly
    services.AddSingleton<IHostedService, BusHostedService>();
}
Next we need to follow the same process of creating classes, modifying Startup files for the rest of the Consumer projects.
In my case I’ve got another two to do.
The only difference would be the value of the subscription name in the Startup file of each Consumer. I have just appended a number corresponding to the Consumer project number.
var subscriptionName = "new-purchase-topic-subscriber_1";
Next thing we need to create Azure Service Bus
For this we need to go to Azure Portal home. Select Create a Resource.
On the next screen choose Integration then in the right pane menu Service Bus.
When you are creating Service Bus the things to consider are Resource group, Location and Pricing tier.
For the Resource group either create a new one or select an existing one. If it's part of an existing group then you can manage it together with the other resources in that group.
Normally the Location should be local to you, as different regions/locations have different regulations, latency etc. You can either use the same region or pair them. For example, if your web app uses UK South, you would consider using UK South instead of East US for your service bus. If you are pairing then consider using UK South and UK West.
However there are no limits as to what region and location to use, as long as it meets your personal or business requirements. You can read more about regions here and here.
If you consider using different regions then check out this Microsoft article.
As for the tier, let's use the Standard pricing tier; the Basic tier does not support topics, so Standard is the simplest option that works here.
Note: Azure charges money for this tier, so make sure you use it accordingly and delete the service bus once you stop playing with it.
Once it’s successfully deployed we need to go and retrieve the connection string for our application. Go to the resource and select the service bus namespace.
On your service bus find Settings and select Shared access policies. Then select RootManageSharedAccessKey and copy the Primary Connection String.
We need to paste the whole string into the connectionString variable value in the Startup file of Sender and Consumer projects.
Next let’s test what we created
Since we have the Sender and Consumers in the same solution we need to set the solution to run multiple projects. Right click on the solution and select the radio button for Multiple startup projects. In the Action column dropdowns select which projects to run.
To simplify things disable SSL for the Sender and Consumer projects. Right click on the Sender project and select Properties. In the newly opened tab select Debug and find the Enable SSL checkbox. Untick it and save.
Run the solution.
Because we need to hit HttpPost method we would require some sort of API client tool. For this example let’s try out Postman.
Let's grab the URL of our Sender project and put it into the Postman tab with the HttpPost method type selected. We also need to append the controller route.
http://localhost:58852/api/purchases
It should return Status 200 OK if everything went ok.
Next let’s check our Azure Service Bus on Azure portal
When you hit the HttpPost endpoint a message is published to the topic with the provided name. The service bus creates the required number of subscriptions. Because we configured three consumers the portal should show three subscriptions.
Each subscription receives its own copy of the message that the Sender publishes.
Because we suspend the current thread for a minute, the subscription's Consumer processes each new message only once this time has passed. So if you hit the HttpPost method several times, all these messages are added to a particular topic subscription one by one. However, a message will start to process one minute after the moment it was added.
public Task Consume(ConsumeContext<Purchase> context)
{
    System.Threading.Thread.Sleep(60000); // Wait for one minute
    // by returning a completed task service bus removes the message from the topic
    return Task.CompletedTask;
}
By returning a completed task this message will be removed from the topic subscription.
Before we wrap up we need to tidy things up
This is especially the case because we are using the Standard pricing tier for this Service Bus namespace.
There are several options as to how to do it.
You can delete the Service Bus namespace on the page itself.
Another option is to delete a Resource group altogether.
The main idea to take away from here is that you can send the same message to multiple subscribers. You can also chain a subscriber with a queue, so that it processes messages the way you want and has all the characteristics and tooling of a queue.
In this post I will explain how to create a bus queue and consume it. The case study would be around the order system. Let’s imagine a system where you can create a new order for whatever reason in the industry you like. This could be an e-commerce website, an airplane or bus ticket ordering website.
What is required
Azure Subscription
Visual Studio
Net Core Web App
Let’s start by creating a Net Core App
Open Visual Studio and choose ASP.NET Core Web Application. Choose an API project template.
This project is going to be our Sender according to Microsoft resources or Producers according to Masstransit documentation.
In the same solution add a New Project and select Class Library (.NET Standard). This project will contain our Contracts. You can read a definition for it here or here.
The reason we add a .NET Standard class library is that it provides uniformity in the .NET ecosystem. In other words, if you have two projects on different .NET platform versions (.NET Framework 4.5 and .NET Core 3.1), thanks to .NET Standard they can share a class library without any issues.
Now we need to add a Consumer or Receiver. In essence this is where our message ends up. Or you can process it and send it on wherever you want.
Add a new web app project to this solution and select Empty template.
Next thing is to add several Nuget packages
We need to install several MassTransit related packages in addition to Azure Service bus.
Right click on the project and select Manage Nuget Packages. In that tab search for each of these packages and install them to Sender and Consumer projects.
Installed Nugets should look like this.
Next we'll add the code to send and consume the message
In the Controllers folder of the Sender project add a new controller. I will call it OrdersController. Next add the HttpPost method NewOrder.
The action method should look like this.
// _sendEndpointProvider (ISendEndpointProvider) and _random (Random) are injected/initialized in the controller
[HttpPost()]
public async Task<IActionResult> NewOrder()
{
    var sendEndpoint =
        await _sendEndpointProvider.GetSendEndpoint(
            new Uri("sb://servicebusqueuesnetcore.servicebus.windows.net/new-orders"));

    await sendEndpoint.Send(
        new Order
        {
            OrderId = Guid.NewGuid(),
            Timestamp = DateTime.UtcNow,
            PublicOrderId = _random.Next(1, 999).ToString()
        });

    return Ok();
}
We also need to add a contract. In the Contracts project add a new class and call it Order. Add all properties that you require. It should look like the example below.
public class Order
{
    public Guid OrderId { get; set; }
    public string PublicOrderId { get; set; }
    public DateTime Timestamp { get; set; }
}
Let’s go back to the Sender project and open the Startup file. In it we will add a service bus, MassTransit, register and define endpoints.
In there we need to edit the ConfigureServices method in order to add required services to the DI container.
This is how it should look.
public void ConfigureServices(IServiceCollection services)
{
    var connectionString =
        "Endpoint=sb://servicebusqueuesnetcore.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=blablasharedaccesskey";
    var newOrdersQueue = "new-orders";

    // create the bus using Azure Service bus
    var azureServiceBus = Bus.Factory.CreateUsingAzureServiceBus(busFactoryConfig =>
    {
        busFactoryConfig.Host(connectionString);

        // specify the message of Order object to be sent to a specific queue
        busFactoryConfig.Message<Order>(configTopology =>
        {
            configTopology.SetEntityName(newOrdersQueue);
        });
    });

    services.AddMassTransit
    (
        config =>
        {
            config.AddBus(provider => azureServiceBus);
        }
    );

    services.AddSingleton<ISendEndpointProvider>(azureServiceBus);
    services.AddSingleton<IBus>(azureServiceBus);
    services.AddControllers();
}
Next we need to perform a similar sort of changes in the Consumer project.
Add a new class and call it OrderConsumer. Inside let’s add some code.
public class OrderConsumer
    : IConsumer<Order>
{
    public Task Consume(ConsumeContext<Order> context)
    {
        System.Threading.Thread.Sleep(60000); // Wait for one minute
        // by returning a completed task service bus removes the message from the queue
        return Task.CompletedTask;
    }
}
Next let's add a HostedService that will start and stop our service bus. It's very important to start the service bus before attempting anything.
Add a new class and call it BusHostedService.
public class BusHostedService
    : IHostedService
{
    readonly IBusControl _busControl;

    public BusHostedService(
        IBusControl busControl)
    {
        _busControl = busControl;
    }

    public async Task StartAsync(CancellationToken cancellationToken)
    {
        await _busControl.StartAsync(cancellationToken);
    }

    public async Task StopAsync(CancellationToken cancellationToken)
    {
        await _busControl.StopAsync(cancellationToken);
    }
}
Next it's the Startup file's turn. Similar changes as for the Sender Startup: we need to configure the services that will be available through Dependency Injection.
Note: Queue name should match between Sender and Consumer.
public void ConfigureServices(IServiceCollection services)
{
    var connectionString = "endpoint of your service bus";
    var newOrdersQueue = "new-orders"; // need to make sure the queue name is written correctly

    services.AddMassTransit(serviceCollectionConfigurator =>
    {
        serviceCollectionConfigurator.AddConsumer<OrderConsumer>();

        // Consumers - Receivers
        // Message Creators - Senders
        // would normally be in different applications
        serviceCollectionConfigurator.AddBus
        (registrationContext => Bus.Factory.CreateUsingAzureServiceBus
            (configurator =>
            {
                configurator.Host(connectionString);

                /*
                For a consumer to receive messages, the consumer must be connected to a receive endpoint.
                This is done during bus configuration, particularly within the configuration of a receive endpoint.
                https://masstransit-project.com/usage/consumers.html#consumer */
                configurator.ReceiveEndpoint(newOrdersQueue, endpointConfigurator =>
                {
                    endpointConfigurator.ConfigureConsumer<OrderConsumer>(registrationContext);
                });
            }
        ));
    });

    // need to always start the bus, so it behaves correctly
    services.AddSingleton<IHostedService, BusHostedService>();
}
Next thing we need to create Azure Service Bus
For this we need to go to Azure Portal home. Select Create a Resource.
On the next screen choose Integration then in the right pane menu Service Bus.
When you are creating Service Bus the things to consider are Resource group, Location and Pricing tier.
For the Resource group either create a new one or select an existing one. If it's part of an existing group then you can manage it together with the other resources in that group.
Normally the Location should be local to you, as different regions/locations have different regulations, latency etc. You can either use the same region or pair them. For example, if your web app uses UK South, you would consider using UK South instead of East US for your service bus. If you are pairing then consider using UK South and UK West.
However there are no limits as to what region and location to use, as long as it meets your personal or business requirements. You can read more about regions here and here.
If you consider using different regions then check out this Microsoft article.
As for the tier, let's use the Standard pricing tier since it works out of the box and is easier to set up.
Note: Azure charges money for this tier, so make sure you use it accordingly and delete the service bus once you stop playing with it.
Once it’s successfully deployed we need to go and retrieve the connection string for our application. Go to the resource and select the service bus namespace.
On your service bus find Settings and select Shared access policies. Then select RootManageSharedAccessKey and copy the Primary Connection String.
We need to paste the whole string into the connectionString variable value in the Startup file of Sender and Consumer projects.
Next let’s test what we created
Since we have the Sender and Consumer in the same solution we need to set the solution to run multiple projects. Right click on the solution and select the radio button for Multiple startup projects. In the Action column dropdowns select which projects to run.
To simplify things disable SSL for the Sender and Consumer projects. Right click on the Sender project and select Properties. In the newly opened tab select Debug and find the Enable SSL checkbox. Untick it and save.
Run the solution.
Because we need to hit HttpPost method we would require some sort of API client tool. For this example let’s try out Postman.
Let's grab the URL of our Sender project and put it into the Postman tab with the HttpPost method type selected. We also need to append the controller route.
http://localhost:58852/api/orders
It should return Status 200 OK if everything went ok.
Next let’s check our Azure Service Bus on Azure portal
Go back to the Azure portal and open our service bus namespace. Every time you hit the HttpPost method a message is enqueued, and the consumer processes it when it becomes available. One of the message models of Azure Service Bus Queues is First In, First Out (FIFO).
As you can see our Queue has been added. If you click on it you can see some stats and the message count.
Because we suspend the current thread for a minute, the Consumer processes each new message only once this time has passed. So if you hit the HttpPost method several times, all these messages are added to our queue one by one. However, a message will start to process one minute after the moment it was added.
public Task Consume(ConsumeContext<Order> context)
{
    System.Threading.Thread.Sleep(60000); // Wait for one minute
    // by returning a completed task service bus removes the message from the queue
    return Task.CompletedTask;
}
By returning a completed task this message will be removed from the queue.
Before we wrap up we need to tidy things up
This is especially the case because we are using the Standard pricing tier for this Service Bus namespace.
There are several options as to how to do it.
You can delete the Service Bus namespace on the page itself.
Another option is to delete a Resource group altogether.
Conclusion
Azure Service Bus is a cloud messaging service. What is appealing is the simplicity of use and setup. What makes it even easier is the addition of MassTransit, a distributed application framework that abstracts away the transport.
By using these tools correctly you are able to achieve service decoupling with no sweat. There are other reasons to use it too. To name a few: application scaling, asynchronous processing and monitoring.
In this post I will explain how to set up logging for Net Core Web App with Serilog and logging messages to be stored in Azure Blob Storage.
What is required
Azure Subscription
Visual Studio
Net Core Web App
Let’s start by creating a Net Core App
Open Visual Studio and choose ASP.NET Core Web Application. Choose an API project template.
At the same time let's modify the generated controller. Make sure the ILogger interface is injected and stored in a readonly field that we can use within the controller methods. Add this line of code within the HttpGet method.
Right click on the project and select Manage Nuget Packages. In that tab search for Serilog.AspNetCore. This package is required to use the ILogger interface along with Serilog within your methods. Since ASP.NET Core has a built-in logging framework, this Nuget allows you to plug into the logging infrastructure.
Additionally we need Serilog.Sinks.AzureBlobStorage in order to write to the blob storage file. Serilog uses the sink concept for its various providers. A sink, in computing, is essentially a class that receives or consumes calls/incoming requests from another object.
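As a language-neutral illustration of the sink idea (toy code, not the Serilog API): the logger doesn't know where events end up; each configured sink receives every event and stores or forwards it in its own way.

```typescript
interface LogSink {
  emit(logEvent: string): void;
}

// A sink that keeps events in memory; a real sink might write to
// blob storage, a file, or the console instead.
class MemorySink implements LogSink {
  events: string[] = [];
  emit(logEvent: string) { this.events.push(logEvent); }
}

class ToyLogger {
  constructor(private sinks: LogSink[]) {}
  information(message: string) {
    // every configured sink receives every event
    this.sinks.forEach(sink => sink.emit(message));
  }
}

const memorySink = new MemorySink();
const toyLogger = new ToyLogger([memorySink]);
toyLogger.information("Application started");
```

Swapping or adding sinks changes where logs go without touching the logging call sites, which is exactly why Serilog structures its providers this way.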
Next we need to modify Program.cs file
The Serilog configuration requires a connection string from your blob storage to be provided as an argument. Plus we need to add UseSerilog() as an extension to the CreateHostBuilder method. This way we set Serilog as the logging provider.
public static void Main(string[] args)
{
    var connectionString = "DefaultEndpointsProtocol=https;AccountName=serilogblobloggingname;AccountKey=blablablakey;EndpointSuffix=core.windows.net";

    Log.Logger = new LoggerConfiguration()
        .WriteTo.AzureBlobStorage(connectionString)
        .CreateLogger();

    try
    {
        CreateHostBuilder(args).Build().Run();
        return;
    }
    catch (Exception ex)
    {
        Log.Fatal(ex, "Error while creating and building generic host builder object");
        return;
    }
    finally
    {
        Log.CloseAndFlush();
    }
}

public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        .UseSerilog()
        .ConfigureWebHostDefaults(webBuilder =>
        {
            webBuilder.UseStartup<Startup>();
        });
This is how the Program.cs file should look. For the connection string, we first need to create a storage account in Azure. You can find out how to do that below.
Next we need to create blob storage in Azure portal
On the next page click Add, and you will end up on the Create Storage Account form. For a storage account overview, check this article.
When filling out the Create Storage Account form, pay attention to the Performance, Account kind, Replication and Access tier fields, as these can affect the performance and running costs of your storage account.
Once it's created, go to the Settings section of this storage account and click on Access Keys. We need to get the connection string for our application.
Copy this connection string and go to our application's Program.cs. Find the section that has the connectionString variable and replace its value with your copied one.
var connectionString = "DefaultEndpointsProtocol=https;AccountName=serilogblobloggingname;AccountKey=blablaConnectionString;EndpointSuffix=core.windows.net";

Log.Logger = new LoggerConfiguration()
    .WriteTo.AzureBlobStorage(connectionString)
    .CreateLogger();
Next let’s test what we created
Let's run our application. Out of the box, it should hit the HttpGet method without any changes. You can double-check this in launchSettings.json, or simply look at the browser after running the app and make sure the URL ends with the weatherforecast path.
For me it’s going to be:
http://localhost:64861/weatherforecast
If everything is fine and you got your app running then we need to go to Azure portal to check the logs.
Next let’s check the logs on Azure portal
Go to Azure portal, Storage Accounts and find your storage account. On the overview page click on Containers.
On the next view you should see your container with logs.
Click on it and you should see a log.txt file; when you click on that, there is an option to Download it. Download it and check the results.
Awesome job!
Before we wrap up we need to tidy things up
Once you've finished playing with your Storage Account, it's better to delete it. You can delete it from its main page.
Or, if it's part of a Resource Group, you can delete the whole group.
Conclusion
Setting up logging to Azure Blob Storage with Serilog is just as easy as doing it with the Microsoft.Extensions.Logging.AzureAppServices NuGet package.
A few modifications to the code, an up-and-running storage account, and you are all set.
Of course, developers choose third-party libraries/platforms like Serilog, NLog and others for different reasons. Normally it's about the extended features and the simplicity of use that these provide, as well as the additional support in terms of documentation and code examples.
In this post we will add logging to our application so that it can be monitored with Azure App Service, using the built-in Microsoft.Extensions.Logging.AzureAppServices provider.
public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        .ConfigureLogging(logging =>
        {
            // To override the default set of logging providers added
            // by Host.CreateDefaultBuilder, call ClearProviders
            logging.ClearProviders();
            // add the required logging providers
            logging.AddAzureWebAppDiagnostics();
        })
        .ConfigureServices(serviceCollection =>
            serviceCollection // configure Azure Blob Storage
                .Configure<AzureBlobLoggerOptions>(options =>
                {
                    // set the last section of the log blob name
                    options.BlobName = "log08062020.txt";
                }))
        .ConfigureWebHostDefaults(webBuilder =>
        {
            webBuilder.UseStartup<Startup>();
        });
This is how the Program.cs file should look. When we set BlobName, it is only the last section of the name; the actual name in blob storage would look something like blabla12345_log08062020.txt
Next we need to edit generated controller code
If you have chosen the API template in the previous steps, then this part should be a one-liner. The controller should already have the ILogger interface injected into its constructor. Our task is to add a line in the HttpGet method that calls an ILogger method to write a log message.
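As a sketch, the one-liner inside the generated Get method could look like this; the message text is just an example, and _logger is the ILogger field injected by the template:

```csharp
[HttpGet]
public IEnumerable<WeatherForecast> Get()
{
    // the one line we add; the message will end up in the blob log
    _logger.LogInformation("WeatherForecast Get method was called");

    // ...the rest of the generated method stays unchanged
}
```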
Next let’s publish our application and create Azure Storage at the same time
For this we need to right-click on the project and select Publish. You should then see the Publish pop-up window. There, select Azure.
On the next pop-up select Azure App Service (Windows).
On the next window check your Subscription. Then click Create a new Azure App Service.
On the next window add a new Resource group and Hosting plan.
Once you have set up everything click Finish.
This is what you should see next.
We need to add Azure Storage as a dependency to our application, so click Add.
On the pop-up select Azure Storage.
And then Create a storage account.
On the next pop-up we need to select the resource group that we added in the previous steps when we created a new Azure App Service. It should be NetCoreAzureBlobLogging20200608171456Resource. This way we get both the web application and the storage in the same group, which helps to keep things organized: you can work with them together, delete them together, and so on.
For an Account type select Standard – Locally Redundant Storage. Locally redundant storage (LRS) copies your data synchronously three times within a single physical location in the primary region. LRS is the least expensive replication option, but is not recommended for applications requiring high availability.
Select our newly created storage instance and click Next.
Add a connection string name and leave the option to save the connection string as an Azure App Setting.
Finally, you should see something similar. In order to actually publish, you need to click the Publish button. Nothing complex, but sometimes you can get distracted by so many steps and forget to actually click it. But you are clever and won't get caught doing this.
Once it is successfully published, a web page should open. Alternatively, we can use the Site URL in a browser of your choice, or go to the Azure portal home, click on Resource Groups, find the group created in the previous steps and select the App Service for our web application.
Next step is to enable logging to Azure Blob Storage
On the Azure App Service page for our web application, go to the Monitoring section of the left pane menu. In there, select App Service logs. Switch on Application Logging (Blob), set Level to Information and Retention Period to 1, then click on Storage Settings.
In the next window select our blob storage account and create a container.
Let’s manually test what we got
For this you can use any API client or debugging tool, e.g. Postman, Fiddler, etc. However, in our case, because it's an HttpGet method, we can use any browser.
Put our Azure App Service web application URL into the address bar of a web browser, append /WeatherForecast (the name of our controller) and hit Enter, as shown below.
Next we need to go to the Blob Storage container section of Azure portal. You can find it either under the same Resource Group or you can access it through Azure portal home page.
From there click on Containers
Then you need to go all the way to our Log file location.
There should be a Download button.
Once the download has completed you can inspect the Log text file and see our logged message from the controller.
Before we wrap up we need to tidy things up
It is a good habit to stop or delete the services you created once you finish playing with them. This way you prevent any service charges. Go back to the Resource Groups section of the Azure portal and select yours; on the details page of that resource group you can delete it together with all the services under it.
Conclusion
As you have noticed, a basic logging setup really only takes a few steps. Again, it is mostly downloading a particular NuGet package, adding a few lines of code to the Program or Startup files, creating an Azure storage account, and there you go!
If you want something tangible that can be applied to real-world applications, I recommend looking into structured logging.
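For a taste of what structured logging looks like, here is a minimal sketch; orderId and elapsedMs are hypothetical variables used only for illustration:

```csharp
// Named placeholders in the message template become first-class
// properties of the log event, so a structured sink can index and
// query them instead of storing one flat string.
_logger.LogInformation(
    "Order {OrderId} processed in {ElapsedMs} ms", orderId, elapsedMs);

// Compare with plain string interpolation, which loses the structure:
// _logger.LogInformation($"Order {orderId} processed in {elapsedMs} ms");
```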
In this post I will explain how to set up logging so that output messages show in the console. There will be two options: one with a Console Application, and the other with a Web Application that is run as a project.
What is required
Visual Studio
.NET Core Console App
.NET Core Web App
Let's start by creating a .NET Core Web App
Open Visual Studio and choose ASP.NET Core Web Application.
Add Project name and location for this application.
Then select API as the project template.
Next step is to edit Program.cs file
We need to edit the code that builds the host object. In essence, this is the place where everything gets set up, i.e. DI, logging, configuration and IHostedService implementations.
For this tutorial we are interested in Logging only.
public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        .ConfigureLogging(logging =>
        {
            // override the default set of logging providers added by Host.CreateDefaultBuilder
            logging.ClearProviders();
            // add the required logging providers
            logging.AddConsole();
        })
        .ConfigureWebHostDefaults(webBuilder =>
        {
            webBuilder.UseStartup<Startup>();
        });
This is how your Program.cs should look.
Now let’s see what we made here
Before running the application for the first time, we need to edit the Debug profile. First set it to our project name NetCoreLoggingConsole, and then set it to launch as Project.
Now, if we run/start debugging the application, we should see a Console Application window appear. On startup it hits the Get method of the controller, and as a result the logger logs an Information-level message, which then appears in the console window. Note that you need to make sure the ILogger interface is injected into the constructor of the controller.
A closer look.
We can achieve the same result in the Console Application
Let's add a new Console Application project to this solution. Right-click on the solution name.
Select Console App (.NET Core).
Let's configure the necessary bits to add logging to this application. For that we need to install the Microsoft.Extensions.Logging.Console NuGet package.
After the NuGet package is installed, let's open Program.cs for this new project and add some code inside the Main method.
// add these usings at the top of Program.cs:
// using Microsoft.Extensions.DependencyInjection;
// using Microsoft.Extensions.Logging;

// .NET Core default DI container
var serviceProvider = new ServiceCollection()
    .AddLogging(builder => builder
        .AddConsole()
        .SetMinimumLevel(LogLevel.Debug))
    .BuildServiceProvider();

// create a logger instance
var logger = serviceProvider
    .GetService<ILoggerFactory>()
    .CreateLogger<Program>();

logger.LogInformation("CONSOLE -> Main method");

Console.WriteLine("Hello World!");
Console.ReadLine();
This is how it should look once everything is in place.
The next step is to set the solution to start multiple projects.
There you go, a closer look.
As a result we should see two Console Application windows: if you remember, one for the Web Application that is run as a project, and one for the Console Application.
And closer look.
Conclusion
Logging to the console can be handy. Normally you would develop small programs for practice purposes; however, with .NET Core you can implement more advanced stuff like DI, a host object, configuration, etc.
As a result, you can have a Console Application based microservice to do a particular job. It is straightforward to set up, and you can quickly develop applications following this pattern.
What is great is that you can add just the necessary pieces and develop around them.
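As a sketch of that pattern, here is a minimal console "microservice" built on the generic host. It assumes the Microsoft.Extensions.Hosting NuGet package, and TimedWorker is a hypothetical example service:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;

// A background service that does a unit of work on a timer.
public class TimedWorker : BackgroundService
{
    private readonly ILogger<TimedWorker> _logger;

    public TimedWorker(ILogger<TimedWorker> logger) => _logger = logger;

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            _logger.LogInformation("Doing a unit of work");
            await Task.Delay(TimeSpan.FromSeconds(5), stoppingToken);
        }
    }
}

public class Program
{
    public static Task Main(string[] args) =>
        // the generic host wires up DI, logging and configuration,
        // just like in the web application examples above
        Host.CreateDefaultBuilder(args)
            .ConfigureServices(services => services.AddHostedService<TimedWorker>())
            .Build()
            .RunAsync();
}
```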