An evening on Thursday, August 8th, 2019, well spent with people who appreciate knowledge and aspire to learn. Thanks again to City University of Seattle, Clark Jason Ngo, and Sam Chung for having me talk about "Microservices Architecture".


No Rehearsal, No Re-Take. This past weekend, I recorded a podcast on "Microservices Architecture" with host Mario Gerard. The podcast is live now; it has three parts with great detail and insight. #microservices #softwarearchitecture #mvpbuzz

We talk about:

  1. What is a microservice?
  2. Advantages of microservices
  3. Monolithic applications vs. microservices
  4. Why is there a renewed uptick in organizations using microservices?
  5. Pain points of microservices
  6. How do you break down monolithic applications into microservices?
  7. Microservices advanced topics

Part I – https://www.mariogerard.com/tpm-podcast-vidya-agarwal-on-microservices-part-i/
Part II – https://www.mariogerard.com/tpm-podcast-vidya-agarwal-on-microservices-part-ii/
Part III – https://www.mariogerard.com/tpm-podcast-vidya-agarwal-on-microservices-part-iii/

Microservice Architecture

May 13th, 2019 | Posted by Vidya Vrat in .NET | Architecture

Abstract

The microservices architecture style is an evolution of the monolithic and SOA (Service-Oriented Architecture) styles. The difference between SOA and the microservice approach lies in how services are developed and operationalized. With every new technology we adopt, our responsibility also grows: we must stay abreast of the pros and cons the new member brings, and of the pain points it is designed to solve.

Monolith

Think of any MVC pattern-based API codebase where all your controllers and POJOs (Plain Old Java Objects) or POCOs (Plain Old C# Objects) are developed, built, and deployed as a single unit, and where, almost always, a single data store serves the entire enterprise, i.e., one database houses all the tables for the various responsibilities, for example Customer, Payment, Order, Inventory, Shipping, Billing, etc., as shown in the logical architecture diagram below. In other words, all the various responsibilities live together. A minimal sketch of this shape follows the diagram.

Monolith architecture
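
To ground the description, here is a minimal sketch of the monolith shape described above. It is not from the original article; the EnterpriseDbContext, the entity, and the controller are illustrative assumptions. The point is that every business capability shares one codebase and one database, so everything ships together.

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.EntityFrameworkCore;

public class Order
{
    public int Id { get; set; }
    public decimal Total { get; set; }
}

// One database houses the tables for every responsibility:
// Customer, Payment, Order, Inventory, Shipping, Billing, ...
public class EnterpriseDbContext : DbContext
{
    public DbSet<Order> Orders { get; set; }
    // plus Customers, Payments, Inventory, Shipping, Billing, etc.,
    // all coupled to the same schema.
}

// OrderController, PaymentController, ShippingController, ... are all
// built and deployed as a single unit.
[ApiController]
[Route("api/[controller]")]
public class OrderController : ControllerBase
{
    private readonly EnterpriseDbContext _db;
    public OrderController(EnterpriseDbContext db) => _db = db;

    [HttpGet("{id}")]
    public async Task<ActionResult<Order>> Get(int id)
    {
        var order = await _db.Orders.FindAsync(id);
        if (order == null) return NotFound();
        return order; // changing this controller means redeploying them all
    }
}
```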

Monolith Pros:

• Fewer Cross-Cutting Concerns: Being a monolith with all the responsibilities together, a single implementation (or a few) can cover all the major cross-cutting concerns such as security and logging.

• Less Operational Overhead: Having one large monolith application means there's only one application you need to set up logging, monitoring, and testing for. It's also generally less complex to deploy, scale, secure, and operationalize.

• Performance: There can also be performance advantages since shared-memory access is faster than inter-process communication (IPC).

Monolith Cons:

• Tightly Coupled: Monolithic app services tend to get tightly coupled and entangled as the application evolves, making it difficult to isolate services for purposes such as independent scaling or code maintainability.

• Harder to Understand: Due to its many dependencies, a monolithic architecture easily becomes harder to understand.

• Deploy all or none: When a new change needs to be pushed, all the services must be deployed together, i.e., if something changed only in OrderController and you want to proceed with deployment, all the other controllers and code get deployed as well.

• Scale all or none: Whether you scale up or scale out, it applies to the entire application, not just the functionality that needs it.

Microservice

Gartner defines a microservice as "a small, autonomous, tightly scoped, loosely coupled, strongly encapsulated, independently deployable, and independently scalable application component." Microservices have a key role to play in distributed system architecture, and they have brought a fresh perspective.

Unlike a monolith, a microservice strictly follows the Single Responsibility Principle (SRP), being tightly scoped around a business capability/domain, for example Payment. Think of an MVC pattern-based API codebase where a controller and its POJOs (Plain Old Java Objects) or POCOs (Plain Old C# Objects) are developed, built, and deployed for just one single responsibility, i.e., one business capability. Microservice architecture then leads to having many such projects, with each business capability owning its own database, as the sketch below illustrates.

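Here is a matching hedged sketch of a tightly scoped Payment service; again, it is not from the original article, and the class and route names are assumptions. The controller, the POCO, and the data store all belong to one business capability, so this service can be deployed and scaled on its own.

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.EntityFrameworkCore;

public class Payment
{
    public int Id { get; set; }
    public decimal Amount { get; set; }
}

// The Payment service owns its data store; no other capability's
// tables live in this database.
public class PaymentDbContext : DbContext
{
    public DbSet<Payment> Payments { get; set; }
}

[ApiController]
[Route("api/payments")]
public class PaymentController : ControllerBase
{
    private readonly PaymentDbContext _db;
    public PaymentController(PaymentDbContext db) => _db = db;

    [HttpPost]
    public async Task<ActionResult<Payment>> Create(Payment payment)
    {
        _db.Payments.Add(payment);
        await _db.SaveChangesAsync();
        // Deployable and scalable independently of Order, Shipping, Billing, etc.
        return CreatedAtAction(nameof(GetById), new { id = payment.Id }, payment);
    }

    [HttpGet("{id}")]
    public async Task<ActionResult<Payment>> GetById(int id)
    {
        var payment = await _db.Payments.FindAsync(id);
        if (payment == null) return NotFound();
        return payment;
    }
}
```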

Microservice Pros:

  • Easier Deployments: A monolith deploys all or none. A microservice is a small service, so the dev team has complete control over what gets deployed while leaving other features' code untouched, i.e., if changes are made to the Payment service, only Payment is deployed. This was not possible with a monolith.
  • Scalability: Being small, tightly scoped services by design gives you the freedom and flexibility to scale whichever service you want. For example, if needed, only Payment can be scaled up/down/out.
  • Easier Maintainability: Each microservice is autonomous, and being a small service it's much easier to maintain than a monolith.
  • Problem Isolation: Each service has its own codebase and is deployed in its own space. Hence, any problem in one service remains completely isolated from the others.
  • Single Responsibility: The Single Responsibility Principle is at the core of microservice design. This drives all the other goodness microservice architecture offers.
  • Separation of Concerns: Microservices architecture leads towards building tightly scoped services with distinct features and zero overlap with other functions.
  • Deep Domain Knowledge: Microservices encourage the product mindset and lead towards establishing deep domain knowledge with the "you build it, you run it" mantra.

Microservice Cons:

  • Cultural Changes: Adopting microservice architecture requires a cultural shift and organizational alignment, and the organization needs a mature agile and DevOps culture. With a microservices-based application, the service team must be enabled to manage the entire lifecycle of a service. Read my article on STOSA (Single Team Owned Service Architecture) below.
  • More Expensive: Microservice architecture leads to growing costs, as there are multiple services. Services need to communicate with each other, resulting in a lot of remote calls. These remote calls carry higher network-latency and processing costs than a traditional monolith architecture.
  • Complexity: Microservices architecture is by nature more complex than a monolith architecture. The complexity of a microservices-based application is directly correlated with the number of services involved and the number of databases.
  • Less Productivity: Microservice architecture is complex, and that increased complexity impacts productivity. The microservice development team usually needs more time to collaborate, test, deploy, and operationalize each service.

Summary

Microservice and monolith architectures both have their pros and cons. Microservice architecture is not a silver bullet, but for teams that aspire to digital transformation and to writing services with a product mindset, it is certainly worth considering.

Single Team Owned Service Architecture (STOSA) is a guiding principle for large organizations that have many development teams that build and own microservices to cater to one or more enterprise-wide application(s).

A Microservice must be owned and maintained by a single development team within your organization.

To be a true STOSA-based team/organization, you must meet the following criteria:

  • Have application(s) developed using a service (monolith/microservice) architecture.
  • Have multiple development teams that are responsible for developing and supporting the application.
  • Each service is assigned to only one development team.
  • A development team may own more than one service.
  • Teams are responsible for all aspects of managing the service, from service architecture and design through development, testing, deployment, monitoring, and incident resolution.

Typically, a STOSA-based organization follows the above-mentioned criteria, and each service team is of a reasonable size (typically three to seven developers, also known as a two-pizza team). If a team is too small, it cannot manage a service effectively. If it's too large, the team itself becomes cumbersome to manage.

Non-STOSA-based organization

The image above represents a non-STOSA organization, and you'll notice a couple of things. First, the Shipping service does not have an owner. Second, services such as Order and Billing are owned and maintained by more than one team (Team1, Team2, and Team3). There is no clear ownership. If you need something done in the Order and/or Billing service, it's not clear which team is responsible. If one of those services has a problem, who responds? Who do you contact for the Shipping service? This lack of clear ownership and responsibility makes managing a complex application even more complicated.

STOSA based organization

The ovals represent development teams that own the enclosed services; for example, Team1 owns Payment, and Team4 owns Shipping and Address. All six services are managed by four teams. You'll notice that each service is managed by a single team, but a team may manage more than one service, for example Team3 and Team4. The main point is that every service has an owner, and no service has more than one owner. Clear ownership exists for every aspect of the application. For any part of the application, you can clearly determine who is responsible and who to contact for concerns, clarifications, questions, issues, or even changes.

Advantages of a STOSA Application and Organization

Clear and definitive ownership of a service is key to the success of any application or organization. As applications grow, they grow in complexity. A STOSA-based application can grow larger and be managed by a larger development team than a non-STOSA-based application. As such, it can scale much further while still maintaining well-documented, supportable interfaces within the established SLAs, all due to well-defined ownership.

A STOSA-based organization can handle larger and more complex applications than a non-STOSA organization. This is because STOSA shares the complexity of a system across multiple development teams effectively while maintaining clear ownership and lines of responsibility.

Seattle Code Camp 2018

September 19th, 2018 | Posted by Vidya Vrat in Architecture | Community

This was my third year in a row: on Saturday, September 15, I gave two sessions at Seattle Code Camp 2018.

Session slides are available on my GitHub.

The first session was on "How to become a Software Architect". Many aspiring and practicing architects attended my session and asked various questions about the architect career path.

The second session was on "Microservices: what, why and how?". This session also went very well, and I ran out of time to address all the questions related to individuals' architecture designs and the problems they are facing. Hence, a few topics were discussed after the session in the lobby.

What some of the participants said

I look forward to Seattle Code Camp 2019.

What is Live Unit Testing?

Live Unit Testing is a brand-new technology available in Visual Studio 2017 version 15.3 or above. Live Unit Testing enables the IDE to execute unit tests automatically, in real time, as you make changes to the code and without the code being built.

Which frameworks and tooling support Live Unit Testing?

Live Unit Testing is available for C# and Visual Basic projects using the MSTest, NUnit, or xUnit test frameworks, targeting .NET Core or the .NET Framework, in the Enterprise edition of Visual Studio 2017.

Problem Statement

Unit tests and code have a very close relationship and depend heavily on each other, i.e., a change in the source code impacts the unit test(s) covering what is being tested. Similarly, a change in a unit test impacts the code coverage of the source code being exercised by that test.

In light of the problem statement above, traditional unit testing offers no efficient, developer-friendly solution. Validating code changes through unit tests quickly becomes tedious, as you must run all the unit tests from Test Explorer after every code change.

Benefits of Live Unit Testing

It empowers developers to refactor and change their code with greater confidence. Live Unit Testing graphically depicts code coverage in real time, providing a quick visual view of the coverage and of which code statements are covered by passing unit tests. True to its name, you don't have to build the code and run the tests again via Test Explorer to validate changes, i.e., as soon as code changes are made, the unit tests are automatically executed to reflect the impact (pass, fail).

Quick look at a traditional Unit Test

As explained in my previous article Inside Out: TDD using C#, there is always a system under test, i.e., code that needs to be exercised via unit tests. That code is available on my GitHub and will work just fine for both unit testing and Live Unit Testing (if you have the required Visual Studio 2017).

The code below shows the unit tests written for the Account.cs class's IsAccountActive() function.
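
The original post showed these tests as a screenshot. Since that image isn't reproduced here, the following is a minimal sketch of what such MSTest tests might look like. Only TestAccountStatus_Active_Success() is named in the article; the other two test names, the AccountStatus enum, and the test bodies are assumptions, paired with the Account sketch shown later in this article.

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class AccountTest
{
    // Named in the article: an active account should report active status.
    [TestMethod]
    public void TestAccountStatus_Active_Success()
    {
        var account = new Account { Status = AccountStatus.Active };
        Assert.IsTrue(account.IsAccountActive());
    }

    // Hypothetical second test: an inactive account is not active.
    [TestMethod]
    public void TestAccountStatus_Inactive_Fail()
    {
        var account = new Account { Status = AccountStatus.Inactive };
        Assert.IsFalse(account.IsAccountActive());
    }

    // Hypothetical third test: a closed account is not active.
    [TestMethod]
    public void TestAccountStatus_Closed_Fail()
    {
        var account = new Account { Status = AccountStatus.Closed };
        Assert.IsFalse(account.IsAccountActive());
    }
}
```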

What is being tested?

As shown in the image above, the Account.cs class's IsAccountActive() method is being exercised, i.e., Account.cs is our system under test.

Let's begin with Red, Green, Refactor

This code showcases Test-Driven Development, i.e., write a failing test, then write code to make that test pass, and so on.

First, let's write/refactor code to make the TestAccountStatus_Active_Success() test case pass. To do so, copy the TestCase#1 code snippet from TextFile1.txt (included in the UnitTestBankApplication at my GitHub).

Account.cs code when TestAccountStatus_Active_Success() was failing:

Account.cs code changes to make TestAccountStatus_Active_Success() pass:
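
The two code states above were shown as screenshots in the original post. As a stand-in, here is a hedged sketch of what the failing and passing versions of Account.cs might look like, continuing the assumed AccountStatus enum from the test sketch above; in true TDD style, the first refactor does only the minimum needed to turn TestAccountStatus_Active_Success() green.

```csharp
public enum AccountStatus { Active, Inactive, Closed }

public class Account
{
    public AccountStatus Status { get; set; }

    // Failing version (red): the behavior isn't implemented yet,
    // so every test on IsAccountActive() fails.
    // public bool IsAccountActive() => throw new System.NotImplementedException();

    // Passing version (green) for Test Case #1: the simplest code that
    // makes TestAccountStatus_Active_Success() pass; the other two
    // assumed tests still fail at this point.
    public bool IsAccountActive()
    {
        return true;
    }
}
```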

Now, if you run the tests again by clicking the "Run All" option in Test Explorer, you will observe that one of the three test cases is passing, as shown in the image below.

But what exactly is the problem here?

Neither the unit test code nor the source code being tested explicitly tells you which code lines are covered, not covered, or passing under the unit tests, i.e., whenever a developer changes the code, the test cases need to be run over and over to identify the impact, and that is without any visual clue in the code files. Hence, with traditional unit testing, a developer can't clearly tell which code statements are covered, or not covered, by passing or failing unit tests.

Turn on Live Unit Testing

Navigate to the Test menu, expand Live Unit Testing, and then click on Start, as shown in the image below.

If you don't see this option, then you might not have Visual Studio 2017 Enterprise edition, which is the version that offers the Live Unit Testing feature. However, if you have Visual Studio 2017 Enterprise edition and you still don't see this option, you can add the Live Unit Testing tool by re-running the Visual Studio installer and selecting it from the Individual components tab.

Live Unit Testing in action

As soon as you have started Live Unit Testing, your IDE instantly starts tracking code changes in the background, as shown by the ✖, ✔, and ➖ symbols to the left of the code statements in the IDE, as shown in the image below.

As of now, there is only one test passing. Let's take the Test Case #2 code from TextFile1.txt

and put it in the Account.cs class's IsAccountActive() function. Soon after putting the code in, without building the code explicitly, you will observe the changes and notice that two tests now pass. Alongside, you will observe the changes to the ✖, ✔, and ➖ symbols on the left of the code lines.

Now, you may want to work on passing the third unit test, and to make that happen per TDD guidelines, you should refactor the code logic to accomplish that. Put the Test Case #3 code from TextFile1.txt into the Account.cs class's IsAccountActive() function.
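
Continuing the hedged sketch from above (the actual snippets live in TextFile1.txt in the GitHub repo), the final refactor might look like this, with the status check now handling all three assumed cases:

```csharp
public class Account
{
    public AccountStatus Status { get; set; }

    // Final version (green for all three tests): only an Active status
    // reports the account as active, so Inactive and Closed return false.
    public bool IsAccountActive()
    {
        return Status == AccountStatus.Active;
    }
}
```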

As soon as the code is refactored, again without any explicit build or running tests from Test Explorer, Live Unit Testing observes the code changes and runs the tests. As shown in the image below, all three unit tests now pass, and all code lines in the Account.cs class's IsAccountActive() show the ✔ symbol on their left. The ✔ symbol means that all code statements in that function or block are covered by passing unit tests.

Now, what’s next?

As you can see in the image above, towards the bottom half, the ➖ symbol appears in front of the other code blocks, which means those code blocks are covered by zero tests, i.e., no unit test case(s) exist for them. You may want to take it to the next level by taking the code from my GitHub repo and expanding it.

Pause or Stop Live Unit Testing

Once Live Unit Testing has been started, it can be either paused or stopped. Navigate to the Test menu, select Live Unit Testing, and then choose either Pause or Stop, as shown in the image below. Based on the choice you make, Live Unit Testing will be either paused or stopped completely.

Excluding individual Unit Tests from Live Unit Testing

Ideally, all the tests in all the projects will be covered once Live Unit Testing has been started.

But you can use the following attributes to specify in the source code that you want to exclude targeted test methods from Live Unit Testing:

  • For MSTest: [TestCategory("SkipWhenLiveUnitTesting")]
  • For xUnit: [Trait("Category", "SkipWhenLiveUnitTesting")]
  • For NUnit: [Category("SkipWhenLiveUnitTesting")]
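
For example, here is a minimal hedged sketch using MSTest; the class and method names are hypothetical, and only the attribute comes from the list above:

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class AccountIntegrationTest
{
    // Excluded from Live Unit Testing runs, but still runnable on demand
    // from Test Explorer.
    [TestMethod]
    [TestCategory("SkipWhenLiveUnitTesting")]
    public void TestAccountStatus_AgainstRealDatabase()
    {
        // long-running or environment-dependent logic that shouldn't
        // execute on every code change
    }
}
```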

Include/Exclude Unit Test Projects or Test class files

Live Unit Testing works at the IDE level, covering all test projects and all class files that are part of a test project. To include or exclude either a test project or a test class file, use the context menu and choose Live Unit Testing > Include or Exclude. The image below shows these options on the UnitTestBankApplication test project; you may want to try them on a test class file as well.

Wrapping up

Live Unit Testing appears to be a milestone in the fast-paced, technology-heavy software development field. It certainly doesn't take away the importance of knowing unit test fundamentals; rather, it reinforces the TDD (Test-Driven Development) mindset in an effective and productive manner for the developer community, by showing visual clues in the IDE code window and running the tests while a developer is refactoring the code.

I THANK all my readers and followers in the tech community. My community contributions via www.C-SharpCorner.com have reached a 6 million read count. https://www.c-sharpcorner.com/members/vidya-vrat-agarwal

Recap

The C# Corner Redmond chapter has been organizing a dinner event during the week of the Microsoft MVP Summit. This year it was scheduled on March 6th, 2018 for all the C# Corner MVPs, invited Microsoft MVPs, and a few other guests.

67 C# Corner MVPs from across the globe registered for this event (https://www.c-sharpcorner.com/events/c-sharp-corner-mvp-dinner-microsoft-mvp-summit-2018), and many e-mailed me that they would attend.

This year, we had over 125 guests, including some esteemed guests like Scott Hanselman (who doesn't require an introduction), Mads Torgersen (C# language PM at Microsoft), Kerry Herger (CPM Manager), and Joe Darko (CPM USA).

Moments

Moksha Indian Restaurant – Bellevue WA, USA

Moksha Indian Restaurant – Bellevue WA, USA

C# Corner and Microsoft MVPs with Scott Hanselman

Magnus, Mahesh, and Mads Torgersen (C# Language PM, Microsoft)

 

Objective

  • Set up a git repository in VSTS
  • Build a Continuous Integration (CI)/Continuous Deployment (CD) pipeline from scratch using VSTS
  • Enable CI
  • Create a free web app service on Azure
  • Trigger CD after successful CI
  • Execute the end-to-end flow

Abstract

Continuous Integration (CI)/Continuous Deployment (CD) are widely used terms in the DevOps world. If you are new to, or don't fully understand, the buzzword DevOps, then read my article DevOps, What, Why & How. Continuous Integration (CI) and Continuous Delivery (CD) enable and empower any software delivery team in an organization to continuously deliver value to their end users.

Prerequisites

The code of the application you want to set up a CI/CD pipeline for must be available in a code repository in VSTS (Visual Studio Team Services); my article Beginning with git using VSTS can be helpful for setting up a git repo from scratch using VSTS. Also, an Azure portal account is a must.

Creating a Build Definition

After successfully setting up the project and code in git, your VSTS account will show a code repository and a project as shown below. Here, the repository named TestProjects has an ASP.NET MVC application named WebApplication1.

Click on "Set up build" and begin by selecting the "ASP.NET" template as shown in the image below, then click Apply. You may want to choose a template that is suitable for your application.

Setting the build tasks

If it is not already open, open the Tasks tab. By default, the build definition name will be <ProjectName>-<BuildTemplateName>-CI (you can change this to something else if you wish) and the Agent queue will be empty; it needs to be set to Hosted (for .NET) or Hosted 2017 (for .NET Core), as shown under the Process blade in the image below.

By default, the build is set up against the master branch of the project you initiated "Set up build" from, which was TestProjects. You can verify this and change the settings if needed using the "Get sources" blade, as shown in the image below.

The build template comes pre-configured with most of the tasks, so you can review the other tasks, but no changes should be required for successful execution from this step onwards (verified for an MVC application); in your case, however, you may have to tweak some settings.

Queuing a CI build

To produce deployable artifacts, you need to submit a build of your code. To do so, click on Save & queue, or if the build definition is already saved, click on Queue, which will ask for confirmation to kick off a build as shown in the image below.

Click on Queue to initiate the process, and you will notice that your screen shows that a Build #yyyymmdd.n has been queued.

To track the status of a build, click on the build number link (as shown in the image above); if everything was set up correctly, you will see "Build succeeded" as shown in the image below, with all the build steps executed successfully. To explore each build step further, click on a step link in the left pane of the window.

Exploring the Build Artifacts

A CI build produces deployable artifacts, and those can be seen by clicking on the "Artifacts" link shown above the build details.

Setting up the Continuous Integration (CI)

Continuous Integration (CI) means the code is built on each check-in so that it can be deployed to an environment. To set up this process, navigate to the build definition (click on Edit) and open the "Triggers" tab. By default, continuous integration is Disabled, as shown in the image below.

Click on "Enable continuous integration" to enable CI for TestProjects; it will include the master branch as the default setting.

Click "Save" to keep the changes to the build definition. Do not choose "Save & queue".

Continuous Integration (CI) in Action

Ensure that VSTS is open with the Builds page showing. Now switch to Visual Studio and update the code of WebApplication1 under the TestProjects repository.

Click on "Commit All" to check in the changes and push the code to master. Soon after, under Builds in VSTS, you will notice that another build has kicked off and shows an "in progress" status, triggered by the commit "Title update in About.cshtml".

After a successful build process, you will see the "succeeded" status.

Setting up the Azure Web App

Before creating a Release definition, an environment needs to be set up for deployment. An Azure web app is a perfect choice to host the MVC application.

Log in to the Azure portal and create a web app "WebApplication1-dev" for the dev environment; other flavors (qa, uat, staging, production, etc.) can be created as needed.

 

By default, the web app is created in the S1 (small) pricing tier, but you can create your own service plan and select the Free pricing tier. Refer to my article Free Web App Hosting on Azure.

After successful Web app creation, it will be available at https://webapplication1-dev.azurewebsites.net/

Setting up the Continuous Delivery (CD)

Continuous Delivery (CD) is a software engineering approach in which teams produce deployable software which can be reliably released to an environment (dev, qa, uat, staging, and production) at any time.

To set up CD, click on Build and Releases, then select Releases. Click on "New definition".

A "New Release Definition" will be created with the release template as shown in the image below. Change the name of the release to "TestProjects-ASP.NET-CD", then click on Artifacts and select TestProjects-ASP.NET-CI, because the CI build artifacts will be deployed by the Release definition as the CD process.

Next, it's time to set up the environment: rename it to dev and set the values under the "Tasks" tab as appropriate.

Creating a Release

After a Release definition has been set up, it's time to deploy the CI build using the created release. This can easily be achieved by clicking on the "Create release" link in the top-right corner.

"Create release" will summarize the setup, showing the dev environment and the CI builds to choose from. By default, a CI build will automatically kick off the release to the dev environment.

Select the latest build from the list of successful build versions, then click "Create" and notice that "Release-1" has been created.

Click on the "Release-1" link to check the status, which will be shown as "IN PROGRESS".

After a few seconds, the deployment status will be updated to "SUCCEEDED".

Now, if you access https://webapplication1-dev.azurewebsites.net, you will see that the web application has been successfully deployed to the Azure web app.

Setting up end-to-end automated CI/CD pipeline

In the Release definition, click on the Artifacts trigger icon (lightning symbol), enable the continuous deployment trigger, and select the branch.

Now, go to Visual Studio, make some changes to About.cshtml, and check in the code to the master branch of the WebApplication1 project, which will then kick off a CI build.

A successful CI build will then kick off the Release for CD to the dev environment.

Upon successful execution of the Release, the website will be deployed as shown below (note the updated message on the About.cshtml page).

From this point onwards, whenever code changes are made in Visual Studio, a CI build will spin up, and upon successful build execution a Release (CD) will be created to deploy the web app.

Summary

CI/CD is one of the most widely used terms in the industry today and helps deliver continuous value to end users. VSTS and Azure offer many great and powerful features for building a CI/CD pipeline. This article has shown the process of building a CI/CD pipeline from scratch and how to enable various triggers for CI/CD automation along the way.