Course Content
Evolution to Microservices
Throughout our first chapter, we will gain a shared understanding of what a microservice is and cover some of the main benefits as well as drawbacks. Finally, we’ll explore some of the situations where we would choose a microservices architecture over a monolithic application.
Designing Our First Microservice
Now that we have a shared understanding of microservices, including the benefits and drawbacks, let’s dive into creating our first microservice. We’ll start by setting up our environment, and covering the domain we’ll be working in throughout the rest of the book. From there, we’ll scaffold our first microservice and implement the business logic to allow other microservices to communicate with it.
Communication Between Microservices in .NET
So far, we have implemented a Basket microservice as part of our e-commerce application. However, this is only one component, so we need to introduce more functionality, which we achieve by creating new microservices. In this chapter, we will cover communication methods between microservices, introduce our second microservice, and implement communication between it and our Basket microservice.
Cross-Cutting Concerns
In our previous chapter, we introduced some duplicated code around the connections for RabbitMQ. Duplicated code isn’t the end of the world, but as developers, we must ask ourselves whether code can be reused. Throughout this chapter, we are going to discuss duplication of code in the realm of microservices, as well as some common concerns that affect all microservices.
Data Ownership
Our e-commerce application is starting to take shape. We have a Basket and Order microservice, along with events that allow for asynchronous communication between the two. However, we currently use an in-memory store for both microservices. This is not reflective of a production microservices environment. Whenever we write applications, be it a monolith or a microservices-based architecture, data is a core component. Without data, most of our applications wouldn’t be very useful. So, throughout this chapter, we are going to understand data ownership in the context of microservices, exploring the replication of data to build more efficient and robust applications.
Extend Basket Microservice Functionality
Now that we’ve introduced a new service, the Product microservice, to our E-Commerce application, we can extend the functionality of our Basket microservice. We want to be able to display product price information in our baskets, so we need to consume the new ProductPriceUpdatedEvent we introduced in the previous chapter. At the same time, we can introduce a persistent data store for our Basket microservice, and tick off another part of our overall architecture.
Testing Distributed Applications
At this stage, our E-Commerce application is starting to come together, with quite a few moving pieces. However, any time we introduce a new service or change some functionality, we need to manually run tests via Postman or curl, which isn’t very efficient. Furthermore, we cannot easily automate this type of testing, so whenever we get to a stage of continuously deploying our microservices, we’ll be slowed down by this manual testing. As developers, testing is something we should be very comfortable designing and implementing. Throughout this chapter, we’ll briefly cover the types of tests we can write, focusing on our microservices, as well as implementing various levels of tests to ensure we can continuously add new microservices and functionality to our E-Commerce application.
Integration Testing With Order Microservice
So far, we’ve covered the base of the testing pyramid with unit tests in our Basket microservice. The next level we need to cover is integration testing, which we’ll pick up with our Order microservice. It is worth noting that we previously asked you to implement a data store for the Order microservice, so things may differ slightly. Of course, the source code is available with an SQL implementation, so feel free to follow along using that configuration. We’ve already covered the scope of integration testing in the previous chapter, so let’s waste no time getting into the code!
Application Observability
Throughout our journey of building an E-Commerce application using microservices, we’ve composed quite a complex system. So far, we’ve got three separate microservices, each with its own data store. Furthermore, we’ve got an external message broker (RabbitMQ) that allows us to communicate asynchronously between microservices. We’ve been able to test each microservice individually, both manually via Postman or curl and in an automated fashion with unit and integration tests. All of these processes are great to help us during local development and provide confidence whenever writing new features, but what about when our application is in production? Right now, if we deployed our application and allowed customers to use our E-Commerce platform, we’d have no insight into the performance of our application. We’d also have no idea how data flows through our application beyond what we have tested ourselves. This is where observability comes into play.
Monitoring Microservices With Metrics
In the previous chapter, we started considering how to monitor our microservices whenever deployed in a production environment. The first telemetry component we covered was tracing, which gives us contextual information for our microservices and external infrastructure. This information is useful when we need to dive deep into a problem, but it doesn’t provide an easy-to-understand overview of our service’s performance. This is where metrics come into play, which we’ve already gained an understanding of, so let’s waste no time implementing metrics in our Order microservice.
Building Resilient Microservices
So far, we’ve designed a system that provides us confidence when releasing new features thanks to testing. We’ve also gained insight into how our application performs when deployed with the help of OpenTelemetry tracing and metrics. With this last component in place, we’re likely to notice recurring failures between microservices and our external infrastructure, such as SQL or RabbitMQ.
Securing Microservices
Throughout the development of our microservices, every request we executed was unauthenticated. However, this needs to change for a production-level E-Commerce application. Although we can allow anonymous access to create baskets and add products to them, we cannot allow everyone access to create products or update product pricing. Therefore, we need a mechanism to secure certain endpoints, which we’ll achieve by introducing two new services to our system. Let’s start with an understanding of the different components of security.
Microservice Deployment
We’re now at a stage where we have a pretty sophisticated system, with many components, tests, and features. The next logical step for any application, microservice-based or not, is to tackle deployment. We need a solution to help us with microservice deployment complexities. But first, let’s briefly touch on the differences between monolithic and microservice deployments.
Microservices in .NET

With a basic understanding of what a microservice is and how we can combine microservices to provide a functional system, we’ve started to touch on the benefits that microservices give us, such as:

  • Easily maintainable by a small number of developers
  • Enable continuous delivery
  • Individually scalable

Let’s dive a bit deeper into these benefits.

Easy Maintainability of Microservices

There are a couple of factors that make microservices easily maintainable. First and foremost, because they are singularly focused, they have very specific reasons to change. External factors are vastly less likely to cause us to refactor microservices, provided they are implemented correctly. This allows us, as developers, to adhere to one of the fundamental principles of software development – the Single Responsibility Principle.

Furthermore, microservices own their data store and don’t allow other services to interact directly with their data. In addition, a microservice codebase is generally small, which means we as developers can more easily grasp the domain a microservice is focusing on. This provides much more confidence when it comes to refactoring microservices or introducing new functionality.

Due to this singular focus and easy comprehension of what a microservice does, usually, a single team can own and maintain a couple of microservices. This provides the team with ownership of these microservices and empowers them to quickly diagnose and resolve issues and bugs as they arise in production, rolling out fixes in a short space of time, which is the next benefit we’ll cover.

Continuously Delivering Value

Since a single microservice has a small blast radius and focuses on a specific domain, it enhances the speed at which developers can deliver new code for the service. On top of this, we can more quickly understand bugs in the system and resolve them. Having small services also enables us to write more automated tests for each service.
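To make the testability point concrete, here is the kind of short, dependency-free unit test a narrowly focused service makes possible. This is a hypothetical sketch in Python for brevity (the book’s services are written in .NET), and the `Basket` class and its API are illustrative, not the book’s actual code:

```python
# A minimal sketch of the focused unit tests a small service enables.
# The Basket class and its API are hypothetical, not the book's actual code.

class Basket:
    """A tiny in-memory basket: maps product IDs to (price, quantity)."""

    def __init__(self):
        self._items = {}

    def add(self, product_id, price, quantity=1):
        # Keep the latest price and accumulate the quantity.
        _, current_qty = self._items.get(product_id, (price, 0))
        self._items[product_id] = (price, current_qty + quantity)

    def total(self):
        return sum(price * qty for price, qty in self._items.values())


# Because the service's domain is small, tests stay short and dependency-free.
def test_total_sums_all_items():
    basket = Basket()
    basket.add("book", 10.0, quantity=2)
    basket.add("pen", 1.5)
    assert basket.total() == 21.5


test_total_sums_all_items()
print("basket tests passed")
```

Because the domain under test is tiny, there is little to mock and the whole suite runs in milliseconds, which is what makes running it on every commit practical.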

We may still have external dependencies, but these should be fewer than in a monolithic codebase and much easier to mock. Having well-tested services further enables us to deliver new features and fixes to our users, as we have the confidence that the code we write isn’t going to introduce new regressions in the system. One of the greatest enablers of continuous delivery in a microservice is that, because we have well-defined and well-tested boundaries, we can deploy new, backward-compatible versions of our service at any time. We’ll touch on backward compatibility later on, but let’s first understand how exactly microservices enable continuous delivery.
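One way to see what “backward-compatible” means at a service boundary: a consumer written against version 1 of an event contract must keep working when the producer ships version 2, provided version 2 only adds fields. The event name and fields below are invented for illustration, and the sketch is in Python for brevity rather than the book’s .NET:

```python
import json

# Hypothetical event contract: names and fields are illustrative only.
# Version 1 of a price-updated event:
event_v1 = {"productId": "p-42", "newPrice": 9.99}

# Version 2 adds an OPTIONAL field; existing fields are unchanged.
# This is a backward-compatible change: old consumers can ignore "currency".
event_v2 = {"productId": "p-42", "newPrice": 9.99, "currency": "USD"}


def old_consumer(payload: str) -> tuple:
    """A consumer written against v1: reads only the fields it knows about."""
    event = json.loads(payload)
    return event["productId"], event["newPrice"]


# The v1-era consumer handles both versions, so the producer can deploy its
# new version at any time without coordinating a consumer release.
assert old_consumer(json.dumps(event_v1)) == ("p-42", 9.99)
assert old_consumer(json.dumps(event_v2)) == ("p-42", 9.99)
print("old consumer handles both event versions")
```

Renaming or removing a field, by contrast, would break the old consumer, which is why such changes typically require coordinated releases or an explicit versioning scheme.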

Let’s take a monolithic e-commerce application for example, where we have all the code and functionality in a single codebase. If we want to roll out a new feature to our payment service, such as enabling users to pay via the latest cryptocurrency, we need to deploy our whole application. But we’ve only introduced new code in the part of our application that handles payment:

Our basket, product, and order modules, although unchanged, are restarted as we have everything in a single deployable. Now, if we instead designed our e-commerce application as a set of independent microservices, we could trivially introduce this new functionality to our payment microservice and deploy it separately to all our other microservices. Our basket, product, and order microservices don’t care, or need to know, about the ability to pay with a new cryptocurrency, so deploying this new functionality to our payment microservice does not affect them. So they can continue functioning and serving users while our payment microservice is updated:

A single team owns each microservice, which can be deployed independently thanks to thorough testing. Because of this, the velocity of delivering value to our users increases, potentially enabling multiple deployments a day. We won’t cover all the intricacies of actually achieving multiple deployments a day, as many factors come into play, but we will touch on some of the most important ones later, such as building resilient, asynchronous microservices.

Highly Scalable and Available

When we talk about scalability, we mean the ability to meet the demands of our system and users. Generally, when we build applications that serve a non-trivial number of users, we inevitably hit bottlenecks somewhere in the system. This can be due to any number of factors, such as the computing power available to the system. With a monolithic system, it can be very difficult to pinpoint where this bottleneck lives, as our whole application shares the same computing power.

Furthermore, to address the bottleneck, we need to scale up (provide more CPU/RAM) or scale out (introduce more instances of our application) the application as a whole, which can be costly. On the flip side, in a microservices-based system, given we have the correct monitoring in place, we can much more easily diagnose where the bottleneck lives, usually being able to pinpoint the precise microservice causing the issues. This is hugely beneficial, as we have a number of options for addressing the bottleneck.

We can simply scale up or scale out the single microservice causing the bottleneck. Because each of our microservices has its own computing power, we can increase the compute for just the microservice that needs it, without touching any of our other microservices. This greatly improves efficiency and reduces cost. Furthermore, because we can pinpoint the bottleneck, if it is caused by poorly optimized code we have a much easier time understanding and resolving the issue, which would be far more difficult in a monolithic system.
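Some back-of-the-envelope arithmetic makes the cost difference tangible. Every number below is invented purely for illustration (four services at one CPU each, with the payment service as the bottleneck), not a benchmark:

```python
# Back-of-the-envelope comparison of scale-out cost. All figures here are
# invented for illustration, not measurements.

SERVICES = {"basket": 1, "product": 1, "order": 1, "payment": 1}  # CPUs each

# Monolith: all modules live in one deployable, so doubling capacity for the
# payment bottleneck means doubling the whole application.
monolith_cpus_before = sum(SERVICES.values())   # 4 CPUs
monolith_cpus_after = monolith_cpus_before * 2  # 8 CPUs

# Microservices: add capacity only for the bottlenecked service.
micro_cpus_after = sum(SERVICES.values()) + SERVICES["payment"]  # 5 CPUs

print(f"monolith:      {monolith_cpus_before} -> {monolith_cpus_after} CPUs")
print(f"microservices: {monolith_cpus_before} -> {micro_cpus_after} CPUs")
```

With these toy numbers, the monolith pays for four extra CPUs to relieve a one-CPU bottleneck, while the microservices system pays for one; the gap only widens as the number of services grows.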
