net5microservices
- Change Profile and Port number as per requirement
docker pull mongo
docker run -d -p 27017:27017 --name shopping-mongo mongo
docker logs -f shopping-mongo
docker exec -it shopping-mongo /bin/bash
docker start shopping-mongo # if the container already exists but is stopped
- mongo
- show databases
- use CatalogDB #Create new DB
- db.createCollection('Products')
docker run -d -p 27017:27017 --name catalogdb mongo
docker exec -it catalogdb /bin/bash
Use the mongoclient image - a web GUI for MongoDB
- Add DB details to appsettings.json
- Add ICatalogContext
- Implement above interface
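The context steps above could look roughly like this (a sketch, not the course's exact code: the class/property names and the `DatabaseSettings` keys are assumptions, and it requires the MongoDB.Driver NuGet package and a `Product` entity):

```csharp
using Microsoft.Extensions.Configuration;
using MongoDB.Driver;

// Hypothetical catalog context; names are illustrative assumptions.
public interface ICatalogContext
{
    IMongoCollection<Product> Products { get; }
}

public class CatalogContext : ICatalogContext
{
    public CatalogContext(IConfiguration configuration)
    {
        // Connection details read from appsettings.json, e.g.
        // "DatabaseSettings": { "ConnectionString": "mongodb://catalogdb:27017",
        //                       "DatabaseName": "CatalogDb", "CollectionName": "Products" }
        var client = new MongoClient(configuration["DatabaseSettings:ConnectionString"]);
        var database = client.GetDatabase(configuration["DatabaseSettings:DatabaseName"]);
        Products = database.GetCollection<Product>(configuration["DatabaseSettings:CollectionName"]);
    }

    public IMongoCollection<Product> Products { get; }
}
```

Register it as a singleton (`services.AddSingleton<ICatalogContext, CatalogContext>();`) so repositories can take `ICatalogContext` in their constructors.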
-
Visual Studio -> Project -> Right Click -> Add -> Container Orchestrator Support -> Docker Compose
Automatically runs the compose project when debugging
-
Add mongoDB to docker-compose file
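A minimal service entry for MongoDB in the compose file might look like this (service, container, and volume names are assumptions):

```yaml
# docker-compose.override.yml fragment (illustrative)
catalogdb:
  image: mongo
  container_name: catalogdb
  restart: always
  ports:
    - "27017:27017"
  volumes:
    - mongo_data:/data/db   # persist data across container restarts
```

The API service's connection string then points at the service name (`mongodb://catalogdb:27017`) rather than localhost.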
docker-compose -f docker-compose.yml -f docker-compose.override.yml up -d
- Rebuild if change in any existing code
docker-compose -f docker-compose.yml -f docker-compose.override.yml up -d --build
docker-compose -f docker-compose.yml -f docker-compose.override.yml down
- REDIS -> Remote Dictionary Server
- Open-source, in-memory NoSQL key-value store
docker pull redis
docker run -d -p 6379:6379 --name aspnetrun-redis redis
docker exec -it aspnetrun-redis /bin/bash
redis-cli
set key1 value1
get key1
- Add redis to docker-compose file. Use the alpine version [lightweight]
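The Redis compose entry could be sketched like this (service/container names are assumptions; `redis:alpine` is the lightweight image mentioned above):

```yaml
# docker-compose.override.yml fragment (illustrative)
basketdb:
  image: redis:alpine
  container_name: basketdb
  restart: always
  ports:
    - "6379:6379"
```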
-
docker pull postgres
-
Admin Portal for postgres management using pgAdmin
-
Connect to postgres db using pgadmin
-
create table using pgadmin portal
-
pgAdmin -> Add server -> Name: DiscountServer, Host: discountdb, Port: 5432, cred: admin/admin1234
-
NUGET: Npgsql, Dapper (micro ORM)
-
Migrate database on application startup - create DB / add table / add seed data. Modify Program.cs
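One way to do the startup migration is an extension method on `IHost` that runs raw SQL through Npgsql (a sketch under assumptions: the method name, the `Coupon` table shape, and the config key are illustrative; requires the Npgsql NuGet package):

```csharp
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using Npgsql;

// Hypothetical host extension; names are assumptions, not the course's exact code.
public static class HostExtensions
{
    public static IHost MigrateDatabase(this IHost host)
    {
        using var scope = host.Services.CreateScope();
        var configuration = scope.ServiceProvider.GetRequiredService<IConfiguration>();

        using var connection = new NpgsqlConnection(
            configuration.GetValue<string>("DatabaseSettings:ConnectionString"));
        connection.Open();

        using var command = new NpgsqlCommand { Connection = connection };

        // Create the table if needed, then seed one row.
        command.CommandText = @"CREATE TABLE IF NOT EXISTS Coupon(
                                    Id SERIAL PRIMARY KEY,
                                    ProductName VARCHAR(24) NOT NULL,
                                    Description TEXT,
                                    Amount INT)";
        command.ExecuteNonQuery();

        command.CommandText = "INSERT INTO Coupon(ProductName, Description, Amount) " +
                              "VALUES('IPhone X', 'IPhone Discount', 150)";
        command.ExecuteNonQuery();

        return host;
    }
}

// Program.cs:
// CreateHostBuilder(args).Build().MigrateDatabase().Run();
```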
gRPC Communication
-
with PostGreSQL
-
gRPC requires the HTTP/2 protocol; high performance, synchronous communication
-
github -> aspnetrun -> run-aspnet-grpc
-
best for communication between backend services and internal APIs
-
Add ASP.NET Core gRPC project to solution
-
dotnet new grpc -o Discount.Grpc
-
dotnet build
-
dotnet run
-
Add AutoMapper.Extensions.Microsoft.DependencyInjection
- Consume Discount.Grpc in Basket.API
- Add Connected Service (gRPC)
- Add proto file
- Create Client class
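The client class could be sketched like this (hypothetical: `DiscountProtoService`, `GetDiscountRequest`, and `CouponModel` stand in for the types generated from the proto file, and the config key is an assumption; registration needs the Grpc.Net.ClientFactory package):

```csharp
using System;
using System.Threading.Tasks;

// Hypothetical wrapper around the generated gRPC client.
public class DiscountGrpcService
{
    private readonly DiscountProtoService.DiscountProtoServiceClient _client;

    public DiscountGrpcService(DiscountProtoService.DiscountProtoServiceClient client)
    {
        _client = client ?? throw new ArgumentNullException(nameof(client));
    }

    public async Task<CouponModel> GetDiscount(string productName)
    {
        // Calls the generated async stub over HTTP/2.
        var request = new GetDiscountRequest { ProductName = productName };
        return await _client.GetDiscountAsync(request);
    }
}

// Startup.cs registration ("GrpcSettings:DiscountUrl" is an assumed config key):
// services.AddGrpcClient<DiscountProtoService.DiscountProtoServiceClient>(
//     o => o.Address = new Uri(Configuration["GrpcSettings:DiscountUrl"]));
// services.AddScoped<DiscountGrpcService>();
```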
-
SQL Server, REST API, Entity Framework Core code-first
-
Clean Architecture
-
DDD [Domain-Driven Design], CQRS [separate queries and commands], SOLID
-
MediatR, FluentValidation, AutoMapper
-
Queries & ViewModels [reads], Commands & Domain-Model [Updates]
- S : Single Responsibility principle
- O : Open [for extensibility] - Close [for modification] principle
- L : Liskov substitution principle - subtypes must be substitutable wherever their base type is expected
- I : Interface segregation principle
- D : Dependency Inversion principle
- Low coupling, high cohesion
- Read and Write DB are different, usually
- Different READ and WRITE models
- system will be eventual consistent
- Asynchronous processes
- No transactional dependency
- Accumulating events in system
- Generate state from events
-
Layers : Domain, Application, API, Infrastructure
-
Domain layer: no dependencies
-
Application layer: depends on Domain, holds all business use cases
- Contracts - interfaces and abstractions
- Features - implement CQRS pattern, business concerns and use cases
- Behaviours - validation, logging and other cross-cutting concerns
-
Application and Domain are the CORE, no external dependencies
-
Infrastructure on Application layer
-
API on Application and Infrastructure layers
-
ValueObject : Concept of Domain Driven Design
-
Jason Taylor, Gill Cleeren for Clean Architecture
-
ASP.NET Core API
-
Add Clean Architecture layers
-
Mediator Pattern - implemented by the MediatR NuGet package
-
Command, CommandHandler, Query, QueryHandler, CommandValidator [Use Nuget FluentValidation]
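The pieces above fit together roughly like this (a sketch: `CheckoutOrderCommand`, `IOrderRepository`, and the property names are assumptions; requires the MediatR, AutoMapper, and FluentValidation packages):

```csharp
using System.Threading;
using System.Threading.Tasks;
using AutoMapper;
using FluentValidation;
using MediatR;

// Command: carries the write-side data, returns the new order id.
public class CheckoutOrderCommand : IRequest<int>
{
    public string UserName { get; set; }
    public decimal TotalPrice { get; set; }
}

// CommandHandler: maps the command to a domain entity and persists it.
public class CheckoutOrderCommandHandler : IRequestHandler<CheckoutOrderCommand, int>
{
    private readonly IOrderRepository _orderRepository; // assumed repository abstraction
    private readonly IMapper _mapper;

    public CheckoutOrderCommandHandler(IOrderRepository orderRepository, IMapper mapper)
    {
        _orderRepository = orderRepository;
        _mapper = mapper;
    }

    public async Task<int> Handle(CheckoutOrderCommand request, CancellationToken cancellationToken)
    {
        var orderEntity = _mapper.Map<Order>(request);
        var newOrder = await _orderRepository.AddAsync(orderEntity);
        return newOrder.Id;
    }
}

// CommandValidator: FluentValidation rules run before the handler.
public class CheckoutOrderCommandValidator : AbstractValidator<CheckoutOrderCommand>
{
    public CheckoutOrderCommandValidator()
    {
        RuleFor(p => p.UserName).NotEmpty().MaximumLength(50);
        RuleFor(p => p.TotalPrice).GreaterThan(0);
    }
}
```

Queries follow the same shape (`IRequest<TResponse>` + `IRequestHandler`) but return read-side ViewModels instead of mutating state.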
-
IPipelineBehavior - for cross-cutting concerns
-
Create Registration extension class in Application layer to be used for DI in Startup.cs
-
Add AutoMapper.Extensions.Microsoft.DependencyInjection
-
Add FluentValidation.DependencyInjectionExtensions
DB Layer
- DBContext from EntityFrameworkCore
- Code first approach
- If EF Core is replaced, only the OrderContext object needs to change
EMAIL Layer
- Use SendGrid library [100 mails/day free]
- C# - get email settings from application settings, so use IOptions
EF Core migrations for Code first approach
- Install-Package Microsoft.EntityFrameworkCore.Tools in Ordering.API
- Add-Migration InitialCreate
- dotnet tool install --global dotnet-ef
- Add "dotnet add package Microsoft.EntityFrameworkCore.Design --version 5.0.14" to project
- dotnet ef --startup-project ../Ordering.API/ migrations add InitialCreate
- Run migration operation on application startup automatically
SQL Server credentials: sa / SwN12345678
- Between Basket and Ordering API
Communication types
- Request Response
- Event Driven
- Hybrid
RabbitMQ
- Message queue system; other examples: Apache Kafka, Azure Service Bus, etc.
- Producer => RabbitMQ [Exchange (Direct, Topic, Fanout, Header) - Bindings (links between exchanges and queues) - Queues] => Consumer, FIFO
- Exchange types control routing to Queues
- Direct -> queue(s) with matching binding key -> one consumer
- Topic -> messages are routed to different queue(s) depending on the routing key pattern (wildcards) -> one or more consumers
- Fanout -> broadcast to all bound queues (more than one queue)
- Header -> routes messages based on message header arguments and optional values
Ports
- 5672 - RabbitMQ
- 15672 - Dashboard
MassTransit - message abstraction for sending and receiving, helps route messages over RabbitMQ and other transports
- Add EventBus.Messages project reference to Basket.API
- Modify Dockerfile to add Project
- Add Nuget MassTransit, MassTransit.RabbitMQ, MassTransit.AspNetCore
- Register in startup class
- Add EventBus.Messages project reference
- Modify Dockerfile to add Project
- Add Nuget MassTransit, MassTransit.RabbitMQ, MassTransit.AspNetCore
- Add Consumer side of code to Startup [Receiver endpoint]
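The publisher and consumer registrations above could be sketched as follows (the queue name, consumer class, and `EventBusSettings:HostAddress` key are assumptions; uses the MassTransit + MassTransit.RabbitMQ packages):

```csharp
using MassTransit;
using Microsoft.Extensions.DependencyInjection;

// Publisher side (Basket.API) - Startup.ConfigureServices:
services.AddMassTransit(config =>
{
    config.UsingRabbitMq((ctx, cfg) =>
    {
        // e.g. "amqp://guest:guest@rabbitmq:5672" from appsettings.json
        cfg.Host(Configuration["EventBusSettings:HostAddress"]);
    });
});

// Consumer side (Ordering.API) - register the consumer and a receive endpoint:
services.AddMassTransit(config =>
{
    config.AddConsumer<BasketCheckoutConsumer>(); // assumed consumer class
    config.UsingRabbitMq((ctx, cfg) =>
    {
        cfg.Host(Configuration["EventBusSettings:HostAddress"]);
        cfg.ReceiveEndpoint("basketcheckout-queue",
            c => c.ConfigureConsumer<BasketCheckoutConsumer>(ctx));
    });
});
```

The publisher then injects `IPublishEndpoint` and calls `PublishAsync`-style methods with the shared event class from EventBus.Messages.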
- Single entry point to multiple services
- BFF - backend for frontend pattern
- routing [reverse proxy], authentication, authorization, load balancing, throttling, logging, tracing
- request aggregation, header/query string transformation, correlation pass-through
- service discovery with Eureka and Consul
- cross-cutting concerns or gateway offloading
- Multiple gateways should be used [for multiple client types]
Features
- lightweight, .NET Core based, open source
- works with .NET Core only
- add empty ASP.NET Core project
Authentication and Authorization
- Use Identity microservices
- aspnetrun github
Application
- Ocelot Nuget Package
- Add ASP.NET Core empty project
- Configure logging in Program.cs
- Add ocelot.json, ocelot.Development.json, ocelot.Local.json
- Port: 5010, Environment: Local
- Add ocelot.json to Program.cs configuration
- Rate limiting in Ocelot Gateway
- Response caching - install package Ocelot.Cache.CacheManager
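A single ocelot.json route combining the routing, rate limiting, and caching notes above might look like this (hostnames/ports are assumptions for a local setup):

```json
{
  "Routes": [
    {
      "DownstreamPathTemplate": "/api/v1/Catalog",
      "DownstreamScheme": "http",
      "DownstreamHostAndPorts": [ { "Host": "localhost", "Port": 8000 } ],
      "UpstreamPathTemplate": "/Catalog",
      "UpstreamHttpMethod": [ "GET" ],
      "RateLimitOptions": {
        "ClientWhitelist": [],
        "EnableRateLimiting": true,
        "Period": "3s",
        "PeriodTimespan": 1,
        "Limit": 1
      },
      "FileCacheOptions": { "TtlSeconds": 30 }
    }
  ],
  "GlobalConfiguration": { "BaseUrl": "http://localhost:5010" }
}
```

`RateLimitOptions` here allows 1 request per 3-second window; `FileCacheOptions` caches the response for 30 seconds via Ocelot.Cache.CacheManager.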
Dockerize
- Change the ocelot.*.json files - all downstream URLs should use container names
- Host => container name, and port number should be 80 [the container default]
- Add Dockerfile, amend docker-compose and docker-compose.override files
- Aggregate multiple microservice requests into a single HTTP request
- use IHttpClientFactory - use typed clients, build retry and circuit breaker policies
- single request, multiple requests to backend systems, single response
- Cross cutting security concern
- Authentication as a service
- Protect API with OAuth2.0, MVC Client app with OpenId connect
- Backing with OCELOT API Gateway
- Get WebApp from aspnetrun-basic github
- Remove Entities/Repositories/Migrations/Data folders
- Deploy and manage containers
- Add to docker-compose file
credentials: admin / admin1234
- For distributed logging - search and analytics engine
- RESTful API, fast, open source, scalable
- Collect & Transform => Search & Analyse => Visualize & Manage [Logstash => ElasticSearch => Kibana]
- Serilog => ASP.NET Logging library, Sink to ElasticSearch
ASP.NET Logging is built in
- providers - console, eventsource, etc
- insert ILogger for logging
- Default logging to console
- LogLevels - Trace = 0, Debug = 1, Information = 2, Warning = 3, Error = 4, Critical = 5, and None = 6
- Log filtering -> in appsettings.json add the namespace and minimum logging level
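The filtering note above corresponds to an appsettings.json fragment like this (the `Catalog.API` namespace is an illustrative assumption):

```json
{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft": "Warning",
      "Catalog.API": "Debug"
    }
  }
}
```

The most specific matching namespace prefix wins, so framework noise from `Microsoft.*` is limited to warnings while the app's own categories log at Debug.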
Configure & Use
- use link the carlo
- docker-compose -> no need to specify network, default will be used
- http://localhost:9200/_aliases, http://localhost:9200/products/_search
Kibana
- Dev Tools - Queries
-
GET products/_search { "query": { "match_all": {} } }
-
POST products/_doc { "name": "iphone_y" }
- Sink to ElasticSearch and Kibana
- Add Serilog to common class library in building blocks
- Nuget
- serilog.aspnetcore
- serilog.enrichers.environment
- serilog.sinks.elasticsearch
- Override default logging behaviour before Host configure
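That override could be sketched in Program.cs like this (a sketch, not the exact course code: the `ElasticConfiguration:Uri` key and the `applogs-...` index format are assumptions; uses Serilog.AspNetCore, Serilog.Enrichers.Environment, and Serilog.Sinks.Elasticsearch):

```csharp
using System;
using Microsoft.Extensions.Hosting;
using Serilog;
using Serilog.Sinks.Elasticsearch;

// Program.cs - replace the default logging pipeline before the host is built.
Host.CreateDefaultBuilder(args)
    .UseSerilog((context, loggerConfiguration) =>
    {
        loggerConfiguration
            .Enrich.FromLogContext()
            .Enrich.WithMachineName()
            .WriteTo.Console()
            .WriteTo.Elasticsearch(new ElasticsearchSinkOptions(
                new Uri(context.Configuration["ElasticConfiguration:Uri"]))
            {
                // One index per service per month, e.g. applogs-catalogapi-2024-01
                IndexFormat = $"applogs-{context.HostingEnvironment.ApplicationName?.ToLower()}-{DateTime.UtcNow:yyyy-MM}",
                AutoRegisterTemplate = true
            });
    });
```

Putting this in a shared Common.Logging class library lets every service and the gateway reuse the same configuration, which matches the steps below.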
- Kibana => "Connect to your Elasticsearch index", Create Index format => Create Index Pattern
- Kibana => Main menu => Discover
Add LoggingDelegatingHandler for intercepting request/response
- Use Http client message handler
-
Add serilog and elasticsearch config to appsettings.json
-
Add reference to Common.Logging
-
Modify program.cs and Startup.cs
-
register the DelegatingHandler
-
Re-index in elasticsearch for new service -> select applogs-* index and refresh
-
Similarly add to all APIs and the Gateway project
Ocelot Gateway
- Comment out existing logging code
- Add CommonLogging reference in dockerfile
- Apply Circuit breaker and retry pattern with Polly
- Architecting Cloud Native .NET Applications for Azure
- Microsoft.Extensions.Http.Polly NUGET package
- Retry, Circuit breaker, timeout, bulkhead isolation, cache, Fallback,
- NUGET - Polly
- Retry Pattern -> # of retries and waiting time before next attempt [should increase exponentially]
- Circuit breaker -> 3 mode ->
- closed [all requests go thru, monitor for errors],
- open [returns predefined error],
- half open [some requests sent thru]
-
Bulkhead design pattern -> aims to isolate errors; an error in one service does not impact other services
-
SEQUENCE: Timeout -> Retry -> Circuit breaker -> Fallback -> Bulkhead
-
.AddTransientHttpErrorPolicy(policy => policy.WaitAndRetryAsync(3, _ => TimeSpan.FromSeconds(2))) ; // 3 retries and wait for 2 sec before retry
-
use Polly directly for advanced configuration
-
Use the AddPolicyHandler method with advanced configuration
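The advanced configuration could look roughly like this (the typed-client types and the policy parameters are illustrative assumptions; uses Microsoft.Extensions.Http.Polly / Polly.Extensions.Http):

```csharp
using System;
using System.Net.Http;
using Microsoft.Extensions.DependencyInjection;
using Polly;
using Polly.Extensions.Http;

// Startup.ConfigureServices - attach policies to a typed HttpClient:
services.AddHttpClient<ICatalogService, CatalogService>() // assumed typed client
        .AddPolicyHandler(GetRetryPolicy())
        .AddPolicyHandler(GetCircuitBreakerPolicy());

static IAsyncPolicy<HttpResponseMessage> GetRetryPolicy() =>
    HttpPolicyExtensions
        .HandleTransientHttpError()          // 5xx, 408, and HttpRequestException
        .WaitAndRetryAsync(
            retryCount: 5,
            // wait time increases exponentially: 2, 4, 8, 16, 32 seconds
            sleepDurationProvider: attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt)),
            onRetry: (outcome, timespan, retryAttempt, context) =>
            {
                // log each retry attempt here
            });

static IAsyncPolicy<HttpResponseMessage> GetCircuitBreakerPolicy() =>
    HttpPolicyExtensions
        .HandleTransientHttpError()
        // open the circuit after 5 consecutive failures, stay open for 30s
        .CircuitBreakerAsync(handledEventsAllowedBeforeBreaking: 5,
                             durationOfBreak: TimeSpan.FromSeconds(30));
```

This follows the sequence noted above: the retry policy wraps each call, and once failures accumulate the circuit breaker short-circuits further calls with a predefined error.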
- Ordering.API
- Install NUGET polly
-
NUGET packages from the AspNetCore.Diagnostics.HealthChecks project
-
Start with Catalog.API
- Add AspNetCore.HealthChecks.MongoDb
- startup.cs - services.AddHealthChecks().AddMongoDb( ...
- Add middleware -> endpoints.MapHealthChecks("/hc", new HealthCheckOptions() { Predicate = _ => true, ResponseWriter = UIResponseWriter.WriteHealthCheckUIResponse });
- Basket.API
- Add AspNetCore.HealthChecks.REDIS Nuget
- Use MassTransit for RabbitMQ Health Check
- Ordering.API - SQL Server and RabbitMQ Health check
- Use Microsoft.Extensions.Diagnostics.HealthChecks.EntityFrameworkCore
- Shopping.Aggregator
- Check internal healthchecks
- AddUrlGroup - Check target urls for health status
- NUGET - AspNetCore.HealthChecks.Uris
- aspnetrunbasics
- NUGET - AspNetCore.HealthChecks.Uris
- Use Watchdogs
- Add new microservice under WebApp - Asp.Net Core MVC, No HTTPS
- Amend Launch settings - Use Port 5007
- NUGET
- AspNetCore.HealthChecks.UI
- AspNetCore.HealthChecks.UI.InMemory.Storage
- Add configuration to appsettings.json
- Add "return Redirect("/healthchecks-ui");" to Homecontroller/Index method
docker ps -aq
docker images -q
docker stop $(docker ps -aq) # stop all containers
docker rmi $(docker images -f "dangling=true" -q)
- show image sizes of dangling and non-dangling images:
docker system df -v
docker-compose -f docker-compose.yml -f docker-compose.override.yml up -d
- Rebuild if change in any existing code
docker-compose -f docker-compose.yml -f docker-compose.override.yml up -d --build
docker-compose -f docker-compose.yml -f docker-compose.override.yml down