Architecture – mini course

My notes for: https://youtu.be/NzcZcim9tp8

Intro

General solution – 6 projects:

  • Api
    (references to Infrastructure; and indirectly to Application and Domain)
  • Application [orchestrating domain services] (ref to Domain)
  • Domain [business objects]
  • Infrastructure [technical stuff] (ref to Application)
  • Shared
  • Shared.Abstractions

Concept:
In domain objects you shouldn't have public getters and setters, but business-related methods (DDD).
In the Application layer you orchestrate domain services.
Cross-cutting or technical concerns should be in Infrastructure (direct ref to Application and indirect to Domain). Here we implement stuff like database access, metrics, logging.
Presentation layer – WebAPI, WPF, …

———-

DOMAIN LAYER (BUSINESS)

  • Entities folder – not EF entities, but objects with an Id (technical or natural) property. Do not create anemic classes. An entity should validate itself and always be in a correct state. An entity can have a public getter for Id; the rest of the data should be private fields, and the ctor should be internal (a factory should create instances). Entities are mutable.
    Imagine we have a name field in the PackingList class passed via ctor. We don't want to leak validation somewhere outside (to the Application or Presentation layers). That's why we don't use string for this field, but create our own ValueObject.
    There are 3 strategies to inform about valid/invalid entity state:
    • return a result (boolean/custom result) from methods.
    • create a validator, but then we need to make our data public (public getters, so less encapsulation). Then you need to assume that somebody outside will call this validator at every specific moment (like after the AddItem method) – so-called deferred validation.
    • just throw exceptions from inside the entity – a self-validating entity. Slows down performance, but reliable.
  • ValueObjects folder – immutable objects. They do not have Ids, so comparison must be done by comparing properties. Use a C# record; it implicitly implements IEquatable<>.
    A ValueObject can have a single public property Value. In the ctor you should pass this value, validate it and throw a custom exception if it is not valid.
    You can add an implicit operator from this ValueObject class to a simple type, i.e. string/int. See the sketch after this list.
  • Exceptions folder – throw custom exceptions from the domain rather than .NET ones, i.e. EmptyPackingListNameException(). Thanks to that you will easily know what happened even without a message. You can also recognize whether it comes from Domain or Infrastructure; it's easier to filter out your own exceptions.
    You can create a base abstract exception in Shared.Abstractions. Then you need to add a ref to Shared.Abstractions in the Domain.
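
A rough sketch of such a self-validating value object with its custom exception (only EmptyPackingListNameException is named in the notes; the remaining names are my assumptions):

using System;

// Shared.Abstractions – assumed base class for all custom exceptions
public abstract class CustomException : Exception
{
    protected CustomException(string message) : base(message) { }
}

// Domain\Exceptions
public class EmptyPackingListNameException : CustomException
{
    public EmptyPackingListNameException()
        : base("Packing list name cannot be empty.") { }
}

// Domain\ValueObjects – immutable, compared by value (record), single Value property
public record PackingListName
{
    public string Value { get; }

    public PackingListName(string value)
    {
        // validation lives inside the value object, so it cannot leak outside
        if (string.IsNullOrWhiteSpace(value))
            throw new EmptyPackingListNameException();

        Value = value;
    }

    // implicit conversions between the value object and the simple type
    public static implicit operator string(PackingListName name) => name.Value;
    public static implicit operator PackingListName(string value) => new(value);
}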

Aggregator idea

Imagine there is a business rule that when A changes, B also has to change, so the change must happen in a transaction. You could implement it somewhere above the Domain layer, in Application, but then a business logic leak happens. You just need an aggregate – an item which encapsulates such transactions and is a consistency guard; use it when you need an atomic operation. An entity can at the same time be an aggregate.
In Shared.Abstractions create a Domain\AggregateRoot<T> class (where T is the type of the publicly visible Id) with a public Version property. Version must be incremented when something in the aggregate changes. Then you can inherit like this: PackingList : AggregateRoot<PackingListId>

DomainEvent idea

An event that something important happened. It can be added to the aggregate's events list. It can simplify unit testing.
To Shared.Abstractions add an interface IDomainEvent.
Expose a collection of domain events in AggregateRoot and add a protected AddEvent method which should add the event and increment the version if not done yet.
To Domain add an Events folder and any event you need, like record PackingItemAddedEvent(PackingList, PackingItem).
In our architecture (no public properties on the aggregate/entity) we have domain events as a consequence – to recognize that something happened. In the other case (when having public getters) maybe we would not need them. A sketch follows below.
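
A minimal sketch of these abstractions (the exact signatures are my reconstruction from the description above):

using System.Collections.Generic;
using System.Linq;

// Shared.Abstractions\Domain – marker interface for domain events
public interface IDomainEvent { }

// Shared.Abstractions\Domain – base class for aggregates
public abstract class AggregateRoot<T>
{
    private readonly List<IDomainEvent> _events = new();
    private bool _versionIncremented;

    public T Id { get; protected set; }
    public int Version { get; protected set; }
    public IEnumerable<IDomainEvent> Events => _events;

    // adds the event and increments Version if not done yet
    protected void AddEvent(IDomainEvent @event)
    {
        if (!_events.Any() && !_versionIncremented)
        {
            Version++;
            _versionIncremented = true;
        }

        _events.Add(@event);
    }

    public void ClearEvents() => _events.Clear();
}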

  • Factories folder
    Factories should create aggregates. A single factory should create a single aggregate. There can be multiple methods to create the aggregate within a single factory (i.e. CreateEmptyPackingList, CreatePackingListWithDefaultItem). If you need some external data to create the aggregate (i.e. temperature from an external service), Application should call the service and provide the temperature to the aggregate. Domain should not call external IO stuff.
  • Policies folder
    We use policies in factories to create the aggregate. The factory should get all policies via ctor. The Create method in the factory should filter the policies and call the applicable ones (a sketch follows after this list).
    In our case, when we want to create a packing list with default items, we use policies for the given gender and localization to decide which items should be added. We could have MaleGenderPolicy, FemaleGenderPolicy, HighTempPolicy, MountainsLocalizationPolicy, BasicPolicy. You can group policies in folders per 'topic'.
    A policy should have an IsApplicable(PolicyData) method. PolicyData is a record with i.e. temperature, gender, localization.
    A policy should also have a GenerateItems(PolicyData) method.
  • Repositories folder
    Only interfaces. We don't put any repository implementations in Domain. One interface per aggregate. It should define the possible actions with the aggregate. In our case IPackingListRepository: Task<PackingList> GetAsync(PackingListId), Task AddAsync(PackingList), Task UpdateAsync(PackingList), Task DeleteAsync(PackingList);
    The author says that generic repositories generally suck.
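
Sketched in code (PolicyData's fields, the aggregate's AddItems method and the remaining type names are assumptions):

using System.Collections.Generic;
using System.Linq;

// Domain\Policies – input record for policies
public record PolicyData(double Temperature, Gender Gender, string Localization);

public interface IPackingItemsPolicy
{
    bool IsApplicable(PolicyData data);
    IEnumerable<PackingItem> GenerateItems(PolicyData data);
}

// Domain\Factories – a single factory creates a single aggregate
public class PackingListFactory
{
    private readonly IEnumerable<IPackingItemsPolicy> _policies;

    public PackingListFactory(IEnumerable<IPackingItemsPolicy> policies)
        => _policies = policies;

    public PackingList CreatePackingListWithDefaultItem(PackingListId id,
        PackingListName name, PolicyData data)
    {
        var packingList = new PackingList(id, name); // internal ctor, visible to the factory

        var items = _policies
            .Where(policy => policy.IsApplicable(data))    // filter applicable policies
            .SelectMany(policy => policy.GenerateItems(data));

        packingList.AddItems(items); // business method on the aggregate (assumed)
        return packingList;
    }
}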

————

APPLICATION LAYER (ORCHESTRATION)

Contains process logic (prepare input data, check if the item you would like to create already exists, call some external API, …, call the business layer).

Writes

Commands + CommandHandlers + CommandDispatcher (write models).
Command handling can mutate the data, but should not return any result (so it should return void or a non-generic Task). A query should return a value, but not mutate data.
Put abstractions in Shared.Abstractions – Commands folder. Add interfaces:

  • ICommand
  • ICommandHandler<TCommand> with method Task HandleAsync(TCommand, CancellationToken)
  • ICommandDispatcher (with method Task DispatchAsync(TCommand)). You can use MediatR for that rather than writing your own implementation.

Put the dispatcher implementation (i.e. InMemoryCommandDispatcher) in the Shared project (not in the abstractions). In the ctor inject IServiceProvider to get the appropriate handlers. In DispatchAsync create a scope, find the handler in the service provider and call HandleAsync (a sketch follows below).
Put here also an IServiceCollection extension AddCommands to register the dispatcher, commands and handlers in the ServiceProvider. You can use Scrutor to scan your library and automatically register the handlers.
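
A sketch of these pieces, assuming Microsoft.Extensions.DependencyInjection and Scrutor (the assembly parameter on AddCommands is my addition so the scan can target the Application assembly):

using System;
using System.Reflection;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.DependencyInjection;

// Shared.Abstractions\Commands
public interface ICommand { }

public interface ICommandHandler<in TCommand> where TCommand : class, ICommand
{
    Task HandleAsync(TCommand command, CancellationToken cancellationToken = default);
}

public interface ICommandDispatcher
{
    Task DispatchAsync<TCommand>(TCommand command, CancellationToken cancellationToken = default)
        where TCommand : class, ICommand;
}

// Shared\Commands – finds the matching handler in the service provider and calls it
internal sealed class InMemoryCommandDispatcher : ICommandDispatcher
{
    private readonly IServiceProvider _serviceProvider;

    public InMemoryCommandDispatcher(IServiceProvider serviceProvider)
        => _serviceProvider = serviceProvider;

    public async Task DispatchAsync<TCommand>(TCommand command, CancellationToken cancellationToken = default)
        where TCommand : class, ICommand
    {
        using var scope = _serviceProvider.CreateScope();
        var handler = scope.ServiceProvider.GetRequiredService<ICommandHandler<TCommand>>();
        await handler.HandleAsync(command, cancellationToken);
    }
}

// Shared\Commands – registration via Scrutor assembly scanning
public static class Extensions
{
    public static IServiceCollection AddCommands(this IServiceCollection services, Assembly assembly)
    {
        services.AddSingleton<ICommandDispatcher, InMemoryCommandDispatcher>();

        services.Scan(scan => scan.FromAssemblies(assembly)
            .AddClasses(classes => classes.AssignableTo(typeof(ICommandHandler<>)))
            .AsImplementedInterfaces()
            .WithScopedLifetime());

        return services;
    }
}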

In Application add IServiceCollection extensions to register classes in the .NET IoC container.
Implement .AddApplication() { services.AddCommands(); return services; }. Additionally register also the Factories and Policies from Domain.
This method has to be called at the hosting point, so the API in our case (Startup.cs).

  • Commands folder – add here commands with imperative names: CreatePackingListWithItems(Guid id, string name, Gender gender, ushort days, LocalizationWriteModel localization). The 'Command' suffix is not needed.
    It should be a record implementing ICommand.
    On this level we don't use value objects. It's more a transportation layer than the domain layer.
  • Commands\Handlers folder – CreatePackingListWithItemsHandler : ICommandHandler<CreatePackingListWithItems>.
    The HandleAsync method should be idempotent: if you call the handler multiple times, there should be a single list at the end. See the 'Services' folder below for the read service. The handler should get in ctor: packingListRepository, packingListReadService, packingListFactory. It should throw custom exceptions, i.e. PackingListAlreadyExistsException. A sketch follows after this list.
    Reading data from the db in a command handler: don't use the repository explicitly here. The repo should be a contract on top of our aggregate; it should be related to business logic, not to pagination or existence checks. Avoid letting the repository grow endlessly.
    Our command handler does: check whether the list already exists, request the weather from an external service, use the factory to create the aggregate, use the repository to save it.
    For other handlers like DeletePackingList or PackItem we just need repository.GetAsync(id) and then call a method on the repo to delete, or on the aggregate to pack an item.
  • Services folder – interfaces for the application, to be implemented in Infrastructure.
    For example read services (interfaces only), i.e. IPackingListReadService with ExistsByNameAsync(string name). Then use this service in a command handler to i.e. check whether something exists before storing it.
    There can also be services to get external data needed to create the aggregate; i.e. for requesting the temperature we create IWeatherService.
    IWeatherService will return a result.
  • DTO folder – objects returned by services. In our case IWeatherService can return WeatherDTO. DTOs can be grouped in an External folder (we are consumers of this DTO, something comes into our system) and an Internal folder (our system creates this DTO). A DTO should be a record.
  • Exceptions folder – exceptions at the application level (thrown from the Application, orchestration/process logic). PackingListAlreadyExists, WeatherNotAvailable, …
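
A sketch of the 'create' handler described above (service signatures, the DTO shape and the value-object conversions are assumptions carried over from the earlier sketches):

using System.Threading;
using System.Threading.Tasks;

// Application\Commands\Handlers – orchestration only, no business logic
internal sealed class CreatePackingListWithItemsHandler : ICommandHandler<CreatePackingListWithItems>
{
    private readonly IPackingListRepository _repository;
    private readonly IPackingListReadService _readService;
    private readonly IWeatherService _weatherService;
    private readonly PackingListFactory _factory;

    public CreatePackingListWithItemsHandler(IPackingListRepository repository,
        IPackingListReadService readService, IWeatherService weatherService,
        PackingListFactory factory)
    {
        _repository = repository;
        _readService = readService;
        _weatherService = weatherService;
        _factory = factory;
    }

    public async Task HandleAsync(CreatePackingListWithItems command, CancellationToken cancellationToken = default)
    {
        // idempotency: never create the same list twice
        if (await _readService.ExistsByNameAsync(command.Name))
            throw new PackingListAlreadyExistsException(command.Name);

        // external IO belongs to Application/Infrastructure, never to Domain
        var weather = await _weatherService.GetWeatherAsync(command.Localization)
            ?? throw new WeatherNotAvailableException();

        // LocalizationWriteModel flattened to a simple value for the policies (shape assumed)
        var data = new PolicyData(weather.Temperature, command.Gender, command.Localization.City);

        // Guid and string convert implicitly to the Id/name value objects (assumed)
        var packingList = _factory.CreatePackingListWithDefaultItem(command.Id, command.Name, data);

        await _repository.AddAsync(packingList);
    }
}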

Queries (read models)

  • Shared/Abstractions/Queries – queries for CQRS.
    IQuery and IQuery<TResult> : IQuery.
    Additionally IQueryHandler<TQuery, TResult> with a HandleAsync method.
    On top IQueryDispatcher with a QueryAsync<TResult>(IQuery<TResult> query) method.
  • Shared\Queries – implementation of IQueryDispatcher – finding the query handler to handle the queries. It should take IServiceProvider in ctor.
    Additionally implement an IServiceCollection extension method AddQueries() (per analogy to commands).
  • Application\Queries – create a new query GetPackingList : IQuery<PackingListDto>.
    Add to Application\DTO\Internal the DTOs which are consumed by the query, like LocalizationDTO, PackingListDto, PackingItemDto. Notice we don't use the write model (aggregates) here.
    Adding GetPackingListHandler here to handle the query causes some issues. You should not use the repository here: the repository is coupled with the domain and with writing, and it should not be possible to write on the query handler side. Searching is not our domain language; pagedResult is not our domain object. Adding more methods to the read service like SearchByName, SearchByLocation, etc. is also not a good idea (plenty of methods, naming hell, and an EF DbContext is still needed).

So there are two clean options to solve this:
a) split the Application project into Application.Read and Application.Write and use the EF db context only in reads. Then you can just use EF and you don't have the issue of EF being usable on the write side (as there we should use aggregates).
b) leave queries in Application, but move the Handlers to Infrastructure. In Infra we don't care that there is EF, as this is a technical, cross-cutting project. The author of the course has chosen this option; a sketch follows below.
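
For illustration, a query handler on the Infrastructure side could look roughly like this (the read model, ReadDbContext and AsDto mapping are assumptions tied to the EF section below):

using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

// Infrastructure\Queries\Handlers – query handlers may use EF directly (option b)
internal sealed class GetPackingListHandler : IQueryHandler<GetPackingList, PackingListDto>
{
    private readonly DbSet<PackingListReadModel> _packingLists;

    public GetPackingListHandler(ReadDbContext context)
        => _packingLists = context.PackingLists;

    public async Task<PackingListDto> HandleAsync(GetPackingList query)
    {
        var readModel = await _packingLists
            .Include(pl => pl.Items)   // navigation property on the read model
            .AsNoTracking()            // the read side never writes
            .SingleOrDefaultAsync(pl => pl.Id == query.Id);

        return readModel?.AsDto();     // mapping extension, assumed
    }
}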

INFRASTRUCTURE

Needs to configure EF to implement the query handlers (ReadServices).
Needs to implement our WeatherService.
Handles custom exceptions (a cross-cutting concern).

*** \Queries
Put implementations here. The interfaces stay in Application.
In API use the AddQueries extension method implemented here (to register queries in the IoC container).

*** \EF
There will be two EF contexts.
One for writing. In our aggregate we have a public Id and we just need to add a private parameterless ctor for EF. All other fields in this model are private, so EF cannot use them (i.e. for filtering).
A second one for reading. We will have separate anemic models (EF\Models) with public getters, to be used for filtering.

* Read
Put models in EF\Models (LocalizationReadModel, PackingItemReadModel, …). Probably in all read models you will need Id, Version, and maybe some navigation properties.
Create a ReadDbContext as well. When creating the DbSets, use the anemic read models from Infrastructure.
All objects here should be internal, including the DbContext.

* Write
Add a WriteDbContext. When creating the DbSets, use the models from the business domain (i.e. PackingList).

* Configure EF

  • EF\Config – map models to tables, point to the PKs.
  • Add a ReadConfiguration which implements IEntityTypeConfiguration<T> for each read model. Use methods like: builder.ToTable, builder.HasKey, builder.HasConversion.
  • Add a WriteConfiguration the same way as ReadConfiguration, but it will probably be more complex.
    Both configurations and contexts must match (have the same table and column names). It's ugly that we need to synchronize this; a sketch of the read side follows below.
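
A rough sketch of the read side (table name and model shape are assumptions):

using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Metadata.Builders;

// Infrastructure\EF – internal read context with anemic read models
internal sealed class ReadDbContext : DbContext
{
    public DbSet<PackingListReadModel> PackingLists { get; set; }

    public ReadDbContext(DbContextOptions<ReadDbContext> options) : base(options) { }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        modelBuilder.ApplyConfiguration(new ReadConfiguration());
    }
}

// Infrastructure\EF\Config – must match the write-side table and column names
internal sealed class ReadConfiguration : IEntityTypeConfiguration<PackingListReadModel>
{
    public void Configure(EntityTypeBuilder<PackingListReadModel> builder)
    {
        builder.ToTable("PackingLists"); // the same table as in WriteConfiguration
        builder.HasKey(pl => pl.Id);
    }
}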

Continue here – Register Db Context https://youtu.be/NzcZcim9tp8?t=20545

Main clues after browsing ready solution code

API
References to ‘Infrastructure’ only.
Simple API controllers calling ICommandDispatcher and IQueryDispatcher from ‘Shared.Abstractions’ and passing commands and queries from ‘Application’.

Infrastructure
References to ‘Application’ only.
Has EF Read/Write DbContexts, migrations, read models. All reads/writes to the db are here.
Implements QueryHandlers – reads from the ReadDbContext, selects read models, maps and returns DTOs.
Does not implement CommandHandlers.
Implements the repository which gets by Id and performs add/update/delete, but it's only called in 'Application'.
Has a read service which checks whether an entity exists by name, but it's only called in 'Application'.
Has a service calling an external API (to get data from a third-party app needed in our app).
Has a logging decorator for ICommandHandler.

Application
References to 'Domain' and 'Shared'.
Has queries and commands, DTOs, business exceptions.
Implements command handlers; they call repositories from 'Infrastructure' (via interfaces of course) to get/update aggregates from 'Domain'.

Domain
References to Shared.Abstractions.
Has aggregates, domain events, domain exceptions, value objects.

Shared.Abstractions
Something like mediator interfaces (IQueryDispatcher, IQuery, ICommand)

Shared
Implements the query dispatcher from 'Shared.Abstractions' and extensions to register it.
Registrations. App initialization.

ElasticSearch

How to install elastic search and kibana

https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html

If starting Elastic in Docker fails, try in PowerShell:

wsl -d docker-desktop
sysctl -w vm.max_map_count=262144

Following notes based on: https://www.youtube.com/playlist?list=PL_mJOmq4zsHZYAyK606y7wjQtC0aoE6Es

Cluster and node commands

get _cluster/health

get _nodes/stats

Index commands

# create new index
put favorite_candy

# get mapping for index
get shopping/_mapping

# add new doc, id autogenerated
post favorite_candy/_doc
{
  "name":"Adam",
  "age":33
}

# add new doc, id given
put favorite_candy/_doc/6
{
  "name":"John",
  "age":45
}

# get added doc
get favorite_candy/_doc/6

# add new doc, id given, throws exception if id already exists
put favorite_candy/_create/6
{
  "name":"John",
  "age":45
}

# delete docs from index
POST shopping/_delete_by_query
{
  "query": {
    "range": {
      "UnitPrice": {
        "lte": 0
      }
    }
  }
}

Leaf queries

# find all in index
get truckplanning/_search 

# find by range
get truckplanning/_search 
{
  "query": {
    "range": {
      "Date": {
        "gte":"12.06.2023",
        "lte":"13.06.2023"
      }
    }
  },
  "track_total_hits": true
}

# find any matching word in headline field
get news/_search
{
  "query": {
    "match": {
      "headline": "Shape of you"
    }
  }
}

# find exact phrase in headline field
get news/_search
{
  "query": {
    "match_phrase": {
      "headline": "Shape of you"
    }
  }
}

# multimatch - like match query, but in any of specified fields
# in this case find doc where Obama is an author or is mentioned in headline
get news/_search
{
  "query": {
    "multi_match": {
      "query": "Obama", 
      "fields": ["headline", "authors"]
    }
  }
}

# multimatch with field boosting
# search like above, but if Obama in author is found, it will get higher score
get news/_search
{
  "query": {
    "multi_match": {
      "query": "Obama", 
      "fields": ["headline", "authors^3"]
    }
  }
}

# multimatch for match_phrase (not just match, like before)
get news/_search
{
  "query": {
    "multi_match": {
      "query": "Barack Obama", 
      "fields": ["headline", "authors^3"], 
      "type": "phrase"
    }
  }
}

Compound queries

# combined query - you can mix many different conditions
# filter - decides whether a doc is in or out of the results (does not affect the score)
# must - the doc has to match the condition; matching contributes to the score
# must_not - the doc must not match the condition
# should - the doc doesn't have to match, but matching docs rank higher

# find news with Michelle Obama in headline and politics category
get news/_search
{
  "query": {
    "bool": {
      "must": [
          {
          "match": {
            "category": "POLITICS"
          }
          }, 
          {
              "match_phrase": {
              "headline": "Michelle Obama"
            }
          }
        ]      
    }
  }
}

# find news with Michelle Obama in headline but not in weddings category
get news/_search
{
  "query": {
    "bool": {
      "must": [
          {
              "match_phrase": {
              "headline": "Michelle Obama"
            }
          }
        ], 
        "must_not": [
          {
            "match": {
              "category": "WEDDINGS"
            }
          }
        ]
    }
  }
}

# find docs authored by Obama, but filter out everything outside 2015
get news/_search
{
  "query": {
    "bool": {
      "must": [
          {
              "match_phrase": {
              "authors": "Obama"
            }
          }
        ], 
      "filter": [
        {
          "range": {
            "date": {
              "gte": "2015-01-01",
              "lte": "2015-12-31"
            }
          }
        }
      ]
    }
  }
}

Metric aggregations

# group by category name
get news/_search
{
  "aggregations": {
    "by_category": {
      "terms": {
        "field": "category", 
        "size": 100 // how many categories to show
      }
    }
  }, 
  "track_total_hits": true
}

# sum and do not return top 10 docs, only aggregation value 
# you can calculate also: min, max, avg
get shopping/_search
{
  "size": 0,
  "aggs": {
    "total-qty": {
      "sum": {
        "field": "Quantity"
      }
    }
  }
}

# calculate all basic aggregations at once for field
get shopping/_search
{
  "aggs": {
    "unit-price-stats": {
      "stats": {
        "field": "UnitPrice"
      }
    }
  }
}

# unique count
get shopping/_search
{
  "aggs": {
    "uniqe-customers": {
      "cardinality": {
        "field": "CustomerID"
      }
    }
  }
}

# aggregation with query
# calculates average price in Germany
get shopping/_search
{
  "query": {
    "match": {
      "Country": "Germany"
    }
  },
  "aggs": {
    "avg-price-germany": {
      "avg": {
        "field": "UnitPrice"
      }
    }
  }, 
    "track_total_hits": true
}

Bucket aggregation

#date histogram - grouping by dates
#fixed interval - each time group is the same size (30 minutes, 8 hours, ...)
get shopping/_search
{
  "aggs": {
    "shopping-per-shift": {
      "date_histogram": {
        "field": "InvoiceDate", 
        "fixed_interval": "8h"
      }
    }
  }, 
    "track_total_hits": true
}

#date histogram - grouping by dates
#calendar interval - use calendar unit (1d, 1w, 1M, 1q, 1y)
get shopping/_search
{
  "aggs": {
    "shopping-per-day": {
      "date_histogram": {
        "field": "InvoiceDate",
        "calendar_interval": "1M", 
        "order": { //sort groups
          "_key": "desc"
          //"_count": "desc"
        }
      }
    }
  }, 
    "track_total_hits": true
}

#histogram by metric field - grouping by any metric field
# group transactions by unit prices
get shopping/_search
{
  "size": 0,
  "aggs": {
    "shopping-per-price": {
      "histogram": {
        "field": "UnitPrice",
        "interval": "1000", 
        "order": {
          "_key": "desc"
        }
      }
    }
  }, 
    "track_total_hits": true
}

#range aggregations - group by custom ranges
get shopping/_search
{
  "size": 0,
  "aggs": {
    "shopping-per-price": {
      "range": {
        "field": "UnitPrice",
        "ranges": [
          {
            "to": 50
          },
          {
            "from": 50,
            "to": 500 
          }, 
          {
            "from": 500,
            "to": 1000 
          },        
          {
            "from": 1000
          }
        ]
      }
    }
  }, 
    "track_total_hits": true
}

# terms aggregations - group by term (text field)
# find top 3 shopping countries
get shopping/_search
{
  "size": 0,
  "aggs": {
    "top-shopping-countries": {
      "terms": {
        "field": "Country",
        "order": {
          "_count": "desc"
        }, 
        "size": 3
      }
    }
  }, 
    "track_total_hits": true
}

Combined aggregations

# buckets and metric aggregation with script
# first aggregate by date then sum in each date range
# sum value returned by script
get shopping/_search
{
  "aggs": {
    "shopping-per-month": {
      "date_histogram": {
        "field": "InvoiceDate",
        "calendar_interval": "1M", 
        "order": {
          "_key": "asc"
        }
      }, 
      "aggs": {
        "revenue-per-month": {
          "sum": {
            "script": {
              "source": "doc['UnitPrice'].value * doc['Quantity'].value"
            }
          }
        }
      }
    }
  }, 
    "track_total_hits": true
}

#multiple subaggregations with sorting
#revenue per month and unique customers per month
#max revenue on top
get shopping/_search
{
  "aggs": {
    "shopping-per-day": {
      "date_histogram": {
        "field": "InvoiceDate",
        "calendar_interval": "1M", 
        "order": {
          "revenue-per-month": "desc"
        }
      }, 
      "aggs": {
        "revenue-per-month": {
          "sum": {
            "script": {
              "source": "doc['UnitPrice'].value * doc['Quantity'].value"
            }
          }
        }, 
        "uq-customers-per-month": {
          "cardinality": {
            "field": "CustomerID"
          }
        }
      }
    }
  }, 
    "track_total_hits": true
}

Mapping

Mapping is done dynamically by Elastic if you don't create your own custom mapping.

You can create a mapping before inserting any data. After that, if you want to change the mapping for an existing field, you need to create a new index, create the mapping, and reindex from the old index.

Field types:

  • text – string field, used for full-text search. Such a field passes through an analyzer which splits text into tokens, lowercases it, and removes punctuation marks.
  • keyword – string field, used for exact search, aggregations, sorting. Original values are stored, not analyzed.
# display mapping
get shopping/_mapping

#create mapping for index
PUT shopping2
{
 "mappings": { } // your mappings here
}

#reindex after mapping change
POST _reindex
{
 "source" : { "index": "shopping1"}, 
 "dest": {"index": "shopping2"}
}

#mapping for runtime field (like calculated column in SQL)
PUT shopping2/_mapping
{
  "runtime": {
    "total": {
      "type": "double",
      "script": {
        "source": "emit(doc['unit_price'].value* doc['quantity'].value)"
      }
    }
  }
}

Microservices – notes from Les Jackson course

Here I have some short notes made while watching microservice course.

The original video:

Building ASP.NET API

Just create a dbContext with entities, then a repository which takes this dbContext. Add DTOs, AutoMapper and its profiles to map DTO<->Entities. Add a controller which uses the repository and AutoMapper to handle actions.

Dockerize ASP.NET application

Generally follow these instructions.

Example dockerfile:

# base on .net sdk image to build your app
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /source

# copy your csproj(s) and restore nuget packages
COPY *.sln .
COPY aspnetapp/*.csproj ./aspnetapp/
RUN dotnet restore

# copy everything else and build and publish app
COPY aspnetapp/. ./aspnetapp/
WORKDIR /source/aspnetapp
RUN dotnet publish -c release -o /app --no-restore

# final stage/image; here base on image without sdk to make it smaller
# just copy and run your .net library (API project)
FROM mcr.microsoft.com/dotnet/aspnet:6.0
WORKDIR /app
COPY --from=build /app ./
ENTRYPOINT ["dotnet", "aspnetapp.dll"]

Most used docker commands:

docker build -t dockerhubname/myappname . # builds image from dockerfile

docker run -p 8080:80 -d dockerhubname/myappname # runs a new container using the given image; after that you can hit your API on port 8080

docker stop containerId #stops running container

docker start containerId #starts existing stopped container

docker push dockerhubname/myappname #pushes image to docker hub

Kubernetes

Kubernetes (K8S) orchestrates Docker containers. There is a simpler option to achieve this – Docker Compose.

In our simple case we have a single cluster with a single node in it. In the node we have pods. Pods can run containers (again, in our case one pod runs one container, so one pod – one API project). Each pod has got its Cluster IP.

Node port – a service used in a dev env to test whether everything works. Not used on prod.

Load balancer – a service for load balancing. It will hit the pod with the Ingress Nginx container (proxy/gateway).

In K8S you can set things up in an imperative way (write commands) or a declarative way (write a config file).

Create myapp-depl.yaml to create a POD the declarative way. K8S uses a REST API under the hood to create and destroy services. In the 'spec' part of the yaml file you configure your pod (docker image name, how many replicas/instances).

In the same file you can later define a ClusterIP (protocol, port mappings) to communicate between PODs. While defining the ClusterIP you define a name. This name should be copied into your API config 'appsettings.{Production}.json' as you may need it for communication between API services. I.e. in appsettings.Development.json you can have 'CommandsServiceUrl: http://localhost/api/' and in appsettings.Production.json you can have 'CommandsServiceUrl: http://commands-clusterip-srv:80/api/'.

To enable Kubernetes, go to Docker Desktop settings and enable Kubernetes.

To see it running in cmd: kubectl version

To create a POD in cmd: kubectl apply -f myapp-depl.yaml

To check it in cmd: kubectl get deployments (or kubectl get pods)

You can list namespaces: kubectl get namespace

After a moment you can also check in Docker that there is a running container (cmd or desktop app).

Create myapp-np-srv.yaml to create a Node Port (to allow access from outside to a POD) and define the port mapping (internal node port – container; you don't define the external port clients use to reach your API).

To create Node Port in cmd: kubectl apply -f myapp-np-srv.yaml

To check it in cmd: kubectl get services

Synchronous Messaging

We use HTTP REST API in this course. There are two API services (Platforms and Commands). When PlatformService creates a new platform, we want to send this info to CommandsService. We create HttpCommandDataClient using HttpClient and IConfiguration. We implement a method SendPlatformToCommand(PlatformReadDto p). In ConfigureServices we register it: services.AddHttpClient<IHttpCommandDataClient, …>(). A sketch follows below.
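
A sketch of that client, following the course narrative (the exact endpoint path and log messages are assumptions):

using System;
using System.Net.Http;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;
using Microsoft.Extensions.Configuration;

public class HttpCommandDataClient : IHttpCommandDataClient
{
    private readonly HttpClient _httpClient;
    private readonly IConfiguration _configuration;

    public HttpCommandDataClient(HttpClient httpClient, IConfiguration configuration)
    {
        _httpClient = httpClient;
        _configuration = configuration;
    }

    public async Task SendPlatformToCommand(PlatformReadDto platform)
    {
        var content = new StringContent(
            JsonSerializer.Serialize(platform),
            Encoding.UTF8,
            "application/json");

        // the base URL comes from appsettings.{Environment}.json (see the K8S notes above)
        var response = await _httpClient.PostAsync(
            $"{_configuration["CommandsServiceUrl"]}platforms", content);

        Console.WriteLine(response.IsSuccessStatusCode
            ? "--> Sync POST to CommandsService was OK"
            : "--> Sync POST to CommandsService failed");
    }
}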

API Gateway

We use the ingress nginx container for Kubernetes. Go to its GitHub installation page and just execute the kubectl apply command. It will create a pod and a load balancer (but not in the default namespace).

You need to create a routing file for ingress-nginx to route to our API services: ingress-srv.yaml. Generally we map here a hostname (you can put any host name here and configure your machine's hosts file to redirect it to localhost) and specific paths (i.e. /api/platforms) to a specific ClusterIP and port. It means: if somebody hits this path, route him to the specific service on the defined ClusterIP.

Then kubectl apply -f ingress-srv.yaml

Persistent Volume Claim

A PVC is needed to put the db there, outside of a container which can be restarted (losing all its data). We create here local-pvc.yaml (define size and access mode), then kubectl apply -f local-pvc.yaml and check it with kubectl get pvc and kubectl get storageclass.

To create a secret in K8S: kubectl create secret generic mssql --from-literal=SA_PASSWORD="pwd". Then the secret name is 'mssql' and the value is 'pwd'.

MessageBroker – RabbitMQ

RabbitMQ exchanges:

  • Direct – based on routing key, unicast, to specific queue
  • Fanout – broadcast, to all queues
  • Topic
  • Header

To be continued:

https://youtu.be/DgVjEo3OGBI?t=26910

Angular – basics

Courses I learnt from:

Creating and communicating between components

Every created component must be registered in its module's declarations.

In the HTML template you can use:

  • string interpolation (one-way) {{ }} to display data in the component;
  • property binding [property]="value" to assign values;
  • event binding (click)="handleClicked($event)" to handle events;

Passing parameters from parent component to child.

In child use the @Input decorator: @Input() myProperty: any. In parent bind using: <child-component [myProperty]="myObjectToPass">.

Event handling

<button (click)="handleClick($event)" />

Passing data from child to parent

In child create an event and raise it at any moment.

@Output() myEvent = new EventEmitter();

this.myEvent.emit('any info here');

In parent handle that event.

<child-component (myEvent)="handleChildEvent($event)" />

Template variables – another way to access child component properties and methods.

In parent define the variable: <child-component #theChild />

And use it <span> {{ theChild.Name }} </span>

Exploring the Angular Template Syntax

*ngFor – use to repeat something in a loop:

<div *ngFor="let item of items"><child-component [theItem]="item" />

*ngIf – use to handle conditions, i.e. do not print something when there is no data, i.e. <div *ngIf="user?.name" />. It fully removes items from the HTML structure, not just hides them.

[hidden] – use to hide an element in the structure without fully removing it from the DOM, i.e. <div [hidden]="!user?.name" />

*ngSwitch – a typical switch, i.e. <div [ngSwitch]="user?.location"><span *ngSwitchCase="'city'">

[ngClass] and [ngStyle] – use to bind CSS.

<div [class.green]="user?.location === 'village'" /> <!-- applies css class 'green' if the condition is met. -->

<div [ngClass]="{green: user?.location === 'village', bold: user?.important === 'village'}" /> <!-- applies the green and bold classes when the conditions are met -->

<div [ngClass]="getCssClasses()" /> and in ts:

getCssClasses() { return anyCondition ? 'green bold' : ''; }

Creating reusable services

Create service

Create an injectable service: @Injectable() export class MyService { getItems() {} }

Add the service in the module as a provider to make it injectable for Angular.

Use the service in the ctor: constructor(private theService: MyService) {}

Wrap a 3rd party component into a service

Install a node module, i.e. npm install toastr --save. Import the styles and scripts in angular.json from this node_modules module path. Create a new service, declare a global variable toastr and wrap all its methods in the service. Register the new service as a provider and use your service.

Routing and Navigation

Use *-routing.module.ts to define routes, redirections, guards.

ActivatedRoute – inject it in the ctor to use i.e. the current route param: this.route.snapshot.params['id'] for a route like /my-route/:id

[routerLink] – use to navigate from html, i.e. <div [routerLink]="['/items', item.id]" routerLinkActive="active-link-style" [routerLinkActiveOptions]="{exact: true}">

Router – a service to navigate from code, i.e. this.router.navigate(['/items'])

RouteGuard – prevents a user from entering or leaving a page.

Create a service EventRouteActivator implementing CanActivate. The canActivate method must be implemented according to your logic: just use the router and navigate to an error page when the page is not allowed; return true when the user can access the page. Add the service to the providers. In routing add canActivate: [EventRouteActivator]. The same way you can use canDeactivate. Instead of services you can use functions in routing as well.

Resolver – use to load data earlier or do some prechecks before loading a component. Imagine you have a component which displays some items. Typically you could call some API in onInit to get this data. But you can use a resolver to do it even before initialization of the component. In routing define the resolver; it's just a service to be implemented which returns the data as an observable. It will execute the resolver service and put the result (items) in the route data. In onInit you can just read the already returned data. In consequence the component will not appear until the data comes.

I.e. in routing file:

//in routing
{path: "my-item/:itemId", resolve: {item: MyItemResolver}}

//in resolver injectable service, only one method to be implemented:
resolve(route: ActivatedRouteSnapshot) {
return this.itemsService.getEvent(route.params['itemId']);
}

//and then in component ts get resolved data:
ngOnInit() {
this.item = this.route.snapshot.data['item'];
//or
this.route.data.forEach((data) => this.item = data['item']);
}

Collecting and validating data with forms

Forms building blocks:

  • FormControl – corresponds to a single input, provides change tracking, validation
  • FormGroup – a collection of FormControls. Each FormControl is a property in the group (the name is the key). The group just aggregates controls and allows for example easier validation checks.
  • FormArray – similar to FormGroup, but an array

FormBuilder helps to build these blocks.

let address = new FormGroup({
    street: new FormControl(""),
    city: new FormControl(""),
    pinCode: new FormControl("")
});

let addressValue = address.value;
let streetValue = address.get("street").value;
let streetValue2 = address.controls["street"].value;
let addressIsValid = address.valid;

There are two types of forms in angular:

  • template-based – for simple scenarios, fully created in html
  • model-based (reactive) – logic in the component rather than in html; for complex scenarios, to avoid logic in html and make unit testing possible

Template-based form

Here we configure everything for the form in the HTML template. We use the ngForm directive. It creates for us a top-level FormGroup instance, instances of FormControl for each input with the ngModel directive, and FormGroup instances for each ngModelGroup directive.

ngModel – binds data in a form control to the model:

  • one-way binding (model to form) <input [ngModel]="itemName" name="itemName" id="itemName" type="text" />
  • two-way binding <input [(ngModel)]="itemName" name="itemName" id="itemName" type="text" />

ngSubmit – for submitting data in the form: <form #myForm="ngForm" (ngSubmit)="sendIt(myForm.value)">. The #myForm syntax creates a template variable; you can use it elsewhere in your template, i.e. myForm.form.valid, or in ts code via @ViewChild('myForm', null) theForm: NgForm.

Validation (template-based) – the form and its fields have a few useful properties: valid, pristine, dirty (if somebody typed something into the field), touched (when somebody entered and left the field), invalid.

<form #itemsForm="ngForm" (ngSubmit)="saveIt(itemsForm.value)">
<em *ngIf="itemsForm.controls.itemName?.invalid">Required</em>
<input [(ngModel)]="itemName" name="itemName" id="itemName" type="text" pattern="[A-Z]" required />

<button type="submit" [disabled]="itemsForm.invalid">Send</button>

Model-based form

Here we configure everything for the form in code (a new component ts file) and do some configuration in the html template. In the component ts add a FormGroup and FormControls in it. Then in html bind it, more or less like this:

myForm: FormGroup
let myItem = new FormControl(this.itemName, [Validators.required, Validators.pattern('[A-Z]')])
this.myForm = new FormGroup({itemName: myItem})
<form [formGroup]="myForm"><input formControlName="itemName">

You can set or patch values of a FormControl, FormGroup or FormArray. Using setValue you must update all values of a group; patchValue can update only some of them.

//set values
    let address= {
      city: "My City",
      street: "My Street",
      pincode: "00001",
    };
    this.reactiveForm.get("address").setValue(address); 

//patch value
    let address= {
      street: "New street",
     };
     this.reactiveForm.get("address").patchValue(address);

Custom validator – just a function which returns null if the control is valid or an error object when it is invalid, i.e. private restrictedWords(control: FormControl): {[key: string]: any} { return isValid ? null : {'restrictedWords': 'any error message'}; }.

You can subscribe to validation status changes:

//subscribe to the event which provides validation changes
this.myForm.statusChanges.subscribe(newStatus => {
    console.log(newStatus)
})

//you can disable emitting this event while setting value:
this.myForm.get("name").setValue("", { emitEvent: false });

//you can emit status for control, but not bubble it to the parent form: 
this.myForm.get("firstname").setValue("", { onlySelf: true });

You can subscribe to value changes:

this.myForm.get("firstname").valueChanges.subscribe(x => {
   console.log('firstname value changed')
   console.log(x)
})

Reusing components with Content Projection

Imagine a popup dialog component with different contents inside. We would like to reuse the dialog and pass different contents.

ng-content – use to replace content in some generic component. You can use selectors when you want to have multiple contents, i.e. <ng-content select="[ct-title]"></ng-content> and then in the specific content you need to have something like: <span ct-title>…

<!-- parent container with span content -->
<div class="my-container" [title]="item.name">
<my-content-component>
<span>{{item.description}}</span>
<span>any specific content here</span>
</my-content-component>

<!-- reusable my-content-component -->
<div class="importantContent">
<span>{{title}}</span>
<ng-content></ng-content> <!-- here the specific content will appear automatically -->

Displaying Data with Pipes

Use pipes to format data, i.e. {{title | uppercase}}, {{startDate | date:'short'}}, {{price | currency:'USD'}}

Custom pipe:

@Pipe({name: 'duration'})
export class DurationPipe implements PipeTransform {
transform(value: number): string {
... any logic to return string
}
}
// add it to module declarations
// use: {{myNumber | duration}}

Filtering and sorting

Do not use pipes for that, to avoid performance issues.

Filter – generally you should have in your component the fields: filterBy: string, items: Item[], visibleItems: Item[]. Create a filterItems() method:

filterItems(filter: string) {
if (filter == 'all') {
this.visibleItems = this.items.slice(0); // just copy the array
} else {
this.visibleItems = this.items.filter(i => i.name.toLocaleLowerCase() === filter);
}
}

Sorting – similar to filtering

//sort on your own in the component:
this.visibleItems = this.sortBy === 'name' ? this.visibleItems.sort(sortByNameAsc) : this.visibleItems.sort(sortByDateAsc)

//
function sortByNameAsc(i1: Item, i2: Item) {
 if (i1.name > i2.name) return 1;
...
}

ChangeDetectorRef – used to detect changes in the model and refresh the view.

//two strategies
//by default the view is refreshed when data has been modified
//OnPush refreshes only when a property has been assigned a new reference (new object); a mutation does not refresh
@Component({
  templateUrl: './customers.component.html',
  styleUrls: ['./customers.component.scss'],
  changeDetection: ChangeDetectionStrategy.OnPush,
})

//when using OnPush, you can force a refresh on demand
public constructor(private cdr: ChangeDetectorRef){}
public refresh(): void { this.cdr.detectChanges(); }

//there is an option to detach the view from change detection
this.cdr.detach()

//there is also an option to mark the component to be checked when the next change detection happens, often used in subscriptions
this.cdr.markForCheck();

Angular’s Dependency Injection

If you need to inject any class, you need to register it in providers and then pass it to the ctor.

InjectionToken – a key for the dependency injection system. You can use it to register an existing object in dependency injection (useValue).

//
//in service
export let TOASTR_TOKEN = new InjectionToken<Toastr>('toastr') //create an interface Toastr for this purpose

//in app module
//create toastr:
declare let toastr: Toastr
//and add it to providers:
{provide: TOASTR_TOKEN, useValue: toastr}

//then inject TOASTR_TOKEN where you need it in a component
import {TOASTR_TOKEN, Toastr} from ....
constructor(@Inject(TOASTR_TOKEN) private toastr: Toastr)

UseClass – a specific way to register providers. Decide which implementation should be passed when somebody injects a service, i.e. in providers do {provide: Logger, useClass: FileLogger}

UseExisting

UseFactory

Directives and Advanced Components

Use directives to hold logic which you don't want to tie to the component where you use it.

<button my-test="any message">Do</button><!--directive usage-->

@Directive({
selector: '[my-test]'
})
export class MyTestDirective implements OnInit {
private el: HTMLElement;
@Input('my-test') myTest: string; //receives the directive parameter ('any message')
constructor(ref: ElementRef) {
this.el = ref.nativeElement; //gets the element to which the directive is bound
}
ngOnInit() {
//any logic
console.log(this.myTest);
}
}

You can use ViewChild to reference an HTML element in component code:

<!--in html template-->
<div #myDiv>something</div>

//then in ts
@ViewChild('myDiv') el: ElementRef;
//there is also @ViewChildren and @ContentChild, @ContentChildren (for ng-content scenario)

Add a simple setting on a component:

<!-- in html template -->
<div logOnClick="true">anything</div>

//then use it in ts
@Input() logOnClick: string;
...
if (this.logOnClick) {
console.log('hello');
}

More Components and Custom Validators

To validate multiple related fields you need to write your own directive. Imagine you have a form with a physical location (state, city, street) or an online location (url) – and one of these must be filled.

@Directive({
selector: '[validateLocation]',
providers: [{provide: NG_VALIDATORS, useExisting: LocationValidator, multi: true}] //register our validator in the validators registry
})
export class LocationValidator implements Validator {
validate(formGroup: FormGroup): {[key: string]: any} {
//find the appropriate controls in the template
let addressControl = formGroup.controls['address'];
let countryControl = formGroup.controls['country'];
let cityControl = formGroup.controls['city'];
let onlineUrlControl = (<FormGroup>formGroup.root).controls['onlineUrl'];

//check their values
if ((onlineUrlControl && onlineUrlControl.value) || ...) {
return null; //validation succeeded
}
return {validateLocation: false} //validation fails
}
}

<!-- then use it in the html template-->
<div ngModelGroup="location" #locationGroup="ngModelGroup" validateLocation>

Communicating with the Server, HTTP, Observables, Rx

Promise – represents single value in the future.

Observable – represents zero or more values now or in the future.

RxJS Pipe operator

We use observables to work with data streams. It is a concept from the RxJS module. Usually we use the pipe operator to handle observable data.

There is nice tutorial about all of them: https://www.tektutorialshub.com/angular-tutorial/#observable-in-angular

Here are some remarks about a few of them.

switchMap

Projects each value of the source observable to another result observable. When it subscribes to the new result observable it unsubscribes from the source one. It can be used to read the ActivatedRoute.paramMap parameter only once and not listen to its subsequent changes. Similarly it can be used to handle the valueChanges event.

ngOnInit() {

    this.activatedRoute.paramMap
      .pipe(
        switchMap((params: ParamMap) => {
          return this.service.getProduct(params.get('id'))
        }
        ))
      .subscribe((product: Product) => this.product = product);
}

mergeMap – from source observable items creates result observables. Does not cancel or unsubscribe anything, just combines all results without any ordering. Can be used to merge multiple HTTP calls. Imagine you have a first call which returns an observable of main categories, and for each category you want to ask for all subcategories and merge them into one observable. You can use forkJoin to merge multiple inner observables.

concatMap – like mergeMap, but maintains the order of the result observables.

exhaustMap – from each source observable item creates a new observable. But! While receiving items from the new observable it ignores values from the source observable. When the new observable has completed, it again considers the next value from the source observable. Use it to ignore duplicated events, i.e. the user clicks a button twice, but the second click will be ignored if the handling of the first one did not finish yet.

scan – for each source observable item applies some accumulating function, and its result is an input for handling the next item. For example, for each item it can log the sum of all items which came so far.

reduce – like scan, but returns only the last result, when the observable completes.

RxJS Subject

A Subject is an observer and observable in one. It can publish values and subscribe to them.

BehaviorSubject – when publishing values it stores the last one. When the next subscriber subscribes, it immediately gets the last published value.

ReplaySubject – like BehaviorSubject, but a new subscriber gets all published items, not only the last one.

AsyncSubject – like BehaviorSubject, but publishes the last value only when the observable completes.

AZ-900 preparation notes

Cloud Concepts

Payment options:

  • CapEx (Capital Expenditure) – paying in advance for resources (like server, storage). Similar to an on-premise environment, prepurchasing. It's a constant value each period. Not flexible.
  • OpEx (Operational Expenditure) – paying for consumed resources as we use them. Payment depends on consumption, not constant.

Categories of cloud services:

Shared responsibility model. Some responsibility is on your side and some on the cloud side. There are many layers:

  • Storage, Network, Server, VM (in IaaS Azure is responsible for these layers, but the rest is on you)
  • OS, Runtime (.NET, Java; including serverless) (in PaaS Azure is responsible for these layers as well),
  • App, Data.
  • On-premise. You are responsible for all these layers.
  • IaaS. Layers from Storage to VM are in the cloud. The rest is yours (i.e. picking the OS: win/linux?).
  • PaaS. Layers from Storage to Runtime are in the cloud. On your side: the App and Data. Serverless is also a kind of PaaS (an event happens [store into db, publish an event into the queue] which triggers an action in the cloud).
  • SaaS. All layers in the cloud. So the software is there, like Microsoft 365. You have no responsibility.

Types of Cloud Computing

  • public (i.e. Azure)
  • private
  • hybrid (use public and private)

Reliability and predictability

  • Reliability – autohealing after failure; autoscaling when traffic peaks;
  • Predictability –

Describe Azure Architecture and Services

Regions and Region Pairs

Region definition: a group of data centers where data travels with <2ms latency between them. Considerations when choosing a region:

  • performance (pick location close to your customers)
  • law regulations
  • resiliency (use different regions to avoid regional outages)

Region pairs are a group of regions within the same political boundaries where data is replicated. It's for disaster recovery – pairs must be far away for protection reasons, but close enough to stay under the same law regulations. See the cross-region replication table: https://docs.microsoft.com/en-us/azure/availability-zones/cross-region-replication-azure

There are different environments. There is the default Azure Cloud (commercial), but also AzUsGov, AzChina, AzGermany (it's about compliance).

Availability zones

Data centers need power, cooling, networking. (Almost) every region has 3 availability zones. A zone is a separate building provided with resources (power, cooling, …). It's for resiliency concerns, to be protected from single-building (data center) issues.

Some services are 'zone redundant' – they redistribute resources through all zones. 'Zonal' services are just in a single zone (1, 2 or 3), for example a single VM.

Notice there are 3 availability zones per region to pick, but it does not mean there are 3 buildings in the region (in reality there are many more).

Resource group

In a single resource group you can have multiple regions and multiple resource types (like VM, network interface, disk, public IP). Resource groups cannot be nested. You can move resources between groups. A resource group is not a boundary for services. Groups are for organizational purposes. In a group you should have resources which share a lifecycle (like VM + disk). You can create policies on a group (like use only a specific region, create only specific services, budget limit).

Subscriptions

Agreement between user and MS with certain billing model.

  • one subscription trusts only one tenant in Azure Active Directory (Azure AD tenant = directory)
  • you can apply budgets, roles, policies. These settings are inherited by resource groups
  • there can be many resource groups in one subscription; a resource group lives in a subscription
  • there are plenty of limits in a subscription (like max 980 resource groups). For a small company one subscription is enough. For example you can have Test and Prod subscriptions.

Management groups

It helps to manage a group of subscriptions. Management groups can be hierarchical (root -> dev/prod -> pl/de/uk). To a management group you can apply:

  • budget,
  • policy,
  • roles based access.

Resource manager

ARM – a RESTful endpoint, an interaction layer. Whatever service you are using (via portal or cli), it goes through ARM.

Notice in the portal there is a cloud shell (a web command line; you don't need to install anything to use it).

You can create a specification in a JSON file (an ARM JSON template) and send it through ARM to create a resource. Looking at a resource via the portal you can export such a JSON template, which you can use to recreate the resource.

A Bicep file is a more human-friendly file which is translated to ARM JSON.

Azure Arc

Arc extends cloud control via ARM to other (non-Azure) clouds or on-premise services. Means you can use ARM to manage on-premise, AWS, … clouds (?).

Resources required by VM

A VM should live in a resource group of an existing subscription. For a VM we need:

  • managed disk (to store the OS)
  • data disks (optionally)
  • vNIC (Virtual Network Interface) to connect to a VirtualNetwork (with up to 2 subnets)
  • Public IP (bind it to the IP configuration)
  • Network Security Group (preferred at subnet level)

Core computing resources

IAAS like resources

VM Virtual Machine – use it when you need full access to the OS (to configure it in a specific way). There are different SKUs (specifications based on the goal – memory optimised/compute optimised/…).

VMSS (VM Scale Set) – VMs from a template (which OS to use, VM config (how many CPUs, RAM), scaling (min-max, how/when to add/remove CPUs/RAM)).

Azure Batch – submit the job, Azure will configure the VM for you.

PAAS like resources

ACI Azure Container Instances – containers; you need to pick a container image (public or your own) from a registry. You can create a container group. A VM virtualizes hardware, but a container virtualizes software.

AKS Azure Kubernetes Service – container orchestration. In the background it creates a VMSS for the nodes.

Azure App Services – when you have an app to host (like an API, web app, mobile app); it's also a kind of container. You just pick the runtime and OS, and that's it – push your code. In the background there is still some VM and you pay for it.

Function App (serverless) – runs code you have written when something happens (on schedule, REST API call, …). Stateless code.

Logic App (serverless) – there is some state, compared to a Function App. You don't really need to be a developer to use it; there is a graphical designer.

AVD Azure Virtual Desktop – provides a desktop experience and publishes particular applications.

Core networking resources

A Virtual Network resource exists in a single subscription and region (it cannot span regions) but spans Availability Zones. There must be at least one set of IPv4 addresses. Optionally you can add a range of IPv6 addresses.

A VNet can be split into subnets – portions of the defined IP space (e.g. 192.168.1.0/24, which gives 256-5 IP addresses; Azure internally needs 5 addresses for technical purposes: first IP, last IP, DNS x2, gateway). Then you can assign resources (like a VM) to a specific subnet.

Public IP resource – add it to expose a resource publicly. Usually we attach a load balancer to the public IP, less often a VM.

Many VNets

If you have many VNets in many different regions, always use unique IP ranges so that they do not overlap across different VNets. Then you can peer them to allow connectivity between VNets.

To connect an on-premise environment and a VNet in Azure there are two main options:

  • create a VPN over the internet; in Azure you have to add a VPN gateway and on premise you need to have a VPN server. There are two types of VPN gateways: 'policy based' – for legacy solutions, and 'route based' – the default modern solution.
  • there are MeetMe points with ExpressRoute in the Azure network; it's a kind of private connection. Configuring ExpressRoute you decide what you want to connect with.

Resources outside of a VNet – like a Storage account.

A Storage account can be i.e. a database. How to communicate with it from a VNet:

  • In the subnet you have to enable a ServiceEndpoint for Storage. Then the storage knows the subnet and the subnet knows the storage. The StorageAccount has a firewall which can allow connections from the defined subnet.
  • A Storage account can be connected as a PrivateEndpoint. The subnet assigns a single IP from its range to the storage (without using a Public IP).

Public and Private Endpoint

  • public – internet routable. There is often a firewall. You can make a ServiceEndpoint and make the subnet a known entity to a public endpoint to allow it on the firewall.
  • private – it's an IP in the subnet. It's a connection between a resource and a specific subnet.

Security in VN

  • NSG – network security group, firewall-like rules, deny/allow specific IP/protocol.
  • Az Firewall – higher-layer rules than NSG, filtering out specific hostnames, applications
  • DDoS protection – the basic one is free but works only for huge attacks; if you want something more customizable you need to go to the standard plan.

Storage Account resources

It lives in a specific region. It would be nice to have the storage account in the same region as the compute item which uses it.

Performance options:

  • standard – all redundancy options (LRS, ZRS, GRS, GZRS),
  • premium – fast, but no redundancy except LRS.

Redundancy:

  • LRS – 3 copies of data in the same building (availability zone)
  • ZRS – 3 copies of data in 3 different buildings (availability zones)
  • GRS – 3 copies of data in the same AZ and another 3 copies in the paired region (another AZ); not available in premium performance
  • GZRS – 3 copies in three different AZs in the same region + another 3 in the paired region; not available in premium performance.

Service types:

  • Blob – an unstructured piece of data. The base structure for a data lake.
    • block – any type of unstructured data, lives in a container.
    • page. There are disks with different types (i.e. premium ssd, standard hdd, ultradisk) and sizes; sizes can be changed during use.
    • append (great for logs)
  • Files – SMB/NFS protocols, sharing files
  • Queues – small messages, for event-driven logic, FIFO
  • Tables – key-value pairs, schemaless

Block blobs have access tiers (hot – constantly needed; cool – not often needed; archive – not online, cheapest). Lifecycle of data – moving between tiers. In hot we pay less per transaction but more for space. On the other end, in archive we pay less for space but more for transactions.

Database Resources

Based on MS SQL Server offerings:

  • Azure SQL DB (PaaS service, multitenant).
  • Azure SQL MI (Managed Instance; works in your VNet and has more advanced features – use it if you are moving from on-premises and rely on advanced features; there is a webpage comparing features between MI and the PaaS service).

Based on open source offerings:

  • MySQL
  • PostgreSQL
  • MariaDB

CosmosDB (NoSQL, non-relational, multiple data models):

  • born in the cloud
  • multi-model (documents-mongodb, columns-cassandra, tables, graph-gremlin)
  • multi-consistency (how data is replicated between instances: strong/ session/ eventual)

Data movement and migration options:

Online tools:

  • SMB file share on Windows Server -> in Azure it’s Azure Files. You can use Azure File Sync with a SyncGroup. You can do cloud tiering (move files not used recently from the on-premises server to Azure).
  • Azure Storage Explorer – interactive UI for browsing and moving data.
  • AzCopy – command-line tool to sync/copy data between on-premises and Azure, or cloud to cloud; can be automated.
  • Azure Migrate – move a VM with a DB.

Offline tools (for huge amount of data):

  • Azure Data Box:
    • disk – they send you a physical disk, you copy the data and send it back
    • box – more or less the same, with larger capacity
    • box heavy – …

Azure Market Place

Products/offerings from MS or 3rd parties for easier deployment.

Products like a VM with specific software preinstalled – a specific VPN appliance, a LAMP stack, etc.

Sometimes the charging goes through Azure, sometimes separately through the vendor.

Azure IoT

IoT Hub – a service (kind of PaaS solution) where devices can connect:

  • Telemetry – devices sending metrics to cloud
  • Devices uploading files
  • Cloud sends to device (firmware update, command)

DeviceTwin – a representation of the device in the cloud. An application in the cloud interacts with this twin using the SDK rather than with the physical device.

You need to write the code which implements your solution.

IoT Central – kind of SaaS solution, uses IoT Hub under the hood. Software out of the box: when you don’t want to write your own app and play with the SDK, you just use ready dashboards (similar in spirit to Logic Apps).

  • Dashboards/apps ready to use
  • Device templates – specific to some common industries
  • Simulates devices (for testing)
  • Handles common industry scenarios (customizable)
  • Rules can be defined (signal -> condition -> actions: SMS/email/webhook). Example: if the fridge temperature is too high, send an SMS.

Azure Sphere – focused on end-to-end security. It’s certificate based, not password based. There are 3 components:

  • AzSphere Microcontroller Unit (MCU).
  • Linux based OS
  • AS3 – AzSphere Security Service (making sure there is no malicious software, …)

Big data and Analytics Services

ETL:

  • extract (get raw data from any source; DataLake – a cheap storage type for unstructured raw data),
  • transform (cleaning, deduplicating, wrangling – transform into new format),
  • load (into a sink – SqlDb, CosmosDb) to do analysis.

Az Data Factory – does ETL (from raw data to the needed format) and orchestrates all the steps.

HDInsight – managed open-source analytics services (examples below), used for transforming data:

  • Hadoop – dividing tasks into smaller parts, disk based.
  • Storm – real time processing
  • Spark – batch jobs, data transformations, memory based.
  • Kafka – big data streaming
  • Hive LLAP – query data live from the store
  • HBase – noSql storage

Az Databricks – a solution built on Apache Spark.

Az Synapse Analytics – a workspace to manage ETL, business intelligence, etc.; a complete analytics solution (?).

AI Services

Training a model using existing data.

Az Machine Learning – a platform for predictions based on ‘historical’ data provided beforehand. Allows total control for data scientists. The deployed model is exposed via an API and an app can communicate with it.

Az Cognitive Services – prebuilt models, easy to spin up, not much coding or ML knowledge needed. Available services:

  • language
  • speech
  • vision

Az Bot Service – a kind of virtual agent to communicate with users. You need to build a knowledge base.

Serverless Technologies

Consumption based. Event driven.

Az Functions – runs a piece of code in a supported stack (.NET, Node.js, Java, Python). It’s stateless; Durable Functions can keep some state.

Az Logic Apps – no/low code (you don’t have your own code to run). There are connectors between actions and a visual designer. Connectors exist for e.g. Outlook, FTP, SQL Server, Az Storage. There are templates for specific scenarios.

DevOps Technologies

Az DevOps components:

  • Repos – place to store the code, Git
  • Boards – tracking project state, e.g. a Kanban board (JIRA alternative)
  • Pipelines – CI (integrating, testing code) / CD (deploying code)
  • Artifacts – compiled packages/images

GitHub – a better alternative to Az DevOps; MS bought GitHub and no longer invests much in Az DevOps:

  • Repositories – rich features here, code analysis, etc.
  • Board – JIRA alternative
  • Actions – CI/CD and much more
  • Projects – boards, weak today, no artifacts yet.

Today you should probably mix GitHub with AzDevops (repo in GH, board in AzDevOps).

Az DevTest Labs – environments for testing builds in: create, run the tests, then destroy.


Az Management Solutions

Interactions with AZ: Azure <- Az Resource Manager (ARM) <- tools <- user

Tools:

  • web portal
  • mobile app
  • Az PowerShell Module (e.g. Get-AzVM)
  • Az CLI [bash] (e.g. az vm list --output table; see the examples below)
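
For instance, after az login you can list resources like this:

    az vm list --output table
    az group list --output table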

Az Advisor

A free service with recommendations (cost, security, reliability, performance, …). Alerts can be created. There is a general score showing how good your solutions are.

Az Resource Manager (ARM) Templates

A declarative (describes the final required state of the resources) JSON template. A template can be deployed to Azure (e.g. via the CLI). It’s idempotent, so if you deploy the same template many times, the final state will be the same.

There is also the Bicep format – a human-friendly, TypeScript-like syntax, also declarative. It is transpiled to JSON templates behind the scenes.

There is also Terraform (declarative templates for different clouds, not only Azure). Deployment examples below.
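
Deploying a template from the CLI could look like this (file names are made up; newer CLI versions accept a Bicep file directly and transpile it):

    az deployment group create --resource-group my-rg --template-file template.json
    az deployment group create --resource-group my-rg --template-file main.bicep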

Az Monitor

Metrics are kept in a time-series database; there are metrics per resource.

Logs you configure in Diagnostic Settings. You can store logs in Az Storage (just to keep them), send them to Event Hub (to pass them to an external analytics tool), or send them to a Log Analytics Workspace.

Alert Rules – you can use logs and metrics as input. You can define Action Rules to react (send an SMS, email, webhook, or trigger a Logic App).

Az Service Health

In the portal it’s hidden under the ‘Help + support’ menu item. It displays recently found issues, planned maintenance, security advisories, and health advisories. You can create health alerts.


General security and network security

Ms Defender for Cloud (previously: Az Security Center)

There is an overall security score/chart, and there are recommendations.

You can add compliance policies (not only use the standard Az one).

There are defender plans (additionally paid services) to enhance security.

Key Vault

An app may need a secret (password, token, …).

Key Vault supports:

  • secret (something you can read and write, e.g. a password)
  • key (generate or import; you cannot get it out of Key Vault)
  • certificate (lifecycle management, distribution)

There is role-based access to Key Vault secrets. For example, an App Service resource can have the right to read some secrets from the Key Vault.
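
A minimal sketch, with hypothetical names:

    az keyvault create --name my-kv --resource-group my-rg --location westeurope
    az keyvault secret set --vault-name my-kv --name DbPassword --value "s3cr3t"
    az keyvault secret show --vault-name my-kv --name DbPassword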

Az Sentinel

Sentinel is a service to detect security issues and automatically react to them. Sentinel sits on top of a Log Analytics Workspace. There are two subsystems:

  • SIEM – Security Information and Event Management. Detecting and looking into the problem. You can use many external connectors to collect information. There is a list of incidents.
  • SOAR – Security Orchestration, Automation and Response. Automatically responds to an event.

Az Dedicated Hosts

It’s a concept for isolation, for when you want to avoid sharing hardware with other companies’ VMs. You create a host group and put dedicated hosts in it. On a host you can create virtual machines. Hosts have different types. With a dedicated host you can control the maintenance window a little. On a dedicated host you can place many VMs (big or small), but all of them must be of the same SKU.

Defense in Depth

There are security layers (even if one is compromised, the next still works). You need to care about specific layers depending on your solution (IaaS, PaaS, SaaS, cloud, on-premises). There is MS Defender to help with security.

Security layers:

  • data
  • application
  • compute (i.e. vm)
  • network (allow only specific IP)
  • perimeter (the network edge, e.g. DDoS protection, firewall)
  • identity + access
  • physical security

Security aspects (CIA):

  • confidentiality
  • integrity
  • availability

Zero Trust

Zero trust principles:

  • Verify explicitly (revalidate every request, checking whether it’s allowed)
  • Least privilege (grant the least rights possible)
  • Assume breach (assume the environment may already be compromised)

Components:

  • identity (passwordless, single sign-on, 2FA)
  • endpoint (user’s laptop, IoT device, equipment)
  • network (where the request is coming from, encrypting communication)
  • context (context of the request; understand the overall risk of the request; conditional access)

Network Security Groups NSG

A VNet exists in a specific region (cannot span regions). A VNet can be divided into subnets.

An NSG must be in the same subscription as the VNet. It has the following properties: rules with names, priority, IP/port, protocol, and an action (allow/deny).

Thanks to an NSG you can define granular rules about network traffic (what to allow). By default all inbound traffic from the internet is denied. They work as a kind of firewall rules. An NSG has to be linked to the subnet where the rules are applied, so you can restrict traffic between subnets, from a peered network, or from an on-premises network. NSG understands the TCP/IP layer. A CLI sketch follows below.
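
A sketch of an NSG with one inbound rule, linked to a subnet (all names are hypothetical):

    az network nsg create --resource-group my-rg --name my-nsg
    az network nsg rule create --resource-group my-rg --nsg-name my-nsg \
        --name allow-https --priority 100 --direction Inbound --access Allow \
        --protocol Tcp --destination-port-ranges 443
    az network vnet subnet update --resource-group my-rg --vnet-name my-vnet \
        --name my-subnet --network-security-group my-nsg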

Az firewall

Az Firewall is deployed in its own subnet (AzureFirewallSubnet). Traffic is routed through this firewall. You can set application rules (URLs) and network rules (IP addresses). Az Firewall scales automatically to handle all the traffic.

Az DDoS Protection

DDoS types of attacks:

  • volumetric (extremely high traffic)
  • protocol (malformed packets, ping of death)
  • application (high traffic designed to slow down reads/writes, …)

Protections:

  • basic DDoS Protection (huge-scale attacks only; you don’t get insights here). It’s free.
  • standard DDoS Plan – you create it and link it to your VNets. It’s paid. There is rich reporting, metrics, adaptive tuning, rapid response, and credit (for the scaling needed when an attack cannot be handled). You can use the plan only for resources living in a VNet.

Azure Identity

Authentication and Authorization

Authentication (AuthN) – proving that a user/service really is who they say they are. Factors: what we know, what we are, what we have. Example protocols: OIDC, SAML, WS-FED.

Authorization (AuthZ) – defines what a user/service is allowed to do (roles). Example protocol: OAuth 2.0.

Each resource (e.g. a VM) has Access Control with 3 generic roles (Owner – can do everything; Contributor – like Owner, but cannot change privileges; Reader – cannot change anything).

Az Active Directory (AD)

IdP – identity provider for the cloud. There are no group policies or hierarchy in the cloud.

Different apps/services just trust Az AD (office365, …). There can be single-sign-on experience.

AAD Connect – synchronizes identities from your local, on-premises AD to Azure AD.

Conditional Access, MFA, SSO

Conditional access – works through policies: conditions to be met by the user/device (Az AD -> Security -> Conditional Access in the portal menu). Then you can grant/block access.

MFA – multi-factor authentication (two or more of: what we know / what we are / what we have). By default you have access to the MFA service through the Authenticator app. If you buy a premium plan, you can configure it and use other MFA methods.

SSO – single sign-on – I authenticate once to Az AD; when I access other apps, I don’t get prompted again. The other apps trust Az AD.

Role Based Access Control RBAC

Role assignment – a certain identity gets a certain role (a group of actions/permissions) assigned at a certain scope (management group, subscription, resource group, resource). Role assignments are inherited, so if you set a role on a subscription, all resources in it will inherit it. A CLI example follows the list of generic roles below.

Generic roles:

  • owner – can do everything
  • contributor – can do everything, but cannot grant access
  • reader – can see, but not edit
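
Assigning the Reader role at resource-group scope could look like this (the identity and IDs are placeholders):

    az role assignment create --assignee user@example.com --role Reader \
        --scope /subscriptions/<subscription-id>/resourceGroups/my-rg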

Resource Locks

A resource lock can be applied at different levels (resource, resource group, subscription). It’s inherited, like role assignments. To apply a resource lock you need to be an owner or administrator of the resource. To delete a locked resource, you need to remove the lock first and only then can you delete it.

Lock types:

  • cannot delete – you can modify, but cannot delete
  • read only – you can neither modify nor delete (CLI example below)
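
Creating a delete lock on a resource group, as a sketch (names made up):

    az lock create --name no-delete --lock-type CanNotDelete --resource-group my-rg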

Tags

A tag is metadata (a key/value pair) that can be applied to a subscription, resource group, or resource. It’s not inherited, but you can configure a policy to enforce copying tags when creating resources. You can use them for any reason, for example to mark business unit, budget, project, environment, OS, … Later you can filter resources by tags; a CLI example follows below.
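
E.g. tagging at creation time and filtering later (all values are hypothetical):

    az group create --name my-rg --location westeurope \
        --tags environment=dev owner=team-a project=demo
    az resource list --tag environment=dev --output table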

Az Policy

Policy lets me define the things I require (rules + actions). It can be applied at different levels (management group, subscription, …) and it’s inherited. An initiative is a set of policies. A policy defines my guard rails: there is some condition, and if it’s met, an effect is applied.

Role – defines who can do something. Policy – defines what can be done. Initiative – a group of policies.

Effects:

  • audit (do not deny when the condition is not met; instead record the percentage of non-compliant resources, for information)
  • deny (deny the action if the condition is not met)
  • append/modify (e.g. add a tag)
  • ifNotExist

For example, you can define which SKUs (condition) can be used when creating a storage resource, and if somebody uses a different one (condition not met), just deny the creation (effect). You could assign such a policy to a subscription; a CLI sketch follows below.
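
Assigning an existing policy definition to a subscription could look roughly like this (the definition and subscription IDs are placeholders):

    az policy assignment create --name allowed-storage-skus \
        --policy <policy-definition-name-or-id> \
        --scope /subscriptions/<subscription-id>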

Governance hierarchy constructs

Management hierarchy: there is a top/root management group where the hierarchy of management groups starts – up to 6 levels. Below these groups you can have subscriptions. In subscriptions there can be resource groups with resources. You cannot nest subscriptions in subscriptions (and the same goes for resource groups).

Mngmnt group – any group you create for organizational purposes, which helps you organize all resources (e.g. prod, test, poland, europe, HrDepartment, …).

With these 3 constructs (management group, resource group, subscription) you can do common things:

  • use role-based access control, assigned to a scope (who can do certain actions)
  • assign a policy to a scope (what actions can be done)
  • set a budget – $ or %, actual or forecast (how much can be spent)
  • use inheritance (the topmost scope has loose settings and each group down the hierarchy gets more and more strict)

Az Blueprints

When creating a resource group we should have the following artifacts:

  • arm template
  • RBAC
  • Policy

We can define a blueprint (a kind of definition) from these items and assign this blueprint to a subscription.

It can be defined whether something can be changed during assignment or whether it’s locked.

A blueprint can say: assign some rights, add some tags, do not lock, etc.

Cloud Adoption Framework

A set of best practices for working with the cloud. It’s an MS webpage which helps with getting started.

Privacy and compliance resources

Important documents:

  • MS Privacy Statement – how MS collects and uses data.
  • Online Services Terms – the key agreement between MS and the customer.
  • Data Protection Addendum – data security, data retention and deletion.

Trust Center – hub to get information about: security, privacy, compliance.

Sovereign Regions – there are a few totally separated Azure clouds (main/public, US Gov, China, Germany).

Cost management and SLA

Factors that affect cost

There are different types of resources with different costs. There are different SKUs and tiers; location also influences cost.

Meters:

  • exists (if a load balancer/public IP exists – you are paying)
  • running (if a VM is running – you are paying)
  • instances (how many, autoscaling)
  • work (e.g. serverless – Az Functions, Az Logic Apps; you pay when work is being done)
  • storage (db; based on data used or data provisioned)
  • interactions (transactions against a resource)
  • licenses (Windows licenses, Oracle licenses)

Factors to reduce cost

How to save money:

  • Use autoscale to run as few instances as possible.
  • Use serverless to pay only for work done.
  • Pick correct SKU (cpu/memory)
  • Deallocate (VM have autoshutdown option)
  • Delete resources when not required
  • Choose correct tier, i.e. regarding storage pick one of hot/cool/archive
  • Tag VMs with owner/project to know who is responsible for a specific resource and whether you can delete it.
  • Use Az Advisor – there are cost optimization ideas.
  • Az Reservation – a commitment that over the next 1-3 years you will use a specific amount of resources. It gives you a huge discount (30-60%), but even if you use less, you still pay the agreed amount.
  • Use already bought licenses (i.e. Windows, SQL server) in the cloud, it’s called ‘Az Hybrid Benefit‘.
  • Az Spare Capacity – you can use this if your job can be stopped and resumed later. You can use cheap spare capacity up to the moment its price rises (because of demand). When creating this you specify the max price you want to pay. The cost can depend on the region.

Pricing and TCO Calculators

There is a pricing calculator where you can pick any services you plan to use (e.g. VMs, disks, licenses) and it will calculate the total cost. It just helps to estimate the cost.

TCO – total cost of ownership. The TCO calculator estimates how much it costs to run your on-premises environment and how much you could save by moving to the cloud.

Az Cost Management

Cost analysis – shows how much money you have spent so far plus a prognosis, for the selected subscription. It shows which types of services cost you the most. There is a nice, flexible chart.

Budgets – let you define an amount ($) and fire an action when this value is spent or the forecast reaches it. For example: send an email if you reach $200 per month.

Az Service Level Agreement

SLA – the commitment from the cloud provider regarding availability.

SLA levels:

  • 99% – up to 1.68 h of downtime possible per week
  • 99.9% – 10.1 minutes of downtime per week (for a VM to reach this level you need just one VM instance, but with a premium SSD or Ultra Disk)
  • 99.95% – 5 minutes (for a VM: at least two VMs deployed in the same Availability Set)
  • 99.99% – 1.01 minutes of downtime (for a VM: at least two VMs deployed across at least two Availability Zones in the same region)
  • 99.999% – 6 seconds of downtime a week

There is Az Status board where you can take a look whether service was affected (http://status.azure.com/status).

Composite SLA – when you have two separate resources, each with an SLA of 99.9%, the composite SLA will be lower, because the two services can be down at different points in time.
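
Worked example: 0.999 × 0.999 ≈ 0.998, so two chained 99.9% services give a composite SLA of about 99.8% – lower than either service alone.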

If service is free, there is no SLA.

Az Service Lifecycle

Lifecycle stages (when new feature is added to Az):

  • Internal
  • Private Preview – limited set of customers
  • Public Preview – open
  • Generally Available (GA) – has SLA and support

There is an Azure blog with info about new features you can try in preview and report feedback on.

_https://www.youtube.com/watch?v=pY0LnKiDwRA&list=PLlVtbbG169nED0_vMEniWBQjSoxTsBYS3 62

_https://www.youtube.com/watch?v=tQp1YkB2Tgs 2:45:00