## Terraform AWS Cloud Control Provider Reaches General Availability
https://www.infoq.com/news/2024/06/hashicorp-aws-cloud-control/

AWS Cloud Control Terraform Provider Enables Quicker Access to AWS Features
Jun 03, 2024
HashiCorp has moved the AWS Cloud Control (AWSCC) provider to general availability. The AWSCC provider is automatically generated from the Cloud Control API published by AWS, meaning new AWS features can be supported in Terraform as soon as they are released. Originally released in 2021 as a tech preview, the move to version 1.0 adds several new features, including sample configurations and improved schema-level documentation.

AWSCC is built on top of the AWS Cloud Control API. The Cloud Control API provides CRUDL (create, read, update, delete, and list) operations to use with AWS cloud resources. Any resource type published to the CloudFormation Public Registry has a standard JSON schema that can be used with this API.
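
For illustration (not from the article), the same CRUDL operations can be exercised directly with the AWS CLI's `cloudcontrol` commands; the bucket name below is a placeholder:

```
# List all resources of a given CloudFormation-registered type
aws cloudcontrol list-resources --type-name AWS::S3::Bucket
# Read one resource's current state by its identifier
aws cloudcontrol get-resource --type-name AWS::S3::Bucket --identifier my-bucket
```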

As part of this release, there are now over 270 resources with sample configurations. For example, awscc_ec2_key_pair allows for specifying a key pair to use with an EC2 instance. An existing key pair can be specified in the PublicKeyMaterial property; omitting that property will generate a new key pair.

resource "awscc_ec2_key_pair" "example" {
  key_name            = "example"
  public_key_material = ""

  tags = [{
    key   = "Modified By"
    value = "AWSCC"
  }]
}

In addition, more than 75 resources now have improved attribute-level documentation. The resources have detailed descriptions of how to use the attributes within the resource-accepted values. This includes context about the attribute, how it's used, and the expected values for each attribute.

The AWSCC is not meant as a replacement for the standard AWS provider. As noted by Aurora Chun, Product Marketing Manager at HashiCorp, "using the AWSCC and AWS providers together equips developers with a large catalog of resources across established and new AWS services." The providers can be used in conjunction to provision resources:

```
# Use the AWS provider to provision an S3 bucket
resource "aws_s3_bucket" "example" {
  bucket_prefix = "example"
}

# Use the AWSCC provider to provision an Amazon Personalize dataset
resource "awscc_personalize_dataset" "interactions" {
  ...

  dataset_import_job = {
    data_source = {
      data_location = aws_s3_bucket.interactions_import.bucket
    }
  }
}
```

The AWSCC provider is generated from the latest CloudFormation schemas and releases weekly with all new services added to the Cloud Control API. There are some resources in the CloudFormation schema that are not compatible with the AWSCC provider. A full list of these can be found on GitHub.

Within Azure, the AzAPI provider enables similar support for the Azure Resource Manager (ARM) REST APIs. CloudGraph provides a similar API experience to AWS Cloud Control, although there is no Terraform provider for it; CloudGraph supports AWS, Azure, GCP, and Kubernetes.

The Terraform AWS Cloud Control provider is available for download now from the Terraform Registry. The AWSCC provider requires Terraform CLI version 1.0.7 or higher. The source code for the provider is available on GitHub and is licensed under MPL-2.0. Additional information can be found in the provider documentation and the tutorial.


## Traefik 3.0 Reverse Proxy Rolls Out With Major Enhancements
https://linuxiac.com/traefik-3-0-reverse-proxy/ 

Traefik 3.0, a cloud-native HTTP reverse proxy and load balancer, brings stable HTTP/3 support, OpenTelemetry & Wasm integration, and more.

Yes, there are easier-to-use solutions in the world of reverse proxies, like Nginx Proxy Manager or Caddy. However, for enterprises tightly integrated with the workflows of DevOps and Kubernetes professionals, Traefik is the name that comes out on top.


Traefik 3.0 also extends its observability features, incorporating OpenTelemetry to provide state-of-the-art tooling for metrics and tracing, supporting a seamless transition from older systems like OpenCensus and OpenTracing.


The new release also brings several Kubernetes-related updates, including support for cross-namespace references in Gateway API and the ability to handle middleware in filters for better traffic management. Other Kubernetes enhancements include the addition of the Gateway status address and the removal of deprecated APIs.

Traefik is often compared to Envoy.

## DevOps: How Container Networking Works: a Docker Bridge Network From Scratch
https://labs.iximiuz.com/tutorials/container-networking-from-scratch

## DevOps Git 101
https://www.youtube.com/watch?v=aolI_Rz0ZqY
https://gitbutler.com/


## /bin/pash parallel shells
Data Parallel Shell scripting
https://github.com/binpash

PaSh aims at the correct and automated parallelization of POSIX shell scripts. Broadly, PaSh includes three components:
1. a compiler that, given a POSIX shell script as input, emits a POSIX shell script with explicit data-parallel fragments for which PaSh has deemed parallelization semantics-preserving;
2. a set of PaSh runtime primitives supporting the execution of the parallel script fragments, available in the PATH as normal commands; and
3. a crowd-sourced library of annotations characterizing several properties of common Unix/Linux commands relevant to parallelization.


#  1000+ DevOps Bash Scripts (AWS, GCP, Kubernetes, ...) [[{101]]

* AWS, GCP, Kubernetes, Docker, CI/CD, APIs, SQL, PostgreSQL, MySQL, 
  Hive, Impala, Kafka, Hadoop, Jenkins, GitHub, GitLab, BitBucket, 
  Azure DevOps, TeamCity, Spotify, MP3, LDAP, Code/Build Linting, pkg 
  mgmt for Linux, Mac, Python, Perl, Ruby, NodeJS, Golang, Advanced 
  dotfiles: .bashrc, .vimrc, .gitconfig, .screenrc, tmux..

* <https://github.com/HariSekhon/DevOps-Bash-tools>
_____________________________________



```
# spell-checking.sh
cat f1.md f2.md |
  tr A-Z a-z |
  tr -cs A-Za-z '\n' |
  sort |
  uniq |
  comm -13 dict.txt - > out          # dict.txt must be sorted for comm
cat out | wc -l | sed 's/$/ misspelled words!/'
```

Example:

```
$ ./demo-spell.sh                                              # sequential, no parallelism
$ $PASH_TOP/pa.sh -w 2 -d 1 --log_file pash.log demo-spell.sh  # 2x parallelism
```

## DevOps Git: merging at scale:
* merge queue at Github to ship hundreds of changes every day
* <https://github.blog/2024-03-06-how-github-uses-merge-queue-to-ship-hundreds-of-changes-every-day/>


## Enhancing Istio Operations with Kong Istio Gateway
* <https://thecloudblog.net/post/enhancing-istio-operations-with-kong-istio-gateway/>


## DevOps Grafana Loki 3.0 Released with Native OpenTelemetry Support
* <https://linuxiac.com/grafana-loki-3-0-released-with-native-opentelemetry-support/>



OpenTelemetry is a set of tools, APIs, and SDKs used to collect, analyze, and export telemetry data from software applications and services. Native support for it improves Loki's log ingestion and querying experience.



## Container Networking [[{containerization.networking.101]]
* <https://jvns.ca/blog/2016/12/22/container-networking/> By Julia Evans

> """ There are a lot of different ways you can network containers
> together, and the documentation on the internet about how it works is
> often pretty bad. I got really confused about all of this, so I'm
> going to try to explain what it all is in laymen's terms. """
> ...  *what even is container networking?*
> .. you have two main options for running apps:
> 1. run app in host network namespace. (normal networking)
>   "host_ip":"app_port"
> 2. run the program in its own *network namespace*:
>    It turns out that this problem of how to connect two programs in 
>    containers together has a ton of different solutions. [[{doc_has.keypoint}]]

1. "every container gets an IP".  (k8s requirement)
   ```
   | "172.16.0.1:8080" // Tomcat continer app 1
   | "172.16.0.2:5432" // PostgreSQL container app1
   | "172.17.0.1:8080" // Tomcat continer app 2
   | ...
   | └───────┬───────┘
   | any other program in the cluster will target those IP:port
   | Instead of single-IP:"many ports" we have "many IPs":"some ports"
   ```
   Q: How to get many IPs in a single host?
   - Host IP: 172.9.9.9
   - Container private IP: 10.4.4.4
   - To route from 10.4.4.4 to 172.9.9.9:
   1. Alt1: Configure Linux routing tables
      ```
      | $ sudo ip route add 10.4.4.0/24 via 172.23.1.1 dev eth0
      ```
   2. Alt2: Use AWS VPC Route tables
   3. Alt3: Use Azure ...

2. Encapsulating to other networks:
   ```
   | LOCAL NETWORK     REMOTE NETWORK
   |                   (encapsulation)
   | IP: 10.4.4.4      IP: 172.9.9.9
   | TCP stuff         (extra wrapper stuff)
   | HTTP stuff        IP: 10.4.4.4
   |                   TCP stuff
   |                   HTTP stuff
   ```
   2 different ways of doing encapsulation:
     1. "ip-in-ip": add extra IP-header on top "current" IP header.
     ```
     | MAC:  11:11:11:11:11:11
     | IP: 172.9.9.9
     | IP: 10.4.4.4
     | TCP stuff
     | HTTP stuff
     | Ex:
     | $ sudo ip tunnel add mytun mode ipip \       <·· Create tunnel "mytun"
     |     remote 172.9.9.9 local 10.4.4.4 ttl 255
     | $ sudo ifconfig mytun 10.42.1.1              <·· Assign an IP to the tunnel
     | $ sudo route add -net 10.42.2.0/24 dev mytun <·· Route remote subnet via tunnel
     | $ sudo ip route list                         <·· Verify the route table
     ```
     2. "vxlan": take whole packet (including the MAC address) and wrap
     it inside a UDP packet. Ex:
     ```
     | MAC address: 11:11:11:11:11:11
     | IP: 172.9.9.9
     | UDP port 8472 (the "vxlan port")
     | MAC address: ab:cd:ef:12:34:56
     | IP: 10.4.4.4
     | TCP port 80
     | HTTP stuff
     ```

* Every container networking "thing" runs some kind of daemon program
  on every box which is in charge of adding routes to the route table,
  providing automatic route configuration. Alternatives include: [[{doc_has.keypoint}]]
  1. Alt1: routes are in etcd cluster, and program talks to the
     etcd cluster to figure out which routes to set.
  2. Alt2: use BGP protocol to gossip to each other about routes,
     and a daemon (BIRD) that listens for BGP messages on
     every box.

* Q: How does that packet actually end up getting to your container program?
  1. bridge networking
     1. Docker/... creates fake (virtual) network interfaces for every
        single one of your containers, each with a given IP address.
     2. The fake interfaces are bridged to a real one
        (see the sketch after this list).
  2. Flannel:
     - Supports vxlan (encapsulate all packets) and
       host-gw (just set route table entries, no encapsulation)
     - The daemon that sets the routes gets them *from an etcd cluster*.
  3. Calico:
     - Supports ip-in-ip encapsulation and
       "regular" mode, (just set route table entries, no encaps.)
     - The daemon that sets the routes gets them *using BGP messages*
       from other hosts. (etcd is not used for distributing routes).
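
A minimal sketch (not from the original post) of what the "bridge networking"
option does under the hood, using iproute2; the names c1, br0, veth0/veth1
and the 172.16.0.0/24 range are arbitrary:

```
sudo ip netns add c1                               # the "container's" network namespace
sudo ip link add veth0 type veth peer name veth1   # virtual cable with two ends
sudo ip link set veth1 netns c1                    # one end inside the namespace
sudo ip link add br0 type bridge                   # a docker0-style bridge
sudo ip link set veth0 master br0                  # other end plugged into the bridge
sudo ip addr add 172.16.0.1/24 dev br0
sudo ip link set br0 up; sudo ip link set veth0 up
sudo ip netns exec c1 ip addr add 172.16.0.2/24 dev veth1
sudo ip netns exec c1 ip link set veth1 up
sudo ip netns exec c1 ip link set lo up
sudo ip netns exec c1 ping -c1 172.16.0.1          # the namespace can reach the bridge
```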
[[containerization.networking.101}]]

## Testcontainers: [[{qa.testing,dev_language.java,qa,PM.TODO]]
* <https://www.testcontainers.org/#who-is-using-testcontainers>
* Testcontainers is a Java library that supports JUnit tests,
  providing lightweight, throwaway instances of common databases,
  Selenium web browsers, or anything else that can run in a Docker
  container.

- Testcontainers make the following kinds of tests easier:

  - Data access layer integration tests: use a containerized instance
    of a MySQL, PostgreSQL or Oracle database to test your data access
    layer code for complete compatibility, but without requiring complex
    setup on developers' machines and safe in the knowledge that your
    tests will always start with a known DB state. Any other database
    type that can be containerized can also be used.
  - Application integration tests: for running your application in a
    short-lived test mode with dependencies, such as databases, message
    queues or web servers.
  - UI/Acceptance tests: use containerized web browsers, compatible
    with Selenium, for conducting automated UI tests. Each test can get a
    fresh instance of the browser, with no browser state, plugin
    variations or automated browser upgrades to worry about. And you get
    a video recording of each test session, or just each session where
    tests failed.
  - Much more!

- Testing modules:
    - Databases
      JDBC, R2DBC, Cassandra, CockroachDB, Couchbase, Clickhouse,
      DB2, Dynalite, InfluxDB, MariaDB, MongoDB, MS SQL Server, MySQL,
      Neo4j, Oracle-XE, OrientDB, Postgres, Presto
    - Docker Compose Module
    - Elasticsearch container
    - Kafka Containers
    - Localstack Module
    - Mockserver Module
    - Nginx Module
    - Apache Pulsar Module
    - RabbitMQ Module
    - Solr Container
    - Toxiproxy Module
    - Hashicorp Vault Module
    - Webdriver Containers


Who is using Testcontainers?
-   ZeroTurnaround - Testing of the Java Agents, micro-services, Selenium browser automation
-   Zipkin - MySQL and Cassandra testing
-   Apache Gora - CouchDB testing
-   Apache James - LDAP and Cassandra integration testing
-   StreamSets - LDAP, MySQL Vault, MongoDB, Redis integration testing
-   Playtika - Kafka, Couchbase, MariaDB, Redis, Neo4j, Aerospike, MemSQL
-   JetBrains - Testing of the TeamCity plugin for HashiCorp Vault
-   Plumbr - Integration testing of data processing pipeline micro-services
-   Streamlio - Integration and chaos testing of our fast data platform based on Apache Pulsar, Apache BookKeeper and Apache Heron.
-   Spring Session - Redis, PostgreSQL, MySQL and MariaDB integration testing
-   Apache Camel - Testing Camel against native services such as Consul, Etcd and so on
-   Infinispan - Testing the Infinispan Server as well as integration tests with databases, LDAP and KeyCloak
-   Instana - Testing agents and stream processing backends
-   eBay Marketing - Testing for MySQL, Cassandra, Redis, Couchbase, Kafka, etc.
-   Skyscanner - Integration testing against HTTP service mocks and various data stores
-   Neo4j-OGM - Testing new, reactive client implementations
-   Lightbend - Testing Alpakka Kafka and support in Alpakka Kafka Testkit
-   Zalando SE - Testing core business services
-   Europace AG - Integration testing for databases and micro services
-   Micronaut Data - Testing of Micronaut Data JDBC, a database access toolkit
-   Vert.x SQL Client - Testing with PostgreSQL, MySQL, MariaDB, SQL Server, etc.
-   JHipster - Couchbase and Cassandra integration testing
-   wescale - Integration testing against HTTP service mocks and various data stores
-   Marquez - PostgreSQL integration testing
-   Transferwise - Integration testing for different RDBMS, kafka and micro services
-   XWiki - Testing XWiki under all supported configurations
-   Apache SkyWalking - End-to-end testing of the Apache SkyWalking,
    and plugin tests of its subproject, Apache SkyWalking Python, and of
    its eco-system built by the community, like SkyAPM NodeJS Agent
-   jOOQ - Integration testing all of jOOQ with a variety of RDBMS
[[}]]

## docker-compose: dev vs pro  [[{]]
  https://stackoverflow.com/questions/60604539/how-to-use-docker-in-the-development-phase-of-a-devops-life-cycle/60780840#60780840
  Modify your Compose file for production
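
  A common pattern (the file names below are assumptions, not mandated by
  Compose): keep a base docker-compose.yml and merge a production override
  on top at deploy time:

  ```
  # docker-compose.prod.yml overrides only what differs in production
  # (prebuilt images instead of builds, no source bind-mounts, restart policies).
  docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d
  ```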
[[}]]

## CRIU.org: Container Live Migration [[{]]
<https://criu.org/Main_Page>

CRIU: a project implementing checkpoint/restore functionality for Linux.

Checkpoint/Restore In Userspace, or CRIU (pronounced kree-oo, IPA:
/krɪʊ/, Russian: криу), is Linux software that can freeze a
running container (or an individual application) and checkpoint its
state to disk. The saved data can be used to restore the application
and run it exactly as it was at the time of the freeze. Using this
functionality, application or container live migration, snapshots,
remote debugging, and many other things are now possible.

Used for example to bootstrap JVMs in millisecs (vs secs)     [[performance,dev_stack.java]]
</JAVA/java_map.html#?jvm_app_checkpoint>
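
A quick way to try CRIU-backed checkpoint/restore is through Podman
(root and the criu package required; the container name is a placeholder):

```
sudo podman run -d --name sleeper alpine sleep 1000               # something to freeze
sudo podman container checkpoint -e /tmp/sleeper.tar.gz sleeper   # dump state to disk
# ... copy /tmp/sleeper.tar.gz to another host if migrating ...
sudo podman container restore -i /tmp/sleeper.tar.gz              # resume where it left off
```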
[[}]]

## ContainerCoreInterceptor:  [[{troubleshooting,PM.TODO]]
https://github.com/AmadeusITGroup/ContainerCoreInterceptor
Core_interceptor can be used to handle core dumps in a dockerized
environment. It listens on the local Docker daemon socket for events;
when it receives a die event, it checks whether the dead container
produced a core dump or a Java heap dump.
[[}]]

# KVM Kata containers: [[{PM.TODO]]
<https://katacontainers.io/>
- Security: Runs in a dedicated kernel, providing isolation of
  network, I/O and memory and can utilize hardware-enforced isolation
  with virtualization VT extensions.
- Compatibility: Supports industry standards including OCI container
  format, Kubernetes CRI interface, as well as legacy virtualization
  technologies.
- Performance: Delivers consistent performance as standard Linux
  containers; increased isolation without the performance tax of
  standard virtual machines.
- Simplicity: Eliminates the requirement for nesting containers
  inside full blown virtual machines; standard interfaces make it easy
  to plug in and get started.
[[}]]

## avoid "sudo" docker [[{containerization.docker]]
$ sudo usermod -a -G docker "myUser"
$ newgrp docker              # take the new group without re-login
[[}]]

## SQL-based structured text tools
  https://github.com/dbohdan/structured-text-tools/blob/master/sql-based.md
  https://github.com/dbohdan/structured-text-tools#sql-based-tools

## GraphDash: web-based dashboard built on graphs and their metadata. [[{]]
https://github.com/AmadeusITGroup/GraphDash
[[}]]

## Use libguestfs to manage virtual machine disk images [[{]]
  https://www.redhat.com/sysadmin/libguestfs-manage-vm
[[}]]

## workflow-cps-global-lib-http-plugin  [[{jenkins.jenkinsfile]]
https://github.com/AmadeusITGroup/workflow-cps-global-lib-http-plugin
  The classic workflow-cps-global-lib retrieves shared libraries through
  an SCM, such as Git. The goal of this plugin is to provide another way
  to retrieve shared libraries, via the @Library declaration in a
  Jenkinsfile. This separates two concerns: source code (SCM) and built
  artefacts (binaries). Built artefacts are immutable, tagged and often
  stored on a different kind of infrastructure. Since pipelines can be
  used to make production loads, it makes sense to host the libraries
  on a server with a production-level SLA, for example. You can also
  make sure that your artefact repository is close to your pipelines
  and shares the same SLA. Having your Jenkins and your artefact
  repository close limits latency and limits network issues.
[[}]]


## GIT: Part 3: Context from commits  [[{]]
  https://alexwlchan.net/a-plumbers-guide-to-git/3-context-from-commits/ 
[[}]]

## https://github.blog/author/dstolee/  [[{git,scalability]]
  Git’s database internals V: scalability
  This fifth and final part of our blog series exploring Git's
  internals shows several strategies for scaling your Git repositories
  that match related database sharding techniques.
[[}]]

## 4 lines of code to improve your Ansible play [[{ansible,qa.billion_dollar_mistake]]
  With a tiny bit of effort, you can help the next person by not just
  mapping the safe path but leaving warnings about the dangers
[[}]]

## https://docs.docker.com/network/bridge/

## Gerrit: Git server with voting mechanism [[{]]
https://docs.google.com/presentation/d/1C73UgQdzZDw0gzpaEqIC6SPujZJhqamyqO1XOHjH-uk/edit#slide=id.g4d6c16487b_1_844
- Submit Type / Submit Strategy:
  - FAST_FORWARD_ONLY:
    Submit fails if fast-forward is not possible.
  - MERGE_IF_NECESSARY:
    If fast-forward is not possible, a merge commit is created.
  - REBASE_IF_NECESSARY:
    If fast-forward is not possible, the current patch set is automatically
    rebased (creates a new patch set which is submitted).
  - MERGE_ALWAYS:
    A merge commit is always created, even if fast-forward is possible.
  - REBASE_ALWAYS:
    The current patch set is always rebased, even if fast-forward is possible.
    For all rebased commits some additional footers will be added (Reviewed-On, Reviewed-By, Tested-By).
  - CHERRY_PICK:
    The change is cherry-picked. This ignores change dependencies.
    For all cherry-picked commits some additional footers will be added
    (Reviewed-On, Reviewed-By, Tested-By).
  - ALLOW CONTENT MERGES:
    Controls whether Gerrit should do a content merge when the same files
    have been touched in both branches.
[[}]]

## Building rootless containers for JavaScript front ends  [[{containerization.security.101]]
https://developers.redhat.com/blog/2021/03/04/building-rootless-containers-for-javascript-front-ends/?sc_cid=7013a000002vsMVAAY
[[}]]

## Git: What's new 2.31
  https://github.blog/2021-03-15-highlights-from-git-2-31/

## https://space.sh : Server apps and automation in a nutshell   [[{PM.low_code]]
   Very, very non-intrusive: to manage servers remotely, Space SSHes
   into those servers to run your tasks, never uploads anything to the
   server, and has no dependencies other than a POSIX shell
   (ash/dash/bash 3.4+).
   - Used as the base for Simplenetes.
[[}]]





## dolt: Git + SQL!!!
  https://github.com/tldr-pages/tldr/blob/master/pages/common/dolt-add.md
  https://github.com/tldr-pages/tldr/blob/master/pages/common/dolt.md
  https://github.com/tldr-pages/tldr/blob/master/pages/common/dolt-blame.md
  https://github.com/tldr-pages/tldr/blob/master/pages/common/dolt-branch.md
  https://github.com/tldr-pages/tldr/blob/master/pages/common/dolt-checkout.md
  https://github.com/tldr-pages/tldr/blob/master/pages/common/dolt-commit.md
  - Dolt is a SQL database that you can fork, clone, branch, merge, push
    and pull just like a git repository. Connect to Dolt just like any
    MySQL database to run queries or update the data using SQL commands.
    Use the command line interface to import CSV files, commit your
    changes, push them to a remote, or merge your teammate's changes.
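
  A minimal sketch of the Git-style workflow (table/branch names are
  arbitrary; assumes a recent Dolt where the default branch is main):

  ```
  dolt init                                         # new versioned database in $PWD
  dolt sql -q "CREATE TABLE users (id INT PRIMARY KEY, name TEXT);"
  dolt sql -q "INSERT INTO users VALUES (1, 'ada');"
  dolt add users && dolt commit -m "add users table"
  dolt checkout -b feature                          # branch, just like git
  dolt sql -q "INSERT INTO users VALUES (2, 'lin');"
  dolt add users && dolt commit -m "add second user"
  dolt checkout main && dolt merge feature          # merge data like code
  ```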

## Fetch gitignore boilerplates.
  https://github.com/tldr-pages/tldr/blob/master/pages/common/gibo.md
  More info: https://github.com/simonwhitaker/gibo.
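
  Typical usage (boilerplate names come from the github/gitignore repository):

  ```
  gibo update                          # refresh the local boilerplate cache
  gibo list                            # see what's available
  gibo dump Python Vim >> .gitignore   # append the chosen templates
  ```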

## https://github.com/tldr-pages/tldr/tree/master/pages/common/git-*.md
  ```
  git-stage              git-am.md                git-annex.md          git-annotate.md      git-annotate
  git-apply.md           git-apply                git-archive.md        git-archive          git-bisect.md
  git-blame.md           git-branch.md            git-bugreport.md      git-bugreport
  git-bundle.md          git-bundle               git-cat-file.md       git-check-attr.md
  git-check-attr         git-check-ignore.md      git-check-mailmap.md
  git-check-mailmap      git-check-ref-format.md  git-check-ref-format
  git-checkout-index.md  git-checkout-index       git-checkout.md
  git-cherry-pick.md     git-cherry-pick          git-cherry.md         git-cherry
  git-clean.md           git-clone.md             git-clone             git-column.md        git-column
  git-commit-graph.md    git-commit-graph         git-commit-tree.md    git-commit.md
  git-commit             git-config.md            git-count-objects.md  git-count-objects
  git-credential.md      git-credential           git-describe.md       git-diff.md          git-diff
  git-difftool.md        git-difftool             git-fetch.md          git-flow.md
  git-for-each-repo.md   git-for-each-repo        git-format-patch.md
  git-fsck.md            git-gc.md                git-grep.md           git-grep             git-help.md        git-ignore.md
  git-imerge.md          git-init.md              git-instaweb.md       git-lfs.md           git-log.md
  git-ls-files.md        git-ls-files             git-ls-remote.md      git-ls-remote
  git-ls-tree.md         git-maintenance.md       git-maintenance       git-merge.md
  git-merge              git-mergetool.md         git-mergetool         git-mv.md            git-notes.md
  git-pr.md              git-pr                   git-prune.md          git-pull.md          git-push.md        git-rebase.md
  git-reflog.md          git-reflog               git-remote.md         git-remote           git-repack.md
  git-replace.md         git-request-pull.md      git-reset.md          git-restore.md
  git-restore            git-rev-list.md          git-rev-parse.md      git-revert.md        git-rm.md
  git-send-email.md      git-shortlog.md          git-show-branch.md    git-show-branch
  git-show-ref.md        git-show-ref             git-show.md           git-sizer.md         git-stage.md
  git-stage              git-stash.md             git-stash             git-status.md        git-stripspace.md
  git-stripspace         git-submodule.md         git-subtree.md        git-svn.md
  git-switch.md          git-switch               git-tag.md            git-update-index.md
  git-update-index       git-update-ref.md        git-var.md            git-var              git-worktree.md
  git.md
  ```

## Analyze nginx configuration files.
  https://github.com/yandex/gixy.
  https://github.com/tldr-pages/tldr/blob/master/pages/common/gixy.md

## Render markdown on terminal.
https://github.com/charmbracelet/glow

## Gnomon: pipeline utility to prepend timestamp information to another command's STDOUT.  [[{101]]
  https://github.com/paypal/gnomon
  https://github.com/tldr-pages/tldr/blob/master/pages/common/gnomon.md

  Useful for long-running processes where you'd like a historical
  record of what's taking so long. [[}]]
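
  For example, timestamping each line of a slow build (the command is arbitrary):

  ```
  make build 2>&1 | gnomon    # prefixes every output line with how long it took
  ```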

## gource:  [[{]]
  https://gource.io/
  Renders an animated tree diagram of Git, SVN, Mercurial and Bazaar
  repositories, showing files and directories being created, modified
  or removed over time.
[[}]]


## Molecule helps testing ansible roles.
  https://github.com/tldr-pages/tldr/blob/master/pages/common/molecule.md
  More information: https://molecule.readthedocs.io.

## Git-big: cli extension for managing Write Once Read Many (WORM) files.  [[{git,scalability]]
  https://github.com/vertexai/git-big

  $ git big init                  ← Init repo

  $ git big add bigfile.iso       ← Add big file, sha256 hash generated&recorded in the index
  $ git big status
  → ...
  →[ W C   ] 993328d6 bigfile.iso
     | | └-- Depot              KO
     | └---- Cache              OK
     └------ Working dir        OK
  $ cat .gitbig
  {
      "files": { "bigfile.iso": "e99f32a..." },
      "version": 1
  }
  $ ls -l bigfile.iso             ← original big file is now a symlink
  ... bigfile.iso -> .gitbig-anchors/99/33/e99f32a...
                     └─────────────┬────────────────┘
                       Final file is read-only
  $ git big push                  ←*Push pending big files to depot*


  # We can see the big file has been archived in the depot
  $ git big status
  → ...
  → [ W C D ] 993328d6 bigfile.iso
      | | └-- Depot (remote repo) OK
      | └---- Cache              OK
      └------ Working dir        OK

  $ git commit -m "Add bigfile.iso"       ←*Commit changes*
    ...
  $ git push origin master                ← push upstream


  In another machine:
  $ git clone  ...
  $ cd repo

  $ git big status

  → [     D ] 993328d6 bigfile.iso
      | | └-- Depot (remote repo) OK  ← Only in depot after clone
      | └---- Cache              KO
      └------ Working dir        KO

  $ git big pull
  Pulling object: e99f32a...
  $ ls -l $(readlink bigfile.iso)
  -r--r--r-- ... .gitbig-anchors/99/33/e99f32a...

  $ git big status
  ...
  → [ W C D ] 993328d6 bigfile.iso
      | | └-- Depot (remote repo) OK
      | └---- Cache              OK  ← populated by the pull
      └------ Working dir        OK
[[}]]


## set -o pipefail   bash flag  [[{bash,qa.error_control]]
  https://linuxtect.com/make-bash-shell-safe-with-set-euxo-pipefail/
  By default the exit status of a pipeline is that of its last command,
  so a command failing earlier in the pipe is silently ignored;
  `set -o pipefail` makes the pipeline fail instead, returning the
  status of the rightmost command that exited non-zero.
[[}]]
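
A minimal demonstration:

```
set +o pipefail
false | true; echo "without pipefail: $?"   # prints 0: the failure is masked
set -o pipefail
false | true; echo "with pipefail:    $?"   # prints 1: the pipeline fails
```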

## volatile overlay mounts and containers: [[{containerization]]
  https://www.redhat.com/sysadmin/container-volatile-overlay-mounts
  Recent versions of Podman, Buildah, and CRI-O have started to take
  advantage of a new kernel feature, volatile overlay mounts. This
  feature allows you to mount an overlay file system with a flag that
  tells it not to sync to the disk.

  https://sysadmin.prod.acquia-sites.com/sysadmin/overlay-mounts

  Speed up container builds with overlay mounts
  How Podman can speed up builds for multiple distributions by sharing the host's metadata.
  Overlay mounts help to address a challenge we run into when we have
  several containers on a single host. The basic problem is that every
  time you run dnf or yum inside a container, the container downloads
  and processes the metadata of all the repositories. To address this,
  we added an advanced volume mount to allow all of the containers to
  share the host's metadata. This approach avoids repeating the
  download and processing for each container. I previously wrote a blog
  post introducing the concept of overlay mounts inside of builds.
[[}]]
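
A sketch of the dnf-metadata-sharing trick the articles describe; the `:O`
volume suffix asks Podman for a volatile overlay mount (host paths assume a
Fedora-like system):

```
# The container sees the host's dnf cache; its writes go to a throwaway
# overlay upper layer, so nothing is re-downloaded and the host stays clean.
sudo podman run -it --rm \
  -v /var/cache/dnf:/var/cache/dnf:O \
  fedora dnf install -y gcc
```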

## Git merge strategies:                     [[{]]
  https://git-scm.com/docs/merge-strategies
  resolve: can only resolve two heads (the current branch and another
           branch pulled from) using a 3-way merge algorithm.
  recursive:
  ours:
  theirs:
  patience:
  diff-algorithm=[patience|minimal|histogram|myers]
      ignore-space-change
      ignore-all-space
      ignore-space-at-eol
      ignore-cr-at-eol

  renormalize
  no-renormalize
  no-renames
  find-renames[=n]
  rename-threshold=n
  subtree[=path]
  octopus
  subtree

- git merge strategies: (From Bitbucket UI)
  - Merge commit --no-ff
     Always create a new merge commit and update the target branch to it,
    even if the source branch is already up to date with the target
    branch.
  - Fast-forward --ff
     If the source branch is out of date with the target branch, create a
    merge commit. Otherwise, update the target branch to the latest
    commit on the source branch.
  - Fast-forward only --ff-only
     If the source branch is out of date with the target branch, reject
    the merge request. Otherwise, update the target branch to the latest
    commit on the source branch.

  - Rebase and merge,  rebase + merge --no-ff
      Rebase commits from the source branch onto the target branch,
    creating a new non-merge commit for each incoming commit, and create
    a merge commit to update the target branch.

  - Rebase and fast-forward,  rebase + merge --ff-only
     Rebase commits from the source branch onto the target branch,
    creating a new non-merge commit for each incoming commit, and
    fast-forward the target branch with the resulting commits.

  - Squash,  --squash
    Combine all commits into one new non-merge commit on the target branch.

  - Squash, fast-forward only, --squash --ff-only
    If the source branch is out of date with the target branch, reject
    the merge request. Otherwise, combine all commits into one new
    non-merge commit on the target branch.
[[}]]

## GitLab CI to publish HTML pages:  [[{]]
  https://roneo.org/en/framagit-render-html/

  You can render HTML using the GitLab CI. This doc was written for
  Framagit, the GitLab instance of the French non-profit Framasoft,
  which uses GitLab Pages; you just need to adapt the path.

  A service called GitHack seems to offer the same, though I haven't
  tested it.
[[}]]


## bash "$-" read-variable:  [[{]]
  · prints/reports current set-of-options in current shell.
    ex.: "im...." outputs means following options are enabled:
          m - monitor     : (set -m),
              REF: https://unix.stackexchange.com/questions/196603/can-someone-explain-in-detail-what-set-m-does
          i - interactive :

 INTERACTIVE=0
 case "$-" in
   *i*)
     SUDO_OPTS=""
     ;;
   *)
     SUDO_OPTS="--non-interactive" # fail-fast if sudo user is not passwordless
     ;;
 esac

 sudo ${SUDO_OPTS} ...
[[}]]

## Resizing containers with the Device Mapper: [[{containerization.image,storage,PM.TODO]]
<http://jpetazzo.github.io/2014/01/29/docker-device-mapper-resize/>
[[}]]

## Rootless Docker: [[{containerization.docker,qa,containerization.security,PM.TODO]]
<https://docs.docker.com/engine/security/rootless/>
[[}]]

## Show image change history: [[{containerization.image.build]]
   $ docker history /clock:1.0
[[}]]

## Commit image modifications [[{containerization.image.build]]
(Discouraged most of the time; modify the Dockerfile instead)
host-mach $ docker run -it ubuntu bash       # Boot up existing image
container # apt-get install ...              # Apply changes to running instance
host-mach $ docker diff $(docker ps -lq)     # Show changes done in running container
host-mach $ docker commit $(docker ps -lq) figlet  # Commit changes as new image "figlet"
host-mach $ docker run -it figlet            # Boot new image instance
[[}]]

## Selenium Browser test automation [[{ci/cd,qa,testing,selenium,web,_PM.low_code,PM.TODO]]
See also QAWolf:
[[}]]


## Packaging Apps:  [[{containerization.image.build,InfraAsCode.pulumi,doc_has.comparative,dev_stack.kubernetes.ballerina,dev_stack.metaparticle,dev_language.java]]
<https://www.infoq.com/articles/metaparticle-pulumi-ballerina/>
- A comparison of approaches to packaging applications for Docker and Kubernetes.
  • Metaparticle:
    - Looks to be discontinued (last update in github 2020-06-25)
    - provides a standard library to create cloud native apps directly deployable on k8s
      supporting (2018-07-24) Java, .NET core, Javascript (NodeJS), Go, Python and Ruby.

  • Pulumi: Aims to define Infra-as-code  (vs "silly" YAML files).
           """ It is going to DevOps what React did to web development """ (according to their authors)
    - Web service. WARN: Potential vendor lock-in (account registration in pulumi.io needed)
    - Focused on Infra-as-code.
    - Support JS, Typescript, Python, Go on AWS, Azure, GCP and k8s (multi-cloud).

  • Ballerina: language to generate k8s + Istio YAMLs.
    - first-class support for APIs, distributed transactions, circuit-breakers, stream processing,
      data-access, JSON, XML, gRPC, and many other integration challenges.
    - Ballerina compiler understands the architecture around it with microservices directly
      deployable into Docker or Kubernetes by auto generating Docker images and YAML's.
    - https://v1-0.ballerina.io/learn/by-example/
    - WARN : It uses its own language (vs Java, Go, ...)
[[}]]


## Bash: Search&Replace with regexs:
  https://stackoverflow.com/questions/13043344/search-and-replace-in-bash-using-regular-expressions

  hello=ho02123ware38384you443d34o3434ingtod38384day
  re='(.*)[0-9]+(.*)'
  while [[ $hello =~ $re ]]; do
    hello=${BASH_REMATCH[1]}${BASH_REMATCH[2]}
  done
  echo "$hello"

## https://stackoverflow.com/questions/19758915/keeping-a-branch-up-to-date-with-master @ma [[{git.101}]]

## DevOps pipeline DON'Ts:
  https://jamesjoshuahill.github.io/talk/2018/12/06/how-not-to-build-a-pipeline/

## git update branches:
https://jamesjoshuahill.github.io/note/2015/02/07/is-your-branch-up-to-date/  [[{Git.101]]
Nearly everything you do with git happens on your machine. Don’t
take my word for it. Turn off your wifi and see how many git commands
you can run. You’ll see fetch, pull and push fail without a
connection to your remote, but try the other commands you can think
of: status, commit, checkout, cherry-pick, merge, rebase, diff, log
and see how many times git tells you that you’re up-to-date. How
can you be up-to-date if you’re disconnected?
....
When you run git fetch origin the list of branches and commit history
is downloaded from GitHub and synchronised into the clone on your
machine. Doing a fetch won’t affect your local branches, so it’s
one of the safest git commands you can run. You can fetch as much as
you like.
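
For instance, a safe routine for checking whether a branch is behind,
without changing anything locally:

```
git fetch origin                            # update remote-tracking refs only
git status                                  # ahead/behind counts are now accurate
git log --oneline master..origin/master     # commits you don't have yet
```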
[[}]]

## An Interview With Linus Torvalds: Linux and Git
  https://www.tag1consulting.com/blog/interview-linus-torvalds-linux-and-git

## DevOps, ansible: what ansible is not  [[{ansible.101]]
  https://www.linkedin.com/pulse/ansible-what-marcel-koert/ [[}]]

## https://www.30secondsofcode.org/git/p/1
  30 secs recipes:
  · How does Git's fast-forward mode work?
  · Prints a list of all local branches sorted by date.
  · Prints a list of all merged local branches.
  · Delete merged branches
  · Deletes all local merged branches.
  · Create a git commit with a different date
  · Purge a file from history
  · Completely purges a file from history.
  · View a visual graph of the repository
  · Disables the default fast forwarding on merge commits.
  · Prints a list of lost files and commits.
  · ...

## 5 tips for configuring virtualenvs with Ansible Tower
  https://www.redhat.com/sysadmin/virtualenvs-ansible-tower

## How to Create Your Own Repositories for Packages
  https://www.percona.com/blog/2020/01/02/how-to-create-your-own-repositories-for-packages/

## This is how a GitOps pipeline looks:
1. The user changes the code in the Git repository.
2. A container image gets created and pushed to the container registry.
3. It gets updated into a config updater.
4. Once a user creates a pull request to merge to a different branch, it deploys to the concerned branch.
5. Then it tests whether it is all good or not.
6. Once it's all good, the reviewer will be able to merge it.
7. After the merge, it goes to the test branch.
8. Once you create a pull request, it will deploy to that test branch.

Below are a few popular GitOps tools worth trying in GitOps workflows:
* Flux: created in 2016 by Weaveworks; a GitOps operator for your Kubernetes cluster.
* ArgoCD: also a GitOps operator, but with a web user interface.
* Jenkins X: a CI/CD solution for Kubernetes clusters, different from classic Jenkins.
* WKSctl: a GitOps tool that uses Git commits to manage the Kubernetes cluster.
* Gitkube: ideal for development; uses git push to build and deploy Docker images on a Kubernetes cluster.
* Helm Operator: an open-source Kubernetes operator to manage Helm chart releases declaratively.

Know more about GitOps: http://bit.ly/393ahpv
https://geekflare.com/gitops-introduction/

## Chuletario de pócimas y recetas: Monitoring uninterruptible system calls.
  http://chuletario.blogspot.com/2011/05/monitoring-uninterruptible-system-calls.html?m=1

## DevSecOps: Image scanning in your pipelines using quay.io scanner
  https://www.redhat.com/sysadmin/using-quayio-scanner

## git-pw:  [[{]]
<http://jk.ozlabs.org/projects/patchwork/>
<https://www.collabora.com/news-and-blog/blog/2019/04/18/quick-hack-git-pw/>
- git-pw requires patchwork v2.0, since it uses the
  new REST API and other improvements, such as understanding
  the difference between patches, series and cover letters,
  to know exactly what to try and apply.

- python-based tool that integrates git and patchwork.

  $ pip install --user git-pw


  $ git config pw.server https://patchwork.kernel.org/api/1.1
  $ git config pw.token YOUR_USER_TOKEN_HERE

*Daily work example: finding and applying series*

- Alternative 1: Manually
  - We could use patchwork web UI search engine for it.
    - Go to "linux-rockchip" project
    - click on _"Show patches with" to access the filter menu.
    - filter by submitter.

- Alternative 2: git-pw (REST API wrapper)
  - $ git-pw --project linux-rockchip series list "dynamically"
    → ID    Date         Name              Version   Submitter
    → 95139 a day ago    Add support ...   3         Gaël PORTAY
    → 93875 3 days ago   Add support ...   2         Gaël PORTAY
    → 3039  8 months ago Add support ...   1         Enric Balletbo i Serra


  - Get some more info:
    $ git-pw series show 95139
    → Property    Value
    → ID          95139
    → Date        2019-03-21T23:14:35
    → Name        Add support for drm/rockchip to dynamically control the DDR frequency.
    → URL         https://patchwork.kernel.org/project/linux-rockchip/list/?series=95139
    → Submitter   Gaël PORTAY
    → Project     Rockchip SoC list
    → Version     3
    → Received    5 of 5
    → Complete    True
    → Cover       10864561 [v3,0/5] Add support ....
    → Patches     10864575 [v3,1/5] devfreq: rockchip-dfi: Move GRF definitions to a common place.
    →     10864579 [v3,2/5] : devfreq: rk3399_dmc: Add rockchip, pmu phandle.
    →     10864589 [v3,3/5] devfreq: rk3399_dmc: Pass ODT and auto power down parameters to TF-A.
    →     10864591 [v3,4/5] arm64: dts: rk3399: Add dfi and dmc nodes.
    →     10864585 [v3,5/5] arm64: dts: rockchip: Enable dmc and dfi nodes on gru.


  - Applying the entire series (or at least trying to):
    $ git-pw series apply 95139
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^
    fetch all the patches in the series, and apply them in the right order.
[[}]]

## SaST-scan [[{devops.security.101]]
https://github.com/AppThreat/sast-scan
This repo builds appthreat/sast-scan (and
quay.io/appthreat/sast-scan), a container image with a number of
bundled open-source static analysis security testing (SAST) tools.
This is like a Swiss Army knife for DevSecOps engineers.

- Features
  - No messy configuration and no server required
  - Scanning is performed directly in the CI and is extremely quick; a full scan often takes only a couple of minutes
  - Gorgeous HTML reports that you can proudly share with your colleagues and the security team
  - Automatic exit code 1 (build breaker) with critical and high vulnerabilities
  - A number of small things that will bring a smile to any DevOps team

   Bundled tools
   Programming Language    Tools
   ansible                 ansible-lint
   apex                    pmd
   aws                     cfn-lint, cfn_nag
   bash                    shellcheck
   bom                     cdxgen
   credscan                gitleaks
   depscan                 dep-scan
   go                      gosec, staticcheck
   java                    cdxgen, gradle, find-sec-bugs, pmd
   jsp                     pmd
   json                    jq, jsondiff, jsonschema
   kotlin                  detekt
   kubernetes              kube-score
   nodejs                  cdxgen, NodeJsScan, eslint, yarn
   puppet                  puppet-lint
   plsql                   pmd
   python                  bandit, cdxgen, pipenv
   ruby                    cyclonedx-ruby
   rust                    cdxgen, cargo-audit
   terraform               tfsec
   visual force (vf)       pmd
   apache velocity (vm)    pmd
   yaml                    yamllint
[[}]]

## <http://alblue.bandlem.com/2011/11/git-tip-of-week-git-notes.html>

## Managing "many-branches" Git projects:
  - sync all local/remote branches:
  https://stackoverflow.com/questions/27157166/sync-all-branches-with-git
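
  One common approach, as a hedged sketch (creates a local tracking branch
  for every remote branch, then updates):

  ```
  git fetch --all --prune            # refresh all remote-tracking refs, drop stale ones
  for ref in $(git for-each-ref --format='%(refname:short)' refs/remotes/origin | grep -v HEAD); do
    git branch --track "${ref#origin/}" "$ref" 2>/dev/null || true  # create missing locals
  done
  git pull --all                     # fetch from all remotes and update the current branch
  ```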

## gitbase: Query Git with SQL [[{]]
  https://opensource.com/article/18/11/gitbase
[[}]]

## online SSH Certificate Authority  [[{]]
https://github.com/smallstep/certificates
  An online SSH Certificate Authority
  · Delegate SSH authentication to step-ca by using SSH
    certificates instead of public keys and authorized_keys files
  · For user certificates, connect SSH to your single sign-on
    provider, to improve security with short-lived certificates and MFA
    (or other security policies) via any OAuth OIDC provider.
  · For host certificates, improve security, eliminate TOFU
    warnings, and set up automated host certificate renewal
[[}]]


## https://docs.ipfs.io/how-to/host-git-style-repo/ [[{]]
  serve a read-only Git repository through the IPFS network.
  end result: git cloneable url served through IPFS!

1) git clone --bare git@myhost.io/myrepo   <·· --bare: don't create working tree, just .git object store
2) cd myrepo
3) git update-server-info                  <·· Add metadata information to .git/info and .git/objects/info
                                               in order to help clients discover what references and packs
                                               the server has. (needed for HTTP -vs ssh -)
4) mv objects/pack/*.pack .                <·· Optional, unpack "large packfile" into its individual
   git unpack-objects < *.pack                 objects, allowing IPFS to deduplicate objects if
   rm -f *.pack objects/pack/*                 the Git repository is duplicated 2+ times

(at this point the repository is ready to be served)

5) $ ipfs add -r .                         <·· add current repo to ipfs
     ...
     added QmX679gmfyaRkKMvPA4WGNWXj9PtpvKWGPgtXaF18etC95 .   <- Hash identifying the directory in IPFS

- Test setup: -----------------------------
  $ cd "some_new_and_clean_path"
  $ REPO_HASH="QmX679gmfya..."
  $ git clone http://${REPO_HASH}.ipfs.localhost:8080/ myrepo  <·· Cloning git from IPFS!!!!

- See also: https://dev.to/woss/part-1-rehosting-git-repositories-on-ipfs-23bf
  a truly distributed way of hosting a git repository
  AT A SPECIFIC REVISION, TAG, OR BRANCH.
[[}]]

## Top 10 container guides for sysadmins | Enable Sysadmin
  https://www.redhat.com/sysadmin/containers-articles-2021
## bash: Parse Arguments in Bash Scripts With getopts
  https://ostechnix.com/parse-arguments-in-bash-scripts-using-getopts/
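
  A minimal getopts skeleton (option letters are arbitrary):

  ```
  #!/usr/bin/env bash
  usage() { echo "usage: $0 [-h] [-v level] [-o file] args..."; }
  while getopts "hv:o:" opt; do        # a trailing ':' means the option takes a value
    case "$opt" in
      h) usage; exit 0 ;;
      v) verbosity="$OPTARG" ;;
      o) outfile="$OPTARG" ;;
      *) usage; exit 1 ;;
    esac
  done
  shift $((OPTIND - 1))                # "$@" now holds only positional arguments
  ```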
## Dockerfile Linter Hadolint Brings Fixes and Improvements, and Support for ARM64 Binaries
  https://www.infoq.com/news/2022/04/hadolint-dockerfile-linter/

## How to Use S3 as a Private Git Repository
  https://fancybeans.com/2012/08/24/how-to-use-s3-as-a-private-git-repository/
  Basically, use git for local commands that manipulate the local
  repository (adding, committing, merging) and jgit for any
  interactions that involve sending or receiving data from the S3
  bucket.

  https://github.com/bgahagan/git-remote-s3
  Push and pull git repos to/from an s3 bucket. Uses gpg to encrypt the
  repo contents (but not branch names!) before sending to s3.

  https://www.petekeen.net/hosting-private-git-repositories-with-gitolite

  Step 1: Install Gitolite

  Gitolite is a system for managing git repositories using git itself
  to manage the configuration. Essentially, after initial configuration
  you make all changes by editing a config file, committing it, and
  pushing up to your git server.

  Gitolite installation is pretty straightforward:


## Nexus Repository Management:[[{containerization.image.registry]]
  https://blog.sonatype.com/using-nexus-3-as-your-repository-part-1-maven-artifacts
  https://blog.sonatype.com/using-nexus-3-as-your-repository-part-2-npm-packages
  https://blog.sonatype.com/using-nexus-3-as-your-repository-part-3-docker-images
  - See also: Artifactory by JFrog
[[}]]

## Run docker container as current user:
  https://jtreminio.com/blog/running-docker-containers-as-current-host-user/
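
  The core of the trick: pass the host UID/GID so files created in mounted
  volumes are owned by you (image and command are placeholders):

  ```
  # --user makes the container process run as the current host user (not root),
  # so anything it writes to the bind-mounted $PWD keeps your ownership.
  docker run --rm \
    --user "$(id -u):$(id -g)" \
    -v "$PWD":/work -w /work \
    node:20 npm ci
  ```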

[[PM.TODO}]]

# Git modules @ /usr/lib/git-core
  ```
  3_097_168 /usr/lib/git-core/git
  1_839_328 /usr/lib/git-core/git-remote-http
  1_835_232 /usr/lib/git-core/git-http-push
  1_831_520 /usr/lib/git-core/git-imap-send
  1_823_072 /usr/lib/git-core/git-fast-import
  1_822_912 /usr/lib/git-core/git-http-fetch
  1_8048 /usr/lib/git-core/git-remote-testsvn
  1_793_856 /usr/lib/git-core/git-daemon
  1_789_984 /usr/lib/git-core/git-http-backend
  1_773_344 /usr/lib/git-core/git-shell
  1_773_280 /usr/lib/git-core/git-credential-cache--daemon
  1_773_216 /usr/lib/git-core/git-sh-i18n--envsubst
  1_773_216 /usr/lib/git-core/git-credential-store
  1_773_216 /usr/lib/git-core/git-credential-cache
     46_169 /usr/lib/git-core/git-add--interactive
     29_600 /usr/lib/git-core/git-rebase--preserve-merges
     25_832 /usr/lib/git-core/git-submodule
     22_362 /usr/lib/git-core/git-instaweb
     17_485 /usr/lib/git-core/git-subtree
     16_938 /usr/lib/git-core/git-sh-prompt
     16_411 /usr/lib/git-core/git-legacy-stash
     16_334 /usr/lib/git-core/git-filter-branch
     10_297 /usr/lib/git-core/git-mergetool
      9_306 /usr/lib/git-core/git-sh-setup
      9_201 /usr/lib/git-core/git-mergetool--lib
      8_290 /usr/lib/git-core/git-bisect
      4_401 /usr/lib/git-core/git-web--browse
      4_130 /usr/lib/git-core/git-request-pull
      4_096 /usr/lib/git-core/mergetools
      3_695 /usr/lib/git-core/git-merge-one-file
      3_693 /usr/lib/git-core/git-quiltimport
      2_650 /usr/lib/git-core/git-parse-remote
      2_477 /usr/lib/git-core/git-merge-octopus
      2_448 /usr/lib/git-core/git-sh-i18n
      2_236 /usr/lib/git-core/git-difftool--helper
        944 /usr/lib/git-core/git-merge-resolve
  ```

# implementing container manager
* <https://iximiuz.com/en/series/implementing-container-manager/>

# Container Tools, Tips, and Tricks - Issue #2
* <https://iximiuz.ck.page/posts/container-tools-tips-and-tricks-issue-2>

# DevOps: Merging " GitHub repos without losing commit history
* <https://hacks.mozilla.org/2022/08/merging-two-github-repositories-without-losing-commit-history/>

# TAXONOMY:
1. Basic Linux commands necessary before jumping into shell scripting.
  * https://lnkd.in/dBTsJbhz                       
  * https://lnkd.in/dHQTiHBB
  * https://lnkd.in/dA9pAmHa
2. Shell Scripting
  * https://lnkd.in/da_wHgQH
  * https://lnkd.in/d5CFPgga
3. Python: This will help you in automation         
  * https://lnkd.in/dFtNz_9D                          
  * https://lnkd.in/d6cRpFrY                          
  * https://lnkd.in/d-EhshQz

4. Networking
  * https://lnkd.in/dqTx6jmN
  * https://lnkd.in/dRqCzbkn

5. Git & Github                 
  * https://lnkd.in/d9gw-9Ds      
  * https://lnkd.in/dEp3KrTJ      
                                
6. YAML                   
  * https://lnkd.in/duvmhd5X  
  * https://lnkd.in/dNqrXjmV  

7. Containers — Docker:
  * https://lnkd.in/dY2ZswMZ 
  * https://lnkd.in/d_EySpbh
  * https://lnkd.in/dPddbJTf

8. CI/CD:
  * https://lnkd.in/dMHv9T8U    

9. Container Orchestration — Kubernetes:
  * https://lnkd.in/duGZwHYX

10. Monitoring:              
  * https://lnkd.in/dpXhmVqs   
  * https://lnkd.in/dStQbpRX   
  * https://lnkd.in/de4H5QVz   
  * https://lnkd.in/dEtTSsbB   
                             
11. Infrastructure Provisioning       
    & Configuration Management (IaC): 
    Terraform, Ansible, Pulumi        
   * https://lnkd.in/dvpzNT5M         
   * https://lnkd.in/dNugwtVW         
   * https://lnkd.in/dn5m2NKQ         
   * https://lnkd.in/dhknHJXp         
   * https://lnkd.in/ddNxd8vU         
                                      
12. CI/CD Tools: Jenkins,
  GitHub Actions, GitLab CI,
  Travis CI, AWS CodePipeline
  + AWS CodeBuild, Azure DevOps, etc
  * https://lnkd.in/dTmSXNzv
  * https://lnkd.in/dAnxpVTe
  * https://lnkd.in/daMFG3Hq
  * https://lnkd.in/dqf-zzrx
  * https://lnkd.in/diWP7Tm7
  * https://lnkd.in/dYDCSiiC
                            
13. AWS:                    
* https://lnkd.in/dmi-TMv9  
* https://lnkd.in/de3-dAB6  
* https://lnkd.in/dh2zXZAB  
* https://lnkd.in/dQMyCBWy  
                            
Best Websites to learn Devops:
* https://kodekloud.com
* https://acloudguru.com
* https://www.katacoda.com


# Vim documentation: windows
* <https://vimdoc.sourceforge.net/htmldoc/windows.html#window>

* Learn to become DevOps Engineer or SRE
* <https://roadmap.sh/devops> [[{doc_has.roadmap}]]


## DevOps exercises
bregman-arie/devops-exercises: Linux, Jenkins, AWS, SRE, Prometheus,
Docker, Python, Ansible, Git, Kubernetes, Terraform, OpenStack, SQL,
NoSQL, Azure, GCP, DNS, Elastic, Network, Virtualization. DevOps
interview questions.
* <https://github.com/bregman-arie/devops-exercises>

## Terraform tools: Top 10 for DevOps
* TFLint (https://lnkd.in/gn4QBzTM): a Terraform linter that checks your
  configuration files for potential errors, best practices, and style violations.
* Terrascan (https://lnkd.in/gxmzE-nm): a static code analyzer specifically
  designed for Terraform configurations, providing security scanning and
  compliance checks.
* Terramate (https://lnkd.in/gEW9ythN): a plugin for popular text editors
  (like Visual Studio Code) that provides advanced features for working with
  Terraform, such as syntax highlighting, autocompletion, and documentation lookup.
* TFSwitch (https://lnkd.in/g9gseDcn): a command-line tool that simplifies
  switching between different versions of Terraform, allowing you to manage
  and switch between multiple Terraform installations.
* Tfsec (https://lnkd.in/gpYuCtF2): a security scanner for Terraform templates
  that identifies potential security issues and provides recommendations to
  improve security posture.
* Checkov (https://lnkd.in/g4Py3WYN): an open-source static analysis tool for
  infrastructure-as-code files, including Terraform; it scans your code for
  security vulnerabilities and policy violations.
* Terraform Compliance (https://lnkd.in/g7C5fQep): validate Terraform
  configurations against security and compliance policies.
* Terraform Landscape (https://lnkd.in/gduDAit5): visualize Terraform plan
  output with highlighted resource changes.
* Terraform Graph (https://lnkd.in/gdp-8FAr): generate a visual representation
  of resource dependencies in your configuration.
* Terraform CDK (https://lnkd.in/gdvUXwHh): define infrastructure using
  programming languages like TypeScript, Python, and Java.

## Traefik 3.1: comprehensive support for Kubernetes Gateway API

* Several contributions aimed at stabilizing and extending its functionality (no longer experimental).
* Gateway API supported: version 1.1.0
* enhancements to HTTPRoute capabilities.
* New ReferenceGrant feature: addressing security concerns in cross-namespace references
  — A SIGNIFICANT ENHANCEMENT FOR MULTI-TENANT ENVIRONMENTS.
* 3.1 facilitates full-featured WASM plugins that can perform HTTP calls and integrate 
  various Go libraries, enhancing the flexibility and power of Traefik’s plugins.
* better error handling and status reporting for HTTPRoutes.