
Showing posts from October, 2022

AWS Links

 Aurora
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Overview.Endpoints.html
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Concepts.AuroraHighAvailability.html

Lambda
https://docs.aws.amazon.com/lambda/latest/dg/invocation-eventsourcemapping.html
https://aws.amazon.com/blogs/compute/announcing-improved-vpc-networking-for-aws-lambda-functions/
https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-concepts.html
https://docs.aws.amazon.com/lambda/latest/dg/invocation-scaling.html
https://docs.aws.amazon.com/lambda/latest/dg/configuration-concurrency.html
https://docs.aws.amazon.com/lambda/latest/dg/provisioned-concurrency.html
Reserved concurrency puts an upper limit on the number of concurrent requests a function can serve (useful for limiting load on a downstream system); provisioned concurrency can be managed with auto scaling.
https://aws.amazon.com/blogs/compute/understanding-aws-lambda-scaling-and-throughput/
In asynchronous operation, the caller pl…
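The scaling-and-throughput blog linked above boils Lambda concurrency down to arrival rate times average duration (Little's law). A minimal sketch of that arithmetic, checking demand against a reserved concurrency cap (function names here are my own, not a Lambda API):

```python
def required_concurrency(requests_per_second: float, avg_duration_s: float) -> float:
    """Estimated concurrent executions: arrival rate x average duration (Little's law)."""
    return requests_per_second * avg_duration_s

def is_throttled(requests_per_second: float, avg_duration_s: float,
                 reserved_concurrency: int) -> bool:
    """True if steady-state demand exceeds the function's reserved concurrency cap."""
    return required_concurrency(requests_per_second, avg_duration_s) > reserved_concurrency

# 100 req/s at 0.5 s average duration needs ~50 concurrent executions
print(required_concurrency(100, 0.5))   # 50.0
print(is_throttled(100, 0.5, 40))       # True: a cap of 40 would throttle
```

Note how halving the average duration halves the concurrency needed for the same TPS, which is why latency optimization directly reduces throttling risk.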

Load Balancer

https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-manage-subnets.html
https://docs.aws.amazon.com/vpc/latest/userguide/configure-subnets.html
When you add a subnet to your load balancer, Elastic Load Balancing creates a load balancer node in that Availability Zone. Load balancer nodes accept traffic from clients and forward requests to the healthy registered instances in one or more Availability Zones. For load balancers in a VPC, add one subnet per Availability Zone for at least two Availability Zones, and select subnets from the same Availability Zones as your instances. If your load balancer is internet-facing, you must select public subnets so your back-end instances can receive traffic from the load balancer (even if the back-end instances are in private subnets). If your load balancer is internal, select private subnets. You can add at most one subnet per Availability Zone.
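To make the "node per Availability Zone" idea concrete, here is a toy model (my own class and instance names, not the ELB API) of a per-AZ node forwarding requests round-robin to its healthy registered instances:

```python
from itertools import cycle

class LoadBalancerNode:
    """Toy per-AZ load balancer node: forwards each request to the next
    healthy registered instance. Purely illustrative, not the ELB API."""
    def __init__(self, az: str, instances: list[str], health: dict[str, bool]):
        self.az = az
        healthy = [i for i in instances if health.get(i, False)]
        self._targets = cycle(healthy) if healthy else None

    def forward(self, request: str) -> tuple[str, str]:
        if self._targets is None:
            raise RuntimeError(f"no healthy targets in {self.az}")
        return (next(self._targets), request)

health = {"i-a1": True, "i-a2": False}
node = LoadBalancerNode("us-east-1a", ["i-a1", "i-a2"], health)
print(node.forward("GET /"))  # ('i-a1', 'GET /') -- unhealthy i-a2 is skipped
```

The unhealthy instance never receives traffic, which is the health-check behavior the paragraph above describes.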

ECS

https://medium.com/boltops/gentle-introduction-to-how-aws-ecs-works-with-example-tutorial-cea3d27ce63d
Cluster creation: template type is either Networking only (Fargate) or EC2.
Roles: the task execution IAM role lets ECS pull images from your Docker registry; the task role is for the application's access to AWS services.
Service creation: launch type (Fargate/EC2), type Replica, number of task instances, and the minimum/maximum number of healthy instances.
Service Auto Scaling is made possible by a combination of the Amazon ECS, CloudWatch, and Application Auto Scaling APIs. Services are created and updated with Amazon ECS, alarms are created with CloudWatch, and scaling policies are created with Application Auto Scaling. Application Auto Scaling turns off scale-in processes while Amazon ECS deployments are in progress; however, scale-out processes continue to occur, unless suspended, during a deployment. Target tracking, step, and scheduled scaling are supported.
More detailed explanation: https://docs.aws.amazon.com/AmazonECS/latest/d…
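A rough sketch of the target-tracking arithmetic mentioned above (this is my own simplification, not the actual Application Auto Scaling algorithm): scale the task count in proportion to how far the observed metric is from the target, then clamp to the service's min/max:

```python
import math

def target_tracking_desired_count(current_count: int, current_metric: float,
                                  target_metric: float,
                                  min_count: int, max_count: int) -> int:
    """Simplified target tracking: desired = ceil(current * metric / target),
    clamped to [min_count, max_count]."""
    desired = math.ceil(current_count * current_metric / target_metric)
    return max(min_count, min(max_count, desired))

# 4 tasks at 90% average CPU against a 60% target -> scale out to 6 tasks
print(target_tracking_desired_count(4, 90, 60, min_count=2, max_count=10))  # 6
```

If the metric falls below the target, the same formula scales in, but as noted above scale-in is suspended while a deployment is in progress.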

Metrics

Throughput indicates the number of transactions over a given time range, typically expressed as TPS (transactions per second).
Latency: the average transaction duration indicates the average response time for all occurrences of a given transaction.
P95: the 95th-percentile latency; 5% of requests are slower than this threshold.
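A small sketch of these metrics, using the nearest-rank method for the percentile (one of several common percentile definitions), which also shows why P95 catches outliers that the average hides:

```python
import math

def throughput_tps(num_transactions: int, window_seconds: float) -> float:
    """Throughput: transactions completed per second over a window."""
    return num_transactions / window_seconds

def percentile(latencies_ms: list, p: float):
    """Nearest-rank percentile: the smallest sample at or above p% of the data."""
    ordered = sorted(latencies_ms)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]

latencies = [10, 12, 11, 13, 12, 11, 250, 12, 11, 10]  # ms; one slow outlier
print(throughput_tps(len(latencies), 2))   # 5.0 TPS
print(sum(latencies) / len(latencies))     # 35.2 -- the average hides the outlier
print(percentile(latencies, 95))           # 250 -- P95 exposes it
```

This is why latency SLOs are usually stated as percentiles (P95/P99) rather than averages.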

RDS

Multi-AZ DB instance deployment: one standby instance, which does not serve reads. The primary DB instance is synchronously replicated across Availability Zones to a standby replica. It can have increased write and commit latency compared to a Single-AZ deployment because of the synchronous data replication. You might see a change in latency if your deployment fails over to the standby replica, although AWS is engineered with low-latency network connectivity between Availability Zones. For production workloads, use Provisioned IOPS (input/output operations per second) for fast, consistent performance.
Multi-AZ DB cluster deployment: multiple standby instances, which do serve reads.

Backups Snapshots

Backups: a full data backup; takes a long time; the data may differ between the start and end of the backup; can be stored anywhere and restored anywhere.
Snapshots: a point-in-time picture of a server; quick, but can only be stored in the same VM/cloud where it was taken and are specific to the cloud vendor; can be used to restore quickly in case of data loss; work by storing metadata and a change log of the block storage.
Amazon RDS provides two different methods for backing up and restoring your DB instance(s): automated backups and database snapshots (DB Snapshots). When restored, a new endpoint is created. Automated backups (7 days of storage, up to 35) enable point-in-time recovery of your DB instance. When automated backups are turned on for your DB instance, Amazon RDS automatically performs a full daily snapshot of your data (during your preferred backup window).
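The "metadata and change log of block storage" idea can be sketched as a toy block device (my own simplified model, not how any particular vendor implements it): each snapshot records only the blocks dirtied since the previous snapshot, and restore replays snapshots in order:

```python
class BlockDevice:
    """Toy block store with change tracking: a snapshot stores only the
    blocks changed since the previous snapshot, which is why snapshots
    are fast relative to full backups."""
    def __init__(self, num_blocks: int):
        self.blocks = [b""] * num_blocks
        self.dirty = set()

    def write(self, index: int, data: bytes):
        self.blocks[index] = data
        self.dirty.add(index)

    def snapshot(self) -> dict:
        snap = {i: self.blocks[i] for i in self.dirty}
        self.dirty.clear()
        return snap

def restore(num_blocks: int, snapshots: list) -> list:
    """Rebuild device state by replaying snapshots oldest-to-newest."""
    blocks = [b""] * num_blocks
    for snap in snapshots:
        for i, data in snap.items():
            blocks[i] = data
    return blocks

dev = BlockDevice(4)
dev.write(0, b"boot"); dev.write(3, b"data")
snap1 = dev.snapshot()            # stores blocks 0 and 3
dev.write(3, b"data-v2")
snap2 = dev.snapshot()            # stores only block 3, the changed one
print(restore(4, [snap1, snap2])) # [b'boot', b'', b'', b'data-v2']
```

Because later snapshots only hold deltas, a restore needs the whole chain, which is one reason snapshots stay tied to the platform where they were taken.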

ECS scaling

ECS Cluster Auto Scaling (CAS) is a capability for ECS to manage the scaling of EC2 Auto Scaling Groups (ASGs). CAS relies on ECS capacity providers, which provide the link between your ECS cluster and the ASGs. Each ASG is associated with a capacity provider, and each such capacity provider has only one ASG, but many capacity providers can be associated with one ECS cluster. To scale the entire cluster automatically, each capacity provider manages the scaling of its associated ASG.
Design goal #1: CAS should scale the ASG out (adding more instances) whenever there is not enough capacity to run the tasks the customer is trying to run.
Design goal #2: CAS should scale in (removing instances) only if it can be done without disrupting any tasks (other than daemon tasks).
Design goal #3: customers should maintain full control of their ASGs, including the ability to set the minimum and maximum size, use other scaling policies, configure instance types, etc.
Uses ECS…
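CAS drives the ASG with a CloudWatch metric, CapacityProviderReservation, via target tracking. A simplified sketch of that metric (ignoring edge cases such as zero running instances, which the real implementation handles specially):

```python
def capacity_provider_reservation(instances_needed: int,
                                  instances_running: int) -> float:
    """Simplified CapacityProviderReservation: M / N * 100, where M is the
    number of instances needed to place all tasks and N is the number
    currently running. Target tracking drives this toward 100."""
    return 100 * instances_needed / instances_running

# Tasks need 12 instances but the ASG runs 10 -> metric 120, scale out
print(capacity_provider_reservation(12, 10))  # 120.0
print(capacity_provider_reservation(8, 10))   # 80.0 -> scale in is possible
```

A metric above 100 means too little capacity (design goal #1 triggers scale-out); below 100 means spare capacity, and scale-in proceeds only when instances can be drained without disrupting non-daemon tasks (design goal #2).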

General AWS

EC2 Reserved Instances: billed regardless of usage. Standard and Convertible (Convertible can change instance family, OS, etc.); Standard RIs can be sold on the Reserved Instance Marketplace.
Provisioned IOPS: Provisioned IOPS volumes, backed by solid-state drives (SSDs), are the highest-performance Elastic Block Store (EBS) storage volumes, designed for critical, IOPS-intensive and throughput-intensive workloads that require low latency.
DR strategies:
1. Backup/restore - RTO of hours.
2. Pilot light - RTO of tens of minutes; some resources are provisioned (like an ALB) but no EC2; data is live.
3. Warm standby - RTO of minutes; scaled-down resources are running and usable for testing.
4. Multi-site active/active.

Dynamo db

Design principles: keep related data together, use few tables, use sort order. Related items can be grouped together and queried efficiently if their key design causes them to sort together. The primary key that uniquely identifies each item in an Amazon DynamoDB table can be simple (a partition key only) or composite (a partition key combined with a sort key). You can determine the access patterns that your application requires, and the read and write units that each table and secondary index requires. By default, every partition in the table will strive to deliver the full capacity of 3,000 RCU and 1,000 WCU. The total throughput across all partitions in the table may be constrained by the provisioned throughput in provisioned mode, or by the table-level throughput limit in on-demand mode. DynamoDB provides some flexibility for…
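To illustrate the "related items sort together" principle, here is a toy in-memory table using a hypothetical single-table schema (the `CUST#`/`ORDER#` key layout is my own example, and `query` only mimics the shape of a DynamoDB Query):

```python
# Composite primary key: partition key "pk", sort key "sk". Items sharing
# a pk sort together, so one Query fetches a customer and its orders.
items = [
    {"pk": "CUST#42", "sk": "PROFILE", "name": "Ada"},
    {"pk": "CUST#42", "sk": "ORDER#2022-10-01", "total": 30},
    {"pk": "CUST#42", "sk": "ORDER#2022-10-15", "total": 12},
    {"pk": "CUST#7",  "sk": "PROFILE", "name": "Lin"},
]

def query(pk: str, sk_prefix: str = "") -> list:
    """Mimics a DynamoDB Query: one partition, optional sort-key prefix,
    results returned in sort-key order."""
    matches = [i for i in items if i["pk"] == pk and i["sk"].startswith(sk_prefix)]
    return sorted(matches, key=lambda i: i["sk"])

orders = query("CUST#42", sk_prefix="ORDER#")
print([o["sk"] for o in orders])  # ['ORDER#2022-10-01', 'ORDER#2022-10-15']
```

Using an ISO date in the sort key makes lexicographic order equal chronological order, which is what makes range queries like "orders in October" cheap.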

Well Architected Framework

Six pillars: Operational Excellence, Security, Reliability, Performance Efficiency, Cost Optimization, Sustainability.
Operational Excellence: perform operations as code; make frequent, small, reversible changes; refine operations procedures frequently; anticipate failure; learn from failure.
Security: Implement a strong identity foundation: implement the principle of least privilege and enforce separation of duties with appropriate authorization for each interaction with your AWS resources; centralize identity management, and aim to eliminate reliance on long-term static credentials. Enable traceability: monitor, alert, and audit actions and changes to your environment in real time; integrate log and metric collection with systems to automatically investigate and take action. Apply security at all layers: apply a defense-in-depth approach with multiple security controls at every layer (for example, edge of network, VPC, load balancing, every instance and compute service, operating system, application, and code).

Deployment strategies

 https://harness.io/blog/blue-green-canary-deployment-strategies https://deployplace.com/blog/canary-deployment/
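The canary pattern from the links above can be sketched as weighted routing that shifts traffic to the new version in stages (function and stage values are my own illustration, not any vendor's API):

```python
import random

def choose_version(canary_weight: float, rng=random.random) -> str:
    """Weighted routing for a canary rollout: route canary_weight (0.0-1.0)
    of requests to the new version and the rest to the stable one."""
    return "canary" if rng() < canary_weight else "stable"

# Shift traffic in stages, checking error rates between stages before
# increasing the weight; roll back by setting the weight to 0.0.
for weight in (0.05, 0.25, 0.5, 1.0):
    sample = [choose_version(weight) for _ in range(1000)]
    print(f"weight={weight}: ~{sample.count('canary')} of 1000 requests hit the canary")
```

Blue/green is the degenerate case of this: the weight jumps straight from 0.0 to 1.0 once the green environment passes its checks.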