
DKIM and OCI Email

DKIM stands for DomainKeys Identified Mail. It allows an organization to take responsibility for the emails sent from its domains by signing them. It uses a public/private key pair: outgoing mail is signed with the private key, and the receiver uses the public key published in DNS to verify the authenticity of the email. This helps prevent email spoofing and message tampering. OCI Email Delivery lets you configure DKIM for email domains; when a DKIM signing key is created, it generates a CNAME record to be added to the email domain's DNS configuration.

Email configuration:
1. Create a user and generate SMTP credentials for that user. These credentials are used to log in to the SMTP server (see the sketch below).
2. Create an approved sender in the email domain. This will be the From address.
3. Attach a DKIM signing key to the sender's domain.
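For reference, here is a minimal Python sketch of sending mail through OCI Email Delivery over SMTP. The regional SMTP endpoint, port, credentials, and addresses below are placeholders/assumptions, not values from this post; substitute your region's endpoint and the SMTP credentials generated for your user.

    # Minimal sketch: send mail through OCI Email Delivery over SMTP.
    # Endpoint, credentials, and addresses are placeholders (assumed values).
    import smtplib
    from email.message import EmailMessage

    SMTP_HOST = "smtp.email.us-ashburn-1.oci.oraclecloud.com"  # assumed regional endpoint
    SMTP_PORT = 587
    SMTP_USER = "smtp-credential-username"   # SMTP credentials generated for the user
    SMTP_PASS = "smtp-credential-password"

    msg = EmailMessage()
    msg["From"] = "no-reply@example.com"     # must be an approved sender on the email domain
    msg["To"] = "recipient@example.com"
    msg["Subject"] = "Test via OCI Email Delivery"
    msg.set_content("Hello from OCI Email Delivery.")

    with smtplib.SMTP(SMTP_HOST, SMTP_PORT) as smtp:
        smtp.starttls()                      # upgrade to TLS before authenticating
        smtp.login(SMTP_USER, SMTP_PASS)
        smtp.send_message(msg)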

OCI Network Security Groups and Security Rules

An NSG provides access control for a group of resources that share the same security posture. An NSG consists of two parts: a set of security rules and the set of VNICs the rules apply to.

Stateful vs. stateless security rules: for high-volume traffic it is recommended to use stateless security rules. This is because a stateful rule tracks connection information so that bidirectional traffic (the response traffic) is allowed automatically. This connection-tracking state is stored for the compute instance, and the tracking table can fill up under high-volume traffic.
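As a concrete illustration, here is a minimal sketch using the OCI Python SDK to add a stateless ingress rule to an existing NSG. The NSG OCID, source CIDR, and port are placeholders.

    # Minimal sketch (OCI Python SDK): add a stateless ingress rule to an NSG.
    # OCID, CIDR, and port are placeholders.
    import oci

    config = oci.config.from_file()                      # reads ~/.oci/config
    vcn_client = oci.core.VirtualNetworkClient(config)

    rule = oci.core.models.AddSecurityRuleDetails(
        direction="INGRESS",
        protocol="6",                                    # TCP
        source="10.0.0.0/16",
        source_type="CIDR_BLOCK",
        is_stateless=True,                               # no connection tracking
        tcp_options=oci.core.models.TcpOptions(
            destination_port_range=oci.core.models.PortRange(min=443, max=443)
        ),
    )

    vcn_client.add_network_security_group_security_rules(
        network_security_group_id="ocid1.networksecuritygroup.oc1..example",
        add_network_security_group_security_rules_details=oci.core.models.AddNetworkSecurityGroupSecurityRulesDetails(
            security_rules=[rule]
        ),
    )

Note that with stateless rules the response traffic is not allowed automatically, so a matching stateless rule in the opposite direction is also needed.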

AWS Links

Aurora
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Overview.Endpoints.html
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Concepts.AuroraHighAvailability.html

Lambda
https://docs.aws.amazon.com/lambda/latest/dg/invocation-eventsourcemapping.html
https://aws.amazon.com/blogs/compute/announcing-improved-vpc-networking-for-aws-lambda-functions/
https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-concepts.html
https://docs.aws.amazon.com/lambda/latest/dg/invocation-scaling.html
https://docs.aws.amazon.com/lambda/latest/dg/configuration-concurrency.html
https://docs.aws.amazon.com/lambda/latest/dg/provisioned-concurrency.html

Reserved concurrency for a function puts an upper limit on the number of concurrent requests (useful to avoid overloading a downstream system); auto scaling can be used for provisioned concurrency (see the sketch after this list).

https://aws.amazon.com/blogs/compute/understanding-aws-lambda-scaling-and-throughput/

In asynchronous operation, the caller pl
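As a hedged illustration of the concurrency settings above, here is a minimal boto3 sketch; the function name, alias, and numbers are placeholders.

    # Minimal sketch (boto3): reserved vs. provisioned concurrency for a Lambda function.
    # Function name, alias, and values are placeholders.
    import boto3

    lam = boto3.client("lambda")

    # Reserved concurrency: hard upper limit on concurrent executions of this function
    # (useful to avoid overloading a downstream system).
    lam.put_function_concurrency(
        FunctionName="my-function",
        ReservedConcurrentExecutions=50,
    )

    # Provisioned concurrency: pre-initialized execution environments for a version/alias;
    # this value can also be managed by Application Auto Scaling.
    lam.put_provisioned_concurrency_config(
        FunctionName="my-function",
        Qualifier="prod",                    # alias or version
        ProvisionedConcurrentExecutions=10,
    )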

Load Balancer

https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-manage-subnets.html
https://docs.aws.amazon.com/vpc/latest/userguide/configure-subnets.html

When you add a subnet to your load balancer, Elastic Load Balancing creates a load balancer node in that Availability Zone. Load balancer nodes accept traffic from clients and forward requests to the healthy registered instances in one or more Availability Zones. For load balancers in a VPC, it is recommended to add one subnet per Availability Zone for at least two Availability Zones, and to select subnets from the same Availability Zones as your instances. If your load balancer is an internet-facing load balancer, you must select public subnets in order for your back-end instances to receive traffic from the load balancer (even if the back-end instances are in private subnets). If your load balancer is an internal load balancer, it is recommended to select private subnets. You can add at most one subnet per Availability Zone.
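To make the subnet guidance concrete, here is a minimal boto3 sketch of an internet-facing Classic Load Balancer with one public subnet in each of two Availability Zones; the subnet and security group IDs are placeholders.

    # Minimal sketch (boto3): internet-facing Classic Load Balancer spanning two AZs.
    # Subnet and security group IDs are placeholders.
    import boto3

    elb = boto3.client("elb")

    elb.create_load_balancer(
        LoadBalancerName="my-classic-lb",
        Listeners=[{
            "Protocol": "HTTP",
            "LoadBalancerPort": 80,
            "InstanceProtocol": "HTTP",
            "InstancePort": 80,
        }],
        Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],  # one public subnet per AZ
        SecurityGroups=["sg-0123456789abcdef0"],
        Scheme="internet-facing",
    )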

ECS

https://medium.com/boltops/gentle-introduction-to-how-aws-ecs-works-with-example-tutorial-cea3d27ce63d

Cluster creation: template type - Networking only (Fargate) or EC2.

Roles:
- Task execution IAM role - lets ECS pull images from your Docker registry.
- Task role - used by the task to access other AWS services.

Service creation: launch type (Fargate/EC2), service type (Replica), number of task instances, and minimum/maximum number of healthy instances.

Service Auto Scaling is made possible by a combination of the Amazon ECS, CloudWatch, and Application Auto Scaling APIs: services are created and updated with Amazon ECS, alarms are created with CloudWatch, and scaling policies are created with Application Auto Scaling. Application Auto Scaling turns off scale-in processes while Amazon ECS deployments are in progress; however, scale-out processes continue to occur, unless suspended, during a deployment. Target tracking, step, and scheduled scaling are supported (see the sketch below). A more detailed explanation: https://docs.aws.amazon.com/AmazonECS/latest/d
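Here is a minimal boto3 sketch of ECS Service Auto Scaling with a target tracking policy through Application Auto Scaling; the cluster/service names and capacity values are placeholders.

    # Minimal sketch (boto3): target tracking scaling for an ECS service via
    # Application Auto Scaling. Names and values are placeholders.
    import boto3

    aas = boto3.client("application-autoscaling")

    # Register the service's DesiredCount as a scalable target.
    aas.register_scalable_target(
        ServiceNamespace="ecs",
        ResourceId="service/my-cluster/my-service",
        ScalableDimension="ecs:service:DesiredCount",
        MinCapacity=2,
        MaxCapacity=10,
    )

    # Target tracking policy: keep average service CPU around 60%.
    aas.put_scaling_policy(
        PolicyName="cpu-target-tracking",
        ServiceNamespace="ecs",
        ResourceId="service/my-cluster/my-service",
        ScalableDimension="ecs:service:DesiredCount",
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration={
            "TargetValue": 60.0,
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
            },
        },
    )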

Metrics

Throughput indicates the number of transactions over a given time range, commonly expressed as TPS (transactions per second). Latency - Average Transaction Duration indicates the average response time across all occurrences of a given transaction. P95 is the 95th percentile: 5% of the time, performance is above (worse than) this threshold.
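A small Python illustration of these definitions, using made-up latency samples:

    # Illustration only: throughput and P95 latency from per-transaction durations
    # (seconds) observed over a measurement window. The numbers are made up.
    import statistics

    durations = [0.12, 0.15, 0.11, 0.40, 0.13, 0.14, 0.95, 0.12, 0.16, 0.13]
    window_seconds = 5.0

    throughput_tps = len(durations) / window_seconds          # transactions per second
    avg_latency = statistics.mean(durations)                  # average transaction duration
    p95_latency = statistics.quantiles(durations, n=100)[94]  # 95th percentile

    print(f"TPS={throughput_tps:.1f}, avg={avg_latency:.3f}s, p95={p95_latency:.3f}s")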

RDS

Multi-AZ DB instance deployment - one standby instance, but it doesn't serve reads. The primary DB instance is synchronously replicated across Availability Zones to a standby replica. Write and commit latency can be higher than in a Single-AZ deployment because of the synchronous data replication. Latency may also change if the deployment fails over to the standby replica, although AWS is engineered with low-latency network connectivity between Availability Zones. For production workloads, Provisioned IOPS (input/output operations per second) storage is recommended for fast, consistent performance.

Multi-AZ DB cluster deployment - two readable standby instances that can serve reads.
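For illustration, a minimal boto3 sketch that creates a Multi-AZ DB instance with Provisioned IOPS storage; the identifier, engine, credentials, and sizes are placeholders.

    # Minimal sketch (boto3): Multi-AZ DB instance deployment with Provisioned IOPS.
    # Identifier, engine, credentials, and sizes are placeholders.
    import boto3

    rds = boto3.client("rds")

    rds.create_db_instance(
        DBInstanceIdentifier="my-postgres",
        Engine="postgres",
        DBInstanceClass="db.m6g.large",
        MasterUsername="dbadmin",
        MasterUserPassword="change-me",
        AllocatedStorage=100,
        StorageType="io1",      # Provisioned IOPS storage
        Iops=3000,
        MultiAZ=True,           # synchronous standby in another AZ (does not serve reads)
    )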