AWS

S3

We can store unlimited data in S3, but the maximum size of a single object is 5 TB.

By default, S3 stores objects in the S3 Standard storage class, which keeps redundant copies of each object across multiple (at least three) Availability Zones.

The data stays in the AWS region you choose when creating the bucket; picking a region close to where most requests originate keeps latency low.

Storage classes in S3

  1. S3 Standard
  2. S3 Intelligent-Tiering
  3. S3 Standard-IA (Infrequent Access)
  4. S3 One Zone-IA
  5. S3 Glacier
  6. S3 Glacier Deep Archive

S3 Standard: Ideal for frequently accessed data. It provides high durability, availability, and performance object storage.

S3 Intelligent-Tiering: Automatically moves data between two access tiers when access patterns change, offering cost savings without performance impact.

S3 Standard-IA (Infrequent Access): Designed for data that is accessed less frequently, but requires rapid access when needed. It offers lower storage costs and higher retrieval costs compared to S3 Standard.

S3 One Zone-IA: For infrequently accessed data that does not require multi-Availability-Zone resilience. It's more cost-effective than Standard-IA, but data is stored in a single Availability Zone and is lost if that zone is destroyed.

S3 Glacier: Used for data archiving with retrieval times ranging from minutes to hours. It's a low-cost storage class for long-term data.

S3 Glacier Deep Archive: The lowest-cost storage class, ideal for long-term data that is rarely accessed, with retrieval times of 12 hours or more.
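For illustration, here is a minimal boto3 sketch that uploads an object directly into a chosen storage class; the bucket name, key, and body are placeholders.

import boto3

s3 = boto3.client("s3")

# Upload an object into a specific storage class (names and content are placeholders)
s3.put_object(
    Bucket="my-archive-bucket",
    Key="reports/2024-01.csv",
    Body=b"col1,col2\n",
    StorageClass="STANDARD_IA",  # e.g. "INTELLIGENT_TIERING", "ONEZONE_IA", "GLACIER", "DEEP_ARCHIVE"
)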

CDN

CloudFront: Amazon CloudFront is a Content Delivery Network (CDN) service that securely delivers content (web pages, videos, APIs, etc.) with low latency and high transfer speeds. It uses a network of Edge Locations to cache and serve content closer to users globally.
Edge Location: An Edge Location is a physical data center in the AWS global network where CloudFront caches content. These are positioned globally to deliver content faster to end-users.

How many security groups (SGs) can be assigned to an EC2 instance?

By default, an EC2 instance can have up to 5 security groups per network interface; this quota can be raised to a maximum of 16.

How many rules can be added to a security group?

Inbound rules: 60 per security group by default. Outbound rules: 60 per security group by default. (Both quotas are adjustable.)

What is an instance profile?

An instance profile in AWS (Amazon Web Services) is a container for an IAM (Identity and Access Management) role that you can use to assign permissions to an EC2 instance. This allows your EC2 instances to make API requests to AWS services on your behalf.

  • IAM Role: Defines a set of permissions and can be assumed by trusted entities (an EC2 instance, a Lambda function, another AWS service, and so on).
  • Instance Profile: A container that holds the role and can be attached to an EC2 instance; it applies only to EC2 (a boto3 sketch follows).
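A minimal boto3 sketch of that relationship; the role name, profile name, and instance ID below are placeholders, and the IAM role is assumed to already exist.

import boto3

iam = boto3.client("iam")
ec2 = boto3.client("ec2")

role_name = "app-s3-read-role"        # placeholder: existing IAM role
profile_name = "app-s3-read-profile"  # placeholder: new instance profile

# Create the instance profile and put the role inside it
iam.create_instance_profile(InstanceProfileName=profile_name)
iam.add_role_to_instance_profile(InstanceProfileName=profile_name, RoleName=role_name)

# Attach the instance profile to a running EC2 instance
ec2.associate_iam_instance_profile(
    IamInstanceProfile={"Name": profile_name},
    InstanceId="i-0123456789abcdef0",  # placeholder instance ID
)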

What is an inline policy in AWS?

In AWS, an inline policy is a policy that is created and embedded directly into a single user, group, or role. For example, while creating a user we can attach the policy to that user directly.
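As a sketch, attaching an inline policy to a user with boto3; the user name, policy name, and bucket ARN are placeholders.

import json
import boto3

iam = boto3.client("iam")

# Inline policy embedded directly into a single user
iam.put_user_policy(
    UserName="report-user",
    PolicyName="read-only-reports-bucket",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::reports-bucket/*",
        }],
    }),
)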

What is the main difference between CloudTrail and CloudWatch?

  • CloudTrail: Tracks and records API activity and user actions across your AWS account; we use CloudTrail for account-level auditing and logging.
  • Use Case: Helps with compliance, auditing, and governance by providing a history of AWS API calls.
  • Data: Logs information about who performed what actions, from where, and when (e.g., creating a resource, updating configurations).
  • CloudWatch: Monitors performance metrics and operational data for AWS resources and applications.
  • Use Case: Helps with real-time monitoring, alerting, and visualization of resource usage and application performance.
  • Data: Tracks metrics like CPU usage, memory, disk I/O, latency, and application logs.

When should you use AWS Fargate versus Amazon EC2 in ECS or EKS?

Scenario 1: Using Fargate

Use Case: A startup wants to deploy a microservices-based e-commerce application with unpredictable traffic patterns.

  • Why Fargate?
    • The startup doesn’t want to manage infrastructure or worry about scaling.
    • Traffic can spike during sales events, and Fargate’s auto-scaling ensures seamless handling of the load.
    • Each microservice has different resource needs, and Fargate allows specifying CPU and memory per task, optimizing costs.

Scenario 2: Using EC2

Use Case: A large enterprise runs a machine-learning workload that processes data for their analytics platform.

  • Why EC2?
    • The workload requires GPU instances for computation, which Fargate doesn't support.
    • The team has predictable, sustained usage, making Reserved Instances or Spot Instances a cost-effective option.
    • They need to install specific libraries and software that require direct access to the host OS.

NOTE: as of now, AWS Fargate does not support GPU instances

How can an EC2 instance with a private IP communicate with S3?

An EC2 instance with a private IP address in a private subnet can communicate with Amazon S3 without using the public internet by leveraging a VPC Endpoint or a NAT Gateway. Using a VPC endpoint, the instance reaches the service over its private IP across the AWS network.

Create a VPC Gateway Endpoint:

  • Go to the VPC Console.
  • Navigate to Endpoints and click Create Endpoint.
  • Choose the S3 service: com.amazonaws.<region>.s3.
  • Select the Gateway type.
  • Choose your VPC and associate the route table(s) of the private subnet where your EC2 instance resides (a boto3 sketch of the same call follows).
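A minimal boto3 sketch of the same endpoint creation; the region, VPC ID, and route table ID are placeholders.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # example region

# Create an S3 Gateway Endpoint and attach it to the private subnet's route table
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",             # placeholder VPC ID
    ServiceName="com.amazonaws.us-east-1.s3",  # S3 service name for the chosen region
    RouteTableIds=["rtb-0123456789abcdef0"],   # placeholder route table ID
)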

CloudFront

Amazon CloudFront is a fast content delivery network (CDN) that securely delivers data, videos, applications, and APIs to customers worldwide with low latency and high transfer speeds. It uses a global network of edge locations to cache and distribute content, enhancing performance, scalability, and security for your applications and websites. CloudFront provides SSL/TLS encryption by default, ensuring secure communication with minimal setup.

Steps to host a static website in S3?

  • Click on Create bucket.
  • Click Upload, add your website files (e.g., index.html, style.css, images), and upload them.
  • In your bucket, go to the Properties tab.
  • Scroll down to the Static website hosting section.
  • Select Enable.
  • For Index document, enter index.html.
  • For Error document (optional), you can enter error.html.
  • Set a bucket policy to allow public read access, for example:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::my-website-bucket/*"
        }
    ]
}
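The same steps as a boto3 sketch, assuming the bucket already exists and Block Public Access has been relaxed so a public bucket policy is allowed:

import json
import boto3

s3 = boto3.client("s3")
bucket = "my-website-bucket"  # bucket name from the policy above

# Enable static website hosting on the bucket
s3.put_bucket_website(
    Bucket=bucket,
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)

# Attach the public-read bucket policy shown above
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{bucket}/*",
    }],
}
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))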

What are read replicas in RDS?

Read Replicas in Amazon RDS are copies of your primary database instance that are used to offload read traffic and improve the performance and scalability of your application. They are particularly useful for read-heavy workloads where the primary instance is under significant read load.

  • Offload Read Traffic: Read replicas handle read queries, freeing up the primary database instance for write operations.
  • Asynchronous Replication: Changes made to the primary database are asynchronously replicated to the read replicas, ensuring minimal impact on write performance.
  • Scalability: Multiple read replicas can be created (up to 5 per primary instance for most RDS engines) to distribute the read load across multiple instances.
  • High Availability: While not a replacement for Multi-AZ deployments, read replicas can improve availability by allowing reads even if the primary instance experiences downtime.
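A boto3 sketch of creating a read replica from an existing primary instance; the identifiers and instance class are placeholders.

import boto3

rds = boto3.client("rds")

# Create a read replica of the primary instance
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="app-db-replica-1",      # placeholder replica name
    SourceDBInstanceIdentifier="app-db-primary",  # placeholder primary instance
    DBInstanceClass="db.r6g.large",               # optional; defaults to the source class
)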

Your production database is hosted on RDS and is experiencing high latency that impacts application performance. How do you troubleshoot this issue?

I would first check the RDS metrics and dashboards in CloudWatch to identify CPU, memory, and I/O bottlenecks. Then I would analyse the slow query logs to find inefficient queries and try to optimize them. Based on resource utilization and timing, we can consider scaling the RDS instance to a higher tier or changing the storage type, and we can use RDS read replicas to distribute read load and improve overall performance. For the future, we can set up alarms and policies in CloudWatch.

How can you handle RDS scaling?

  • Using Read Replicas: Create one or more read replicas to handle read-heavy workloads.
  • Configure your application to route read queries to replicas and write queries to the primary database.
  • Storage Scaling: RDS supports storage autoscaling for MySQL, MariaDB, PostgreSQL, and Oracle.
  • Enable the storage autoscaling option in the RDS console and set a maximum threshold (a boto3 sketch follows).
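A boto3 sketch of enabling storage autoscaling on an existing instance; the identifier and threshold are examples.

import boto3

rds = boto3.client("rds")

# Set a maximum storage threshold so RDS can autoscale storage up to that limit
rds.modify_db_instance(
    DBInstanceIdentifier="app-db-primary",  # placeholder instance identifier
    MaxAllocatedStorage=1000,               # autoscale storage up to 1000 GiB
    ApplyImmediately=True,
)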

What metrics are available on your CloudWatch dashboard?

A CloudWatch dashboard can display various metrics for AWS resources and custom applications. These metrics help monitor the performance, health, and availability of your infrastructure and applications.


1. Common Metrics for AWS Services

a. EC2 Instances:

  • CPU Utilization (%): Tracks the percentage of CPU used.
  • Memory Usage (Custom): Requires an agent to monitor memory utilization.
  • Disk Read/Write Operations: Monitors I/O activity.
  • Network In/Out (Bytes): Tracks data transfer to and from the instance.
  • Status Checks: Reports the health of instance-level and system-level checks.

b. Amazon S3:

  • Number of Objects: Tracks the count of objects stored in a bucket.
  • Bucket Size (Bytes): Monitors storage usage.
  • Get/Put/Delete Requests: Tracks operations on the bucket.

c. Elastic Load Balancer (ELB):

  • Request Count: Number of requests handled by the ELB.
  • Healthy/Unhealthy Host Count: Monitors backend instance health.
  • Latency: Measures request-response time.
  • HTTP 4XX/5XX Errors: Tracks client and server errors.

d. Amazon RDS:

  • CPU Utilization (%): Tracks database instance load.
  • Free Storage Space (GB): Monitors available storage.
  • Read/Write IOPS: Tracks database read/write operations.
  • Database Connections: Monitors the number of open connections.

e. Amazon EKS (Kubernetes):

  • Node CPU/Memory Utilization (Custom): Tracks usage on worker nodes.
  • Pod Count: Number of active pods.
  • API Server Latency: Measures response time for API requests.
  • Disk Space (Custom): Monitors disk usage on nodes.
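As an illustration, here is a boto3 sketch that turns one of these metrics (EC2 CPUUtilization) into a CloudWatch alarm; the instance ID, threshold, and SNS topic are placeholders.

import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm when average CPU utilization stays above 80% for two 5-minute periods
cloudwatch.put_metric_alarm(
    AlarmName="ec2-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:ops-alerts"],
)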

How do you attach an SSL certificate to an S3 bucket?

You cannot attach an SSL certificate directly to an S3 bucket because S3 itself does not natively support SSL for custom domains. However, you can serve an S3 bucket’s content securely using HTTPS by integrating it with Amazon CloudFront, which allows you to attach an SSL certificate to your custom domain.

  • Prepare Your S3 Bucket: Ensure your S3 bucket is configured to host a static website or serve content publicly.
  • Static Website Hosting: In the S3 bucket settings, enable static website hosting if hosting a website.
  • Request or Import an SSL Certificate: In AWS Certificate Manager (ACM, in the us-east-1 region for CloudFront), enter your custom domain name (e.g., www.example.com) and validate ownership using DNS or email (a boto3 sketch follows after this list).
  • Create a CloudFront Distribution: Navigate to the CloudFront Console and create a new distribution.
  • Distribution Settings: Enter your S3 bucket’s URL (use the S3 website endpoint if static hosting is enabled).
  • Viewer Protocol Policy: Set it to Redirect HTTP to HTTPS or HTTPS Only to enforce secure connections.
  • Custom SSL Certificate: Select the SSL certificate you created or imported in ACM.
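A boto3 sketch of the certificate request step; the domain name is an example, and the certificate must be requested in us-east-1 for CloudFront to use it.

import boto3

acm = boto3.client("acm", region_name="us-east-1")

# Request a public certificate for the custom domain and validate it via DNS
response = acm.request_certificate(
    DomainName="www.example.com",
    ValidationMethod="DNS",
)
print(response["CertificateArn"])  # use this ARN as the custom SSL certificate in CloudFront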

What is the maximum runtime for a Lambda function?

The maximum runtime for an AWS Lambda function is 15 minutes (900 seconds).

What is the maximum memory size for a Lambda function?

The maximum memory size for an AWS Lambda function is 10 GB (10,240 MB).
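A boto3 sketch that sets an existing function to those maximums; the function name is a placeholder.

import boto3

lambda_client = boto3.client("lambda")

# Raise the function's limits to the documented maximums
lambda_client.update_function_configuration(
    FunctionName="my-function",
    Timeout=900,       # 15 minutes, the maximum runtime
    MemorySize=10240,  # 10,240 MB (10 GB), the maximum memory size
)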

Which AWS service can we use to migrate from on-premises to AWS?

AWS Migration Hub: This service provides a central location to track the progress of your migrations across multiple AWS and partner solutions.

You want to deploy a highly available web application across multiple AWS regions. What AWS service can help you with this?

AWS Global Accelerator: This service improves the availability and performance of your applications by using the AWS global network. It can direct traffic to the optimal endpoint based on performance metrics and health checks.

Can we edit the CIDR range of a VPC?

No, you cannot edit the CIDR range of an existing VPC in AWS. Once a VPC is created, its CIDR range is fixed and cannot be modified. However, there are a few workarounds depending on your requirements:

  1. Add a Secondary CIDR Block: AWS allows you to add a secondary CIDR block to an existing VPC. This effectively extends the IP address range of your VPC without disrupting current resources (a boto3 sketch follows after this list).
  2. Migrate Resources to a New VPC: If you need to completely change the CIDR range, you must create a new VPC and migrate resources.
  3. VPC Peering: If adding a secondary CIDR or migrating resources is not feasible, you can use VPC peering to connect the existing VPC with another VPC that has the desired CIDR range.
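A boto3 sketch of the first workaround, associating a secondary CIDR block; the VPC ID and CIDR are placeholders.

import boto3

ec2 = boto3.client("ec2")

# Add a secondary CIDR block to an existing VPC
ec2.associate_vpc_cidr_block(
    VpcId="vpc-0123456789abcdef0",  # placeholder VPC ID
    CidrBlock="100.64.0.0/16",      # placeholder secondary CIDR
)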

What is private link ?

AWS PrivateLink enables private and secure connectivity between VPCs, AWS services, and on-premises systems without using the public internet. It uses VPC Endpoints to route traffic through the AWS backbone network, ensuring low latency and enhanced security.

In our client project, the resources are distributed across two regions. Now, the client wants to ensure that resources cannot be deployed in any other region. How can this be achieved in AWS?

  • Create an AWS Organization in the Management Account.
  • Add member accounts to the organization.
  • Create a Service Control Policy (SCP)
  • Go to the Policies section and click Create policy.
  • Use the following JSON to deny requests outside the approved regions (in practice, global services such as IAM, CloudFront, and Route 53 are usually excluded with a NotAction element so they are not blocked):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowActionsInSpecificRegions",
      "Effect": "Deny",
      "Action": "*",
      "Resource": "*",
      "Condition": {
        "StringNotEquals": {
          "aws:RequestedRegion": [
            "us-east-1",
            "us-west-1"
          ]
        }
      }
    }
  ]
}

What is the role of a route table in VPC?

A route table in an AWS VPC (Virtual Private Cloud) controls the traffic routing within the VPC, allowing resources to communicate with each other and with external networks, such as the internet.

Difference between an IAM policy and an S3 bucket policy?

  • IAM Policy: Use for granting permissions to AWS users, roles, or services across multiple resources (e.g., all S3 buckets in an account).
  • S3 Bucket Policy: Use for managing access at the bucket level, especially for external users, cross-account access, or public access.

๐˜๐จ๐ฎ๐ซ ๐œ๐จ๐ฆ๐ฉ๐š๐ง๐ฒ ๐ฐ๐š๐ง๐ญ๐ฌ ๐ญ๐จ ๐ ๐ซ๐š๐๐ฎ๐š๐ฅ๐ฅ๐ฒ ๐ซ๐ž๐ฅ๐ž๐š๐ฌ๐ž ๐š ๐ง๐ž๐ฐ ๐Ÿ๐ž๐š๐ญ๐ฎ๐ซ๐ž ๐ญ๐จ ๐Ÿ‘๐ŸŽ% ๐จ๐Ÿ ๐ฎ๐ฌ๐ž๐ซ๐ฌ ๐ฐ๐ก๐ข๐ฅ๐ž ๐ญ๐ก๐ž ๐ซ๐ž๐ฆ๐š๐ข๐ง๐ข๐ง๐  ๐Ÿ•๐ŸŽ% ๐š๐œ๐œ๐ž๐ฌ๐ฌ ๐ญ๐ก๐ž ๐จ๐ฅ๐ ๐ฏ๐ž๐ซ๐ฌ๐ข๐จ๐ง. ๐‡๐จ๐ฐ ๐๐จ ๐ฒ๐จ๐ฎ ๐ข๐ฆ๐ฉ๐ฅ๐ž๐ฆ๐ž๐ง๐ญ ๐ญ๐ก๐ข๐ฌ ๐ข๐ง ๐‘๐จ๐ฎ๐ญ๐ž ๐Ÿ“๐Ÿ‘?

Implementing a gradual release of a new feature to 30% of users while the remaining 70% access the old version in Route 53 involves setting up weighted routing. Here’s how you can achieve this:

  1. Create Two Versions: Ensure you have two versions of your application running. One with the new feature and one with the old feature. Let’s call them version-new and version-old.
  2. Set Up Route 53: Create Hosted Zone: If not already created, set up a hosted zone for your domain in Route 53.
  3. Enable Weighted Routing: In the Route 53 console, create two record sets for your domain (e.g., example.com) with the weighted routing policy: one pointing to version-new with weight 30 and one pointing to version-old with weight 70 (a boto3 sketch follows).
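A boto3 sketch of the weighted records; the hosted zone ID, domain, and IP addresses are placeholders.

import boto3

route53 = boto3.client("route53")

# Two weighted records: ~30% of DNS answers go to the new version, ~70% to the old one
versions = [
    {"set_id": "version-new", "weight": 30, "ip": "203.0.113.10"},
    {"set_id": "version-old", "weight": 70, "ip": "203.0.113.20"},
]

route53.change_resource_record_sets(
    HostedZoneId="Z0123456789ABCDEFGHIJ",  # placeholder hosted zone ID
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "example.com",
                    "Type": "A",
                    "SetIdentifier": v["set_id"],
                    "Weight": v["weight"],
                    "TTL": 60,
                    "ResourceRecords": [{"Value": v["ip"]}],
                },
            }
            for v in versions
        ]
    },
)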
๐˜๐จ๐ฎ ๐ก๐š๐ฏ๐ž ๐ฎ๐ฌ๐ž๐ซ๐ฌ ๐ข๐ง ๐„๐ฎ๐ซ๐จ๐ฉ๐ž ๐š๐ง๐ ๐๐จ๐ซ๐ญ๐ก ๐€๐ฆ๐ž๐ซ๐ข๐œ๐š ๐ฐ๐ก๐จ ๐ง๐ž๐ž๐ ๐ญ๐จ ๐š๐œ๐œ๐ž๐ฌ๐ฌ ๐ซ๐ž๐ ๐ข๐จ๐ง-๐ฌ๐ฉ๐ž๐œ๐ข๐Ÿ๐ข๐œ ๐œ๐จ๐ง๐ญ๐ž๐ง๐ญ. ๐‡๐จ๐ฐ ๐๐จ ๐ฒ๐จ๐ฎ ๐œ๐จ๐ง๐Ÿ๐ข๐ ๐ฎ๐ซ๐ž ๐‘๐จ๐ฎ๐ญ๐ž ๐Ÿ“๐Ÿ‘ ๐ญ๐จ ๐๐ข๐ซ๐ž๐œ๐ญ ๐ฎ๐ฌ๐ž๐ซ๐ฌ ๐ญ๐จ ๐ซ๐ž๐ ๐ข๐จ๐ง-๐ฌ๐ฉ๐ž๐œ๐ข๐Ÿ๐ข๐œ ๐ฌ๐ž๐ซ๐ฏ๐ž๐ซ๐ฌ?

To direct users in Europe and North America to region-specific servers, you can use the Geolocation Routing Policy in Amazon Route 53. This policy allows you to serve content based on the user’s geographic location.

๐˜๐จ๐ฎ๐ซ ๐š๐ฉ๐ฉ๐ฅ๐ข๐œ๐š๐ญ๐ข๐จ๐ง ๐ข๐ฌ ๐๐ž๐ฉ๐ฅ๐จ๐ฒ๐ž๐ ๐š๐œ๐ซ๐จ๐ฌ๐ฌ ๐ฆ๐ฎ๐ฅ๐ญ๐ข๐ฉ๐ฅ๐ž ๐ข๐ง๐ฌ๐ญ๐š๐ง๐œ๐ž๐ฌ. ๐‡๐จ๐ฐ ๐œ๐š๐ง ๐ฒ๐จ๐ฎ ๐œ๐จ๐ง๐Ÿ๐ข๐ ๐ฎ๐ซ๐ž ๐‘๐จ๐ฎ๐ญ๐ž ๐Ÿ“๐Ÿ‘ ๐ญ๐จ ๐ซ๐ž๐ญ๐ฎ๐ซ๐ง ๐ฆ๐ฎ๐ฅ๐ญ๐ข๐ฉ๐ฅ๐ž ๐ก๐ž๐š๐ฅ๐ญ๐ก๐ฒ ๐ž๐ง๐๐ฉ๐จ๐ข๐ง๐ญ๐ฌ ๐Ÿ๐จ๐ซ ๐ฅ๐จ๐š๐ ๐›๐š๐ฅ๐š๐ง๐œ๐ข๐ง๐ ?

To configure Amazon Route 53 to return multiple healthy endpoints for load balancing across multiple application instances, you can use multivalue answer routing. This allows Route 53 to serve multiple IP addresses or endpoints while performing health checks to ensure only healthy ones are returned.

  1. Deploy Your Application Across Instances: Deploy your application on multiple instances (e.g., EC2 instances or servers) across one or more regions.
  2. Configure Route 53: Ensure your domain is set up in a Route 53 hosted zone (e.g., example.com).
  3. Enable Health Checks: For each record, create and associate a health check to monitor the instance or load balancer's health (e.g., HTTP or TCP checks); a boto3 sketch follows.
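A boto3 sketch for one such record, repeated per instance; the hosted zone ID, IP address, and health-check path are placeholders.

import uuid

import boto3

route53 = boto3.client("route53")

# Health check that probes one instance over HTTP
hc = route53.create_health_check(
    CallerReference=str(uuid.uuid4()),
    HealthCheckConfig={
        "Type": "HTTP",
        "IPAddress": "203.0.113.10",  # placeholder instance IP
        "Port": 80,
        "ResourcePath": "/health",
    },
)

# Multivalue answer record that is only returned while its health check passes
route53.change_resource_record_sets(
    HostedZoneId="Z0123456789ABCDEFGHIJ",  # placeholder hosted zone ID
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "example.com",
                "Type": "A",
                "SetIdentifier": "instance-1",
                "MultiValueAnswer": True,
                "TTL": 60,
                "HealthCheckId": hc["HealthCheck"]["Id"],
                "ResourceRecords": [{"Value": "203.0.113.10"}],
            },
        }]
    },
)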

What are the types of routing policies in Route 53?

  1. Simple Routing Policy
  2. Weighted Routing Policy
  3. Latency Routing Policy
  4. Geolocation Routing Policy
  5. Failover Routing Policy
  6. Multivalue Answer Routing Policy

What is connection draining, and how does it work?

Connection draining is a feature in Elastic Load Balancer (ELB) that ensures in-flight requests are completed before an instance is removed from the load balancer's pool. It helps maintain smooth traffic flow and prevents abrupt disruptions during maintenance or instance termination.
