S3
We can store unlimited data in S3, but the maximum size of a single object is 5 TB.
By default, S3 stores objects in the S3 Standard class, which redundantly stores data across a minimum of three Availability Zones within the region you choose for the bucket. You select the region when creating the bucket; choosing one close to your users reduces latency.
Storage classes in S3
- S3 Standard
- S3 Intelligent-Tiering
- S3 Standard-IA (Infrequent Access)
- S3 One Zone-IA
- S3 Glacier
- S3 Glacier Deep Archive
(Reduced Redundancy Storage is a legacy class and is no longer recommended.)
S3 Standard: Ideal for frequently accessed data. It provides high durability, availability, and performance object storage.
S3 Intelligent-Tiering: Automatically moves data between two access tiers when access patterns change, offering cost savings without performance impact.
S3 Standard-IA (Infrequent Access): Designed for data that is accessed less frequently, but requires rapid access when needed. It offers lower storage costs and higher retrieval costs compared to S3 Standard.
S3 One Zone-IA: For infrequently accessed data that does not require multi-AZ resilience. It's more cost-effective than Standard-IA but less resilient, since data is stored in a single Availability Zone.
S3 Glacier: Used for data archiving with retrieval times ranging from minutes to hours. It's a low-cost storage class for long-term data.
S3 Glacier Deep Archive: The lowest-cost storage class, ideal for long-term data that is rarely accessed, with retrieval times of 12 hours or more.
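The storage classes above can be combined with a lifecycle configuration that transitions objects to cheaper tiers as they age. A minimal sketch (the `logs/` prefix and all day counts are illustrative), in the JSON shape accepted by `aws s3api put-bucket-lifecycle-configuration`:

```json
{
  "Rules": [
    {
      "ID": "archive-old-logs",
      "Filter": { "Prefix": "logs/" },
      "Status": "Enabled",
      "Transitions": [
        { "Days": 30, "StorageClass": "STANDARD_IA" },
        { "Days": 90, "StorageClass": "GLACIER" },
        { "Days": 365, "StorageClass": "DEEP_ARCHIVE" }
      ],
      "Expiration": { "Days": 730 }
    }
  ]
}
```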
CDN
Cloudfront: Amazon CloudFront is a Content Delivery Network (CDN) service that securely delivers content (web pages, videos, APIs, etc.) with low latency and high transfer speeds. It uses a network of Edge Locations to cache and serve content closer to users globally.
Edge Location: An Edge Location is a physical data center in the AWS global network where CloudFront caches content. These are positioned globally to deliver content faster to end-users.
How many security groups can be assigned to an EC2 instance?
By default, up to 5 security groups can be assigned per network interface of an EC2 instance. This is a soft quota that AWS can raise, up to a maximum of 16 per network interface.
How many rules can be added to a security group?
Inbound Rules: up to 60 per security group. Outbound Rules: up to 60 per security group. These are default quotas and can be increased on request.
What is instance profile?
An instance profile in AWS (Amazon Web Services) is a container for an IAM (Identity and Access Management) role that you can use to assign permissions to an EC2 instance. This allows your EC2 instances to make API requests to AWS services on your behalf.
- IAM Role: Defines a set of permissions and can be assumed by trusted entities, like an EC2 instance. Roles can be used by many AWS services, not just EC2.
- Instance Profile: Contains the role and can be attached to an EC2 instance. It applies only to EC2.
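For a role to be assumable by EC2, the role needs a trust policy naming the EC2 service principal; the console then wraps the role in an instance profile automatically. A minimal sketch of that trust policy:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```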
What is inline policy in aws ?
In AWS, an inline policy is a policy that is created and embedded directly into a single user, group, or role. Unlike managed policies, it cannot be reused across identities; you can attach it directly while creating the user.
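As a sketch, an inline policy embedded in a user might look like this (the bucket name is hypothetical); it could be attached with `aws iam put-user-policy`:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetObject"],
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ]
    }
  ]
}
```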
What is the main difference between cloudtrail and cloudwatch ?
- CloudTrail: Tracks and records API activity and user actions across your AWS account; we use CloudTrail for account-level auditing and logging.
- Use Case: Helps with compliance, auditing, and governance by providing a history of AWS API calls.
- Data: Logs information about who performed what actions, from where, and when (e.g., creating a resource, updating configurations).
- CloudWatch: Monitors performance metrics and operational data for AWS resources and applications.
- Use Case: Helps with real-time monitoring, alerting, and visualization of resource usage and application performance.
- Data: Tracks metrics like CPU usage, memory, disk I/O, latency, and application logs.
When to use AWS Fargate vs. Amazon EC2 in ECS or EKS?
Scenario 1: Using Fargate
Use Case: A startup wants to deploy a microservices-based e-commerce application with unpredictable traffic patterns.
- Why Fargate?
- The startup doesn’t want to manage infrastructure or worry about scaling.
- Traffic can spike during sales events, and Fargate’s auto-scaling ensures seamless handling of the load.
- Each microservice has different resource needs, and Fargate allows specifying CPU and memory per task, optimizing costs.
Scenario 2: Using EC2
Use Case: A large enterprise runs a machine-learning workload that processes data for their analytics platform.
- Why EC2?
- The workload requires GPU instances for computation, which Fargate doesn't support.
- The team has predictable, sustained usage, making Reserved Instances or Spot Instances a cost-effective option.
- They need to install specific libraries and software that require direct access to the host OS.
NOTE: As of now, AWS Fargate does not support GPU instances.
How can an EC2 instance with a private IP communicate with S3?
An EC2 instance with a private IP address in a private subnet can communicate with Amazon S3 without using the public internet by leveraging a VPC Endpoint, or through a NAT Gateway. Using a VPC endpoint, the instance reaches the service over private IPs.
Create a VPC Gateway Endpoint:
- Go to the VPC Console.
- Navigate to Endpoints and click Create Endpoint.
- Choose the S3 service: com.amazonaws.<region>.s3.
- Select the Gateway type.
- Choose your VPC and associate the route table(s) of the private subnet where your EC2 instance resides.
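The same steps can be expressed as a CloudFormation resource; a sketch with placeholder VPC and route table IDs and a hard-coded region:

```json
{
  "Resources": {
    "S3GatewayEndpoint": {
      "Type": "AWS::EC2::VPCEndpoint",
      "Properties": {
        "VpcId": "vpc-0abc1234",
        "ServiceName": "com.amazonaws.us-east-1.s3",
        "VpcEndpointType": "Gateway",
        "RouteTableIds": ["rtb-0abc1234"]
      }
    }
  }
}
```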
Cloudfront
Amazon CloudFront is a fast content delivery network (CDN) that securely delivers data, videos, applications, and APIs to customers worldwide with low latency and high transfer speeds. It uses a global network of edge locations to cache and distribute content, enhancing performance, scalability, and security for your applications and websites. CloudFront provides SSL/TLS encryption by default, ensuring secure communication with minimal setup.
Steps to host a static website in S3?
- Click on Create bucket.
- Click Upload and add your website files (e.g., index.html, style.css, images) and upload them.
- In your bucket, go to the Properties tab.
- Scroll down to the Static website hosting section.
- Select Enable.
- For Index document, enter index.html.
- For Error document (optional), you can enter error.html.
- Set a bucket policy to allow public read access:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::my-website-bucket/*"
}
]
}
What are read replicas in RDS?
Read Replicas in Amazon RDS are copies of your primary database instance that are used to offload read traffic and improve the performance and scalability of your application. They are particularly useful for read-heavy workloads where the primary instance is under significant read load.
- Offload Read Traffic: Read replicas handle read queries, freeing up the primary database instance for write operations.
- Asynchronous Replication: Changes made to the primary database are asynchronously replicated to the read replicas, ensuring minimal impact on write performance.
- Scalability: Multiple read replicas can be created to distribute the read load across instances (the limit depends on the engine; e.g., up to 15 for MySQL, MariaDB, and PostgreSQL, and up to 5 for Oracle and SQL Server).
- High Availability: While not a replacement for Multi-AZ deployments, read replicas can improve availability by allowing reads even if the primary instance experiences downtime.
You have a production database and it is hosted on RDS, it is experiencing high latency that is impacting the application performance, how to troubleshoot this issue ?
I will first check the RDS metrics and dashboards in CloudWatch to identify CPU, memory, and I/O bottlenecks. Then I will analyze the slow query logs to identify inefficient queries and try to optimize them. Based on resource utilization and timing, we can consider scaling the RDS instance to a higher tier or changing the storage type, and we can use read replicas to distribute read load and improve overall performance. For the future, we can set up CloudWatch alarms and scaling policies.
How you can handle RDS scaling ?
- Using Read Replicas: Create one or more read replicas to handle read-heavy workloads.
- Configure your application to route read queries to replicas and write queries to the primary database.
- Storage Scaling: RDS supports storage autoscaling for MySQL, MariaDB, PostgreSQL, and Oracle.
- Enable the storage autoscaling option in the RDS console and set a maximum threshold.
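As a sketch, storage autoscaling in CloudFormation is enabled by setting `MaxAllocatedStorage` above `AllocatedStorage` (values and the logical name are illustrative; other required properties such as credentials are omitted from this fragment):

```json
{
  "Resources": {
    "AppDatabase": {
      "Type": "AWS::RDS::DBInstance",
      "Properties": {
        "Engine": "mysql",
        "DBInstanceClass": "db.t3.medium",
        "AllocatedStorage": "100",
        "MaxAllocatedStorage": 500
      }
    }
  }
}
```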
What metrics are available on your CloudWatch dashboard?
A CloudWatch dashboard can display various metrics for AWS resources and custom applications. These metrics help monitor the performance, health, and availability of your infrastructure and applications.
1. Common Metrics for AWS Services
a. EC2 Instances:
- CPU Utilization (%): Tracks the percentage of CPU used.
- Memory Usage (Custom): Requires an agent to monitor memory utilization.
- Disk Read/Write Operations: Monitors I/O activity.
- Network In/Out (Bytes): Tracks data transfer to and from the instance.
- Status Checks: Reports the health of instance-level and system-level checks.
b. Amazon S3:
- Number of Objects: Tracks the count of objects stored in a bucket.
- Bucket Size (Bytes): Monitors storage usage.
- Get/Put/Delete Requests: Tracks operations on the bucket.
c. Elastic Load Balancer (ELB):
- Request Count: Number of requests handled by the ELB.
- Healthy/Unhealthy Host Count: Monitors backend instance health.
- Latency: Measures request-response time.
- HTTP 4XX/5XX Errors: Tracks client and server errors.
d. Amazon RDS:
- CPU Utilization (%): Tracks database instance load.
- Free Storage Space (GB): Monitors available storage.
- Read/Write IOPS: Tracks database read/write operations.
- Database Connections: Monitors the number of open connections.
e. Amazon EKS (Kubernetes):
- Node CPU/Memory Utilization (Custom): Tracks usage on worker nodes.
- Pod Count: Number of active pods.
- API Server Latency: Measures response time for API requests.
- Disk Space (Custom): Monitors disk usage on nodes.
How do you attach an SSL certificate to an S3 bucket?
You cannot attach an SSL certificate directly to an S3 bucket because S3 itself does not natively support SSL for custom domains. However, you can serve an S3 bucket’s content securely using HTTPS by integrating it with Amazon CloudFront, which allows you to attach an SSL certificate to your custom domain.
- Prepare Your S3 Bucket: Ensure your S3 bucket is configured to host a static website or serve content publicly.
- Static Website Hosting: In the S3 bucket settings, enable static website hosting if hosting a website.
- Request or Import an SSL Certificate: In AWS Certificate Manager (ACM), enter your custom domain name (e.g., www.example.com) and validate ownership using DNS or email.
- Create a CloudFront Distribution: Navigate to the CloudFront Console and create a new distribution.
- Distribution Settings: Enter your S3 bucket’s URL (use the S3 website endpoint if static hosting is enabled).
- Viewer Protocol Policy: Set it to Redirect HTTP to HTTPS or HTTPS Only to enforce secure connections.
- Custom SSL Certificate: Select the SSL certificate you created or imported in ACM.
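In the distribution configuration this corresponds to the `ViewerCertificate` block; note that a certificate used by CloudFront must be issued or imported in ACM in us-east-1. A sketch with placeholder ARN and domain:

```json
{
  "Aliases": { "Quantity": 1, "Items": ["www.example.com"] },
  "ViewerCertificate": {
    "AcmCertificateArn": "arn:aws:acm:us-east-1:111122223333:certificate/example-id",
    "SslSupportMethod": "sni-only",
    "MinimumProtocolVersion": "TLSv1.2_2021"
  }
}
```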
What is the maximum runtime for a Lambda function?
The maximum runtime for an AWS Lambda function is 15 minutes (900 seconds).
What is the maximum memory size for a Lambda function?
The maximum memory size for an AWS Lambda function is 10 GB (10,240 MB).
Which AWS service can we use to migrate on-premises workloads to AWS?
AWS Migration Hub: This service provides a central location to track the progress of your migrations across multiple AWS and partner solutions
You want to deploy a highly available web application across multiple AWS regions. What AWS service can help you with this?
AWS Global Accelerator: This service improves the availability and performance of your applications by using the AWS global network. It can direct traffic to the optimal endpoint based on performance metrics and health checks.
Can we edit the CIDR range of a VPC?
No, you cannot edit the CIDR range of an existing VPC in AWS. Once a VPC is created, its CIDR range is fixed and cannot be modified. However, there are a few workarounds depending on your requirements:
- Add a Secondary CIDR Block: AWS allows you to add a secondary CIDR block to an existing VPC. This effectively extends the IP address range of your VPC without disrupting the current resources.
- Migrate Resources to a New VPC: If you need to completely change the CIDR range, you must create a new VPC and migrate resources.
- VPC Peering: If adding a secondary CIDR or migrating resources is not feasible, you can use VPC peering to connect the existing VPC with another VPC that has the desired CIDR range.
What is PrivateLink?
AWS PrivateLink enables private and secure connectivity between VPCs, AWS services, and on-premises systems without using the public internet. It uses VPC Endpoints to route traffic through the AWS backbone network, ensuring low latency and enhanced security.
In our client project, the resources are distributed across two regions. Now, the client wants to ensure that resources cannot be deployed in any other region. How can this be achieved in AWS?
- Create an AWS Organization in the Management Account.
- Add member accounts to the organization.
- Create a Service Control Policy (SCP)
- Go to the Policies section and click Create policy.
- Use the following JSON to restrict deployments to specific regions
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowActionsInSpecificRegions",
"Effect": "Deny",
"Action": "*",
"Resource": "*",
"Condition": {
"StringNotEquals": {
"aws:RequestedRegion": [
"us-east-1",
"us-west-1"
]
}
}
}
]
}
What is the role of a route table in VPC?
A route table in an AWS VPC (Virtual Private Cloud) controls the traffic routing within the VPC, allowing resources to communicate with each other and with external networks, such as the internet.
Difference between an IAM policy and an S3 bucket policy?
- IAM Policy: Use for granting permissions to AWS users, roles, or services across multiple resources (e.g., all S3 buckets in an account).
- S3 Bucket Policy: Use for managing access at the bucket level, especially for external users, cross-account access, or public access.
Your company wants to gradually release a new feature to 30% of users while the remaining 70% access the old version. How do you implement this in Route 53?
Implementing a gradual release of a new feature to 30% of users while the remaining 70% access the old version in Route 53 involves setting up weighted routing. Here’s how you can achieve this:
- Create Two Versions: Ensure you have two versions of your application running, one with the new feature and one with the old. Let's call them version-new and version-old.
- Set Up Route 53: If not already created, set up a hosted zone for your domain in Route 53.
- Enable Weighted Routing: In the Route 53 console, create record sets for your domain (e.g., example.com) with the weighted routing policy, assigning weight 30 to version-new and 70 to version-old.
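The weighted records can be sketched as a `ChangeResourceRecordSets` batch (IP addresses are from the documentation TEST-NET range, identifiers are illustrative); Route 53 weights are relative, so 30/70 yields the desired split:

```json
{
  "Comment": "Gradual release: 30% new, 70% old",
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "example.com",
        "Type": "A",
        "SetIdentifier": "version-new",
        "Weight": 30,
        "TTL": 60,
        "ResourceRecords": [{ "Value": "192.0.2.10" }]
      }
    },
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "example.com",
        "Type": "A",
        "SetIdentifier": "version-old",
        "Weight": 70,
        "TTL": 60,
        "ResourceRecords": [{ "Value": "192.0.2.20" }]
      }
    }
  ]
}
```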
You have users in Europe and North America who need to access region-specific content. How do you configure Route 53 to direct users to region-specific servers?
To direct users in Europe and North America to region-specific servers, you can use the Geolocation Routing Policy in Amazon Route 53. This policy allows you to serve content based on the user’s geographic location.
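A sketch of the geolocation records (illustrative IPs), including a default record for users outside both continents:

```json
{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "example.com",
        "Type": "A",
        "SetIdentifier": "europe",
        "GeoLocation": { "ContinentCode": "EU" },
        "TTL": 60,
        "ResourceRecords": [{ "Value": "192.0.2.30" }]
      }
    },
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "example.com",
        "Type": "A",
        "SetIdentifier": "north-america",
        "GeoLocation": { "ContinentCode": "NA" },
        "TTL": 60,
        "ResourceRecords": [{ "Value": "192.0.2.40" }]
      }
    },
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "example.com",
        "Type": "A",
        "SetIdentifier": "default",
        "GeoLocation": { "CountryCode": "*" },
        "TTL": 60,
        "ResourceRecords": [{ "Value": "192.0.2.50" }]
      }
    }
  ]
}
```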
Your application is deployed across multiple instances. How can you configure Route 53 to return multiple healthy endpoints for load balancing?
To configure Amazon Route 53 to return multiple healthy endpoints for load balancing across multiple application instances, you can use multivalue answer routing. This allows Route 53 to serve multiple IP addresses or endpoints while performing health checks to ensure only healthy ones are returned.
- Deploy Your Application Across Instances: Deploy your application on multiple instances (e.g., EC2 instances or servers) across one or more regions.
- Configure Route 53: Ensure your domain is set up in a Route 53 hosted zone (e.g., example.com).
- Enable Health Checks: For each record, create and associate a health check. Monitor the instance or load balancer's health (e.g., HTTP or TCP checks).
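The multivalue records can be sketched as follows (IPs and health-check IDs are placeholders); Route 53 returns up to eight healthy records per query:

```json
{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "example.com",
        "Type": "A",
        "SetIdentifier": "instance-1",
        "MultiValueAnswer": true,
        "HealthCheckId": "<health-check-id-1>",
        "TTL": 60,
        "ResourceRecords": [{ "Value": "192.0.2.11" }]
      }
    },
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "example.com",
        "Type": "A",
        "SetIdentifier": "instance-2",
        "MultiValueAnswer": true,
        "HealthCheckId": "<health-check-id-2>",
        "TTL": 60,
        "ResourceRecords": [{ "Value": "192.0.2.12" }]
      }
    }
  ]
}
```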
What are the types of routing policies in Route 53?
- Simple Routing Policy
- Weighted Routing Policy
- Latency Routing Policy
- Geolocation Routing Policy
- Failover Routing Policy
- Multivalue Answer Routing Policy
What is connection draining, and how does it work?
Connection draining is a feature in Elastic Load Balancing (ELB) that ensures in-flight requests are completed before an instance is removed from the load balancer's pool. It helps maintain smooth traffic flow and prevents abrupt disruptions during maintenance or instance termination. In Application and Network Load Balancers, the equivalent feature is called deregistration delay.
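For ALB/NLB, the drain window is a target group attribute; a sketch of the payload shape used with `aws elbv2 modify-target-group-attributes --cli-input-json` (the target group ARN is omitted, and 120 seconds is an illustrative value, not the 300-second default):

```json
{
  "Attributes": [
    { "Key": "deregistration_delay.timeout_seconds", "Value": "120" }
  ]
}
```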