If you recently migrated your domain and DNS records from a third-party domain registrar to AWS Route 53, you might be looking for a way to configure a simple redirect of the apex (root) domain to an external domain. Many companies buy up the popular TLD variants of their domain to avoid cybersquatting, and all of those extra domains are configured to simply redirect to the main domain. This was easy with the registrar's own DNS service, where the redirect could be configured with ease.
But when using Route 53, there is no direct way to do this. We can make use of the S3 service instead. In this example, I am trying to redirect example.org to example.com, assuming we already have a hosted zone for example.org in Route 53.
Create an S3 bucket named exactly after the domain, "example.org".
Please note that S3 bucket names must be globally unique. If the bucket name you need is already taken, you can't use S3 for the redirection and this documentation won't be applicable to you. You may use other workarounds, such as handling the redirect with a web server in the backend.
Go to the bucket's properties and select "Static website hosting".
From the dropdown, select Redirect all requests to another host name.
Enter example.com as the target host name, choose the protocol (HTTP or HTTPS), and save.
Go to Route53 and select the hosted zone for example.org
Create a record for example.org with the below values
Record Type: A – IPv4 address
Alias Target: Choose your S3 bucket endpoint listed under the heading "S3 website endpoints"
Evaluate Target Health: No
That's it. You might need to wait for some time for DNS propagation, but normally the redirect kicks in quickly. If your bucket endpoint is not showing up while creating the record, wait a few more minutes, refresh the page, and try again.
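If you prefer to script these steps, here is a minimal boto3 sketch of the same setup. The region, the hosted zone ID for example.org, and the S3 website endpoint values are assumptions; adjust them for your account and region.

import boto3

s3 = boto3.client('s3', region_name='us-east-1')  # assumed region
route53 = boto3.client('route53')

# Bucket named exactly after the domain being redirected
s3.create_bucket(Bucket='example.org')

# Redirect every request on the website endpoint to example.com
s3.put_bucket_website(
    Bucket='example.org',
    WebsiteConfiguration={
        'RedirectAllRequestsTo': {'HostName': 'example.com', 'Protocol': 'https'}
    },
)

# Alias record for the apex, pointing at the S3 website endpoint.
# Z3AQBSTGFYJSTF is the fixed hosted zone ID of S3 website endpoints in
# us-east-1; look up the value for your bucket's region.
route53.change_resource_record_sets(
    HostedZoneId='ZEXAMPLEORG',  # placeholder: the hosted zone ID of example.org
    ChangeBatch={
        'Changes': [{
            'Action': 'UPSERT',
            'ResourceRecordSet': {
                'Name': 'example.org.',
                'Type': 'A',
                'AliasTarget': {
                    'HostedZoneId': 'Z3AQBSTGFYJSTF',
                    'DNSName': 's3-website-us-east-1.amazonaws.com.',
                    'EvaluateTargetHealth': False,
                },
            },
        }]
    },
)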
If you are a boto3 user and have multiple AWS profiles defined on your machine, I am sure you have faced this issue at least once. Boto3 loads the default profile if you don't specify one explicitly. Here, I am trying to explain how to specify AWS profiles while using boto3.
Let's say we want to use the profile "dev". We have the following ways in boto3, sketched in the snippet after this list.
1. Create a new session with the profile
2. Change the profile of the default session in code
3. Change the profile of the default session with an environment variable
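A minimal sketch of all three options, assuming a "dev" profile exists in your AWS credentials file and using an S3 client purely as an example:

import os
import boto3

# 1. Create a new session bound to the "dev" profile
dev_session = boto3.session.Session(profile_name='dev')
s3 = dev_session.client('s3')

# 2. Change the profile of the default session in code
boto3.setup_default_session(profile_name='dev')
s3 = boto3.client('s3')

# 3. Change the profile of the default session with an environment variable
#    (equivalent to `export AWS_PROFILE=dev` in the shell, set before boto3 is used)
os.environ['AWS_PROFILE'] = 'dev'
s3 = boto3.client('s3')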
We can also list the available profiles defined in our configuration
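That looks something like this:

import boto3

# Profiles come from ~/.aws/credentials and ~/.aws/config
print(boto3.session.Session().available_profiles)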
The best practice is always to create IAM logins for each user in an organization rather than sharing the root account. Most companies follow this as part of their compliance regulations. But what if the root account is compromised in some way?
Yes, that can happen. It is recommended to enable two-factor authentication for the root account, and for all the IAM users as well. But it is also a wise idea to get notified if someone logs in to the console or makes API calls using the root credentials, so that we can act fast. This can be done with the following steps.
1. Enable CloudTrail for all regions.
2. Create a CloudWatch Events rule to check for console logins and API access by the root user.
3. Enter the following as the event pattern:

{
  "detail-type": [
    "AWS API Call via CloudTrail",
    "AWS Console Sign In via CloudTrail"
  ],
  "detail": {
    "userIdentity": {
      "type": [
        "Root"
      ]
    }
  }
}
4. Select an SNS topic as the target for "Matched event" and choose the topic you are planning to subscribe to (assuming we have already created an SNS topic).
This way, we will get notified whenever the root user does something. If we want the notification by email, go to SNS and create an email subscription on the topic.
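If you would rather script the rule than click through the console, here is a rough boto3 equivalent. The rule name, region, and topic ARN are placeholders, and the SNS topic also needs a policy that allows events.amazonaws.com to publish to it.

import json
import boto3

events = boto3.client('events', region_name='us-east-1')  # assumed region

root_activity_pattern = {
    "detail-type": [
        "AWS API Call via CloudTrail",
        "AWS Console Sign In via CloudTrail",
    ],
    "detail": {"userIdentity": {"type": ["Root"]}},
}

# Create (or update) the rule that matches root user activity
events.put_rule(
    Name='root-activity',  # assumed rule name
    EventPattern=json.dumps(root_activity_pattern),
    State='ENABLED',
)

# Send matched events to the SNS topic (placeholder ARN)
events.put_targets(
    Rule='root-activity',
    Targets=[{
        'Id': 'root-activity-sns',
        'Arn': 'arn:aws:sns:us-east-1:123456789012:Topicname',
    }],
)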
The above is the basic way to check for root activity. But we can tune this further by bringing AWS Lambda into the picture.
The AWS blog already has very detailed documentation on how to do this, so I am not repeating it here. Please refer to the link, which includes a CloudFormation template and a Lambda function, meaning you can simply spin up the stack in a few minutes. I have used this personally and it works great.
I was totally unaware of the fact that even the master account doesn't have all the privileges in an RDS (MySQL) database until I got stuck with this issue. Today, I was asked to create a secondary admin user with all privileges in one of our production databases, a MySQL instance running in AWS RDS. I tried the usual GRANT ALL ON *.* TO admin_sync@'%'; and got an access-denied error while trying to grant all privileges. I was sure about the command because the same command works fine on non-RDS MySQL instances. A few minutes of googling gave me the fix.
mysql> GRANT ALL ON `%`.* TO admin_sync@`%`;
Query OK, 0 rows affected (0.00 sec)
In order to protect the instance itself, RDS doesn't allow even the master account to access the mysql database. The mysql.* tables are considered off-limits: they are restricted by Amazon, so I don't have access to them. I can't grant permissions on *.* since that would include the mysql schema, while %.* does not match those system tables.
So, the quick fix is to use %.* instead of *.*.
The _ and % wildcards are permitted when specifying DB names in GRANT statements that grant privileges at the global or database levels.
This is something I came across while tuning an nginx server that has multiple Tomcat instances as upstreams. We were trying to adjust the read timeout of the upstream proxies. It is hard to simulate this by stopping the backend, as that throws a 502 Bad Gateway instead. So, to simulate a slow upstream, we used a Node.js script.
console.log('Server running at http://'+hostname+':'+port+'/');
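A slow backend like this can also be sketched in a few lines of Python if Node.js is not at hand; the bind address, port, and delay below are arbitrary choices, the only requirement being that the delay exceeds the read timeout you are testing.

import time
from http.server import BaseHTTPRequestHandler, HTTPServer

HOSTNAME, PORT = '127.0.0.1', 3000  # arbitrary bind address and port
DELAY_SECONDS = 120                 # must exceed the proxy_read_timeout under test

class SlowHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Hold the request long enough to trip the upstream read timeout
        time.sleep(DELAY_SECONDS)
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b'slow response\n')

server = HTTPServer((HOSTNAME, PORT), SlowHandler)
print('Server running at http://' + HOSTNAME + ':' + str(PORT) + '/')
server.serve_forever()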
This was an issue I faced while setting up this blog. I was getting 404 errors for all the post links on this blog after selecting a non-default permalink structure with SSL enabled.
The first thing I tried was to regenerate the .htaccess file: I removed the existing .htaccess in the WordPress root folder and regenerated it by switching the permalink structure again. That didn't work for me. The fix was at the web server level, and I finally found it.
The <Directory> block is required in the SSL virtual host configuration of Apache, just as in the port 80 virtual host, so that the rewrite rules in WordPress's .htaccess are allowed to override the defaults.
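For reference, this is roughly what the SSL virtual host needed; the ServerName and DocumentRoot values here are just placeholders for your own setup.

<VirtualHost *:443>
    ServerName example.com
    DocumentRoot /var/www/html

    # Without AllowOverride, the mod_rewrite rules in WordPress's .htaccess
    # are ignored on the HTTPS side and pretty permalinks return 404.
    <Directory /var/www/html>
        AllowOverride All
        Require all granted
    </Directory>
</VirtualHost>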
The Amazon ECS container agent allows container instances to connect to your cluster. If this agent is down for some reason, deployments to the service won't be reflected on the instance, which can cause discrepancies.
Here is a one-liner to check whether the ECS agent container is running. If it is not, we use the AWS SNS service to send a notification to a topic.
if [ -z "$(docker ps -f "name=ecs-agent" -f "status=running" -q)" ]; then /usr/bin/aws --region=us-east-1 sns publish --topic-arn "arn:aws:sns:us-east-1:123456789012:Topicname" --message "ECS Agent is not running in $HOSTNAME."; fi
Make sure that the instance role has permissions to publish to the required topic and the topic is already configured.
In some cases, we might need to return a custom or different error code for a specific issue. For example, we can return a distinct error code to the end user when the backend node is down. We can do that in nginx as in the example below.