<p>devopslife: Straight to the point. Unnikrishnan (https://devopslife.io/feed.xml, generated by Jekyll, 2022-08-21)</p>
<h2>Resolve DNS locally using Google Chrome (2022-08-20)</h2>
<p>It is quite easy to resolve a domain locally using the /etc/hosts file. The same can be done on Windows by editing the hosts file located at c:\Windows\System32\Drivers\etc\hosts, but that requires Admin access. This article focuses on how to resolve DNS locally using the Google Chrome browser when you are not an admin user.</p>
<p>The cool part is that this does not even require installing any Chrome extensions. All we need to do is start Chrome with a flag. Examples follow.</p>
<p>If you are on Windows, either run the following command in the Run prompt (Windows key + R) or edit the Google Chrome shortcut and append the switch there.</p>
<p><code class="language-plaintext highlighter-rouge">chrome.exe --host-resolver-rules="MAP devopslife.io 192.168.1.1"</code></p>
<p>If you are on Mac, you can invoke Chrome from the terminal by calling the binary path along with the switch. This path may vary slightly depending on the version, but it isn’t hard to find.</p>
<p><code class="language-plaintext highlighter-rouge">/Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome --host-resolver-rules="MAP devopslife.io 192.168.1.1"</code></p>
<p>If we want to supply multiple rules, append them one after another, separated by commas.</p>
<p><code class="language-plaintext highlighter-rouge">chrome.exe --host-resolver-rules="MAP devopslife.io 192.168.1.1, MAP devopslife.com 192.168.1.2"</code></p>
<p>You can validate that the above switch has taken effect by visiting this URL.</p>
<p><code class="language-plaintext highlighter-rouge">chrome://version</code></p>
<p><img src="../assets/img/resolve_dns_with_google_chrome_version.png" alt="" /></p>
<p>If it is not showing there, it is because there might be a Chrome process still running in the background. Make sure no Google Chrome processes are running before starting Chrome with the switch. Task Manager comes in handy for killing the process on Windows; on Linux, you can use pkill.</p>
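<p>As a sketch of the whole flow, the relaunch can be scripted, building the same --host-resolver-rules switch shown above. The binary paths below are assumptions; adjust them for your install.</p>

```python
# Sketch: build the Chrome launch command with --host-resolver-rules.
# The binary paths below are assumptions; adjust for your install.
import sys

CHROME_PATHS = {
    "darwin": "/Applications/Google Chrome.app/Contents/MacOS/Google Chrome",
    "win32": r"C:\Program Files\Google\Chrome\Application\chrome.exe",
}

def chrome_command(mappings):
    """mappings: dict of domain -> IP, turned into comma-separated MAP rules."""
    rules = ", ".join(f"MAP {domain} {ip}" for domain, ip in mappings.items())
    chrome = CHROME_PATHS.get(sys.platform, "google-chrome")  # Linux: rely on PATH
    return [chrome, f"--host-resolver-rules={rules}"]

cmd = chrome_command({"devopslife.io": "192.168.1.1", "devopslife.com": "192.168.1.2"})
print(cmd[1])  # --host-resolver-rules=MAP devopslife.io 192.168.1.1, MAP devopslife.com 192.168.1.2
# To actually launch (after quitting all Chrome processes first):
# import subprocess; subprocess.Popen(cmd)
```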
<h3 id="checkout-the-references">Checkout the references</h3>
<p><a href="https://datacadamia.com/web/browser/chrome#dns_resolver" target="_blank" rel="noopener noreferrer"> https://datacadamia.com/web/browser/chrome#dns_resolver</a></p>UnniIt is quite easy to resolve a domain locally using the /etc/hosts file. Similar thing can be done by editing the hosts file in windows which is located at c:\Windows\System32\Drivers\etc\hosts. But this can be done only if we have Admin access. This article focus primarily on how to resolve the dns locally using google chrome browser when you are not an admin user.Migrating the wordpress blog to jekyll in github pages2022-03-19T10:07:00+08:002022-03-19T00:10:07+08:00https://devopslife.io/migrating-the-wordpress-blog-to-jekyll-in-github-pages<p>I was not getting much time recently to maintain this blog. We all know how dangerous it is to keep something in internet unmaintained amidst the new vulnerabilities which are coming up day by day. The same applies to wordpress as it have multiple third party plugins as well which may open doors to hackers unless we update it with all patches. I read an article in Linkedin recently about jekyll which is a static site generator based on ruby. The good thing about jekyll is it can be hosted in github pages for free.</p>
<p>This is not a detailed step-by-step guide, but I will try to cover the process at a high level.</p>
<p>I used the WordPress plugin “Jekyll Exporter” to migrate the WordPress posts to the Jekyll-specific format. Installing the plugin adds an “Export to Jekyll” option in the Tools menu.</p>
<p><img src="/assets/img/jekyll_exporter_in_wordpress.png" alt="" /></p>
<p>But it was not working for me for some reason. There was another way, using the WP CLI, which helped me achieve this. Try it if you are also facing issues exporting via the wp-admin portal.</p>
<p><code class="language-plaintext highlighter-rouge">wp jekyll-export > export.zip</code></p>
<p>The zip file contains a folder structure which we will use later.</p>
<ol>
<li>Create a github repo with the naming convention your-github-username.github.io</li>
<li>Clone this repo to your local</li>
<li><code class="language-plaintext highlighter-rouge">gem install bundler jekyll</code></li>
<li><code class="language-plaintext highlighter-rouge">jekyll new your-github-username.github.io</code></li>
</ol>
<p>Replace the content of the cloned repo with the directory we just created in the step above.</p>
<p>This gives us a basic Jekyll website. If you want to see how it looks locally, use this command:</p>
<p><code class="language-plaintext highlighter-rouge">bundle exec jekyll serve</code></p>
<p>If you are using Ruby version 3.0.0 or higher, above command may fail. This can be fixed by adding webrick to your dependencies:</p>
<p><code class="language-plaintext highlighter-rouge">bundle add webrick</code></p>
<p>If it complains about any gems, just install them using the gem install command.</p>
<p>Compare the <code class="language-plaintext highlighter-rouge">_config</code> file in the backup and the newly created folder, and correct the fields accordingly. It is somewhat similar to the wp-config.php file. If we add any new gems, we have to add them to the Gemfile as well.</p>
<p>Now we can just swap the <code class="language-plaintext highlighter-rouge">_posts</code> folder in the newly created Jekyll folder with the one from the WordPress export, and we are done.</p>
<h3 id="challenges-and-workarounds">Challenges and workarounds</h3>
<ol>
<li>
<p>The WordPress theme will not be the same, so the blog may look different.</p>
<p>–> You can play around with main.css or find a similar Jekyll theme to make it look alike. Check the respective Jekyll theme documentation for the customization options and CSS paths. Either add the theme to your repo or use a remote_theme; more details on all of these are available in the official documentation.</p>
</li>
<li>
<p>Broken permalinks, which can impact the SEO score.
–> .htaccess is not supported in GitHub Pages, but we can do a meta refresh. I found the Jekyll plugin “jekyll-redirect-from” very useful for dealing with this.</p>
<p>–> Use the jekyll-sitemap plugin for creating a sitemap. Have a look at the jekyll-seo-tag plugin as well.</p>
</li>
<li>
<p>Ruby gem dependencies.
–> This is where I spent a lot of my time while setting things up. Make sure to install the required Ruby gems and add them to the Gemfile.</p>
</li>
</ol>
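<p>To sketch how jekyll-redirect-from is typically used for the broken-permalink problem above: each migrated post lists its old WordPress URL in the front matter, and the plugin generates a meta-refresh page at that path. The permalink below is hypothetical; use your own old URLs.</p>

```yaml
---
layout: post
title: "My migrated post"
# Hypothetical old WordPress permalink; the plugin serves a meta refresh here
redirect_from:
  - /2019/02/my-migrated-post/
---
```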
<h3 id="more-customizations">More customizations</h3>
<ol>
<li>It is possible to add a custom domain name in the GitHub settings.</li>
<li>Try the different themes available in the GitHub settings itself, as well as external ones.</li>
<li>Lots of plugins are also available for Jekyll; however, don’t expect the ecosystem to be as vast as WordPress plugins.</li>
</ol>
<h3 id="checkout-the-references">Checkout the references</h3>
<p><a href="https://jekyllrb.com/docs/" target="_blank" rel="noopener noreferrer"> https://jekyllrb.com/docs/</a></p>
<p><a href="https://wordpress.org/plugins/jekyll-exporter/" target="_blank" rel="noopener noreferrer">https://wordpress.org/plugins/jekyll-exporter/</a>.</p>
<p><a href="https://www.bawbgale.com/from-wordpress-to-jekyll/" target="_blank" rel="noopener noreferrer">https://www.bawbgale.com/from-wordpress-to-jekyll/</a></p>UnniI was not getting much time recently to maintain this blog. We all know how dangerous it is to keep something in internet unmaintained amidst the new vulnerabilities which are coming up day by day. The same applies to wordpress as it have multiple third party plugins as well which may open doors to hackers unless we update it with all patches. I read an article in Linkedin recently about jekyll which is a static site generator based on ruby. The good thing about jekyll is it can be hosted in github pages for free.Recovering SSH public key with the private key2020-05-22T13:24:25+08:002020-05-22T13:24:25+08:00https://devopslife.io/recovering-ssh-public-key-with-the-private-key<p>I recently came across this situation by which the public SSH key of a server is lost and I was instructed to add the public key to other server’s authorized_hosts file to enable password less SSH authentication. However I was not allowed to create a new keypair as the old key could be in place in multiple places. This ssh-keygen command was a lifesaver.</p>
<p>This command will recover the public key if you have the private key with you.</p>
<p><code class="language-plaintext highlighter-rouge">ssh-keygen -y -f id_rsa_private_key_file > publickey.pub</code></p>
<p>Please let me know via the comments if you are having trouble with this command.</p>
<h2>Managing AWS SimpleAD from Linux (2019-12-08)</h2>
<p>SimpleAD is a managed directory service powered by a Samba 4 Active Directory compatible server. User accounts can be created in SimpleAD to access AWS applications such as AWS Client VPN, Amazon WorkSpaces, Amazon WorkDocs, or Amazon WorkMail.</p>
<p>I have used this service for user authentication in Client VPN. One of the challenges we faced is that user management in SimpleAD was very biased towards Windows rather than Linux. It was not a good idea to maintain a Windows server just to manage users while all the other applications run on Linux. After some googling, I came to know about some tools that can be used to manage users in SimpleAD, but none of them were complete or easy to understand. This inspired me to write a post on the topic.</p>
<p>Install the samba-common and adcli packages on the Linux host from which you are managing the AD.</p>
<p><code class="language-plaintext highlighter-rouge">apt-get install -y adcli samba-common</code></p>
<p>Take note of the directory domain name and the DNS servers from the AWS SimpleAD console UI. The examples below assume “username” is the user we are administering, “password” is the password, vpn.example.com is the directory domain, and 192.168.1.2 and 192.168.1.3 are the DNS servers for the directory. Point the host’s resolver at the directory’s DNS servers first:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>echo "nameserver 192.168.1.2" > /etc/resolv.conf
echo "nameserver 192.168.1.3" >> /etc/resolv.conf
</code></pre></div></div>
<p><strong>Create User</strong></p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>echo "password"|adcli create-user username --domain=vpn.example.com --display-name="User FullName" --stdin-password
</code></pre></div></div>
<p><strong>Delete User</strong></p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>echo "password"|adcli delete-user username --domain=vpn.example.com --stdin-password
</code></pre></div></div>
<p><strong>List users</strong></p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>net ads user -S vpn.example.com
</code></pre></div></div>
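<p>If you need to script user management beyond one-off commands, the same adcli invocation can be built from Python. The helper below is a hypothetical sketch; the commented-out call assumes adcli is installed and the host can reach the directory.</p>

```python
# Hypothetical helper that builds the adcli create-user command.
# The password is fed on stdin (--stdin-password) so it never
# appears in the process list or shell history.
import subprocess

def build_create_user_cmd(username, display_name, domain="vpn.example.com"):
    return ["adcli", "create-user", username,
            f"--domain={domain}",
            f"--display-name={display_name}",
            "--stdin-password"]

cmd = build_create_user_cmd("username", "User FullName")
print(" ".join(cmd))
# To actually run it (requires adcli and network access to the directory):
# subprocess.run(cmd, input="password", text=True, check=True)
```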
<p>More adcli commands can be found <a href="http://manpages.ubuntu.com/manpages/cosmic/man8/adcli.8.html">here</a>.</p>
<h2>Testing a CloudWatch alarm using the AWS CLI (2019-06-07)</h2>
<p>Many of us use CloudWatch alarms to trigger some action, such as an SNS notification or a Lambda function. We can use this AWS CLI command to temporarily set a CloudWatch alarm state for testing purposes.</p>
<p>We can change the state of the alarm “MyalarmName” to ALARM as follows.</p>
<div class="language-yml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="s">aws cloudwatch set-alarm-state –alarm-name MyalarmName –state-reason “testing alarm” –state-value ALARM</span>
</code></pre></div></div>
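<p>The boto3 equivalent looks like this sketch. The actual client call is commented out so nothing changes by accident; the helper also validates the state value, since only three are accepted.</p>

```python
# Build (and optionally send) a SetAlarmState request via boto3.
# The AWS call itself is commented out; uncomment it to run for real.
VALID_STATES = {"OK", "ALARM", "INSUFFICIENT_DATA"}

def set_alarm_state_params(alarm_name, reason, state):
    if state not in VALID_STATES:
        raise ValueError(f"state must be one of {sorted(VALID_STATES)}")
    return {"AlarmName": alarm_name,
            "StateReason": reason,
            "StateValue": state}

params = set_alarm_state_params("MyalarmName", "testing alarm", "ALARM")
print(params["StateValue"])  # ALARM
# import boto3
# boto3.client("cloudwatch").set_alarm_state(**params)
```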
<p>The alarm usually returns to its actual state within seconds.</p>
<h2>Increase the session duration of the AWS CLI while assuming a role (2019-06-07)</h2>
<p>This will be useful if you use profiles in the AWS CLI configuration files for switching roles with 2FA enabled. An example configuration follows.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>[default]
output = json
region = us-east-1
[profile staging]
role_arn = arn:aws:iam::1234567890:role/staging
mfa_serial = arn:aws:iam::12345678909:mfa/devopslife
source_profile = default
region = us-east-1
[profile production]
role_arn = arn:aws:iam::1234567123:role/production
mfa_serial = arn:aws:iam::12345678909:mfa/devopslife
source_profile = default
region = us-east-1
</code></pre></div></div>
<p>As per the example configuration above, we can execute AWS CLI commands against multiple AWS accounts by specifying the profile. I am not explaining the role-switching setup here. Consider this scenario: if 2FA is mandatory when switching roles, we have to enter the 2FA token every hour to keep running AWS commands, even though the session duration set for the role is longer. We can avoid this by appending the following parameter to the AWS config.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>duration_seconds = 43200
</code></pre></div></div>
<p>So, the whole code block will look like this</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>[profile production]
role_arn = arn:aws:iam::1234567123:role/production
mfa_serial = arn:aws:iam::12345678909:mfa/devopslife
source_profile = default
region = us-east-1
duration_seconds = 43200
</code></pre></div></div>
<p>43200 seconds (12 hours) is the maximum we can set. Make sure to adjust the role’s maximum session duration in IAM as well for this to work.</p>
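<p>As a quick sanity check, the profile above parses with Python’s configparser, the same ini dialect the AWS config file uses; a small sketch (no AWS call is made here):</p>

```python
# Parse the example profile and check duration_seconds, using the same
# ini-style format as ~/.aws/config.
import configparser

CONFIG_TEXT = """
[profile production]
role_arn = arn:aws:iam::1234567123:role/production
mfa_serial = arn:aws:iam::12345678909:mfa/devopslife
source_profile = default
region = us-east-1
duration_seconds = 43200
"""

cfg = configparser.ConfigParser()
cfg.read_string(CONFIG_TEXT)
hours = cfg.getint("profile production", "duration_seconds") / 3600
print(hours)  # 12.0
```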
<p><img src="../assets/img/Screenshot-2019-06-07-at-2.35.58-PM.png" alt="" /></p>
<p>We can verify this by checking the expiration date in the AWS CLI cache JSON file, which resides under the .aws/cli/cache path.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>grep -o -P '.{0,1}Expiration.{0,25}' $(find ~/.aws/cli/cache -name "*.json"|tail -1)
"Expiration": "2019-06-03T13:58:09Z"
</code></pre></div></div>
<p>This parameter works well if you are <a href="https://devopslife.io/using-profiles-in-boto3/">using sessions in boto3</a> as well.</p>
<h2>Redirect a Route53 domain to another domain using S3 (2019-02-25)</h2>
<p>If you recently migrated your domain and DNS records from a third-party domain registrar to AWS Route 53, you might be searching for a way to configure a simple redirect of the apex (root) domain to an external domain. Many companies buy all the popular TLDs of their domains to avoid cybersquatting, and all of those domains are configured with a simple redirect to the main domain. This was easy while using the registrar’s DNS service, as the redirect could be configured there with ease.</p>
<p>But when using Route53, there is no direct way to do this. We can make use of the S3 service instead. In this example, I am redirecting example.org to example.com, assuming example.com is already serving the main site and we have a hosted zone for example.org in Route53.</p>
<ul>
<li>Create an S3 bucket named after the domain: “example.org”.</li>
<li>Please note that S3 bucket names must be globally unique. If the bucket name you need is already taken, you can’t use S3 for redirection and this approach won’t work for you; you may use other workarounds, such as redirecting from a backend web server.</li>
<li>Go to Properties and select “Static website hosting”.</li>
<li>From the dropdown, select “Redirect all requests to another host name”.</li>
<li>Enter example.com here, choose the protocol (HTTP or HTTPS), and save it.</li>
<li>Go to Route53 and select the hosted zone for example.org</li>
<li>Create a record for example.org with the below values</li>
</ul>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Record Type: A – IPv4 address
Alias: Yes
Alias Target: Choose your S3 bucket under the heading "S3 Website Endpoints"
Routing Policy: Simple
Evaluate Health Target: No
</code></pre></div></div>
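<p>For reference, the same record can be expressed as a Route 53 change batch, as you would pass it to the API. This is a sketch with assumed values: the S3 website hosted zone ID and endpoint shown are for us-east-1 and differ per region, and the boto3 call is commented out.</p>

```python
# The alias record as a change batch for route53:ChangeResourceRecordSets.
# Values are illustrative; the S3 website hosted zone ID and endpoint
# depend on the bucket's region (us-east-1 shown here).
change_batch = {
    "Changes": [{
        "Action": "CREATE",
        "ResourceRecordSet": {
            "Name": "example.org.",
            "Type": "A",
            "AliasTarget": {
                "HostedZoneId": "Z3AQBSTGFYJSTF",  # S3 website zone, us-east-1
                "DNSName": "s3-website-us-east-1.amazonaws.com.",
                "EvaluateTargetHealth": False,
            },
        },
    }]
}
print(change_batch["Changes"][0]["ResourceRecordSet"]["Name"])  # example.org.
# import boto3
# boto3.client("route53").change_resource_record_sets(
#     HostedZoneId="YOUR_ZONE_ID", ChangeBatch=change_batch)
```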
<p>That’s it. You might need to wait some time for DNS propagation, though normally the redirect is enabled quickly. If your bucket endpoint is not populating while creating the record, please wait a few more minutes, refresh the page, and try again.</p>
<h2>Find out which role is used when an AWS CLI command is called (2019-02-19)</h2>
<p>This is very useful if you are running an AWS command on an EC2 instance that uses an IAM role or instance profile, and you would like to verify that it is using the intended role.</p>
<p><a href="https://docs.aws.amazon.com/cli/latest/reference/sts/get-caller-identity.html">aws sts get-caller-identity</a></p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ aws sts get-caller-identity
{
"Account": "0123456789",
"UserId": "ABCDxxx:i-abc123",
"Arn": "arn:aws:sts::0123456789:assumed-role/ECS_Opsworks_DefaultRole/i-abc123"
}
</code></pre></div></div>
<h2>Using profiles in Boto3 (2019-02-16)</h2>
<p>If you are a boto3 user with multiple AWS profiles defined in your shell, I am sure you have faced this issue at least once: boto3 loads the default profile if you don’t specify one explicitly. Here is how to specify AWS profiles while using boto3.</p>
<p>Let’s say we want to use the profile “dev”. We have the following ways in boto3.</p>
<h4 id="1-create-a-new-session-with-the-profile">1. Create a new session with the profile</h4>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>dev = boto3.session.Session(profile_name='dev')
</code></pre></div></div>
<h4 id="2-change-the-profile-of-the-default-session-in-code">2. Change the profile of the default session in code</h4>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>boto3.setup_default_session(profile_name='dev')
</code></pre></div></div>
<h4 id="3-change-the-profile-of-the-default-session-with-an-environment-variable">3. Change the profile of the default session with an environment variable</h4>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ AWS_PROFILE=dev python
</code></pre></div></div>
<p>We can also list the available profiles defined in our configuration</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>boto3.session.Session().available_profiles
</code></pre></div></div>
<p>Reference: <a href="https://stackoverflow.com/questions/33378422/how-to-choose-an-aws-profile-when-using-boto3-to-connect-to-cloudfront">https://stackoverflow.com/questions/33378422/how-to-choose-an-aws-profile-when-using-boto3-to-connect-to-cloudfront</a></p>
<h2>Get notified on AWS root account login (2019-02-13)</h2>
<p>The best practice is always to create IAM logins for each user in an organization rather than sharing the root account. Most companies follow this as per their compliance regulations. But what if the root account is compromised in some way?</p>
<p>Yes, that can happen. It is recommended to enable two-factor authentication for the root account and even for all IAM users. But it is also a wise idea to get notified if someone logs in to the console or makes API calls using the root credentials, so that we can act fast. This can be done with the following steps.</p>
<ol>
<li>Enable CloudTrail for all regions.</li>
<li>Create a CloudWatch rule to check for console logins or API access by the root user.</li>
<li>Enter the following as the event pattern:</li>
</ol>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>{
"detail-type": [
"AWS API Call via CloudTrail",
"AWS Console Sign In via CloudTrail"
],
"detail": {
"userIdentity": {
"type": [
"Root"
]
}
}
}
</code></pre></div></div>
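<p>In plain Python, the pattern above matches events like this sketch shows; the sample event is trimmed to just the fields the pattern inspects.</p>

```python
# The same matching logic the event pattern performs, for illustration:
# detail-type must be one of the two CloudTrail types, and the caller
# identity type must be "Root".
MATCH_TYPES = {"AWS API Call via CloudTrail", "AWS Console Sign In via CloudTrail"}

def is_root_activity(event):
    return (event.get("detail-type") in MATCH_TYPES
            and event.get("detail", {}).get("userIdentity", {}).get("type") == "Root")

sample = {
    "detail-type": "AWS Console Sign In via CloudTrail",
    "detail": {"userIdentity": {"type": "Root"}},
}
print(is_root_activity(sample))  # True
print(is_root_activity({"detail-type": "Scheduled Event", "detail": {}}))  # False
```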
<p>4. Select an SNS topic as the target for “Matched event” and choose the topic you plan to subscribe to (assuming we have already created an SNS topic).</p>
<p><img src="../assets/img/rootlogin_alert_sns.png" alt="" /></p>
<p>This way, we will get notified when the root user does something. If we want email notifications, go to SNS and create an email subscriber.</p>
<p>The above is the basic way to check for root activity, but we can tune this further by bringing in AWS Lambda.</p>
<p><img src="../assets/img/flow_diagram-1.jpeg" alt="" /></p>
<p>The AWS blog already has very detailed documentation on how to do this, so I am not repeating it here. Please refer to the link below, which includes a CloudFormation template and a Lambda function, meaning you can spin up the stack in a few minutes. I have used this personally and it works great.</p>
<p><a href="https://aws.amazon.com/blogs/mt/monitor-and-notify-on-aws-account-root-user-activity/">https://aws.amazon.com/blogs/mt/monitor-and-notify-on-aws-account-root-user-activity/</a></p>UnniThe best practice always is to create IAM logins for each user in an organization rather than sharing the root account. Most of the companies are following this trend as per their compliance regulations. But what if the root account is compromised by some way?