SSH to EC2 instances via PuTTY from your local machine
Step 1: We created a key pair earlier in the “creating the EC2 instances” tutorial so that we can connect to the servers from SSH clients like “putty.exe” to install the required applications such as Java, the Hadoop ecosystem, Spring Boot microservices, Cloudera Manager, etc. We have already downloaded the private key for the EC2 instances, named “cloudera-instances-private-key.pem”.
Step 2: You need to have PuTTY.exe & PuTTYgen.exe installed.
Step 3: PuTTY.exe requires the private key to be in .ppk (i.e. PuTTY Private Key) format, so we will have to convert the “cloudera-instances-private-key.pem” file to a “cloudera-instances-private-key.ppk” file using the “PuTTYgen.exe” program.
Step 4: Open PuTTYgen.exe, and
1. Load the “cloudera-instances-private-key.pem” file downloaded from AWS.
2. Save the private key as “cloudera-instances-private-key.ppk”.
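If you are on Linux or macOS with the PuTTY tools installed (e.g. the putty-tools package), the same conversion can be done from the command line instead of the PuTTYgen GUI:

```
puttygen cloudera-instances-private-key.pem -O private -o cloudera-instances-private-key.ppk
```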
Step 5: Open PuTTY.exe, and create 3 client configurations and save them for Master, Slave01, and Slave02.
You need to use the public IP addresses, as you will be accessing the instances from outside the AWS cloud, as per the diagram shown below.
Step 6: You must also load the “cloudera-instances-private-key.ppk” file into each session (under Connection -> SSH -> Auth) and save it, to be able to connect to the AWS EC2 instances without having to enter any passwords.
Note: Save the session by clicking on session -> Save.
Step 7: Now you can double-click the saved session “cloudera-hadoop-slave01”, or select it and click “Load” and then “Open”, to start a session to the “Slave01” AWS EC2 instance.
Say “yes” to the pop-up alert.
login as: ubuntu
Now you are connected to the AWS EC2 instance: “Slave01”
SSH from one AWS EC2 instance to another
Once you are inside the AWS cloud, you can use the private IP addresses to SSH between the EC2 instances.
Step 1: Open three PuTTY sessions for “Master”, “Slave01”, and “Slave02”.
Step 2: First, let’s create FQDNs for the private IP addresses. Here are the private IP addresses and private FQDNs for the EC2 instances master, slave01, and slave02 respectively.
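A sketch of these entries, reconstructed from the private IPs that appear in the commands later in this tutorial (assuming slave01 and slave02 map to the addresses in the order they are SSH’d to below):

```
172.31.6.197   ip-172-31-6-197.ap-southeast-2.compute.internal    # master
172.31.1.241   ip-172-31-1-241.ap-southeast-2.compute.internal    # slave01
172.31.1.141   ip-172-31-1-141.ap-southeast-2.compute.internal    # slave02
```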
Step 3: Add the above lines to the /etc/hosts file on all three AWS EC2 instances.
ubuntu@ip-172-31-6-197:~$ sudo vi /etc/hosts
ubuntu@ip-172-31-6-197:~$ ping ip-172-31-6-197.ap-southeast-2.compute.internal
Step 4: Install openssh-server & openssh-client on master.
ubuntu@ip-172-31-6-197:~$ sudo apt-get install openssh-server openssh-client
Step 5: Generate key pairs.
ubuntu@ip-172-31-6-197:~$ ssh-keygen -t rsa
Just press Enter at all the prompts. This creates the public & private keys in the “.ssh” folder.
ubuntu@ip-172-31-6-197:~$ ls -ltr .ssh
-rw------- 1 ubuntu ubuntu 412 Mar 3 10:11 authorized_keys
-rw-r--r-- 1 ubuntu ubuntu 404 Mar 3 17:58 id_rsa.pub
-rw------- 1 ubuntu ubuntu 1675 Mar 3 17:58 id_rsa
Step 6: Copy the contents of the public key id_rsa.pub to .ssh/authorized_keys on all three EC2 instances.
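One way to do the copy: print the key on master with cat, then paste it into each node’s authorized_keys file. The helper below is a hypothetical sketch of the append-and-set-permissions part (add_key is not a standard command; the 600 mode is required by sshd):

```shell
# On master, display the public key to copy:
#   cat ~/.ssh/id_rsa.pub

# add_key: append a public key file to an authorized_keys file and set
# the permissions sshd expects. Hypothetical helper illustrating Step 6.
add_key() {
  pub="$1"; auth="$2"
  mkdir -p "$(dirname "$auth")"
  cat "$pub" >> "$auth"
  chmod 600 "$auth"
}

# Usage on each instance (paths are the ssh-keygen defaults from Step 5):
# add_key ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys
```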
Do the same on “Slave01” & “Slave02”. Now you can SSH to, say, “slave01” or “slave02” from the master without having to enter a password.
ubuntu@ip-172-31-6-197:~$ ssh ip-172-31-1-241.ap-southeast-2.compute.internal
ubuntu@ip-172-31-6-197:~$ ssh ip-172-31-1-141.ap-southeast-2.compute.internal
Repeat steps 5 and 6 on the slaves ip-172-31-1-241.ap-southeast-2.compute.internal & ip-172-31-1-141.ap-southeast-2.compute.internal so that you can SSH without a password from one slave to the master or to the other slave.
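Once all three authorized_keys files are in place, a quick loop can confirm that passwordless SSH works everywhere. This is a sketch; `-o BatchMode=yes` makes ssh fail rather than prompt when key authentication is not set up, and the SSH_CMD variable is a hypothetical parameter added so the loop can be exercised outside the cluster:

```shell
# check_hosts: run `ssh -o BatchMode=yes <fqdn> hostname` against each
# node passed in, printing the remote hostname on success.
SSH_CMD="${SSH_CMD:-ssh -o BatchMode=yes}"
check_hosts() {
  for host in "$@"; do
    $SSH_CMD "${host}.ap-southeast-2.compute.internal" hostname
  done
}

# On the cluster:
# check_hosts ip-172-31-6-197 ip-172-31-1-241 ip-172-31-1-141
```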
This passwordless SSH setup is key to installing, say, Cloudera Manager on this 3-node cluster with one master and 2 slaves: you can SSH from the master to any slave, and from a slave to the master or the other slave.
We will install the AWS CLI in the next tutorial to interact with AWS via the command line.