In this post I am going to describe how to maintain access to an Amazon EKS (Kubernetes) cluster by attaching only an IAM role as an instance profile, without configuring access keys on the EC2 instance.

The benefit, of course, is not storing any AWS IAM credentials on the EC2 instance, making your infrastructure more secure.

In my use case, I wanted my Jenkins CI server to have access to the EKS cluster so I could add continuous delivery, using Jenkins declarative pipelines, to my continuous integration process, thereby achieving a full CI/CD process for all the microservices that Jenkins handled.

The entire process will be done using the awscli, but of course it can also be done in the AWS Console.

Creating IAM Role for EC2 Instance

  • Create a file containing the trust policy for EC2; this allows the role to be attached to any ec2 instance.
cat >> assume-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

*Create the role

aws iam create-role --role-name full-eks-access-role \
  --description "Accessing all of account EKS cluster API endpoints" \
  --assume-role-policy-document file://./assume-policy.json

Make sure to keep the Arn from the command's output; we will use it to configure access in Kubernetes.

Creating IAM Policy for EKS full access to attach to the IAM Role we’ve created

*Create a policy file for accessing EKS

This grants full access to any EKS cluster in the account, but it can be made stricter if you like.

cat >> eks-full.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "eks:*"
      ],
      "Resource": "*"
    }
  ]
}
EOF
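As noted above, the policy can be made stricter. A sketch scoped to a single cluster, assuming placeholder region, account id, and cluster name; eks:DescribeCluster alone is enough for `aws eks update-kubeconfig` to work:

```shell
# Stricter alternative: allow describing only one specific cluster.
# <region>, <account id> and <cluster name> are placeholders - fill in your own.
cat >> eks-restricted.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "eks:DescribeCluster"
      ],
      "Resource": "arn:aws:eks:<region>:<account id>:cluster/<cluster name>"
    }
  ]
}
EOF
```

If you go this route, pass --policy-document file://./eks-restricted.json to the create-policy command below instead.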

*Create the policy using the eks-full.json we created

aws iam create-policy --policy-name eks-full-access \
  --description "EKS Full access policy" \
  --policy-document file://./eks-full.json

Make sure to keep the ARN of the created policy as well

*Attach the policy to the IAM Role created

aws iam attach-role-policy --role-name full-eks-access-role \
  --policy-arn arn:aws:iam::00000000000:policy/eks-full-access

Don’t copy and paste the command exactly from here; copy the ARN from your own command output, since the policy and role ARNs contain your AWS account number.
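The account number sits in the fifth colon-separated field of an ARN, so you can pull yours out in the shell; the ARN below is a made-up example:

```shell
# Extract the AWS account id (5th ':'-separated field) from an ARN
arn="arn:aws:iam::123456789012:policy/eks-full-access"
account_id=$(echo "$arn" | cut -d: -f5)
echo "$account_id"
```

On a machine with the awscli configured, you can also print your real account id with aws sts get-caller-identity --query Account --output text.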

Create the EC2 Instance profile with the role we have created

*Create an Instance profile

aws iam create-instance-profile --instance-profile-name Jenkins

*Add the role we created to the ec2 instance profile

aws iam add-role-to-instance-profile --role-name full-eks-access-role --instance-profile-name Jenkins

*Attach the IAM Role to the ec2 instance that needs access to the EKS cluster

aws ec2 associate-iam-instance-profile \
  --instance-id i-09eef2945b7c4c39e \
  --iam-instance-profile Name=Jenkins \
  --region eu-west-2

Configure aws-auth access with the IAM role

If you have an EKS cluster, you are probably familiar with the aws-auth.yaml file for managing cluster authentication. This yaml file contains the ec2 nodes and iam users and roles that are allowed to access the kubernetes api server.

To simplify:

It means that every kubectl command passes the AWS identity it was run with to the cluster. The cluster first validates that the identity has access to it (this is the Role we have created), then it maps the identity to cluster RBAC using the aws-auth file.
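A quick way to sanity-check the first half of that flow is to ask STS which identity the awscli (and therefore kubectl) is currently using; this is just a check, not part of the setup:

```shell
# Show the AWS identity that kubectl will forward to the cluster.
# Run on the ec2 instance, this should print an assumed-role ARN
# for full-eks-access-role (not an IAM user ARN).
aws sts get-caller-identity
```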

*Retrieve and edit aws-auth file

Amazon has a well-documented explanation of the file and how to retrieve it from their storage. Download the file or edit your existing one, which probably already holds the role of the EKS nodes you configured in the past (it must keep the eks nodes role so they can join the cluster).

If you already have this configMap applied to the cluster, edit the existing one without deleting anything in it; removing entries might cause a lockout for everyone except the original identity that created the cluster.
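For an existing cluster, you can fetch the live configMap straight from the cluster rather than downloading the template (standard kubectl, run from a machine that already has access):

```shell
# Pull the current aws-auth configMap into a local file for editing
kubectl -n kube-system get configmap aws-auth -o yaml > aws-auth-cm.yaml
```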

If you haven’t configured it yet (new cluster), download and edit the file

curl -o aws-auth-cm.yaml <aws-auth-cm.yaml URL from the Amazon EKS user guide>

Edit the file with the role created

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: <ARN of instance role (not instance profile)>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
    - rolearn: arn:aws:iam::00000000000:role/full-eks-access-role
      username: jenkins
      groups:
        - system:masters

Apply the file from your station:

kubectl apply -f ./aws-auth-cm.yaml

Configuring the ec2 instance

Now that we have the role set up and the instance profile attached to the ec2, connect to the ec2 instance.

Install the following: pip, awscli, kubectl and aws-iam-authenticator

# pip
curl <get-pip.py URL> | sudo python3

# awscli
sudo pip install --upgrade awscli

# kubectl - with an install script I made
curl <kubectl install script URL> | sudo bash

# aws-iam-authenticator
curl -o aws-iam-authenticator <aws-iam-authenticator download URL>
chmod +x ./aws-iam-authenticator
sudo mv ./aws-iam-authenticator /usr/local/bin/

Create kube config using awscli and verify access

aws eks --region <your region> update-kubeconfig --name <eks cluster name>

Now that all is set, let’s check access to the eks cluster

kubectl get node

You’re done!

I hope that this post was helpful, feel free to comment, ask and of course, share :)