Overview

In this post, I am going to describe how to access an Amazon EKS (Kubernetes) cluster from an EC2 instance by attaching an IAM role as an instance profile, without configuring access keys on the instance.

The benefit, of course, is that no IAM credentials are stored on the EC2 instance, making your infrastructure more secure.

Use Case: Jenkins CI/CD Pipeline

In my use case, I wanted my Jenkins CI server to have access to the EKS cluster so I could add continuous delivery, via a Jenkins declarative pipeline, to my continuous integration process. By doing that, I achieved a full CI/CD process for all the microservices that Jenkins handled.

This complements other CI/CD approaches like Lambda continuous delivery using Docker and Jenkins pipeline for serverless deployments.

The entire process will be done using the awscli, but it can of course also be done in the AWS Console.

Prerequisites

Before starting, ensure you have:

  • An existing EKS cluster
  • AWS CLI configured with administrative permissions
  • Basic familiarity with IAM roles and Kubernetes
  • Container images stored in a registry (for AWS-native storage, see our guide on working with AWS ECR)

What You’ll Accomplish

By the end of this guide, you’ll have:

  • An IAM role with EKS access permissions
  • An EC2 instance profile attached to your instance
  • Configured EKS cluster authentication via aws-auth ConfigMap
  • A working kubectl setup that accesses EKS without stored credentials

Step-by-Step Implementation

Step 1: Creating IAM Role for EC2 Instance

  • Create a file containing the trust policy for EC2; this allows the role to be attached to any EC2 instance.
cat > assume-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com"},
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

Create the role

aws iam create-role --role-name full-eks-access-role \
  --description "Accessing all of account EKS cluster API endpoints" \
  --assume-role-policy-document file://./assume-policy.json

Make sure to keep the ARN returned by the command; we will use it to configure access in Kubernetes.
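If you lose the ARN, you can look it up at any time. A small sketch using the role name from the step above (the placeholder value is only used on a machine without AWS CLI access):

```shell
# Look up the role ARN; fall back to a placeholder when the AWS CLI
# is unavailable or not authenticated.
ROLE_ARN=$(aws iam get-role --role-name full-eks-access-role \
  --query 'Role.Arn' --output text 2>/dev/null \
  || echo "arn:aws:iam::000000000000:role/full-eks-access-role")
echo "$ROLE_ARN"
```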

Step 2: Creating IAM Policy for EKS Access

Create a policy file for accessing EKS

This grants full access to every EKS cluster in the account; you can make it stricter if you like.

cat > eks-full.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "eks:*"
      ],
      "Resource": "*"
    }
  ]
}
EOF
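If you prefer least privilege, the policy can be scoped down. The following is only a sketch, not the policy used in this post: it assumes a hypothetical cluster named my-cluster in eu-west-2 and grants only eks:DescribeCluster, which is the permission update-kubeconfig needs later on. Replace the account ID, region, and cluster name with your own.

```shell
# Stricter alternative: allow describing a single, named cluster only.
cat > eks-restricted.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "eks:DescribeCluster",
      "Resource": "arn:aws:eks:eu-west-2:000000000000:cluster/my-cluster"
    }
  ]
}
EOF
```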

Create the policy using the eks-full.json we created

aws iam create-policy --policy-name eks-full-access \
  --description "EKS Full access policy" \
  --policy-document file://./eks-full.json

Make sure to save the ARN of the policy that gets created.

Attach the policy to the IAM Role created

aws iam attach-role-policy --role-name full-eks-access-role \
  --policy-arn arn:aws:iam::00000000000:policy/eks-full-access

Don’t copy and paste this command verbatim; copy it from your own command prompt, since the policy and role ARNs contain your AWS account number.
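To avoid hand-editing the account number at all, you can build the policy ARN from your own account ID. A sketch (the placeholder account ID is only used when the AWS CLI is unavailable):

```shell
# Derive the policy ARN from the current account ID.
ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text 2>/dev/null \
  || echo "000000000000")
POLICY_ARN="arn:aws:iam::${ACCOUNT_ID}:policy/eks-full-access"
echo "$POLICY_ARN"
# Then attach it:
# aws iam attach-role-policy --role-name full-eks-access-role --policy-arn "$POLICY_ARN"
```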

Step 3: Create EC2 Instance Profile

Create an Instance profile

aws iam create-instance-profile --instance-profile-name Jenkins

Add the role we created to the EC2 instance profile

aws iam add-role-to-instance-profile --role-name full-eks-access-role --instance-profile-name Jenkins

Attach the IAM role to the EC2 instance that needs access to EKS

aws ec2 associate-iam-instance-profile \
  --instance-id i-09eef2945b7c4c39e \
  --iam-instance-profile Name=Jenkins \
  --region eu-west-2
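To confirm the attachment took effect, you can list the instance’s profile association (instance ID and region from the command above). This is just a verification sketch; the association state should read "associated":

```shell
INSTANCE_ID="i-09eef2945b7c4c39e"   # the instance from the step above
REGION="eu-west-2"
# Show the instance profile association for this instance.
aws ec2 describe-iam-instance-profile-associations \
  --filters "Name=instance-id,Values=${INSTANCE_ID}" \
  --region "$REGION" 2>/dev/null \
  || echo "aws CLI not available or not authenticated"
```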

Step 4: Configure EKS aws-auth Access

If you have an EKS cluster, you’re probably familiar with the aws-auth ConfigMap (aws-auth-cm.yaml) used for managing cluster authentication. This YAML file lists the EC2 node roles and IAM users that are allowed to access the Kubernetes API server.

To simplify:

Every kubectl command passes the AWS identity it was run with to the cluster. The cluster first validates that the identity has access to the EKS API (this is the role we created), then checks the identity against cluster RBAC using the aws-auth ConfigMap.
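You can see this mechanism directly: kubectl (via aws-iam-authenticator) generates a short-lived, pre-signed STS token for the current AWS identity and sends it as a bearer token. A sketch of generating one by hand, assuming a hypothetical cluster name my-cluster (the empty-JSON fallback covers machines without the authenticator installed):

```shell
# The token kubectl would send for cluster "my-cluster".
TOKEN_JSON=$(aws-iam-authenticator token -i my-cluster 2>/dev/null || echo '{}')
echo "$TOKEN_JSON"
```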

Retrieve and edit the aws-auth file

Amazon has well-documented instructions for this file and for retrieving it from their storage. Download the file, or edit your existing one, which probably already holds the role of the EKS nodes you configured in the past. (It must keep the EKS node role, which nodes need for joining the cluster.)

If this ConfigMap is already applied to the cluster, edit the existing one without deleting anything in it; removing entries can lock out everyone except the original identity that created the cluster.

If you haven’t configured it yet (new cluster), download and edit the file:

curl -o aws-auth-cm.yaml https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-02-11/aws-auth-cm.yaml

Edit the file with the role created

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: <ARN of instance role (not instance profile)>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
    - rolearn: arn:aws:iam::00000000000:role/full-eks-access-role
      username: jenkins
      groups:
        - system:masters

Apply the file from your workstation:

kubectl apply -f ./aws-auth-cm.yaml
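To double-check what the cluster ended up with, read the ConfigMap back. A quick verification sketch (requires kubectl access to the cluster; otherwise it prints a note):

```shell
# Read back the applied aws-auth ConfigMap.
AWS_AUTH=$(kubectl -n kube-system get configmap aws-auth -o yaml 2>/dev/null \
  || echo "kubectl not available or not authenticated against the cluster")
echo "$AWS_AUTH"
```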

Step 5: Configure the EC2 Instance

Now that the role is set up and the instance profile is attached to the EC2 instance, connect to the instance.

Install the following: pip, awscli, kubectl, and aws-iam-authenticator.

# pip
curl https://bootstrap.pypa.io/get-pip.py | sudo python3

# awscli
sudo pip install --upgrade awscli

# kubectl - with an install script I made
curl https://raw.githubusercontent.com/omerh/scripts/master/upgrade_kubectl.sh | sudo bash

# aws-iam-authenticator
curl -o aws-iam-authenticator https://amazon-eks.s3-us-west-2.amazonaws.com/1.12.7/2019-03-27/bin/linux/amd64/aws-iam-authenticator
chmod +x ./aws-iam-authenticator
sudo mv ./aws-iam-authenticator /usr/local/bin/

Step 6: Create Kubeconfig and Verify Access

aws eks --region <your region> update-kubeconfig --name <eks cluster name>

Now that all is set, let’s check access to the EKS cluster:

kubectl get node
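With the instance profile attached, the instance’s AWS identity should be the assumed role rather than an IAM user, so a quick sanity check before blaming the cluster is to inspect the caller identity. A sketch (the placeholder ARN is only used when the CLI is unavailable):

```shell
# The Arn should look like:
#   arn:aws:sts::<account>:assumed-role/full-eks-access-role/<instance-id>
CALLER_ARN=$(aws sts get-caller-identity --query Arn --output text 2>/dev/null \
  || echo "arn:aws:sts::000000000000:assumed-role/full-eks-access-role/i-placeholder")
echo "$CALLER_ARN"
```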

Conclusion

You’re done! Your EC2 instance now has secure access to your EKS cluster using IAM instance profiles instead of stored credentials.

Security Benefits

This approach provides several advantages:

  • No AWS credentials stored on EC2 instances
  • Automatic credential rotation through AWS STS
  • Fine-grained access control via IAM policies
  • Full audit trail through CloudTrail

Troubleshooting

If you encounter “Unauthorized” errors:

  1. Verify the IAM role ARN in aws-auth ConfigMap matches exactly
  2. Ensure the instance profile is properly attached to your EC2 instance
  3. Check that your EKS cluster’s aws-auth ConfigMap includes the new role
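The checks above can be scripted; each command below maps to one numbered item. This is a sketch using the role, instance ID, and region from this post, with fallbacks so it is safe to paste on a machine without AWS or cluster access:

```shell
# 1. The role ARN that must appear verbatim in aws-auth
ROLE_ARN=$(aws iam get-role --role-name full-eks-access-role \
  --query 'Role.Arn' --output text 2>/dev/null || echo "unavailable")

# 2. The profile association for the instance (use your instance ID/region)
ASSOC=$(aws ec2 describe-iam-instance-profile-associations \
  --filters Name=instance-id,Values=i-09eef2945b7c4c39e \
  --region eu-west-2 2>/dev/null || echo "unavailable")

# 3. What the cluster actually has in aws-auth
MAPPED=$(kubectl -n kube-system get configmap aws-auth \
  -o jsonpath='{.data.mapRoles}' 2>/dev/null || echo "unavailable")

printf '%s\n---\n%s\n---\n%s\n' "$ROLE_ARN" "$ASSOC" "$MAPPED"
```

If check 1 and check 3 disagree on the role ARN, even by one character, the cluster will return "Unauthorized".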

I hope that this post was helpful! Feel free to comment, ask questions, and share it with others. If this guide saved you time, consider buying me a coffee ☕ to support more content like this!