
Starting from a nearly empty AWS account, I am trying to follow https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html

So that meant I created a VPC stack, then installed aws-iam-authenticator, awscli and kubectl, then created an IAM user with Programmatic access and the AmazonEKSAdminPolicy directly attached.

Then I used the website to create my EKS cluster and used aws configure to set the access key and secret of my IAM user.

aws eks update-kubeconfig --name wr-eks-cluster worked fine, but:

kubectl get svc
error: the server doesn't have a resource type "svc"

I continued anyway, creating my worker nodes stack, and now I'm at a dead-end with:

kubectl apply -f aws-auth-cm.yaml
error: You must be logged in to the server (the server has asked for the client to provide credentials)

aws-iam-authenticator token -i <my cluster name> seems to work fine.

The thing I seem to be missing is that when you create the cluster you specify an IAM role, but when you create the user (according to the guide) you attach a policy. How is my user supposed to have access to this cluster?

Or ultimately, how do I proceed and gain access to my cluster using kubectl?

  • I suspect you need to apply the auth config as the account that created the cluster in the first place. – Oliver Charlesworth Nov 12 '18 at 17:12
  • 3
    How do I get auth details of "the account that created the cluster", when I used the web interface to create the cluster (which only lets you specify an IAM role, not a user)? I just have my account that I log in to AWS with, 1 IAM user that I'm currently trying and failing to use, and 1 IAM role, as per the guide. – sbs Nov 13 '18 at 09:29
  • Beware, this topic is very tricky if you use a federated account and assume a role on login. I'm starting to think that EKS is just not prepared for this on the UI console side. Moreover, sbs asked the right question - how would I know who created the cluster? Well, you can't - that is an "implementation secret". – Marcin W. Apr 05 '22 at 09:01

6 Answers

22
  1. As mentioned in the docs, the AWS IAM entity that created the EKS cluster automatically receives system:masters permissions, and that is enough to get kubectl working. You need to use that user's credentials (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY) to access the cluster. If you didn't create a specific IAM user to create the cluster, then you probably created it using the root AWS account. In that case, you can use the root user's credentials (Creating Access Keys for the Root User).
  2. The main magic is inside the aws-auth ConfigMap in your cluster – it maps IAM entities (users and roles) to Kubernetes RBAC users and groups (an example is shown below).
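
For illustration, here is roughly what that ConfigMap looks like (the account ID, role name, and user name below are placeholders):

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::111122223333:role/my-node-instance-role
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
  mapUsers: |
    - userarn: arn:aws:iam::111122223333:user/my-admin-user
      username: my-admin-user
      groups:
        - system:masters

Mapping an IAM user into the system:masters group is what grants it cluster-admin via RBAC.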

I'm not sure how you pass credentials to the aws-iam-authenticator:

  • If you have ~/.aws/credentials with an aws_profile_of_eks_iam_creator profile, then you can try $ AWS_PROFILE=aws_profile_of_eks_iam_creator kubectl get all --all-namespaces
  • Also, you can use environment variables: $ AWS_ACCESS_KEY_ID=XXX AWS_SECRET_ACCESS_KEY=YYY AWS_DEFAULT_REGION=your-region-1 kubectl get all --all-namespaces

Both of them should work, because kubectl will use the generated ~/.kube/config, which contains an aws-iam-authenticator token -i cluster_name command. aws-iam-authenticator uses environment variables or ~/.aws/credentials to issue you a token.
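
For reference, the user entry that update-kubeconfig generated in ~/.kube/config at the time looked roughly like this (the cluster ARN and name are placeholders; newer versions of the AWS CLI generate an aws eks get-token exec stanza instead):

users:
- name: arn:aws:eks:us-west-2:111122223333:cluster/my-cluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - token
        - -i
        - my-cluster

Whatever AWS credentials are in scope when kubectl invokes that command are the identity the API server sees.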

Also, this answer may be useful for understanding how the first EKS user is created.

Ivan Kalita
  • As I noted as the thing that confuses me, an IAM user did not create the EKS cluster. I used the web interface to create it, and that only asked for an IAM role. I used `aws configure` to set my access key, secret and default region, after which my ~/.aws/credentials has a single [default] block with the access and secret. – sbs Nov 13 '18 at 09:26
  • 1
    During the EKS creation (even from the web interface) you specify service role ARN – this is a role that will be used internally by EKS and you don't need to pay a lot of attention on this role right now. When you created the EKS through web interface you was logged in as some IAM AWS user, right? Try to use that user credentials to obtain the EKS access. – Ivan Kalita Nov 13 '18 at 15:41
  • I don't think I know how to log in as an IAM user. I mean, I have to log in as "me", and then I created my first IAM user as part of following the guide. So I wasn't an IAM user to start with. What is the correct way to log in to the AWS website as an IAM user? – sbs Nov 13 '18 at 15:49
  • 1
    You logged in as a root user (I guess). Please try "Creating Access Keys for the Root User" of this manual https://docs.aws.amazon.com/en_us/IAM/latest/UserGuide/id_root-user.html#id_root-user_manage_add-key to get your root user AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. Then, please, try to use these credentials to get access to the cluster. Also, using root account is not the best practice, so I'll suggest you to: get root credentials, get access to kubectl, create new IAM user, add this user to the aws-auth configmap inside the cluster and then deactivate root user credentials :) – Ivan Kalita Nov 13 '18 at 15:53
  • Thanks, yes, using root user access keys gives me access. If you note the root user issue in your answer, I can accept it. – sbs Nov 14 '18 at 16:33
9

Here are my steps using the aws-cli


$ export AWS_ACCESS_KEY_ID="something"
$ export AWS_SECRET_ACCESS_KEY="something"
$ export AWS_SESSION_TOKEN="something"

$ aws eks update-kubeconfig \
  --region us-west-2 \
  --name my-cluster

>> Added new context arn:aws:eks:us-west-2:#########:cluster/my-cluster to /home/john/.kube/config

Bonus: use kubectx to switch kubectl contexts.

$ kubectx 

>> arn:aws:eks:us-west-2:#########:cluster/my-cluster-two     arn:aws:eks:us-east-1:#####:cluster/my-cluster  

$ kubectx arn:aws:eks:us-east-1:#####:cluster/my-cluster


>> Switched to context "arn:aws:eks:us-east-1:#####:cluster/my-cluster".
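
After switching, a quick sanity check (assuming the credentials above are mapped to a cluster admin; the output values are illustrative):

$ kubectl get svc

>> NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
>> kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   6m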

Ref: https://docs.aws.amazon.com/eks/latest/userguide/getting-started-console.html

jmcgrath207
3

After going over the comments, it seems that you:

  1. Have created the cluster with the root user.
  2. Then created an IAM user and generated AWS credentials (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY) for it.
  3. Used that access key and secret key in your kubeconfig settings (it doesn't matter how – there are multiple ways to do that).

And here is the problem as described in the docs:

If you receive one of the following errors while running kubectl commands, then your kubectl is not configured properly for Amazon EKS or the IAM user or role credentials that you are using do not map to a Kubernetes RBAC user with sufficient permissions in your Amazon EKS cluster.

  • could not get token: AccessDenied: Access denied
  • error: You must be logged in to the server (Unauthorized)
  • error: the server doesn't have a resource type "svc" <--- Your case

This could be because the cluster was created with one set of AWS credentials (from an IAM user or role), and kubectl is using a different set of credentials.

When an Amazon EKS cluster is created, the IAM entity (user or role) that creates the cluster is added to the Kubernetes RBAC authorization table as the administrator (with system:masters permissions).
Initially, only that IAM user can make calls to the Kubernetes API server using kubectl.

For more information, see Managing users or IAM roles for your cluster. If you use the console to create the cluster, you must ensure that the same IAM user credentials are in the AWS SDK credential chain when you are running kubectl commands on your cluster.

This is the cause for the errors.

As the accepted answer describes, you'll need to edit the aws-auth ConfigMap in order to manage users or IAM roles for your cluster, as sketched below.
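
A minimal sketch of that edit, assuming eksctl is installed and run with the cluster creator's credentials (cluster name, region, and ARN below are placeholders):

$ eksctl create iamidentitymapping \
    --cluster my-cluster \
    --region us-west-2 \
    --arn arn:aws:iam::111122223333:user/my-iam-user \
    --username my-iam-user \
    --group system:masters

Alternatively, edit the ConfigMap directly with kubectl edit -n kube-system configmap/aws-auth.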

Rot-man
2

Once you have set up the AWS config on your system, check the current identity to verify that you're using credentials that have permissions for the Amazon EKS cluster:

aws sts get-caller-identity
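
The output should name the IAM entity your CLI is currently using (values below are placeholders):

{
    "UserId": "AIDAXXXXXXXXXXXXXXXXX",
    "Account": "111122223333",
    "Arn": "arn:aws:iam::111122223333:user/my-iam-user"
}

If that Arn is not the entity that created the cluster (or one mapped in aws-auth), kubectl will be unauthorized.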

Afterwards use:

aws eks --region <region> update-kubeconfig --name <cluster_name>

This will create a kubeconfig file at $HOME/.kube/config containing the required Kubernetes API server URL.

Afterwards, you can follow the kubectl installation instructions and this should work.

ouflak
2

For those working with multiple profiles in the aws cli, here is what my setup looks like:

~/.aws/credentials file:

[prod]
aws_access_key_id=****
aws_secret_access_key=****
region=****

[dev]
aws_access_key_id=****
aws_secret_access_key=****

I have two aws profiles: prod and dev.

Generate kubeconfig entries for both the prod and dev clusters using:

$ aws eks --region <region> update-kubeconfig --name <cluster_name> --profile dev
$ aws eks --region <region> update-kubeconfig --name <cluster_name> --profile prod

This profile metadata is stored in the config file (~/.kube/config) as well.

Use kubectx to view/change the current cluster, and kubens to switch namespaces within a cluster.

$ kubectx
arn:aws:eks:region:accountid:cluster/dev
arn:aws:eks:region:accountid:cluster/prod

Switch to dev cluster.

$ kubectx arn:aws:eks:region:accountid:cluster/dev
Switched to context "arn:aws:eks:region:accountid:cluster/dev".

Similarly, we can view or change the namespace in the current cluster using kubens.
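
For example (the namespace and output below are illustrative):

$ kubens kube-system
Context "arn:aws:eks:region:accountid:cluster/dev" modified.
Active namespace is "kube-system".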

Sumit Jha
-5

Please use your updated secret access key and access key ID to connect to the EKS cluster.

R. Pandey