AWS EKS Kubernetes 1.32 will be the last release to support Amazon Linux 2 (AL2); starting with 1.33, only the AL2023 optimized image will be supported.
If you are using Terraform to manage the EKS cluster, here is a simplified version of the code with worker nodes running on AL2:
resource "aws_eks_cluster" "my" {
name = "test"
...
}
data "aws_ami" "al2" {
filter {
name = "name"
values = ["amazon-eks-node-${aws_eks_cluster.my.version}-v*"]
}
most_recent = true
owners = ["602401143452"]
}
locals {
user_data = <<USERDATA
#!/bin/bash
set -o xtrace
/etc/eks/bootstrap.sh --apiserver-endpoint '${aws_eks_cluster.my.endpoint}' --b64-cluster-ca '${aws_eks_cluster.my.certificate_authority[0].data}' ${aws_eks_cluster.my.name}
USERDATA
}
resource "aws_launch_template" "worker" {
name_prefix = "test-worker"
image_id = data.aws_ami.al2.id
user_data = base64encode(local.userdata)
...
}
resource "aws_autoscaling_group" "my" {
launch_template {
id = aws_launch_template.al2.id
version = "$Latest"
}
...
}
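As a side note, instead of filtering AMIs by name you can resolve the recommended image from the public SSM parameters that AWS publishes for the EKS optimized AMIs; a minimal sketch (the data source name is mine):

# Recommended EKS optimized AL2 AMI for the cluster version, resolved via SSM
data "aws_ssm_parameter" "al2_ami" {
  name = "/aws/service/eks/optimized-ami/${aws_eks_cluster.my.version}/amazon-linux-2/recommended/image_id"
}

# For AL2023 the parameter path becomes:
# /aws/service/eks/optimized-ami/<version>/amazon-linux-2023/x86_64/standard/recommended/image_id

# Then in the launch template: image_id = data.aws_ssm_parameter.al2_ami.value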
In order to migrate to AL2023, you need to change the aws_ami data source to use the new AMI and switch the user_data from the custom bootstrap script to the nodeadm method:
resource "aws_eks_cluster" "my" {
name = "test"
...
}
data "aws_ami" "al2023" {
filter {
name = "name"
values = ["amazon-eks-node-al2023-x86_64-standard-${aws_eks_cluster.my.version}-v*"]
}
most_recent = true
owners = ["602401143452"]
}
locals {
user_data = <<USERDATA
---
apiVersion: node.eks.aws/v1alpha1
kind: NodeConfig
spec:
cluster:
name: ${aws_eks_cluster.my.name}
apiServerEndpoint: ${aws_eks_cluster.my.endpoint}
certificateAuthority: ${aws_eks_cluster.my.certificate_authority[0].data}
cidr: ${aws_eks_cluster.my.kubernetes_network_config[0].service_ipv4_cidr}
USERDATA
}
resource "aws_launch_template" "worker" {
name_prefix = "test-worker"
image_id = data.aws_ami.al2023.id
user_data = base64encode(local.user_data)
...
}
resource "aws_autoscaling_group" "my" {
launch_template {
id = aws_launch_template.al2023.id
version = "$Latest"
}
...
}
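Before rolling any node, you can sanity-check which image the data source resolved with a Terraform output (the output name is just an example):

# Prints the resolved AL2023 AMI name after apply
output "al2023_ami_name" {
  value = data.aws_ami.al2023.name
}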
Do a “terraform apply” and terminate a node; after a short while a new node will spawn, and you will notice that the “AMI Name” of the EC2 instance is something like “amazon-eks-node-al2023-x86_64-standard-1.31-v20250203”. Execute a “kubectl get nodes” and you should see the new node joined to the cluster.
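If you would rather not terminate nodes by hand, the ASG can also roll the instances for you whenever the launch template changes; a minimal sketch using the provider’s instance_refresh block (tune the percentage to what your workload tolerates):

resource "aws_autoscaling_group" "my" {
  launch_template {
    id      = aws_launch_template.worker.id
    version = "$Latest"
  }
  # Gradually replace instances when the launch template changes
  instance_refresh {
    strategy = "Rolling"
    preferences {
      min_healthy_percentage = 90
    }
  }
  ...
}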
An AL2023 Kubernetes node works very similarly to AL2; here are some hints that may be useful to you:
- AL2 included the “crictl” package for quick-and-dirty container debugging on the node; AL2023 doesn’t include this package, but it does include the “nerdctl” package, which works very similarly (e.g. “nerdctl -n k8s.io ps”, since containerd keeps the pods’ containers in the k8s.io namespace).
- AL2023 is bundled with SELinux, but it is configured in “permissive” mode by default and doesn’t officially support “enforcing” mode. If you would like to use enforcing mode, follow the related GitHub issues.
- AL2023 doesn’t support IMDSv1 any more; switch to IMDSv2 by adding this block to the aws_launch_template (note that with a hop limit of 1, pods cannot reach the instance metadata; raise it to 2 if your workloads need IMDS access):
metadata_options {
  http_tokens                 = "required"
  http_put_response_hop_limit = 1
}
- If you need to pass some “kubelet-extra-args” options to AL2023, follow this example:
locals {
  user_data = <<USERDATA
---
apiVersion: node.eks.aws/v1alpha1
kind: NodeConfig
spec:
  cluster:
    name: ${aws_eks_cluster.my.name}
    apiServerEndpoint: ${aws_eks_cluster.my.endpoint}
    certificateAuthority: ${aws_eks_cluster.my.certificate_authority[0].data}
    cidr: ${aws_eks_cluster.my.kubernetes_network_config[0].service_ipv4_cidr}
  kubelet:
    flags:
      - --node-labels=mynodegroup=ondemand
USERDATA
}
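- If you also need to run a custom shell script at boot alongside the NodeConfig, nodeadm accepts MIME multi-part user data where the NodeConfig part is typed application/node.eks.aws; a sketch (the script content is just an example):
locals {
  user_data = <<USERDATA
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="BOUNDARY"

--BOUNDARY
Content-Type: application/node.eks.aws

---
apiVersion: node.eks.aws/v1alpha1
kind: NodeConfig
spec:
  cluster:
    name: ${aws_eks_cluster.my.name}
    apiServerEndpoint: ${aws_eks_cluster.my.endpoint}
    certificateAuthority: ${aws_eks_cluster.my.certificate_authority[0].data}
    cidr: ${aws_eks_cluster.my.kubernetes_network_config[0].service_ipv4_cidr}

--BOUNDARY
Content-Type: text/x-shellscript

#!/bin/bash
echo "custom setup" > /tmp/example
--BOUNDARY--
USERDATA
}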
- Here are some links that were useful to me: