Kubernetes @programming.dev Nemeski @lemm.ee kubernetes.io Kubernetes v1.33: HorizontalPodAutoscaler Configurable Tolerance
This post describes configurable tolerance for horizontal Pod autoscaling, a new alpha feature first available in Kubernetes 1.33. What is it? Horizontal Pod Autoscaling is a well-known Kubernetes feature that allows your workload to automatically resize by adding or removing replicas based on resou...
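For readers who want to see the shape of the new field: a minimal sketch (not taken from the post) of an HPA with per-direction tolerance, assuming a v1.33 cluster with the alpha HPAConfigurableTolerance feature gate enabled; the workload name and values are illustrative.
kubectl apply -f - <<'EOF'
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app            # hypothetical Deployment
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 75
  behavior:
    scaleUp:
      tolerance: 0        # react to any metric deviation when scaling up
    scaleDown:
      tolerance: 0.05     # ignore deviations under 5% when scaling down
EOF
-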
Kubernetes @programming.dev SpiderUnderUrBed @lemmy.zip How to see what is using flannel or circumvent flannel address usage in kubernetes?
[EDIT (solved)]: Turns out Cilium did not remove its network links and somehow kept claiming an address in my current CIDR, leading to a duplicate; removing the links worked.
I keep getting issues with CNI and networking... I just want my cluster to work. Anyway:
Apr 28 17:14:30 raspberrypi k3s[2373903]: time="2025-04-28T17:14:30+12:00" level=error msg="flannel exited: failed to register flannel network: failed to configure interface flannel.1: failed to set interface flannel.1 to UP state: address already in use"
How do I see what is using flannel? Here are my server arguments:
ExecStart=/usr/local/bin/k3s \
    server \
    --kubelet-arg=allowed-unsafe-sysctls=net.core.rmem_max,net.core.wmem_max,net.ipv4.ip_forward \
    --flannel-backend vxlan \
    --disable=traefik \
    --write-kubeconfig-mode 644
So I am using the default flannel backend. I tried repeatedly uninstalling and re-installing k3s, and I deleted the current flannel interface with ip link, there
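Given the [EDIT] at the top of this post, a hedged sketch of how one might hunt down the conflicting links; the Cilium interface names assume a default Cilium install, and deleting links will briefly disrupt pod networking.
# List links that could be claiming flannel's VXLAN address, and look for the duplicate CIDR:
ip -d link show | grep -iE 'flannel|cilium|vxlan'
ip addr show
# If stale Cilium interfaces survived the uninstall, remove them (as in the edit above):
sudo ip link delete cilium_host
sudo ip link delete cilium_net
sudo ip link delete cilium_vxlan
sudo ip link delete flannel.1   # k3s recreates flannel.1 on restart
sudo systemctl restart k3s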
-
Kubernetes @programming.dev SpiderUnderUrBed @lemmy.zip Memory issues with cilium despite plenty of memory being available
spiderunderurbed@raspberrypi:~/k8s $ kubectl logs cilium-envoy-chzf8 -n kube-system
external/com_github_google_tcmalloc/tcmalloc/system-alloc.cc:625] MmapAligned() failed - unable to allocate with tag (hint, size, alignment) - is something limiting address placement? 0x177840000000 1073741824 1073741824 @ 0x555b5fccc4 0x555b5f90e0 0x555b5f89a0 0x555b5d81d0 0x555b5f6694 0x555b5f6468 0x555b5cd988 0x555b4e3c84 0x555b4e09a0 0x7fb3918614
external/com_github_google_tcmalloc/tcmalloc/arena.cc:58] FATAL ERROR: Out of memory trying to allocate internal tcmalloc data (bytes, object-size); is something preventing mmap from succeeding (sandbox, VSS limitations)? 131072 632 @ 0x555b5fd034 0x555b5d8260 0x555b5f6694 0x555b5f6468 0x555b5cd988 0x555b4e3c84 0x555b4e09a0 0x7fb3918614
spiderunderurbed@raspberrypi:~/k8s $
Does anyone know how to fix the memory issue with Cilium, or could you link me to the docs or any issues about this? I just followed the instructions to install Cilium, most stuff is
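No definitive answer here, but since the tcmalloc error points at address-space or mmap limits rather than a lack of RAM, some hedged checks on the node and pod:
getconf PAGE_SIZE                   # non-4K page kernels (e.g. 16K on some Pi images) break some binaries
ulimit -v                           # a virtual-memory ulimit would explain the mmap failure; expect 'unlimited'
cat /proc/sys/vm/overcommit_memory  # 2 (strict) can make large mmaps fail despite free RAM
free -h                             # confirm memory really is available
# Also check whether the container itself has a low memory limit set:
kubectl -n kube-system get pod cilium-envoy-chzf8 -o jsonpath='{.spec.containers[*].resources}'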
-
Kubernetes @programming.dev SpiderUnderUrBed @lemmy.zip Traefik is not running properly, kube-apiserver pod might be down
[EDIT] So... kinda fixed? It was my backend. It turns out Traefik forwards /nextcloud to the nextcloud service, which does not know what to do with that path unless I set something like site-url to include it. So I made a middleware to strip the prefix, but now it cannot access any of its files because it uses the wrong path. I will look for site-url settings, but I don't think all of my services have one, so any advice toward a general solution would be appreciated.
So currently my Raspberry Pi is connected to my network at 192.168.68.77 (I configured Traefik to work with that host, and alternative hosts if need be). According to the Traefik logs, I think it does not work because it is missing access to the API server, although I could be wrong. I installed Traefik via Helm, I have a config file for it, and I disabled the default Traefik that ships with k3s. Here are the Traefik config and logs: config: https://pastebin.com/XYH2LKF9 logs: https://pastebin.com/sbjPZCXv pods and svcs (al
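For the prefix issue described in the [EDIT], the middleware would look roughly like this (a sketch assuming Traefik's Kubernetes CRDs from the Helm chart; the name is hypothetical). Note that stripping the prefix alone often isn't enough: apps like Nextcloud also need their base URL set (e.g. overwritewebroot) so generated links include the prefix.
kubectl apply -f - <<'EOF'
apiVersion: traefik.io/v1alpha1   # traefik.containo.us/v1alpha1 on older chart versions
kind: Middleware
metadata:
  name: strip-nextcloud
spec:
  stripPrefix:
    prefixes:
      - /nextcloud
EOF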
-
Kubernetes @programming.dev Nemeski @lemm.ee kubernetes.io Kubernetes v1.33: User Namespaces enabled by default!
In Kubernetes v1.33 support for user namespaces is enabled by default. This means that, when the stack requirements are met, pods can opt-in to use user namespaces. To use the feature there is no need to enable any Kubernetes feature flag anymore! In this blog post we answer some common questions ab...
-
Kubernetes @programming.dev SpiderUnderUrBed @lemmy.zip How to get kubernetes to add all its internal dns entries to your own dns server
By this I mean: I have a PowerDNS server running in my cluster, and I would like Kubernetes to add/update DNS entries in my DNS server to reflect all Services, or any domains that would be used within the cluster. This is to fix a current issue I am having, and for general control and centralization purposes.
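One established way to do this is external-dns, which has a PowerDNS ("pdns") provider: it watches Services and Ingresses and writes matching records through the PowerDNS HTTP API. A hedged sketch of the relevant flags; the API endpoint, key, and zone are placeholders, and the PowerDNS API must be enabled in pdns.conf (api=yes, api-key=...).
# Placeholders: the PowerDNS API endpoint/key and the zone to manage.
external-dns \
  --source=service \
  --source=ingress \
  --provider=pdns \
  --pdns-server=http://powerdns-api.dns.svc:8081 \
  --pdns-api-key="$PDNS_API_KEY" \
  --domain-filter=example.internal \
  --txt-owner-id=my-cluster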
-
Kubernetes @programming.dev Nemeski @lemm.ee kubernetes.io Kubernetes v1.33: Octarine
Editors: Agustina Barbetta, Aakanksha Bhende, Udi Hofesh, Ryota Sawada, Sneha Yadav
Similar to previous releases, the release of Kubernetes v1.33 introduces new stable, beta, and alpha features. The consistent delivery of high-quality releases underscores the strength of our development cycle and th...
-
Kubernetes @programming.dev Nemeski @lemm.ee kubernetes.io Kubernetes Multicontainer Pods: An Overview
As cloud-native architectures continue to evolve, Kubernetes has become the go-to platform for deploying complex, distributed systems. One of the most powerful yet nuanced design patterns in this ecosystem is the sidecar pattern—a technique that allows developers to extend application functionality ...
-
Kubernetes @programming.dev SpiderUnderUrBed @lemmy.zip Kubernetes DNS broke
spiderunderurbed@raspberrypi:~/k8s $ kubectl run -it --rm network-tools \
    --image=nicolaka/netshoot \
    --restart=Never \
    -- /bin/bash
If you don't see a command prompt, try pressing enter.
network-tools:~# cat /etc/resolv.conf
search default.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.43.0.10
options ndots:5
network-tools:~#
DNS does not work in my k8s cluster. I don't know how to debug this; these are the only logs in CoreDNS and kube-dns:
[WARNING] No files matching import glob pattern: /etc/coredns/custom/*.override
[WARNING] No files matching import glob pattern: /etc/coredns/custom/*.server
This probably isn't enough, but what more can I do to debug this? I don't think it is anything to do with my CNI (I am using Calico). Using 1.1.1.1 or any other nameserver works, but internal-to-external DNS mappings do not; DNS cannot resolve outside the cluster. Maybe not inside either, according to this:
spiderunderurbed@raspberrypi:~/k8s $
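Some hedged first steps that usually narrow this down (the two import-glob warnings above are typically benign defaults, not the fault):
# Is CoreDNS running, and what does its config forward external names to?
kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide
kubectl -n kube-system get configmap coredns -o yaml
kubectl -n kube-system logs -l k8s-app=kube-dns --tail=50
# From the netshoot pod above, test internal and external names directly against the service IP:
nslookup kubernetes.default.svc.cluster.local 10.43.0.10
nslookup kubernetes.io 10.43.0.10
nc -vzu 10.43.0.10 53   # can the pod reach the DNS service at all?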
-
Kubernetes @programming.dev SpiderUnderUrBed @lemmy.zip NodeNotReady despite no pressure and an available network kubernetes
My cluster has been showing my Raspberry Pi node as "Ready", but according to the node's description the last event was "NodeNotReady". All the debugging guides say to look for pressure (disk, PID, and so on), but there is no pressure and no loss of network. Here are the logs and status of my Pi: https://pastebin.com/UULz6Hcy My pods are stuck in Unknown (except Jellyfin, which is awaiting another node to come up): https://pastebin.com/vw2masAC A description of one of my pods, if that helps: https://pastebin.com/s5W03s0E
Also, I already tried re-installing k3s.
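A few hedged places to look beyond pressure conditions, since NodeNotReady events without pressure usually mean the kubelet's heartbeats are not reaching the API server:
kubectl describe node raspberrypi | grep -A12 Conditions   # transition times and reasons
kubectl get events --field-selector involvedObject.name=raspberrypi
# Node heartbeats are Lease objects; a stale renewTime means the kubelet stopped checking in:
kubectl -n kube-node-lease get lease raspberrypi -o yaml
# On the node itself, look for connection or lease errors around the NotReady timestamps:
journalctl -u k3s --since "1 hour ago" | grep -iE 'notready|lease|heartbeat|timeout'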
-
Kubernetes @programming.dev SinTan1729 @programming.dev Can anyone help me review a PR adding k8s to an app?
Someone opened a PR on an app of mine adding instructions for k8s setup. I do like the idea of providing these instructions, but I don't have any experience with k8s whatsoever. The commits look fine to me, but if anyone here is experienced, I'd appreciate it if you could take a look; I don't want to inadvertently add something malicious. Here's a link to the PR: https://github.com/SinTan1729/chhoto-url/pull/48. Thanks!
-
Kubernetes @programming.dev Nemeski @lemm.ee kubernetes.io Introducing kube-scheduler-simulator
The Kubernetes Scheduler is a crucial control plane component that determines which node a Pod will run on. Thus, anyone utilizing Kubernetes relies on a scheduler. kube-scheduler-simulator is a simulator for the Kubernetes scheduler, that started as a Google Summer of Code 2021 project developed by...
-
Kubernetes @programming.dev Nemeski @lemm.ee kubernetes.io Kubernetes v1.33 sneak peek
As the release of Kubernetes v1.33 approaches, the Kubernetes project continues to evolve. Features may be deprecated, removed, or replaced to improve the overall health of the project. This blog post outlines some planned changes for the v1.33 release, which the release team believes you should be ...
-
Kubernetes @programming.dev beerclue @lemmy.world CVE-2025-1974: vulnerabilities that could make it easy for attackers to take over your Kubernetes cluster
kubernetes.io Ingress-nginx CVE-2025-1974: What You Need to Know
Today, the ingress-nginx maintainers have released patches for a batch of critical vulnerabilities that could make it easy for attackers to take over your Kubernetes cluster. If you are among the over 40% of Kubernetes administrators using ingress-nginx, you should take action immediately to protect...
When combined with today’s other vulnerabilities, CVE-2025-1974 means that anything on the Pod network has a good chance of taking over your Kubernetes cluster, with no credentials or administrative access required.
-
Kubernetes @programming.dev lemmyng @lemmy.ca kubernetes.io Introducing JobSet
Authors: Daniel Vega-Myhre (Google), Abdullah Gharaibeh (Google), Kevin Hannon (Red Hat)
In this article, we introduce JobSet, an open source API for representing distributed jobs. The goal of JobSet is to provide a unified API for distributed ML training and HPC workloads on Kubernetes.
[...]
[T]he Job API fixed many gaps for running batch workloads, including Indexed completion mode, higher scalability, Pod failure policies, and Pod backoff policy, to mention a few of the most recent enhancements. However, running ML training and HPC workloads using the upstream Job API requires extra orchestration to fill the following gaps:
Multi-template Pods: Most HPC or ML training jobs include more than one type of Pod. The different Pods are part of the same workload, but they need to run a different container, request different resources, or have different failure policies. A common example is the driver-worker pattern.
Job groups: Large scale training workloads span multiple network top
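To make the multi-template point concrete, here is a hedged sketch of a driver-worker JobSet (API group jobset.x-k8s.io/v1alpha2 as of the announcement; the image and commands are placeholders):
kubectl apply -f - <<'EOF'
apiVersion: jobset.x-k8s.io/v1alpha2
kind: JobSet
metadata:
  name: train            # hypothetical training run
spec:
  replicatedJobs:
  - name: driver         # one coordinator Pod with its own template
    replicas: 1
    template:
      spec:
        template:
          spec:
            restartPolicy: Never
            containers:
            - name: driver
              image: example.com/trainer:latest
              command: ["python", "train.py", "--role=driver"]
  - name: workers        # a separate template for the worker Pods
    replicas: 1
    template:
      spec:
        parallelism: 4
        completions: 4
        template:
          spec:
            restartPolicy: Never
            containers:
            - name: worker
              image: example.com/trainer:latest
              command: ["python", "train.py", "--role=worker"]
EOF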
-
Kubernetes @programming.dev agilob @programming.dev www.sobyte.net Principles of the Kubernetes Scheduler
Explore the Kubernetes scheduler implementation principles and how you can define your own scheduling logic by implementing interfaces defined by extension points.
-
Kubernetes @programming.dev Evans @lemmy.ml k9s debug-container plugin
cross-posted from: https://lemmy.ml/post/20234044
Do you know about Kubernetes debug containers? They're really useful for troubleshooting well-built, locked-down images running in your cluster. I was thinking it would be nice if k9s had this feature, and lo and behold, it has a plugin! I just had to add that snippet to my ${HOME}/.config/k9s/plugins.yaml, run k9s, find the pod, press Enter to get into the pod's containers, select a container, and press Shift-D. The debug-container plugin uses the nicolaka/netshoot image, which has a bunch of useful tools on it. Easy debugging in k9s!
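The snippet itself isn't quoted in the post, but the community debug-container plugin looks roughly like this (a sketch following the k9s plugin schema; merge it under your existing plugins: key if you already have one):
cat >> "${HOME}/.config/k9s/plugins.yaml" <<'EOF'
plugins:
  debug:
    shortCut: Shift-D
    description: Add debug container
    dangerous: true
    scopes:
      - containers
    command: bash
    background: false
    confirm: true
    args:
      - -c
      # $NAMESPACE, $POD, and $NAME are filled in by k9s at runtime:
      - kubectl debug -it -n "$NAMESPACE" "$POD" --target "$NAME" --image nicolaka/netshoot -- bash
EOF
-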
Kubernetes @programming.dev Mac @programming.dev kubernetes.io Kubernetes v1.31: Accelerating Cluster Performance with Consistent Reads from Cache
Kubernetes is renowned for its robust orchestration of containerized applications, but as clusters grow, the demands on the control plane can become a bottleneck. A key challenge has been ensuring strongly consistent reads from the etcd datastore, requiring resource-intensive quorum reads. Today, th...
-
Kubernetes @programming.dev Mac @programming.dev kubernetes.io Kubernetes Removals and Major Changes In v1.31
As Kubernetes develops and matures, features may be deprecated, removed, or replaced with better ones for the project's overall health. This article outlines some planned changes for the Kubernetes v1.31 release that the release team feels you should be aware of for the continued maintenance of your...