What is Kubernetes Security? Kubernetes Security Best Practices: 4C Model


Posted July 7, 2022 by jezzhoney8

Kubernetes operates mostly in cloud environments that encompass the cloud itself, clusters, containers, and code. Kubernetes security is primarily concerned with the combination of your code, clusters, and containers. Thus, security is built around all of these layers, covering cluster security, container security, application security, and access control in a cloud environment.

It is necessary to continuously scan the entire system to identify misconfigurations and vulnerabilities as soon as possible and keep it secure. Pod and container security and the Kubernetes API are also important aspects of Kubernetes security: the API is what makes Kubernetes scalable and flexible, and it exposes a significant amount of information.

Applying security practices to Kubernetes comes with challenges. If a system has many points where an attacker could gain unauthorized entry or extract data, each of those points must be secured in order to secure the application as a whole. This large attack surface is a result of running many containers, each of which could be exploited if not pre-emptively secured, and securing them consumes considerable resources and time before and during deployment.

Importance of Kubernetes Security

Since Kubernetes security is a broad, complex, and critical topic, it deserves special attention. The complexity of K8s security stems from its dynamic, immutable nature and heavy use of open-source components, which require constant attention and resources to keep secure. Organizations that run containers and Kubernetes in production must take Kubernetes security as seriously as every other aspect of their IT infrastructure.

If a Kubernetes installation is compromised, a large number of nodes running different microservices may be compromised with it, posing a threat to the organization and potentially disrupting its day-to-day operations. A well-known example is CVE-2022-0185, an integer-underflow vulnerability discovered by William Liu and Jamie Hill-Daniel in 2022, which shows how an attacker can exploit an out-of-bounds write to modify kernel memory and reach other processes running on the same node.

Implementing Kubernetes Security Best Practices

Security Updates for the Environment: Containers run a variety of open-source and third-party software, and security vulnerabilities in that software may only be discovered later on. It is therefore critical to scan images and keep up with software updates to see whether any vulnerabilities have been discovered and patched. Using Kubernetes' rolling-update capability, a running application can be gradually upgraded to the most recent version available.
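For example, a rolling update can be triggered with kubectl as sketched below; the deployment name, container name, and image tag are hypothetical placeholders:

kubectl set image deployment/my-app my-app=registry.example.com/my-app:1.2.4
kubectl rollout status deployment/my-app

Kubernetes then replaces the pods gradually, so the application stays available while the patched image is rolled out.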

Security Context: Each pod and container has its own security context, which defines all of the privilege and access control settings that apply to it. It is essentially a declaration of the security applied to the containers, such as Discretionary Access Control (DAC), which grants permission to access an object based on user ID. Other profiles, such as AppArmor or seccomp, may also be in use to limit what individual programs can do. In short, the security context is how we apply security to containers and pods in general.
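As a minimal sketch, assuming a simple single-container pod (the names and image are placeholders), a security context might look like this:

apiVersion: v1
kind: Pod
metadata:
  name: secured-app
spec:
  securityContext:           # pod-level settings inherited by all containers
    runAsUser: 1000          # DAC: run processes as a non-root user ID
    runAsGroup: 3000
    fsGroup: 2000            # group ownership applied to mounted volumes
  containers:
  - name: app
    image: registry.example.com/app:1.0.0
    securityContext:
      readOnlyRootFilesystem: true   # container-level addition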

Resource Management: Every pod consumes resources such as CPU and memory, which are required to run its containers. When declaring a pod, we should specify how many resources to allocate to each container. The resource request specified for a container is used by the kube-scheduler to decide which node the pod should be placed on so that resources are used efficiently. Once scheduled, the container cannot consume resources in excess of those allotted to it by the system. Before assigning resources to containers, first observe how much the application actually requests; only after completing that observation can you provide the resources the containers require.
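A brief illustration of requests and limits, again with placeholder names, could look like this:

apiVersion: v1
kind: Pod
metadata:
  name: resource-bounded-app
spec:
  containers:
  - name: app
    image: registry.example.com/app:1.0.0
    resources:
      requests:            # the kube-scheduler uses these values for placement
        cpu: "250m"
        memory: "128Mi"
      limits:              # a hard ceiling the container cannot exceed
        cpu: "500m"
        memory: "256Mi"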

Shift Left Approach: The default configurations of Kubernetes are typically the least secure. For example, Kubernetes does not automatically apply network policies to a pod when it is created. Containers in pods are also nearly immutable, meaning they are not patched in place; instead, they are simply destroyed and re-deployed when they become unavailable.

Shift left testing makes it possible to identify and solve flaws much sooner in the container lifecycle, allowing for more efficient container management. One can then handle issues in unit tests rather than identifying them in production. This shortens the development cycle, improves quality substantially, and allows for speedier progression to later phases such as security analysis and deployment.

Kubernetes Security Frameworks: An Overview

There is a variety of security frameworks available for the Kubernetes container orchestration system, published by organizations such as MITRE, CIS, and NIST. Each framework has its own set of pros and cons, so organizations can readily evaluate them and adopt whichever suits their needs.

The CIS Benchmark provides hardening and configuration-management guidance for securing the configuration of a Kubernetes cluster. As its name suggests, MITRE ATT&CK catalogs the tactics and scenarios an attacker can use against the environment, and it also describes mitigations for the vulnerabilities or configuration issues that must be addressed to protect the environment. PCI-DSS can be applied to a Kubernetes environment to implement the compliance-related configurations it requires; this is commonly used for fintech workloads.

Kubernetes Security Best Practices: 4C Model

When constructing a defense-in-depth strategy, it is necessary to incorporate numerous security barriers in different areas; cloud-native security operates similarly and suggests the same approach. The security techniques of cloud-native systems are divided into four layers, referred to as "The 4C Security Model": Cloud, Cluster, Container, Code. Addressing all of these layers ensures comprehensive security coverage from development to deployment. The Kubernetes best practices below can likewise be classified into these four categories of the cloud-native approach.

Prevent Access to the Kube API Server: Kubernetes API access control happens in three steps: the request is first authenticated, then authorized, and finally subjected to admission control before it is granted access to the system. Check that network access control and TLS connections are correctly configured before the authentication step; authentication procedures can be complicated.

Ideally, only TLS connections should be used for connections to the API server, for internal communication within the Control Plane, and for communication between the Control Plane and the kubelet. This can be accomplished by supplying a TLS certificate and a TLS private-key file to the API server via kube-apiserver command-line options.
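For instance, assuming certificates have already been generated (the file paths below are placeholders), the relevant kube-apiserver options are --tls-cert-file and --tls-private-key-file:

kube-apiserver --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key --other-options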

The Kubernetes API listens on two HTTP ports, a localhost port and a secure port. The localhost port does not require TLS, so requests made through it bypass the authentication and authorization components. Therefore, make sure this port is not enabled outside of the cluster's test configuration.

Additionally, because of the potential for security breaches, service accounts should be thoroughly and regularly audited, particularly if they are bound to a specific namespace and used for specific Kubernetes management duties. Each application should have a dedicated service account, and the default service accounts should not be used at all. To avoid creating an extra attack surface, disable the mounting of default service account tokens when creating new pods that do not specify an individual service account.
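A minimal sketch of this, using a hypothetical service account name, is to set automountServiceAccountToken to false on the service account (the same field can also be set in a pod spec):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-service-account
automountServiceAccountToken: false   # do not mount an API token by default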

Encryption and Firewalling: “Secrets” in Kubernetes are objects used to hold sensitive information about users, such as passwords, keys, and tokens. They limit the exploitable attack surface and give pods flexible access to sensitive data across their life cycle. Secrets are namespaced objects (limited to 1 MiB each) that are base64-encoded and, when mounted, kept in tmpfs on the nodes rather than written to disk.


By default, the API server stores Secrets in etcd unencrypted, which is why it is necessary to enable encryption at rest in the API server's encryption configuration. With encryption enabled, even an attacker who obtains the etcd data cannot read it without the key used to encrypt it, which Kubernetes holds locally on the API server.
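A minimal sketch of an encryption configuration for Secrets (the key name and key material are placeholders) looks roughly like this, and is passed to the API server via the --encryption-provider-config flag:

apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:                      # encrypt newly written Secrets with AES-CBC
          keys:
            - name: key1
              secret: <base64-encoded 32-byte key>   # placeholder key material
      - identity: {}                 # still allow reading Secrets stored unencrypted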

For Kubernetes to function correctly, the organization must configure firewalls and open certain ports. On control-plane (primary) nodes, specific ports must be open, including 6443, 2379-2380, 10250, and 8472, among others. On worker nodes, ports such as 10250, 10255, and 8472 must be open.

Enforce TLS Between Cluster Components: Enabling TLS between cluster components is critical for maintaining the cluster's security. API traffic within a cluster is expected to be encrypted by default with TLS, and most installation methods generate the necessary certificates and distribute them to the cluster components as part of the installation procedure. Because some components listen on local ports, the administrator must become familiar with each component in order to distinguish trusted from untrusted communication.

Protect Nodes: Nodes are the machines responsible for running your containerized applications, so it is essential to give them a secure environment. In a test environment, a single server can act as both the master (control-plane) node and a worker node. Production clusters, however, typically use several machines, separating the master components from the worker nodes that run the kubelet and workloads; specific applications may require even more.

Nodes can run in either a virtual or a physical environment, depending on the organization's needs and regulatory compliance requirements.

Isolate the Cluster and API with Proper Configurations: A Kubernetes cluster shares data and resources among many users, clients, and the numerous applications already running on it. Therefore, the need to protect data in the Kubernetes environment, or to run untrusted code safely, arises from time to time. Either soft multi-tenancy or hard multi-tenancy can be achieved; without isolation, tenants can gain access to shared data or administrative privileges through host volumes or directories. Soft multi-tenancy assumes that tenants are non-malevolent, whereas hard multi-tenancy assumes that tenants can be malicious; as a result, it does not extend trust and adheres to a zero-trust policy.


Containers, pods, nodes, and clusters are just a few of the isolation layers available in Kubernetes. Clusters can provide additional network isolation through per-cluster DNS. A dedicated group of nodes makes it possible to run pods with a distinct identity and access only the resources those pods require. Isolation between pods is achieved through network policies, which determine which pods are allowed to communicate with one another, and can be reinforced with firewall rules. Isolating worker nodes is also important: it helps prevent communication across namespaces and lets the administrator limit pod deployment to a particular group of nodes.
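As an illustration, a default-deny ingress policy for a namespace (the namespace name is a placeholder) blocks all incoming pod-to-pod traffic until more specific policies allow it:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: team-a          # hypothetical namespace
spec:
  podSelector: {}            # selects every pod in the namespace
  policyTypes:
  - Ingress                  # no ingress rules defined, so all ingress is denied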

API servers should not be accessible from the public internet or from many sources, because doing so increases the attack surface and the likelihood of a compromise. With role-based access control, regular rotation of the keys provided to users, and restricting access to specific IP subnets, the risk can be limited.

In a Kubernetes cluster, the kubelet is a daemon that runs on each node, always in the background processing requests. It is managed by an init system or service manager, and a Kubernetes administrator must configure it before it can be used. The configuration of all kubelets within a cluster must be identical.

Cluster: Since Kubernetes is the most widely used container orchestration tool, we concentrate on it when discussing cluster security in general. All security recommendations in this section are therefore limited to safeguarding the cluster itself.

Using Kubernetes Secrets for Application Credentials: As mentioned earlier, a Secret is an object that contains sensitive data, such as a password or token, used to authenticate an account. It is essential to know where and how sensitive data like passwords and keys are stored and accessed. Even though one pod cannot read another pod's Secrets, it is still important to keep secrets in a separate location rather than in the application image; otherwise, anyone with a copy of the image could access them. Applications that handle several processes and are accessible to the general public are particularly vulnerable. Secrets should not be passed around as environment variables but instead mounted as read-only volumes in your containers, as in the example below.

kubectl create secret generic dev-secret --from-literal='username=my-application' --from-literal='password=anything123'
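Building on that secret, a pod can then mount it as a read-only volume; this is a sketch with placeholder pod, container, and image names:

apiVersion: v1
kind: Pod
metadata:
  name: credentials-demo
spec:
  containers:
  - name: app
    image: registry.example.com/app:1.0.0
    volumeMounts:
    - name: dev-secret-volume
      mountPath: /etc/credentials    # secret keys appear as files here
      readOnly: true
  volumes:
  - name: dev-secret-volume
    secret:
      secretName: dev-secret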
Apply Least Privilege on Access


Most Kubernetes operations require authorization, so a request must first be authenticated before it can be authorized to access the cluster's resources. Role-Based Access Control (RBAC) then determines whether an entity is allowed to call the Kubernetes API to perform a specific action on a specific resource.

RBAC authorization uses the rbac.authorization.k8s.io API group to make authorization decisions, which allows you to change policies on the fly. Activating RBAC requires starting the API server with the --authorization-mode flag set to a comma-separated list that includes RBAC.

kube-apiserver --authorization-mode=Example,RBAC --other-options --more-options
Consider the following illustration: a Role in the "default" namespace that grants read-only access to pods.

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: read-user
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
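A Role by itself grants nothing until it is bound to a subject. A sketch of a RoleBinding, assuming a hypothetical service account named app-service-account, would look like this:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-user-binding
  namespace: default
subjects:
- kind: ServiceAccount
  name: app-service-account    # hypothetical subject receiving the role
  namespace: default
roleRef:
  kind: Role
  name: read-user              # the read-only role defined above
  apiGroup: rbac.authorization.k8s.io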

Use Admission Controllers and Network Policy: Admission controllers are plugins for the Kubernetes API server that govern and enforce how the cluster is used. They can be regarded as gatekeepers that intercept (authenticated) API requests and modify them or deny them entirely.

Admission controllers can improve security by enforcing a suitable security baseline across an entire namespace or cluster. The built-in PodSecurityPolicy admission controller is likely the most notable example; it can be used to prevent containers from running as root or to ensure that a container's root filesystem is always mounted read-only, among other things. Webhook admission controllers are also available; they can restrict image pulls to particular registries known to the organization, blocking unfamiliar image registries, and can reject deployments that do not comply with security standards.
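As a rough sketch, assuming the PodSecurityPolicy admission plugin is enabled, a restrictive policy along these lines forbids privileged and root containers and forces a read-only root filesystem:

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted-baseline
spec:
  privileged: false
  allowPrivilegeEscalation: false
  readOnlyRootFilesystem: true
  runAsUser:
    rule: MustRunAsNonRoot     # reject pods that would run as UID 0
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - configMap
  - secret
  - emptyDir
  - persistentVolumeClaim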

Container: A container runtime engine is required for containers to run in a cluster environment. Docker is by far the most common container runtime environment (CRE) used with Kubernetes, and its ecosystem already provides a wide variety of images that developers can use to configure everything from Linux servers to Nginx servers.

Use Verified Images with Proper Tags: It is critical to scan images and only use those permitted by the organization's policy, because unknown and vulnerable images expose organizations to a greater risk of compromise. Since much of their code comes from open-source projects, it is good practice to scan images before rolling them out. The container registry is the central repository for all container images.

A public registry can be used if required; however, it is usually recommended to use a private registry where possible. Private repositories store only images that have been approved, which reduces both the risk of using vulnerable images and the harm posed to organizations by supply chain attacks.

You can also integrate image scanning into the CI pipeline. Image scanning is a process in which a container image's contents and build process are inspected to find vulnerabilities, CVEs, or poor practices. Because it is incorporated into the CI/CD pipeline, it can prevent vulnerabilities from ever reaching a registry and protects the registry from vulnerabilities introduced by third-party images used in the pipeline.
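One way to do this in CI, using the open-source Trivy scanner as an example tool (any scanner with CI integration works similarly; the image name is a placeholder), is to fail the build when serious findings are reported:

trivy image --severity HIGH,CRITICAL --exit-code 1 registry.example.com/app:1.0.0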

It is conceivable that the image you scanned does not have the same dependencies or libraries as the image you deploy in your Kubernetes cluster; if so, you will need to scan again. In addition, when you use mutable tags such as "latest" or "staging," it can be difficult to tell whether the most recent scan results are still valid.

The use of mutable tags can also result in containers running distinct versions of what appears to be the same image. Besides the security issues this creates when interpreting scan results, it can lead to problems that are difficult to troubleshoot. Therefore, wherever possible, use immutable tags like nginx:deploy-082022 so you can determine exactly which image an application was deployed with.

Limit Application Privileges


Avoiding privilege escalation is one of the most compelling arguments against running a container as root. A root user within a container has the same privileges for command execution as a root user on a typical host system.

Consider tasks such as installing software packages, launching services, and creating users; requiring root for these presents several challenges for application developers. Just as you should avoid running applications as the root user on a virtual machine to limit security risks, the same is true in the case of containers.

If someone gains access to the container, the root user inside it can access and execute anything on the underlying host as if they were logged in as root there. The risks of this include:

Accessing filesystem mounts,
Gaining access to username/password combinations that have been configured on the host for connecting to other services (yet another reason to use a secrets manager solution),
Installing malicious software on the host, and
Gaining access to other cloud resources.
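At the container level, these risks can be reduced with a security context along the lines of the sketch below (pod, container, and image names are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: least-privilege-app
spec:
  containers:
  - name: app
    image: registry.example.com/app:1.0.0
    securityContext:
      runAsNonRoot: true               # refuse to start if the image would run as root
      allowPrivilegeEscalation: false  # block processes from gaining extra privileges
      capabilities:
        drop: ["ALL"]                  # drop all Linux capabilities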
Code

Although we may be running various applications in containers, the code layer is sometimes referred to as application security. It is also the layer over which businesses have the greatest control. Because the code of your apps and their related databases is the most crucial element of the system and is typically exposed to the internet, attackers will concentrate their efforts on this part of the system if all other components are securely protected.

Scan For Vulnerabilities: The majority of application software relies primarily on open-source packages, libraries, and other third-party components, so a vulnerability in any one of these dependencies can undoubtedly affect the application's complete functionality. Consequently, the likelihood that at least some of the programs you are currently using contain vulnerabilities is significant.

Attackers frequently target deployments by exploiting known vulnerabilities in widely used dependency code, so some mechanism must be in place to verify those dependencies regularly. Scanners can be included in the lifecycle of a container image deployed into Kubernetes, which helps reduce the potential attack surface and prevents attackers from stealing data or interfering with your deployment. These scanners examine the dependencies and report any vulnerability or available update for the container, securing it at the code level.

Conclusion

Cloud-native containers running in the Kubernetes environment are now considered part of the modern infrastructure. Microservices are now being used on a large scale in every enterprise, making it imperative that all security best practices be adhered to at all times. Because microservice operations handle a variety of data types, including personally identifiable information (PII) and other sensitive information, it is necessary to adhere to all of the security requirements to create a secure environment.

If you expect your small business to grow, you are short on capital, or you don’t have technology expertise, cloud computing could be the solution. It can add real value and can take your small business to the enterprise level.

ClevrOne provides Kubernetes security solutions to many global SMEs, and our cloud-based solutions add significant value across their business processes. If you feel it's your time to experience the cloud, drop a message here.