Implementing Policies in Kubernetes
Kubernetes, as we know, coordinates a highly available cluster of computers that are connected to work as a single unit. Kubernetes contains a number of abstractions that allow deployment of containerized applications to the cluster without attaching them to individual machines.
In short, Kubernetes is –
- Portable: public, private, hybrid, multi-cloud
- Extensible: modular, pluggable, hookable, composable
- Self-healing: auto-placement, auto-restart, auto-replication, auto-scaling
In this post, I'll explain what Kubernetes policies are and how they can help you manage and secure a Kubernetes cluster. We will also look at why we need a policy engine to manage and author policies.
Cross-posted from InfoQ.com
Kubernetes Policies – Introduction
In the simplest terms, policies define what end users can do on the cluster and provide ways to ensure that clusters stay in compliance with organizational rules. A policy might exist for governance, for example to meet organizational conventions, to satisfy legal requirements, or to enforce best practices.
Policy-enablement empowers organizations to take control of Kubernetes operation and ensure that clusters are in compliance with organization policies.
Key benefits of Policy Enablement:

| Benefit | Description |
| --- | --- |
| Simplified operations | The governance team can update policies at any time without recompiling or redeploying services. |
| Ease of policy enforcement | Fully automated deployment and discovery enable uniformity of policy deployment and compliance. |
| Automated discovery of violations and conflicts | Violations and conflicts are detected without manual review. |
| Flexibility to changing requirements | Policy authors can read, write, and manage rules without any need for special development or operational knowledge. |
By default, containers run with unbounded compute resources on a Kubernetes cluster. To limit or restrict resource usage, you have to implement appropriate policies in the following ways:
- Network Policies – To define how groups of pods are allowed to communicate with each other and with other network endpoints, use NetworkPolicy resources: labels select pods, and rules specify what traffic is allowed to the selected pods.
- Volume Policies – The Kubernetes scheduler has default limits on the number of volumes that can be attached to a Node. To define the maximum number of volumes that can be attached to a Node for various cloud providers, use Node-specific Volume Limits.
- Resource Usage Policies – To enforce constraints on resource usage, use the LimitRange option for the appropriate resource in a namespace:
  - Compute resource usage per Pod or Container.
  - Storage request per PersistentVolumeClaim.
  - Ratio between request and limit.
  - Default requests/limits for compute resources, automatically injected into Containers at runtime.
- Resource Consumption Policies – To limit aggregate resource consumption per namespace, use the following Resource Quotas:
  - Compute Resource Quota
  - Storage Resource Quota
  - Object Count Quota
  - Quota Scopes – limit the number of resources based on the scope defined in the Quota Scopes option.
  - Requests vs. Limits – each container can specify a request and a limit value for either CPU or memory.
  - Quota and cluster capacity – quotas are expressed in absolute units.
  - Limit PriorityClass consumption by default – for example, restrict usage of certain high-priority pods.
- Access Control Policies – To allow or deny fine-grained permissions, use RBAC (Role-Based Access Control). For example, an "autoscaler" role may have permission to "update" deployments in a specific namespace to change their number of replicas.
- Security Policies – To define and control the security aspects of Pods, use Pod Security Policy (available in v1.15). According to the Kubernetes documentation, it enables fine-grained authorization of pod creation and updates by defining a set of conditions that a pod must run with in order to be accepted into the system, as well as defaults for the related fields. It allows an administrator to control the following:
  - Running of privileged containers
  - Usage of host namespaces
  - Usage of host networking and ports
  - Usage of volume types
  - Usage of the host filesystem
  - Restricting escalation to root privileges
  - The user and group IDs of the container
  - The AppArmor, seccomp, or sysctl profiles used by containers
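As a rough sketch of what some of these policies look like in practice, the manifests below show a NetworkPolicy, a LimitRange, a ResourceQuota, and an RBAC Role. All names, namespaces, and values are illustrative, not taken from the Kubernetes documentation:

```yaml
# Illustrative examples — names and values are hypothetical.
# NetworkPolicy: only allow ingress to "db" pods from "api" pods.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-api-to-db
  namespace: demo
spec:
  podSelector:
    matchLabels:
      app: db
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: api
---
# LimitRange: default requests/limits injected into containers at runtime.
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
  namespace: demo
spec:
  limits:
    - type: Container
      default:
        cpu: "500m"
        memory: "256Mi"
      defaultRequest:
        cpu: "100m"
        memory: "128Mi"
---
# ResourceQuota: cap aggregate consumption in the namespace.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: demo-quota
  namespace: demo
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    pods: "20"
---
# RBAC Role: an "autoscaler" role that may update deployments.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: autoscaler
  namespace: demo
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "update"]
```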
Limitations of Kubernetes Policies
As you can see from the above, there is no single security configuration for Kubernetes. For example, what a specific user can do, the groups they belong to, the actions they can perform on various Kubernetes resources (pods, deployments, services, etc.), and the network and pod security policies that apply to the objects they create cannot be expressed as rules across the different policy components.
Because of this lack of a single-point security solution, ensuring compliance manually can be error-prone and frustrating. There is a need for a lightweight, general-purpose policy engine that allows developers to operate independently without sacrificing compliance, and that ensures ease of policy enforcement and automated discovery of violations and conflicts. Policy authors would also be able to author and deploy custom policies that control the behavior of a service's policy-enabled features.
Introducing Open Policy Agent Gatekeeper
Open Policy Agent Gatekeeper enforces policies and strengthens governance on the Kubernetes cluster. Following are the key functionalities it provides:
- Extensible, parameterized policy library.
- High-level declarative language (Rego) to author fine-grained policies in the system.
- Native Kubernetes CRDs for instantiating the policy library – Allows the definition of “constraints” wherein you want a system to meet a given set of requirements.
- Native Kubernetes CRDs for extending the policy library – Allows definition of “constraint templates” that allows users to declare new Constraints.
- Audit functionality – Allows periodic evaluations of replicated resources against the Constraints enforced in the cluster to detect any mismatches.
- Test framework that you can use to write tests for policies, which accelerates the development of new rules and saves time.
Kubernetes provides admission controller webhooks (HTTP callbacks) to intercept admission requests before they are persisted as objects; OPA Gatekeeper uses these to make policy decisions from the API server. Once all object modifications are complete and the incoming object has been validated by the API server, the validating admission webhooks are invoked, and they can either accept or reject requests to enforce policies.
Gatekeeper enforces CRD-based policies executed by Open Policy Agent and thus enables users to customize admission control via configuration.
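For context, a validating admission webhook is registered with the API server through a ValidatingWebhookConfiguration resource. A simplified sketch of such a registration might look like the following; the names, namespace, and rules here are illustrative, not Gatekeeper's actual manifest:

```yaml
# Illustrative only — not Gatekeeper's actual manifest.
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
  name: example-policy-webhook
webhooks:
  - name: validation.example.com
    rules:
      # Intercept create/update requests for all resources
      - apiGroups: ["*"]
        apiVersions: ["*"]
        operations: ["CREATE", "UPDATE"]
        resources: ["*"]
    clientConfig:
      service:
        namespace: example-system
        name: example-webhook-service
    failurePolicy: Ignore
```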
Key Concepts
- Validation of Controls – Once Gatekeeper is installed in the cluster, the API server triggers the Gatekeeper admission webhook to process the admission request whenever a resource in the cluster is created, updated, or deleted. During the validation process, Gatekeeper acts as a bridge between the API server and OPA, and the API server enforces all policies executed by OPA.
- Policies / Constraints – A Constraint is a declaration that a system must meet a given set of requirements. Each Constraint is written in Rego, a declarative query language for enumerating instances of data that violate the expected state of the system. All Constraints are evaluated as a logical AND; if one Constraint is not satisfied, the whole request is rejected.
- Audit Functionality – Enables periodic evaluations of replicated resources against the Constraints enforced in the cluster to detect pre-existing misconfigurations.
- Data Replication – Required by Constraints that need access to objects in the cluster other than the object under evaluation. For example, a Constraint that enforces uniqueness of ingress hostnames must have access to all other ingresses in the cluster.
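Data replication is configured through a Gatekeeper Config resource that tells Gatekeeper which kinds to sync into OPA. A sketch, with the synced kind chosen to match the ingress-uniqueness example above:

```yaml
# Sketch: replicate Ingress objects into OPA so constraints can see
# all ingresses in the cluster, not just the object under evaluation.
apiVersion: config.gatekeeper.sh/v1alpha1
kind: Config
metadata:
  name: config
  namespace: gatekeeper-system
spec:
  sync:
    syncOnly:
      - group: "extensions"
        version: "v1beta1"
        kind: "Ingress"
```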
Implementing Simple Constraint / ConstraintTemplate with OPA Gatekeeper
In this example, we will define a new constraint template and a constraint that requires all labels to be present and valid. Here, I'm going to use the samples that ship with the OPA Gatekeeper installation.
Installation
With our Kubernetes cluster ready, let's install Gatekeeper with a prebuilt image. To deploy a released version of Gatekeeper on the cluster, run the following command:

```shell
kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper/master/deploy/gatekeeper.yaml
```
The Gatekeeper roles and CRDs are now installed. The next step is to create a new constraint template to enforce that labels on namespaces are present and valid.
Define constraint template(s)
A ConstraintTemplate defines what needs to be enforced and the schema of the constraint. Note that the openAPIV3Schema and targets fields allow users to fine-tune the behavior of a constraint.
```yaml
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8srequiredlabels
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredLabels
        listKind: K8sRequiredLabelsList
        plural: k8srequiredlabels
        singular: k8srequiredlabels
      validation:
        # Schema for the `parameters` field
        openAPIV3Schema:
          properties:
            labels:
              type: array
              items: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequiredlabels

        violation[{"msg": msg, "details": {"missing_labels": missing}}] {
          provided := {label | input.review.object.metadata.labels[label]}
          required := {label | label := input.parameters.labels[_]}
          missing := required - provided
          count(missing) > 0
          msg := sprintf("you must provide labels: %v", [missing])
        }
```
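The heart of the Rego rule is a set difference: the labels the constraint requires minus the labels the object actually carries. As a rough illustration of that logic in Python (this is not part of Gatekeeper, and the function name is hypothetical):

```python
# Illustrative only: mirrors the logic of the Rego rule above.
def missing_labels(provided_labels, required_labels):
    """Return the set of required labels absent from the object's labels."""
    provided = set(provided_labels)   # labels present on the object
    required = set(required_labels)   # labels demanded by the constraint
    # Rego: missing := required - provided; a non-empty result is a violation
    return required - provided

# A namespace with no "gatekeeper" label violates the constraint:
print(missing_labels({"env": "dev"}, ["gatekeeper"]))  # → {'gatekeeper'}
```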
Install the ConstraintTemplate with the following command:

```shell
kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper/master/demo/basic/templates/k8srequiredlabels_template.yaml
```

Once the ConstraintTemplate is created, the next step is to define a constraint and apply it to the Namespace.
Define constraints
The following constraint uses the K8sRequiredLabels constraint template defined in the previous step to make sure the gatekeeper label is defined on all namespaces.
```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: ns-must-have-gk
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]
  parameters:
    labels: ["gatekeeper"]
```
The match field defines the scope of objects to which a given constraint will be applied:

- kinds accepts a list of objects with apiGroups and kinds fields that list the groups/kinds of objects to which the constraint will apply.
- namespaces is a list of namespace names. If defined, a constraint will only apply to resources in a listed namespace.
- labelSelector and namespaceSelector are standard Kubernetes label and namespace selectors.
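For example, a constraint can be narrowed with the namespaces and labelSelector match fields. The values below are illustrative:

```yaml
# Illustrative: scope the constraint to labelled pods in one namespace.
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: payments-pods-must-have-owner
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
    namespaces: ["production"]          # only resources in this namespace
    labelSelector:
      matchLabels:
        team: payments                  # only pods carrying this label
  parameters:
    labels: ["owner"]
```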
Install the above Constraint with the following command:

```shell
kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper/master/demo/basic/constraints/all_ns_must_have_gatekeeper.yaml
```
Now that the ConstraintTemplate and Constraint are enabled, let's try creating a new namespace without labels.
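As a sketch (the namespace name is illustrative), the manifest below defines a namespace with no labels at all, which the constraint should reject:

```yaml
# test-ns.yaml — a namespace with no labels; the constraint should deny it.
apiVersion: v1
kind: Namespace
metadata:
  name: test-ns
```

Applying it with `kubectl apply -f test-ns.yaml` should be rejected by the admission webhook with a message referencing the missing "gatekeeper" label.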
As you can see, OPA Gatekeeper prevents namespace creation without labels. Next, we can look at an example of how to set a container limits policy.
Implementing Container Limits Constraint/ConstraintTemplate with OPA Gatekeeper
In this example, we will define a new constraint template and a constraint that requires container limits to be specified when a Pod is defined.
We are going to reuse the Kubernetes cluster with the Gatekeeper components installed in the previous demo. Our first step is to define the constraint template.
Define constraint template(s)
The ConstraintTemplate defines what needs to be enforced and the schema of the constraint. Here, the limits are defined in k8scontainterlimits_template.yaml.
Install the ConstraintTemplate with the following command:

```shell
kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper/master/demo/agilebank/templates/k8scontainterlimits_template.yaml
```
Define constraint
The next step is to define a constraint to make sure that CPU and memory limits are at or below 200m and 1Gi, respectively.
```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sContainerLimits
metadata:
  name: container-must-have-limits
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
  parameters:
    cpu: "200m"
    memory: "1Gi"
```
Now that we have the ConstraintTemplate and Constraint created, let's try creating a new resource without limits.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: opa
  namespace: production
  labels:
    owner: me.agilebank.demo
spec:
  containers:
    - name: opa
      image: openpolicyagent/opa:0.9.2
      args:
        - "run"
        - "--server"
        - "--addr=localhost:8080"
```

```shell
kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper/master/demo/agilebank/bad_resources/opa_no_limits.yaml
```
As you can see, the ConstraintTemplate and Constraint restrict pod creation without limits.
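For contrast, a pod that declares limits within the constraint's bounds should be admitted. This is a sketch; the resources block and memory value are illustrative additions to the demo pod:

```yaml
# Sketch: the same pod with limits that satisfy the constraint.
apiVersion: v1
kind: Pod
metadata:
  name: opa
  namespace: production
  labels:
    owner: me.agilebank.demo
spec:
  containers:
    - name: opa
      image: openpolicyagent/opa:0.9.2
      args: ["run", "--server", "--addr=localhost:8080"]
      resources:
        limits:
          cpu: "200m"      # at the constraint's 200m CPU ceiling
          memory: "512Mi"  # below the constraint's 1Gi memory ceiling
```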
Congrats! We have successfully enforced policies with the OPA Gatekeeper Policy engine.
To uninstall the Gatekeeper policy engine, first clean up old Constraints, ConstraintTemplates, and the Config resource in the gatekeeper-system namespace, and then uninstall Gatekeeper. This ensures all finalizers are removed by Gatekeeper; otherwise, the finalizers will need to be removed manually. Currently, the uninstall action only removes the Gatekeeper system.
Conclusion
Open Policy Agent Gatekeeper enables Kubernetes administrators to have fine-grained, policy-based control across the stack, but applying policies is not without challenges because of the complex nature of deploying applications.
References
- Kubernetes Admission Webhooks
- Kubernetes Limit Ranges
- Kubernetes Resource Quotas
- Kubernetes Pod Security Policies
- Kubernetes Storage Limits
- Open Policy Agent
- Gatekeeper Github
- Gatekeeper Samples