How to author and enforce policies using Open Policy Agent Gatekeeper
As of this writing, there is no single security configuration for Kubernetes. For example, what a specific user can do, the groups they belong to, the actions they can perform on various Kubernetes resources (pods, deployments, services, and so on), and the network and pod security policies that apply to the objects they create cannot be expressed as rules in one place; they are scattered across different policy components.
Because of this lack of a single-point security solution, ensuring compliance manually is error-prone and frustrating. There is a need for a lightweight, general-purpose policy engine that lets developers operate independently without sacrificing compliance, while also making policy enforcement easy and automating the discovery of violations and conflicts. Policy authors should also be able to write and deploy custom policies that control the behavior of a service's policy-enabled features.
In this post, I’ll explain how to use the Open Policy Agent Gatekeeper policy engine to author and manage policies that help you secure a Kubernetes cluster.
Introducing Open Policy Agent Gatekeeper
Open Policy Agent Gatekeeper enforces policies and strengthens governance on the Kubernetes cluster. Following are the key functionalities it provides:
- Extensible, parameterized policy library.
- High-level declarative language (Rego) to author fine-grained policies in the system.
- Native Kubernetes CRDs for instantiating the policy library – Allows the definition of “constraints” wherein you want a system to meet a given set of requirements.
- Native Kubernetes CRDs for extending the policy library – Allows the definition of “constraint templates” that let users declare new Constraints.
- Audit functionality – Allows periodic evaluations of replicated resources against the Constraints enforced in the cluster to detect any mismatches.
- Test framework that you can use to write tests for policies. Writing tests for policies accelerates the development of new rules and saves time.
Kubernetes provides admission controller webhooks (HTTP callbacks) to intercept admission requests before they are persisted as objects in Kubernetes; OPA Gatekeeper uses these to make policy decisions for the API server. Once all object modifications are complete and the incoming object has been validated by the API server, the validating admission webhooks are invoked, and they can either accept or reject the request to enforce policies.
Gatekeeper enforces CRD-based policies executed by Open Policy Agent and thus enables users to have customized admission control via configuration.
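If you want to see the validating admission webhook that Gatekeeper registers with the API server, you can inspect it after installation. The resource name used below, gatekeeper-validating-webhook-configuration, is my assumption based on the default deployment manifest; if your installation differs, list all webhook configurations first and pick the Gatekeeper one.
# List all validating webhook configurations in the cluster
kubectl get validatingwebhookconfigurations
# Describe the one registered by Gatekeeper (name assumed from the default manifest)
kubectl describe validatingwebhookconfiguration gatekeeper-validating-webhook-configuration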
Key Concepts
- Validation of Controls – Once Gatekeeper is installed in the cluster, the API server triggers the Gatekeeper admission webhook to process the admission request whenever a resource in the cluster is created, updated, or deleted. During validation, Gatekeeper acts as a bridge between the API server and OPA: the API server enforces all policies executed by OPA.
- Policies / Constraints – A Constraint is a declaration that a system must meet a given set of requirements. Each Constraint is written in Rego, a declarative query language used to enumerate instances of data that violate the expected state of the system. All Constraints are evaluated as a logical AND: if one Constraint is not satisfied, the whole request is rejected.
- Audit Functionality – Enables periodic evaluations of replicated resources against the Constraints enforced in the cluster to detect pre-existing misconfigurations.
- Data Replication – Required by Constraints that need access to objects in the cluster other than the object under evaluation. For example, a Constraint that enforces the uniqueness of ingress hostnames must have access to all other ingresses in the cluster (see the sketch after this list).
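Replication is configured through Gatekeeper's Config resource by listing the kinds that should be synced into OPA. The snippet below is a minimal sketch under a couple of assumptions: the default gatekeeper-system namespace and the ingress-uniqueness use case from the example above; the Ingress group/version shown is also an assumption, so adjust the syncOnly entries to whatever objects your own Constraints need to see.
apiVersion: config.gatekeeper.sh/v1alpha1
kind: Config
metadata:
  # Gatekeeper expects a single Config named "config" in its own namespace
  name: config
  namespace: gatekeeper-system
spec:
  sync:
    syncOnly:
      # Use the Ingress API group/version your cluster actually serves
      - group: "networking.k8s.io"
        version: "v1beta1"
        kind: "Ingress"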
#1. Implementing a Simple Constraint / ConstraintTemplate with OPA Gatekeeper
In this example, we will define a new constraint template and a constraint that require specific labels to be present and valid. Here, I’m going to use the samples that ship with the OPA Gatekeeper repository.
Installation
We have our Kubernetes cluster ready, so let’s install Gatekeeper with a prebuilt image. To deploy a released version of Gatekeeper on the cluster, run the following command:
kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper/master/deploy/gatekeeper.yaml
The Gatekeeper roles and CRDs are now installed. The next step is to create a new constraint template that requires labels on namespaces to be present and valid.
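Before moving on, it is worth confirming that the Gatekeeper controller is running. The gatekeeper-system namespace used below is the one created by the default deployment manifest.
# Verify that the Gatekeeper pods are up and the CRDs were registered
kubectl get pods -n gatekeeper-system
kubectl get crds | grep gatekeeper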
Define constraint template(s)
A ConstraintTemplate defines what needs to be enforced and the schema of the constraint. Notice the openAPIV3Schema and targets fields: the openAPIV3Schema section describes the parameters a constraint may pass in, which lets users fine-tune the behavior of a constraint, while targets holds the Rego that implements the check.
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8srequiredlabels
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredLabels
        listKind: K8sRequiredLabelsList
        plural: k8srequiredlabels
        singular: k8srequiredlabels
      validation:
        # Schema for the `parameters` field
        openAPIV3Schema:
          properties:
            labels:
              type: array
              items: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequiredlabels

        violation[{"msg": msg, "details": {"missing_labels": missing}}] {
          provided := {label | input.review.object.metadata.labels[label]}
          required := {label | label := input.parameters.labels[_]}
          missing := required - provided
          count(missing) > 0
          msg := sprintf("you must provide labels: %v", [missing])
        }
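Because the Rego above is plain OPA policy code, it can also be exercised outside the cluster with the test framework mentioned earlier, assuming the rego block has been saved to its own k8srequiredlabels.rego file. The test file name and the sample admission inputs below are my own assumptions; they simply simulate reviews of namespaces with and without the required gatekeeper label, and can be run with opa test against the policy package.
# k8srequiredlabels_test.rego – run with: opa test .
package k8srequiredlabels

test_missing_label_is_reported {
  # A namespace without the required label should produce one violation
  inp := {
    "review": {"object": {"metadata": {"name": "demo", "labels": {"team": "payments"}}}},
    "parameters": {"labels": ["gatekeeper"]}
  }
  results := violation with input as inp
  count(results) == 1
}

test_present_label_passes {
  # A namespace that carries the label should produce no violations
  inp := {
    "review": {"object": {"metadata": {"name": "demo", "labels": {"gatekeeper": "enabled"}}}},
    "parameters": {"labels": ["gatekeeper"]}
  }
  results := violation with input as inp
  count(results) == 0
}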
Install the ConstraintTemplate with the following command:
kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper/master/demo/basic/templates/k8srequiredlabels_template.yaml
The ConstraintTemplate is created; the next step is to define a constraint and apply it to namespaces.
Define constraints
The following constraint uses the K8sRequiredLabels constraint template defined in the previous step to make sure the gatekeeper label is defined on all namespaces.
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: ns-must-have-gk
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]
  parameters:
    labels: ["gatekeeper"]
The constraint above uses the match field, which defines the scope of objects to which a given constraint will be applied:
- kinds – accepts a list of objects with apiGroups and kinds fields that list the groups/kinds of objects to which the constraint will apply.
- namespaces – a list of namespace names. If defined, a constraint will only apply to resources in a listed namespace.
- labelSelector, namespaceSelector – standard Kubernetes label and namespace selectors (a sketch using these scoping fields follows this list).
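To illustrate those scoping fields, here is a hypothetical variant of a K8sRequiredLabels constraint that only applies to Pods in specific namespaces that already carry a particular label; the constraint name, namespaces, and labels below are all made up for the example.
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: pods-must-have-owner
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
    # Only evaluate Pods created in these namespaces (example values)
    namespaces: ["dev", "staging"]
    # And only Pods that already carry this label (example selector)
    labelSelector:
      matchLabels:
        app: demo
  parameters:
    labels: ["owner"]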
Install the ns-must-have-gk Constraint with the following command:
kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper/master/demo/basic/constraints/all_ns_must_have_gatekeeper.yaml
Now that the ConstraintTemplate and Constraint are enabled, let’s try to create a new namespace without a label.
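Creating a namespace without the required label is a quick way to trigger the policy. The namespace name below is arbitrary, and the exact wording of the error can vary between Gatekeeper versions, but you should see the request denied by the validation.gatekeeper.sh webhook with a message about the missing gatekeeper label.
kubectl create namespace policy-test
# Expected (approximate) outcome:
#   Error from server: admission webhook "validation.gatekeeper.sh" denied the request:
#   [ns-must-have-gk] you must provide labels: {"gatekeeper"}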
As you can see, OPA Gatekeeper has prevented the creation of a namespace without labels. Next, let’s look at an example of how to enforce a container limits policy.
#2. Implementing a Container Limits Constraint/ConstraintTemplate with OPA Gatekeeper
In this example, we will define a new constraint template and constraint that require container limits to be specified in Pod definitions.
We are going to reuse the Kubernetes cluster with Gatekeeper components installed in the previous demo. Our first step is to define the constraint template.
Define constraint template(s)
A ConstraintTemplate defines what needs to be enforced and the schema of the constraint. Here the limits check is defined in k8scontainterlimits_template.yaml from the Gatekeeper agilebank demo.
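I won’t reproduce the full template here. As a rough, simplified sketch of the idea (this is not the exact Rego from the repo file, and it omits the unit handling the real template performs), the policy inspects each container in the admitted Pod and flags the ones that declare no limits; the real version additionally compares the declared cpu/memory limits against the constraint’s parameters.
package k8scontainerlimits

# Simplified sketch: every container in the admitted Pod must declare resource limits.
# The real template in the repo also parses values such as "200m" and "1Gi" and
# rejects containers whose limits exceed the constraint's cpu/memory parameters.
violation[{"msg": msg}] {
  container := input.review.object.spec.containers[_]
  not container.resources.limits
  msg := sprintf("container <%v> has no resource limits", [container.name])
}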
Install the ConstraintTemplate with the following command:
kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper/master/demo/agilebank/templates/k8scontainterlimits_template.yaml
Define constraint
The next step is to define a constraint to make sure that CPU and memory limits are equal to or less than 200m and 1Gi, respectively.
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sContainerLimits
metadata:
  name: container-must-have-limits
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
  parameters:
    cpu: "200m"
    memory: "1Gi"
Now that the ConstraintTemplate and Constraint are created, let’s try creating a new Pod without limits (this manifest comes from the Gatekeeper agilebank demo).
apiVersion: v1
kind: Pod
metadata:
  name: opa
  namespace: production
  labels:
    owner: me.agilebank.demo
spec:
  containers:
    - name: opa
      image: openpolicyagent/opa:0.9.2
      args:
        - "run"
        - "--server"
        - "--addr=localhost:8080"
kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper/master/demo/agilebank/bad_resources/opa_no_limits.yaml
As you can see, the ConstraintTemplate and Constraint restrict Pod creation without limits.
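For comparison, a Pod that declares limits within the allowed values should be admitted. The manifest below is a sketch of my own (it is not part of the Gatekeeper demo), with limits chosen to stay under the 200m CPU and 1Gi memory caps set by container-must-have-limits.
apiVersion: v1
kind: Pod
metadata:
  name: opa-with-limits
  namespace: production
  labels:
    owner: me.agilebank.demo
spec:
  containers:
    - name: opa
      image: openpolicyagent/opa:0.9.2
      args:
        - "run"
        - "--server"
        - "--addr=localhost:8080"
      resources:
        limits:
          # Within the limits allowed by container-must-have-limits
          cpu: "100m"
          memory: "512Mi"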
Congrats! We have successfully enforced policies with the OPA Gatekeeper Policy engine.
To uninstall the Gatekeeper policy engine, first clean up old Constraints, ConstraintTemplates, and the Config resource in the gatekeeper-system namespace, and only then uninstall Gatekeeper itself. Cleaning up first ensures that all finalizers are removed by Gatekeeper; otherwise, the finalizers will need to be removed manually. Currently, the uninstall action only removes the Gatekeeper system.
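Assuming Gatekeeper was installed from the prebuilt manifest used earlier, the cleanup and uninstall look roughly like this (the --all deletion is a convenience for this demo; in a real cluster you would remove your own constraints deliberately):
# Deleting the ConstraintTemplates also removes the Constraint CRDs they created;
# delete the sync Config too, if you created one
kubectl delete constrainttemplates --all
kubectl delete configs.config.gatekeeper.sh config -n gatekeeper-system --ignore-not-found
# Finally remove the Gatekeeper system itself
kubectl delete -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper/master/deploy/gatekeeper.yaml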
Like this post? Don’t forget to share it!
Useful Resources:
- Open Policy Agent
- Gatekeeper Github
- Gatekeeper Samples