CircleCI Server v3.x Cluster Management

Network Topology

A server installation runs three different types of compute instances: the Kubernetes nodes, Nomad clients, and external VMs.

It is highly recommended that you deploy these into separate subnets with distinct CIDR blocks. This will make it easier for you to control traffic between the different components of the system and isolate them from each other.

As always, the rule to make as many resources as possible private applies. If your users will access your CircleCI server installation via VPN, there is no need to assign any public IP addresses at all, as long as you have a working NAT gateway set up. Otherwise, you will need at least one public subnet for the CircleCI server Traefik load balancer.

However, in this case it is also recommended to place the Nomad clients and VMs in a public subnet, so that your users can SSH into jobs, and to scope access via networking rules.
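As an illustration only, the following minimal boto3 sketch shows what carving the three instance groups into separate subnets might look like on AWS. The VPC ID and CIDR blocks are placeholder assumptions, not prescribed values:

```python
import boto3

ec2 = boto3.client("ec2")

VPC_ID = "vpc-0123456789abcdef0"  # hypothetical VPC ID

# One subnet per instance group, each with a distinct CIDR block,
# so traffic between components can be controlled per subnet.
subnets = {
    "k8s-nodes":     "10.0.1.0/24",  # Kubernetes nodes
    "nomad-clients": "10.0.2.0/24",  # Nomad clients
    "external-vms":  "10.0.3.0/24",  # external VMs
}

for name, cidr in subnets.items():
    resp = ec2.create_subnet(VpcId=VPC_ID, CidrBlock=cidr)
    subnet_id = resp["Subnet"]["SubnetId"]
    ec2.create_tags(Resources=[subnet_id],
                    Tags=[{"Key": "Name", "Value": f"circleci-{name}"}])
```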

Custom subnetting is currently not supported for GCP; support will be added in a future release.

Network Traffic

This section spells out the minimum requirements for a server installation to work. Depending on your workloads, you might need to add additional egress rules for Nomad clients and VMs. Because nomenclature differs between cloud providers, you will probably need to implement these rules using firewall rules and/or security groups.

Where you see "external," this usually means all external IPv4 addresses. Depending on your particular setup, you might be able to be more specific (e.g., if you are using a proxy for all external traffic).

It is assumed that you have configured the load balancers for Nomad, vm-service and output processor as internal load balancers; this is the default.

The rules spelled out here are assumed to be stateful and for TCP connections only, unless stated otherwise. If you are working with stateless rules, you will need to create matching ingress or egress rules to the ones listed here.
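For example, on AWS, security groups are stateful while network ACLs are stateless; with a stateless network ACL, each ingress rule needs a matching egress rule for the return traffic. A hypothetical sketch (the ACL ID is a placeholder):

```python
import boto3

ec2 = boto3.client("ec2")
NACL_ID = "acl-0123456789abcdef0"  # hypothetical network ACL ID

# Stateless ingress rule: allow HTTPS in ...
ec2.create_network_acl_entry(
    NetworkAclId=NACL_ID, RuleNumber=100, Protocol="6",  # 6 = TCP
    RuleAction="allow", Egress=False,
    CidrBlock="0.0.0.0/0", PortRange={"From": 443, "To": 443},
)

# ... and the matching egress rule for the return traffic on ephemeral
# ports. A stateful security group would allow this reply traffic implicitly.
ec2.create_network_acl_entry(
    NetworkAclId=NACL_ID, RuleNumber=100, Protocol="6",
    RuleAction="allow", Egress=True,
    CidrBlock="0.0.0.0/0", PortRange={"From": 1024, "To": 65535},
)
```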

Kubernetes Load Balancers

Depending on your setup, your load balancers might be transparent (i.e. they are not treated as a distinct layer in your networking topology). In this case, you can apply the rules from this section directly to the underlying destination or source of the network traffic. Refer to the documentation of your cloud provider to make sure you understand how to correctly apply networking security rules given the type of load balancing you are using with your installation.

Ingress

If the traffic rules for your load balancers have not been created automatically, here are their respective ports:

Name               Port   Source          Purpose
*-server-traefik   80     External        User Interface & Frontend API
*-server-traefik   443    External        User Interface & Frontend API
vm-service         3000   Nomad clients   Communication with Nomad clients
nomad              4647   Nomad clients   Communication with Nomad clients
output-processor   8585   Nomad clients   Communication with Nomad clients
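If you do need to create these rules yourself, a sketch along these lines may serve as a starting point (boto3, with a placeholder security group ID; 10.0.2.0/24 is an assumed Nomad client subnet):

```python
import boto3

ec2 = boto3.client("ec2")
LB_SG = "sg-0aaaaaaaaaaaaaaaa"      # hypothetical load balancer security group
NOMAD_CLIENT_CIDR = "10.0.2.0/24"   # assumed Nomad client subnet

ec2.authorize_security_group_ingress(
    GroupId=LB_SG,
    IpPermissions=[
        # Traefik: UI and frontend API from external sources
        {"IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
         "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "UI & frontend API"}]},
        {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
         "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "UI & frontend API"}]},
        # vm-service, nomad, output-processor: Nomad clients only
        *[{"IpProtocol": "tcp", "FromPort": p, "ToPort": p,
           "IpRanges": [{"CidrIp": NOMAD_CLIENT_CIDR,
                         "Description": "Nomad client traffic"}]}
          for p in (3000, 4647, 8585)],
    ],
)
```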

Egress

The only type of egress needed is TCP traffic to the K8s nodes on the K8s load balancer ports (30000-32767). This is not needed if your load balancers are transparent.
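As a sketch, the corresponding security group egress rule might look like this (the group ID is a placeholder and 10.0.1.0/24 is an assumed K8s node subnet):

```python
import boto3

ec2 = boto3.client("ec2")

# Allow the load balancers to reach the K8s NodePort range.
ec2.authorize_security_group_egress(
    GroupId="sg-0aaaaaaaaaaaaaaaa",  # hypothetical load balancer security group
    IpPermissions=[{"IpProtocol": "tcp", "FromPort": 30000, "ToPort": 32767,
                    "IpRanges": [{"CidrIp": "10.0.1.0/24",  # assumed K8s node subnet
                                  "Description": "K8s NodePort range"}]}],
)
```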

Common Rules for Compute Instances

These rules apply to all compute instances, but not to the load balancers.

Ingress

If you want to access your instances via SSH, you will need to open port 22 for TCP connections for the instances in question. It is recommended to scope the rule as closely as possible to allowed source IPs and/or only add such a rule ad hoc when needed.
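One way to keep such a rule ad hoc is to add it for a maintenance window and revoke it afterwards, scoped to a known admin source range (all values below are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")
ssh_rule = {
    "IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
    "IpRanges": [{"CidrIp": "203.0.113.0/29",  # assumed admin source range
                  "Description": "temporary SSH access"}],
}

# Open SSH only for the duration of the maintenance work ...
ec2.authorize_security_group_ingress(
    GroupId="sg-0bbbbbbbbbbbbbbbb", IpPermissions=[ssh_rule])

# ... and remove the rule again once done.
ec2.revoke_security_group_ingress(
    GroupId="sg-0bbbbbbbbbbbbbbbb", IpPermissions=[ssh_rule])
```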

Egress

You most likely want all of your instances to access internet resources. This requires you to allow egress for UDP and TCP on port 53 to the DNS server within your VPC, as well as TCP ports 80 and 443 for HTTP and HTTPS traffic, respectively. Instances building jobs (i.e. the Nomad clients and external VMs) will also likely need to pull code from your VCS via SSH (TCP port 22). SSH is also used to communicate with external VMs, so egress on port 22 should be allowed from all instances to at least the VM subnet and your VCS.
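Expressed as security group egress rules, this might look as follows (a sketch; the group ID is a placeholder, 10.0.0.2 assumes the AWS default VPC resolver address, and 10.0.3.0/24 stands in for the VM subnet):

```python
import boto3

ec2 = boto3.client("ec2")

ec2.authorize_security_group_egress(
    GroupId="sg-0bbbbbbbbbbbbbbbb",  # hypothetical instance security group
    IpPermissions=[
        # DNS to the VPC resolver, both UDP and TCP
        {"IpProtocol": "udp", "FromPort": 53, "ToPort": 53,
         "IpRanges": [{"CidrIp": "10.0.0.2/32", "Description": "DNS"}]},
        {"IpProtocol": "tcp", "FromPort": 53, "ToPort": 53,
         "IpRanges": [{"CidrIp": "10.0.0.2/32", "Description": "DNS"}]},
        # HTTP/HTTPS to the internet
        {"IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
         "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "HTTP"}]},
        {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
         "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "HTTPS"}]},
        # SSH to the VM subnet (and your VCS, if reachable by CIDR)
        {"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
         "IpRanges": [{"CidrIp": "10.0.3.0/24", "Description": "SSH to VMs"}]},
    ],
)
```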

Kubernetes Nodes

Intra-node traffic

The traffic within your K8s cluster is regulated by networking policies by default. For most purposes, this should be sufficient to regulate the traffic between pods and there is no additional requirement to pare down traffic between K8s nodes any further (i.e. it is fine to allow all traffic between K8s nodes).

To make use of networking policies within your cluster, you may need to take additional steps, depending on your cloud provider and setup; refer to your provider's documentation on Kubernetes network policy support to get started.
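As a starting point, a default-deny ingress policy for a namespace could be created like this (a sketch using the Python Kubernetes client; the namespace name is an assumption):

```python
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() inside the cluster

# Deny all ingress to pods in the namespace by default; further
# policies can then whitelist the traffic that is actually needed.
policy = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="default-deny-ingress",
                                 namespace="circleci-server"),  # assumed namespace
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(),  # empty selector = all pods
        policy_types=["Ingress"],
    ),
)
client.NetworkingV1Api().create_namespaced_network_policy(
    namespace="circleci-server", body=policy)
```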

Ingress

If you are using a managed service, you can check the rules created for the traffic coming from the load balancers and the allowed port range. The standard port range for K8s load balancers (30000-32767) should be all that is needed here for ingress. If you are using transparent load balancers, you will need to apply the ingress rules listed for load balancers above.
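If you do need to add this rule yourself, a sketch referencing the load balancer security group (both group IDs are placeholders) might look like:

```python
import boto3

ec2 = boto3.client("ec2")

# Allow the NodePort range in from the load balancers, referencing the
# load balancer security group rather than a CIDR block.
ec2.authorize_security_group_ingress(
    GroupId="sg-0cccccccccccccccc",   # hypothetical K8s node security group
    IpPermissions=[{"IpProtocol": "tcp", "FromPort": 30000, "ToPort": 32767,
                    "UserIdGroupPairs": [{"GroupId": "sg-0aaaaaaaaaaaaaaaa",
                                          "Description": "K8s load balancers"}]}],
)
```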

Egress

Port          Destination     Purpose
2376          VMs             Communication with VMs
4647          Nomad clients   Communication with the Nomad clients
all traffic   other nodes     Allow intra-cluster traffic
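A sketch of these rules with boto3 (placeholder group ID and assumed subnet CIDRs; note the self-referencing rule for intra-cluster traffic):

```python
import boto3

ec2 = boto3.client("ec2")
NODE_SG = "sg-0cccccccccccccccc"   # hypothetical K8s node security group

ec2.authorize_security_group_egress(
    GroupId=NODE_SG,
    IpPermissions=[
        # Docker daemon on the external VMs
        {"IpProtocol": "tcp", "FromPort": 2376, "ToPort": 2376,
         "IpRanges": [{"CidrIp": "10.0.3.0/24", "Description": "VMs"}]},
        # Nomad server -> Nomad clients
        {"IpProtocol": "tcp", "FromPort": 4647, "ToPort": 4647,
         "IpRanges": [{"CidrIp": "10.0.2.0/24", "Description": "Nomad clients"}]},
        # All traffic between K8s nodes (self-referencing rule)
        {"IpProtocol": "-1",
         "UserIdGroupPairs": [{"GroupId": NODE_SG,
                               "Description": "intra-cluster traffic"}]},
    ],
)
```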

Nomad Clients

Nomad clients do not need to communicate with each other; you can block traffic between Nomad client instances completely.

Ingress

Port          Source      Purpose
4647          K8s nodes   Communication with Nomad server
64535-65535   External    Rerun jobs with SSH functionality

Egress

Port   Destination                      Purpose
2376   VMs                              Communication with VMs
3000   VM service load balancer         Internal communication
4647   Nomad load balancer              Internal communication
8585   Output processor load balancer   Internal communication
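Taken together, the Nomad client rules above could be sketched like this (boto3; the group IDs, subnet CIDRs, and load balancer CIDR are all assumptions):

```python
import boto3

ec2 = boto3.client("ec2")
NOMAD_SG = "sg-0dddddddddddddddd"  # hypothetical Nomad client security group

# Ingress: Nomad RPC from the K8s nodes, plus the SSH rerun port range.
ec2.authorize_security_group_ingress(
    GroupId=NOMAD_SG,
    IpPermissions=[
        {"IpProtocol": "tcp", "FromPort": 4647, "ToPort": 4647,
         "IpRanges": [{"CidrIp": "10.0.1.0/24", "Description": "Nomad server"}]},
        {"IpProtocol": "tcp", "FromPort": 64535, "ToPort": 65535,
         "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "SSH reruns"}]},
    ],
)

# Egress: VMs plus the three internal load balancers.
egress = [(2376, "10.0.3.0/24", "VMs"),
          (3000, "10.0.4.0/28", "VM service LB"),       # assumed LB subnet
          (4647, "10.0.4.0/28", "Nomad LB"),
          (8585, "10.0.4.0/28", "output processor LB")]
ec2.authorize_security_group_egress(
    GroupId=NOMAD_SG,
    IpPermissions=[{"IpProtocol": "tcp", "FromPort": p, "ToPort": p,
                    "IpRanges": [{"CidrIp": cidr, "Description": desc}]}
                   for p, cidr, desc in egress],
)
```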

External VMs

Similar to Nomad clients, there is no need for external VMs to communicate with each other.

Ingress

Port          Source          Purpose
22            K8s nodes       Internal communication
22            Nomad clients   Internal communication
2376          K8s nodes       Internal communication
2376          Nomad clients   Internal communication
64535-65535   External        Rerun jobs with SSH functionality
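A corresponding ingress sketch for the VM security group (the group ID is a placeholder; 10.0.1.0/24 and 10.0.2.0/24 stand in for the K8s node and Nomad client subnets):

```python
import boto3

ec2 = boto3.client("ec2")

ec2.authorize_security_group_ingress(
    GroupId="sg-0eeeeeeeeeeeeeeee",  # hypothetical external VM security group
    IpPermissions=[
        # SSH and Docker from the K8s node and Nomad client subnets
        *[{"IpProtocol": "tcp", "FromPort": p, "ToPort": p,
           "IpRanges": [{"CidrIp": cidr, "Description": "internal traffic"}
                        for cidr in ("10.0.1.0/24", "10.0.2.0/24")]}
          for p in (22, 2376)],
        # SSH rerun port range from external sources
        {"IpProtocol": "tcp", "FromPort": 64535, "ToPort": 65535,
         "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "SSH reruns"}]},
    ],
)
```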

Egress

You will only need the common egress rules for internet access and SSH access to your VCS.


