This lets you create, modify and destroy ZeroTier networks and members through Terraform.
Since this isn't maintained by HashiCorp, you have to install it manually. There are two main ways:
Download and unzip the latest release.
Then, move the binary to your terraform plugins directory. The docs don't fully describe where this is:
~/.terraform.d/plugins/darwin_amd64
~/.terraform.d/plugins/linux_amd64
$APPDATA\terraform.d\plugins\windows_amd64
Install Go v1.12+ on your machine, clone the source, and let make install do the rest.
brew install go # or upgrade
git clone https://github.com/cormacrelf/terraform-provider-zerotier
cd terraform-provider-zerotier
make install
# it may take a while to download `hashicorp/terraform`. be patient.
Install go 1.12+ from your favourite package manager or from source. Then:
git clone https://github.com/cormacrelf/terraform-provider-zerotier
cd terraform-provider-zerotier
make install
# it may take a while to download `hashicorp/terraform`. be patient.
In PowerShell, running as Administrator:
choco install golang
choco install zip
choco install git # for git-bash
choco install make
In a shell that has Make, like Git-Bash:
git clone https://github.com/cormacrelf/terraform-provider-zerotier
cd terraform-provider-zerotier
make install
# it may take a while to download `hashicorp/terraform`. be patient.
Before you can use a new provider, you must run terraform init in your project, where the root .tf file is.
Use export ZEROTIER_API_KEY="...", or define it in a provider block:
provider "zerotier" {
api_key = "..."
  ## Optional: override for a DIY controller
  ## Can be overridden by the ZEROTIER_CONTROLLER_URL env var or this block
## Defaults to https://my.zerotier.com/api when not provided
# controller_url = "https://my.zerotier.com/api"
}
To achieve a similar configuration to the ZeroTier default, do this:
variable "zt_cidr" { default = "10.0.96.0/24" }
resource "zerotier_network" "your_network" {
name = "your_network_name"
# auto-assign v4 addresses to devices
assignment_pool {
cidr = "${var.zt_cidr}"
}
# route requests to the cidr block on each device through zerotier
route {
target = "${var.zt_cidr}"
}
}
If you don't specify either an assignment pool or a managed route, the network is still perfectly valid, but it won't be very useful, so try to do both.
You can have more than one assignment pool, and more than one route. Multiple routes are useful for connecting two networks together, like so:
variable "zt_cidr" { default = "10.96.0.0/24" }
variable "other_network" { default = "10.41.0.0/24" }
locals {
# the first address is reserved for the gateway
gateway_ip = "${cidrhost(var.zt_cidr, 1)}" # eg 10.96.0.1
}
resource "zerotier_network" "your_network" {
name = "your_network_name"
assignment_pool {
first = "${cidrhost(var.zt_cidr, 2)}" # eg 10.96.0.2
last = "${cidrhost(var.zt_cidr, -2)}" # eg 10.96.0.254
}
route {
target = "${var.zt_cidr}"
}
route {
target = "${var.other_network}"
via = "${local.gateway_ip}"
}
}
Then go ahead and make an API call on your gateway's provisioner to set the IP address manually. See below (auto-joining).
Best of all, you can specify rules just like in the web interface. You could even use a Terraform template_file data source to insert variables (see the sketch after the example below).
# ztr.conf
# drop non-v4/v6/arp traffic
drop not ethertype ipv4 and not ethertype arp and not ethertype ipv6;
# disallow tcp connections except by specific grant in a capability
break chr tcp_syn and not chr tcp_ack;
# allow ssh from some devices
cap ssh
id 1000
accept ipprotocol tcp and dport 22;
;
# allow everything else
accept;
resource "zerotier_network" "your_network" {
name = "your_network_name"
assignment_pool {
cidr = "${var.zt_cidr}"
}
route {
target = "${var.zt_cidr}"
}
rules_source = "${file("ztr.conf")}"
}
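As a rough sketch of the template_file idea (the file name ztr.conf.tpl and the ssh_port variable are made up for illustration, not anything the provider requires):

# hypothetical template "ztr.conf.tpl" containing, for example:
#   accept ipprotocol tcp and dport ${ssh_port};
data "template_file" "rules" {
  template = "${file("ztr.conf.tpl")}"

  vars {
    ssh_port = 22
  }
}

resource "zerotier_network" "templated" {
  name = "templated_network"

  assignment_pool {
    cidr = "${var.zt_cidr}"
  }

  route {
    target = "${var.zt_cidr}"
  }

  rules_source = "${data.template_file.rules.rendered}"
}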
Unfortunately, it is not possible for a machine to be added to a network without the machine itself reaching out to ZeroTier.
However, you can pre-approve a machine if you already know its Node ID. That isn't the case for dynamically created machines like cloud instances; it is more useful for your developer machine, which you might want to give a bunch of capabilities and pre-approve, so that when you do paste in a network ID you don't have to use the web UI to do the rest.
Basic example to pre-approve:
resource "zerotier_member" "dev_machine" {
node_id = "..."
network_id = "${zerotier_network.net.id}"
name = "dev machine"
}
Full list of properties:
resource "zerotier_member" "hector" {
# required: the known id that a particular machine shows
# (e.g. in the Mac menu bar app, or the Windows tray, Linux CLI output)
node_id = "a1511e5bf5"
# required: the network id
network_id = "${zerotier_network.net.id}"
# the rest are optional
name = "hector"
description = "..."
authorized = true
# whether to show it in the list in the Web UI
hidden = false
# e.g.
# cap administrator
# id 1000
# accept;
# ;
capabilities = [ 1000 ]
# e.g.
# tag department
# id 2000
# enum 100 marketing
# enum 200 accounting
# ;
tags = {
"2000" = 100 # marketing
}
# default (false) means this member has a managed IP address automatically assigned.
# without ip_assignments being configured, the member won't have any managed IPs.
no_auto_assign_ips = false
# will happily override any auto-assigned v4 addresses (and v6 in some configurations)
ip_assignments = [
"10.0.96.15"
]
# not known whether this does anything or not
offline_notify_delay = 0
# see ZeroTier Manual section on L2/ethernet bridging
allow_ethernet_bridging = true
}
Things are simple when you already know your Node ID. A local-exec provisioner can be used to execute sudo zerotier-cli join [nwid] when a network is created, and the member will be auto-approved using a zerotier_member resource. You will have to type your password (once) during terraform apply, or you will have to apply as root already.
The provisioner should be defined on a null_resource that is triggered when the network ID changes. That way you can re-join by marking the null resource as deleted, without deleting the entire network.
If you had another machine nearby (like a CI box), you could also run join on it using SSH or similar. Or just accept the one-off menial task.
resource "zerotier_network" "net" { ... }
resource "zerotier_member" "dev_machine" {
network_id = "${zerotier_network.net.id}"
node_id = "... (see above)"
name = "dev machine"
capabilities = [ 1000, 2000 ]
}
resource "null_resource" "joiner" {
triggers {
network_id = "${zerotier_network.net.id}"
}
provisioner "local-exec" {
command = "sudo zerotier-cli join ${zerotier_network.net.id}"
}
}
Using zerotier-cli join XXX doesn't require an API key, but that member won't be approved by default. On the other hand, the zerotier_member resource cannot force a machine to join; it can only (pre-)approve and (pre-)configure membership of a machine whose Node ID is already known, which is not the case for a dynamically created instance on a cloud provider.
The solution is to pass the key to a provisioner and use the ZeroTier REST API directly from the instance itself. This is the basic pattern, and it applies whether you're using Terraform provisioners, running Docker entrypoint scripts with environment variables, or running Ansible scripts (etc).
Any way you do it, you will need to have your ZT API key accessible to Terraform. Provide the environment variable export TF_VAR_zerotier_api_key="..." so you can access the key outside the provider definition, and do something like this:
variable "zerotier_api_key" {}
provider "zerotier" {
api_key = "${var.zerotier_api_key}"
}
resource "zerotier_network" "example" {
# ...
}
You might then insert "${var.zerotier_api_key}" into a kubernetes_secret resource, or an aws_ssm_parameter, or directly into a provisioner as a script argument.
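For instance, a minimal sketch of the SSM route (the parameter name /zerotier/api_key is only an illustration, not something the provider requires):

resource "aws_ssm_parameter" "zerotier_api_key" {
  # hypothetical parameter name; instances can read this at boot and feed it to a join script
  name  = "/zerotier/api_key"
  type  = "SecureString"
  value = "${var.zerotier_api_key}"
}

To use a standard Terraform provisioner, do this: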
resource "aws_instance" "web" {
provisioner "file" {
source = "join.sh"
destination = "/tmp/join.sh"
}
provisioner "remote-exec" {
inline = [
"sudo sh /tmp/join.sh ${var.zerotier_api_key} ${var.zerotier_network.example.id}"
]
}
}
Note the sudo. join.sh is like the following:
ZT_API_KEY="$1"
ZT_NET="$2"
# basically follow this guide
# https://www.zerotier.com/download.shtml
curl -s 'https://pgp.mit.edu/pks/lookup?op=get&search=0x1657198823E52A61' | gpg --import && \
if z=$(curl -s 'https://install.zerotier.com/' | gpg); then echo "$z" | sudo bash; fi
zerotier-cli join "$ZT_NET"
sleep 5
NODE_ID=$(zerotier-cli info | awk '{print $3}')
echo '{"config":{"authorized":true}}' | curl -X POST -H 'Authorization: Bearer $ZT_API_KEY' -d @- \
"https://my.zerotier.com/api/network/$ZT_NET/member/$NODE_ID"
You could even set a static IP there, by POSTing the following instead. This is useful if you want the instance to act as a gateway with a known IP, like in the multiple routes example above. You can set any field from the ZeroTier API Reference listing for POST /api/network/{networkId}/member/{nodeId}.
{
"name": "a-single-tear",
"config": {
"authorized": true,
"ipAssignments": ["10.96.0.1"],
"capabilities": [ 1000, 2000 ],
"tags": [ [2000, 100] ]
}
}
If you have created a network with an extra route block through a gateway IP (as in the multiple routes example above), and auto-joined an instance in your VPC and assigned it that gateway IP, then you're almost ready to replace a VPN gateway. This can be cheaper and more flexible, and you can probably get by on a t2.nano. The only missing pieces are packet forwarding from ZT to VPC, and getting packets back out.
It is preferable to set up your VPC route tables to route the ZeroTier CIDR through your instance. If you have zero NAT, you will never have any trouble with strange protocols, and you squeeze more performance out of the t2.nano you set up. To be fair, on a t2.nano you are limited much more by its link speed than anything else, and protocols that don't support NAT are rare in primarily TCP/HTTP environments. NAT can be simpler to set up if you have a lot of dynamically created subnets.
The main configuration difference between the two approaches is the source IP seen when security group rules are evaluated. With plain packet forwarding and a route table return path, you need an ingress rule for your ZeroTier CIDR on each service in your VPC, say ingress tcp/80, 10.96.0.0/24. This is not more powerful, but it is equally easy with Terraform. You can't control where ZT assigns members within the assignment pools, and you would probably regulate that with your ZT rules/capabilities anyway. With MASQUERADE, you instead allow ingress from the gateway's security group.
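As a sketch of those two styles of rule (the security group names web and zt_gateway are placeholders; you would use one variant or the other, not both):

# plain packet forwarding: allow the ZeroTier CIDR directly
resource "aws_security_group_rule" "zt_http_in" {
  type              = "ingress"
  protocol          = "tcp"
  from_port         = 80
  to_port           = 80
  cidr_blocks       = ["${var.zt_cidr}"]
  security_group_id = "${aws_security_group.web.id}"
}

# MASQUERADE: allow the gateway's security group instead
resource "aws_security_group_rule" "zt_http_in_nat" {
  type                     = "ingress"
  protocol                 = "tcp"
  from_port                = 80
  to_port                  = 80
  source_security_group_id = "${aws_security_group.zt_gateway.id}"
  security_group_id        = "${aws_security_group.web.id}"
}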
Assuming the following:
networks:
zerotier = 10.96.0.0/24
aws vpc = 10.41.0.0/16
interfaces:
you = { zt0: 10.96.0.37 }
gateway = { zt0: 10.96.0.1, eth0: 10.41.1.15 }
ec2 in vpc = { eth0: 10.41.2.67 }
You'll need to enable Linux kernel IPv4 forwarding. Use your distro's version of echo 1 > /proc/sys/net/ipv4/ip_forward, and make it permanent by editing/appending to /etc/sysctl.conf. On Ubuntu, that's:
# requires sudo
# set up packet forwarding now
echo 1 > /proc/sys/net/ipv4/ip_forward
# make it permanent
echo 'net.ipv4.ip_forward=1' >> /etc/sysctl.conf
sysctl -p /etc/sysctl.conf
It's a very good idea to have some FORWARD rules either way you do the routing, otherwise the gateway might be too useful as a nefarious pivot point into, inside or outbound from your VPC.
# requires sudo
iptables -F
# packets flow freely from zt to vpc
iptables -A FORWARD -i zt0 -o eth0 -s "$ZT_CIDR" -d "$VPC_CIDR" -j ACCEPT
# only allow stateful return in the other direction
# i.e. can't establish new outbound connections going the other way
iptables -A FORWARD -i eth0 -o zt0 -s "$VPC_CIDR" -d "$ZT_CIDR" -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A FORWARD -j REJECT
Load both of these scripts with your provisioner, whatever that may be, and run them as root.
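One way to wire that up, assuming the two scripts above are saved as ip_forward.sh and forward_rules.sh (names made up for this sketch):

resource "aws_instance" "zt_gateway" {
  # ... ami, instance_type, connection block, etc.

  provisioner "file" {
    source      = "ip_forward.sh"
    destination = "/tmp/ip_forward.sh"
  }

  provisioner "file" {
    source      = "forward_rules.sh"
    destination = "/tmp/forward_rules.sh"
  }

  provisioner "remote-exec" {
    inline = [
      "sudo sh /tmp/ip_forward.sh",
      "sudo sh /tmp/forward_rules.sh",
    ]
  }
}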
You want packets to move like so:
in:
1. you.zt0(src=10.96.0.37, dest=10.41.2.67) => ZT => gateway.zt0
2. -> gateway.eth0(src=10.96.0.37, dest=10.41.2.67) => VPC normal => ec2.eth0
out:
3. ec2.eth0(src=10.41.2.67, dest=10.96.0.37) => VPC (through route table entry) => gateway.eth0
4. -> gateway.zt0(src=10.41.2.67, dest=10.96.0.37) => ZT => you.zt0
For packet forwarding, set source_dest_check = false on the instance.
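Continuing the gateway instance sketch above, that is a single attribute:

resource "aws_instance" "zt_gateway" {
  # ...
  # let the instance forward packets it neither sent nor is the destination for
  source_dest_check = false
}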
data "aws_route_table" "private" {
subnet_id = "..."
}
resource "aws_route" "zt_route" {
route_table_id = "${data.aws_route_table.private.id}"
# route all packets destined for zt network, send them through the gateway
destination_cidr_block = "${var.zt_cidr}"
instance_id = "${aws_instance.zt_gateway.id}"
}
You'll need a gateway security group with:
Any other ec2 instances you want to access from your ZT network will need:
The gateway behaves like a standard router, using iptables MASQUERADE rules. 'You' sees exactly the same src/dest information on the packets; it looks like you are communicating directly with 10.41.2.67, but the ec2.eth0 interface sees packets coming from the gateway.
in:
1. you.zt0(src=10.96.0.37, dest=10.41.2.67) => ZT => gateway.zt0
2. -> gateway.eth0(src=10.41.1.15, dest=10.41.2.67) => VPC => ec2.eth0
out:
3. ec2.eth0(src=10.41.2.67, dest=10.41.1.15) => VPC => gateway.eth0
4. -> gateway.zt0(src=10.41.2.67, dest=10.96.0.37) => ZT => you.zt0
For the iptables MASQUERADE rule on the gateway, append this to your FORWARD rules script above:
# zt0 is the zerotier virtual interface, eth0 is connected to the VPC
iptables -t nat -F
# make it look like packets are coming from the gateway, not a zerotier IP
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
iptables -t nat -A POSTROUTING -j ACCEPT
You'll need a gateway security group with:
Any other ec2 instances you want to access from your ZT network will need:
Terraform is able to import existing infrastructure. This allows you to take resources you've created by some other means and bring them under Terraform management. It is possible to import both network and member state using this provider. Currently, Terraform can only import the state information, so you still need to write the resource definition yourself before it is able to import.
Given the following definition:
provider "zerotier" {
api_key = "..."
}
resource "zerotier_network" "your_network" {
name = "your_network_name"
}
resource "zerotier_member" "dev_machine" {
node_id = "..."
network_id = "${zerotier_network.your_network.id}"
}
You should then be able to import both the network and the member resources using the terraform import command.
## Resource is identified by the network id by the API
terraform import zerotier_network.your_network ${NETWORK_ID}
## Resource is identified by the network id and the node id by the API
terraform import zerotier_member.dev_machine "${NETWORK_ID}-${NODE_ID}"
## Adjust the configuration until no change is planned
terraform plan