1 - Bootstrapping

A guide for bootstrapping the Sidero management plane

Introduction

Imagine a scenario in which you have shown up to a datacenter with only a laptop and your task is to transition a rack of bare metal machines into an HA management plane and multiple Kubernetes clusters created by that management plane. In this guide, we will go through how to create a bootstrap cluster using a Docker-based Talos cluster, provision the management plane, and pivot over to it. Guides covering post-pivot setup and subsequent cluster creation can be found in the “Guides” section of the sidebar.

Because of the design of Cluster API, there is inherently a “chicken and egg” problem with needing a Kubernetes cluster in order to provision the management plane. Talos Systems and the Cluster API community have created tools to help make this transition easier.

Prerequisites

First, you need to install the latest talosctl by running the following commands:

curl -Lo /usr/local/bin/talosctl https://github.com/talos-systems/talos/releases/latest/download/talosctl-$(uname -s | tr "[:upper:]" "[:lower:]")-amd64
chmod +x /usr/local/bin/talosctl
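
If you want to confirm that the download worked, the client version can be printed (this step is optional):

talosctl version --client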

You can read more about Talos and talosctl at talos.dev.

Next, there are two big prerequisites involved with bootstrapping Sidero: routing and DHCP setup.

From the routing side, the laptop from which you are bootstrapping must be accessible by the bare metal machines that we will be booting. In the datacenter scenario described above, the easiest way to achieve this is probably to hook the laptop onto the server rack’s subnet by plugging it into the top-of-rack switch. This is needed for TFTP, PXE booting, and for the ability to register machines with the bootstrap plane.

DHCP configuration is needed to tell the metal servers what their “next server” is when PXE booting. This configuration differs for each environment and each DHCP server, so a single guide cannot cover every setup. However, here is an example of the configuration for a Ubiquiti EdgeRouter that uses vyatta-dhcpd as the DHCP service:

This block shows the subnet setup, as well as the extra “subnet-parameters” that tell the DHCP server to include the ipxe-metal.conf file.

$ show service dhcp-server shared-network-name MetalDHCP

 authoritative enable
 subnet 192.168.254.0/24 {
     default-router 192.168.254.1
     dns-server 192.168.1.200
     lease 86400
     start 192.168.254.2 {
         stop 192.168.254.252
     }
     subnet-parameters "include "/config/ipxe-metal.conf";"
 }

Here is the ipxe-metal.conf file.

$ cat /config/ipxe-metal.conf

allow bootp;
allow booting;

next-server 192.168.1.150;
if exists user-class and option user-class = "iPXE" {
  filename "http://192.168.1.150:8081/boot.ipxe";
} elsif substring (option vendor-class-identifier, 0, 10) = "HTTPClient" {
  option vendor-class-identifier "HTTPClient";
  filename "http://192.168.1.150:8081/tftp/ipxe.efi";
} else {
  filename "ipxe.efi";
}

host talos-mgmt-0 {
    fixed-address 192.168.254.2;
    hardware ethernet d0:50:99:d3:33:60;
}

Notice that it sets a static address for the management node that we will be booting, in addition to providing the “next server” info. This “next server” IP address will match references to PUBLIC_IP found later in this guide.

Create a Local Cluster

The talosctl CLI tool has built-in support for spinning up Talos in Docker containers. Let’s use this to our advantage as an easy Kubernetes cluster to start from.

Set an environment variable called PUBLIC_IP which is the “public” IP of your machine. Note that “public” is a bit of a misnomer: we are really looking for the IP of your machine on the local network (ex: 192.168.1.150), not the IP of the node on the Docker bridge.

export PUBLIC_IP="192.168.1.150"

We can now create our Docker cluster. Issue the following to create a single-node cluster:

talosctl cluster create \
  -p 69:69/udp,8081:8081/tcp,9091:9091/tcp,50100:50100/tcp \
  --workers 0 \
  --endpoint $PUBLIC_IP

Note that there are several ports mentioned in the command above. These allow us to access the services that will get deployed on this node.
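
As a rough guide (inferred from how these ports are used later in this guide), the forwarded ports map to the following services:

  • 69/udp: TFTP, used during PXE booting
  • 8081/tcp: the HTTP server that serves iPXE assets such as boot.ipxe and ipxe.efi
  • 9091/tcp: the metadata server (patched below to listen on this port)
  • 50100/tcp: the API used by the Sidero agent to register servers (this mapping is an assumption based on common Sidero defaults)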

Once the cluster create command is complete, issue talosctl kubeconfig /desired/path to fetch the kubeconfig for this cluster. You should then set your KUBECONFIG environment variable to the path of this file.
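
For example, to keep the kubeconfig in your working directory:

talosctl kubeconfig ./kubeconfig
export KUBECONFIG=./kubeconfig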

Untaint Control Plane

Because this is a single-node cluster, we need to remove the “NoSchedule” taint on the node to make sure non-control-plane components can be scheduled.

kubectl taint node talos-default-master-1 node-role.kubernetes.io/master:NoSchedule-

Install Sidero

As of Cluster API version 0.3.9, Sidero is included as a default infrastructure provider in clusterctl.

To install Sidero and the other Talos providers, simply issue:

clusterctl init -b talos -c talos -i sidero

Patch Components

We will now want to ensure that the Sidero services that got created are publicly accessible across our subnet. This will allow the metal machines to speak to these services later.

Patch the Metadata Server

Update the metadata server component with the following patches:

## Update args to use 9091 for port
kubectl patch deploy -n sidero-system sidero-metadata-server --type='json' -p='[{"op": "add", "path": "/spec/template/spec/containers/0/args", "value": ["--port=9091"]}]'

## Tweak container port to match
kubectl patch deploy -n sidero-system sidero-metadata-server --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/ports", "value": [{"containerPort": 9091,"name": "http"}]}]'

## Use host networking
kubectl patch deploy -n sidero-system sidero-metadata-server --type='json' -p='[{"op": "add", "path": "/spec/template/spec/hostNetwork", "value": true}]'

Patch the Metal Controller Manager

## Update args to specify the api endpoint to use for registration
kubectl patch deploy -n sidero-system sidero-controller-manager --type='json' -p='[{"op": "add", "path": "/spec/template/spec/containers/1/args", "value": ["--api-endpoint='$PUBLIC_IP'","--metrics-addr=127.0.0.1:8080","--enable-leader-election"]}]'

## Use host networking
kubectl patch deploy -n sidero-system sidero-controller-manager --type='json' -p='[{"op": "add", "path": "/spec/template/spec/hostNetwork", "value": true}]'

Register the Servers

At this point, any servers on the same network as Sidero should PXE boot using the Sidero PXE service. To register a server with Sidero, simply turn it on and Sidero will do the rest. Once the registration is complete, you should see the servers registered with kubectl get servers:

$ kubectl get servers -o wide
NAME                                   HOSTNAME        ACCEPTED   ALLOCATED   CLEAN
00000000-0000-0000-0000-d05099d33360   192.168.254.2   false      false       false

Accept the Servers

Note in the output above that the newly registered servers are not accepted. In order for a server to be eligible for consideration, it must be marked as accepted. Before a Server is accepted, no write action will be performed against it. Servers can be accepted by issuing a patch command like:

kubectl patch server 00000000-0000-0000-0000-d05099d33360 --type='json' -p='[{"op": "replace", "path": "/spec/accepted", "value": true}]'
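
If several servers registered at once, they can all be accepted in a single pass. This is only a convenience sketch built on standard kubectl behavior:

kubectl get servers -o name | xargs -I{} kubectl patch {} --type='json' -p='[{"op": "replace", "path": "/spec/accepted", "value": true}]'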

For more information on server acceptance, see the server docs.

Create the Default Environment

We must now create an Environment in our bootstrap cluster. An environment is a CRD that tells the PXE component of Sidero what information to return to nodes that request a PXE boot after completing the registration process above. Things that can be controlled here are kernel flags and the kernel and init images to use.

To create a default environment that will use the latest published Talos release, issue the following:

cat <<EOF | kubectl apply -f -
apiVersion: metal.sidero.dev/v1alpha1
kind: Environment
metadata:
  name: default
spec:
  kernel:
    url: "https://github.com/talos-systems/talos/releases/latest/download/vmlinuz-amd64"
    sha512: ""
    args:
      - initrd=initramfs.xz
      - page_poison=1
      - slab_nomerge
      - slub_debug=P
      - pti=on
      - random.trust_cpu=on
      - ima_template=ima-ng
      - ima_appraise=fix
      - ima_hash=sha512
      - console=tty0
      - console=ttyS1,115200n8
      - earlyprintk=ttyS1,115200n8
      - panic=0
      - printk.devkmsg=on
      - talos.platform=metal
      - talos.config=http://$PUBLIC_IP:9091/configdata?uuid=
  initrd:
    url: "https://github.com/talos-systems/talos/releases/latest/download/initramfs-amd64.xz"
    sha512: ""
EOF

Create Server Class

We must now create a server class to wrap the servers we registered. This is necessary for using the Talos control plane provider for Cluster API. The qualifiers needed for your server class will differ based on the data provided by your registration flow. See the server class docs for more info on how these work.

Here is an example of how to apply the server class once you have the proper info:

cat <<EOF | kubectl apply -f -
apiVersion: metal.sidero.dev/v1alpha1
kind: ServerClass
metadata:
  name: default
spec:
  qualifiers:
    cpu:
      - manufacturer: Intel(R) Corporation
        version: Intel(R) Atom(TM) CPU C3558 @ 2.20GHz
EOF

In order to fetch hardware information, you can use

kubectl get server -o yaml

Note that for a bare-metal setup, you will also need to specify an installation disk. See the Installation Disk documentation for details.
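
As a hedged sketch of what that might look like, here is the same server class with a configPatches entry that sets the installation disk (the /dev/sda path is an assumption and must match your hardware):

cat <<EOF | kubectl apply -f -
apiVersion: metal.sidero.dev/v1alpha1
kind: ServerClass
metadata:
  name: default
spec:
  qualifiers:
    cpu:
      - manufacturer: Intel(R) Corporation
        version: Intel(R) Atom(TM) CPU C3558 @ 2.20GHz
  configPatches:
    - op: replace
      path: /machine/install
      value:
        disk: /dev/sda
EOF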

Once created, you should see the servers that make up your server class appear as “available”:

$ kubectl get serverclass
NAME      AVAILABLE                                  IN USE
default   ["00000000-0000-0000-0000-d05099d33360"]   []

Create Management Plane

We are now ready to template out our management plane. Using clusterctl, we can create a cluster manifest with:

clusterctl config cluster management-plane -i sidero > management-plane.yaml

Note that there are several variables that should be set in order for the templating to work properly:

  • CONTROL_PLANE_ENDPOINT and CONTROL_PLANE_PORT: The endpoint (IP address or hostname) and the port used for the Kubernetes API server (e.g. for https://1.2.3.4:6443: CONTROL_PLANE_ENDPOINT=1.2.3.4 and CONTROL_PLANE_PORT=6443). This is the equivalent of the endpoint you would specify in talosctl gen config. There are a variety of ways to configure a control plane endpoint. Some common ways for an HA setup are to use DNS, a load balancer, or BGP. A simpler method is to use the IP of a single node. This has the disadvantage of being a single point of failure, but it can be a simple way to get running.
  • CONTROL_PLANE_SERVERCLASS: The server class to use for control plane nodes.
  • WORKER_SERVERCLASS: The server class to use for worker nodes.
  • KUBERNETES_VERSION: The version of Kubernetes to deploy (e.g. v1.19.4).
  • TALOS_VERSION: This should correspond to the minor version of Talos that you will be deploying (e.g. v0.9). This value is used in determining the fields present in the machine configuration that gets generated for Talos nodes.

For instance:

export CONTROL_PLANE_SERVERCLASS=master
export WORKER_SERVERCLASS=worker
export KUBERNETES_VERSION=v1.20.1
export CONTROL_PLANE_PORT=6443
export CONTROL_PLANE_ENDPOINT=1.2.3.4
clusterctl config cluster management-plane -i sidero > management-plane.yaml

In addition, you can specify the number of replicas for control plane and worker nodes in the management-plane.yaml manifest via the TalosControlPlane and MachineDeployment objects. They can also be scaled after creation if needed:

kubectl get taloscontrolplane
kubectl get machinedeployment
kubectl scale taloscontrolplane management-plane-cp --replicas=3

Now that we have the manifest, we can simply apply it:

kubectl apply -f management-plane.yaml

NOTE: The templated manifest above is meant to act as a starting point. If customizations are needed to ensure proper setup of your Talos cluster, they should be added before applying.

Once the management plane is set up, you can fetch the talosconfig by using the cluster label. Be sure to update the cluster name, then issue the following command:

kubectl get talosconfig \
  -l cluster.x-k8s.io/cluster-name=<CLUSTER NAME> \
  -o jsonpath='{.items[0].status.talosConfig}' > management-plane-talosconfig.yaml

With the talosconfig in hand, the management plane’s kubeconfig can be fetched with talosctl --talosconfig management-plane-talosconfig.yaml kubeconfig.
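
Written out in full (the --endpoints and --nodes values are placeholders, and those flags are only needed if they are not already present in the generated talosconfig):

talosctl --talosconfig management-plane-talosconfig.yaml \
  --endpoints <control-plane-ip> --nodes <control-plane-ip> \
  kubeconfig management-plane-kubeconfig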

Pivoting

Once we have the kubeconfig for the management cluster, we now have the ability to pivot the cluster from our bootstrap. Using clusterctl, issue:

clusterctl init --kubeconfig=/path/to/management-plane/kubeconfig -i sidero -b talos -c talos

Followed by:

clusterctl move --to-kubeconfig=/path/to/management-plane/kubeconfig

Upon completion of this command, we can now tear down our bootstrap cluster with talosctl cluster destroy and begin using our management plane as our point of creation for all future clusters!

2 - Building A Management Plane with ISO Image

A guide for bootstrapping the Sidero management plane using the ISO image

This guide will provide some very basic detail about how you can also build a Sidero management plane using the Talos ISO image instead of following the Docker-based process that we detail in our Getting Started tutorials.

Using the ISO is a perfectly valid way to build a Talos cluster, but this approach is not recommended for Sidero as it avoids the “pivot” step detailed here. Skipping this step means that the management plane does not become “self-hosted”, in that it cannot be upgraded and scaled using the Sidero processes we follow for workload clusters. For folks who are willing to take care of their management plane in other ways, however, this approach will work fine.

The rough outline of this process is very short and sweet, as it relies on other documentation:

  • For each management plane node, boot the ISO and install Talos using the “apply-config” process mentioned in our Talos Getting Started docs. These docs go into heavy detail on using the ISO, so they will not be recreated here.

  • With a Kubernetes cluster now in hand (and with access to it via talosctl and kubectl), you can simply pick up the Getting Started tutorial at the “Install Sidero” section here. Keep in mind, however, that you will be unable to do the “pivoting” section of the tutorial, so just skip that step when you reach the end of the tutorial.

Note: It may also be of interest to view the prereq guides on CLI and DHCP setup, as they will still apply to this method.

  • For long-term maintenance of a management plane created in this way, refer to the Talos documentation for upgrading Kubernetes and Talos itself.

3 - Decommissioning Servers

A guide for decommissioning servers

This guide will detail the process for removing a server from Sidero. The process is fairly simple and requires only a few pieces of information.

  • For the given server, take note of any serverclasses that are configured to match the server.

  • Take note of any clusters that make use of aforementioned serverclasses.

  • For each matching cluster, edit the cluster resource with kubectl edit cluster and set .spec.paused to true (see the example after this list). Doing this ensures that no new machines will get created for these servers during the decommissioning process.

  • If the server is already part of a cluster (kubectl get serverbindings should provide this info), you can now delete the machine that corresponds with this server via kubectl delete machine <machine_name>.

  • With the machine deleted, Sidero will reboot the machine and wipe its disks.

  • Once the disk wiping is complete and the server is turned off, you can finally delete the server from Sidero with kubectl delete server <server_name>.
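
As an example of the pausing step mentioned above, a cluster can also be paused non-interactively with a patch (the cluster name is a placeholder):

kubectl patch cluster <cluster-name> --type='merge' -p '{"spec":{"paused":true}}'

Set the value back to false to resume reconciliation once decommissioning is complete.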

4 - Creating Your First Cluster

A guide for creating your first cluster with the Sidero management plane

Introduction

This guide will detail the steps needed to provision your first bare metal Talos cluster after completing the bootstrap and pivot steps detailed in the previous guide. There will be two main steps in this guide: reconfiguring the Sidero components now that they have been pivoted and the actual cluster creation.

Reconfigure Sidero

Patch Services

In this guide, we will convert the metadata service to a NodePort service and the other services to use host networking. This is also necessary because some protocols like TFTP don’t allow for port configuration. Along with some nodeSelectors and a scale up of the metal controller manager deployment, creating the services this way allows for the creation of DNS names that point to all management plane nodes, providing an HA experience if desired. It should also be noted, however, that there are many options for achieving this functionality. Users can look into projects like MetalLB or KubeRouter with BGP and ECMP if they desire something else.

Metal Controller Manager:

## Use host networking
kubectl patch deploy -n sidero-system sidero-controller-manager --type='json' -p='[{"op": "add", "path": "/spec/template/spec/hostNetwork", "value": true}]'

Metadata Server:

## Convert metadata server service to nodeport
kubectl patch service -n sidero-system sidero-metadata-server --type='json' -p='[{"op": "replace", "path": "/spec/type", "value": "NodePort"}]'

## Set a known nodeport for metadata server
kubectl patch service -n sidero-system sidero-metadata-server --type='json' -p='[{"op": "replace", "path": "/spec/ports", "value": [{"port": 80, "protocol": "TCP", "targetPort": "http", "nodePort": 30005}]}]'

Update Environment

The metadata server’s information needs to be updated in the default environment. Edit the environment with kubectl edit environment default and update the talos.config kernel arg with the IP of one of the management plane nodes (or the DNS entry you created) and the nodeport we specified above (30005).
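
Using the example addresses from this guide, the updated kernel argument would look like:

      - talos.config=http://192.168.254.2:30005/configdata?uuid=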

Update DHCP

The DHCP options configured in the previous guide should now be updated to point to your new management plane IP or to the DNS name if it was created.

A revised ipxe-metal.conf file looks like:

allow bootp;
allow booting;

next-server 192.168.254.2;
if exists user-class and option user-class = "iPXE" {
  filename "http://192.168.254.2:8081/boot.ipxe";
} else {
  if substring (option vendor-class-identifier, 15, 5) = "00000" {
    # BIOS
    if substring (option vendor-class-identifier, 0, 10) = "HTTPClient" {
      option vendor-class-identifier "HTTPClient";
      filename "http://192.168.254.2:8081/tftp/undionly.kpxe";
    } else {
      filename "undionly.kpxe";
    }
  } else {
    # UEFI
    if substring (option vendor-class-identifier, 0, 10) = "HTTPClient" {
      option vendor-class-identifier "HTTPClient";
      filename "http://192.168.254.2:8081/tftp/ipxe.efi";
    } else {
      filename "ipxe.efi";
    }
  }
}

host talos-mgmt-0 {
   fixed-address 192.168.254.2;
   hardware ethernet d0:50:99:d3:33:60;
}

Register the Servers

At this point, any servers on the same network as Sidero should PXE boot using the Sidero PXE service. To register a server with Sidero, simply turn it on and Sidero will do the rest. Once the registration is complete, you should see the servers registered with kubectl get servers:

$ kubectl get servers -o wide
NAME                                   HOSTNAME        ACCEPTED   ALLOCATED   CLEAN
00000000-0000-0000-0000-d05099d33360   192.168.254.2   false      false       false

Accept the Servers

Note in the output above that the newly registered servers are not accepted. In order for a server to be eligible for consideration, it must be marked as accepted. Before a Server is accepted, no write action will be performed against it. Servers can be accepted by issuing a patch command like:

kubectl patch server 00000000-0000-0000-0000-d05099d33360 --type='json' -p='[{"op": "replace", "path": "/spec/accepted", "value": true}]'

For more information on server acceptance, see the server docs.

Create the Cluster

The cluster creation process should be identical to what was detailed in the previous guide. Note that, for this example, the same “default” serverclass that we used in the previous guide is used again. Using clusterctl, we can create a cluster manifest with:

clusterctl config cluster workload-cluster -i sidero > workload-cluster.yaml

Note that there are several variables that should be set in order for the templating to work properly:

  • CONTROL_PLANE_ENDPOINT and CONTROL_PLANE_PORT: The endpoint (IP address or hostname) and the port used for the Kubernetes API server (e.g. for https://1.2.3.4:6443: CONTROL_PLANE_ENDPOINT=1.2.3.4 and CONTROL_PLANE_PORT=6443). This is the equivalent of the endpoint you would specify in talosctl gen config. There are a variety of ways to configure a control plane endpoint. Some common ways for an HA setup are to use DNS, a load balancer, or BGP. A simpler method is to use the IP of a single node. This has the disadvantage of being a single point of failure, but it can be a simple way to get running.
  • CONTROL_PLANE_SERVERCLASS: The server class to use for control plane nodes.
  • WORKER_SERVERCLASS: The server class to use for worker nodes.
  • KUBERNETES_VERSION: The version of Kubernetes to deploy (e.g. v1.19.4).
  • TALOS_VERSION: This should correspond to the minor version of Talos that you will be deploying (e.g. v0.9). This value is used in determining the fields present in the machine configuration that gets generated for Talos nodes. Note that the default is currently v0.8.

Now that we have the manifest, we can simply apply it:

kubectl apply -f workload-cluster.yaml

NOTE: The templated manifest above is meant to act as a starting point. If customizations are needed to ensure proper setup of your Talos cluster, they should be added before applying.

Once the workload cluster is set up, you can fetch the talosconfig with a command like:

kubectl get talosconfig workload-cluster-cp-xxx -o jsonpath='{.status.talosConfig}' > workload-cluster-talosconfig.yaml

Then the workload cluster’s kubeconfig can be fetched with talosctl --talosconfig workload-cluster-talosconfig.yaml kubeconfig /desired/path.

5 - Patching

A guide describing patching

Server resources can be updated by using the configPatches section of the custom resource. Any field of the Talos machine config can be overridden on a per-machine basis using this method. The format of these patches is based on JSON Patch (RFC 6902), which you may be familiar with from tools like kustomize.

Any patches specified in the server resource are processed by the Metal Metadata Server before it returns a Talos machine config for a given server at boot time.

A set of patches may look like this:

apiVersion: metal.sidero.dev/v1alpha1
kind: Server
metadata:
  name: 00000000-0000-0000-0000-d05099d33360
spec:
  configPatches:
    - op: replace
      path: /machine/install
      value:
        disk: /dev/sda
    - op: replace
      path: /cluster/network/cni
      value:
        name: "custom"
        urls:
          - "http://192.168.1.199/assets/cilium.yaml"

Testing Configuration Patches

While developing config patches, it is usually convenient to test the generated config with patches applied before an actual server is provisioned with it.

This can be achieved by querying the metadata server endpoint directly:

$ curl http://$PUBLIC_IP:9091/configdata?uuid=$SERVER_UUID
version: v1alpha1
...

Replace $PUBLIC_IP with the Sidero IP address and $SERVER_UUID with the name of the Server to test against.

If the metadata endpoint returns an error when applying JSON patches, make sure the config subtree being patched exists in the config. If it doesn’t exist, create it with an op: add patch placed above the op: replace patch.
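
For instance, if the /machine/install subtree were missing from the generated config, the replace patch from the earlier example would fail; an op: add placed above it creates the subtree first (a sketch, the disk value is an assumption, and the add could equally carry the final value directly):

configPatches:
  - op: add
    path: /machine/install
    value: {}
  - op: replace
    path: /machine/install
    value:
      disk: /dev/sda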

Combining Patches from Multiple Sources

Config patches might be combined from multiple sources (Server, ServerClass), which is explained in detail in the Metadata section.

6 - Upgrading

A guide describing upgrades

Upgrading a running workload cluster or management plane is the same process as described in the Talos documentation.

To upgrade the Talos OS, see here.

In order to upgrade Kubernetes itself, see here.

Upgrading Talos 0.8 -> 0.9

It is important, however, to take special consideration for upgrades of the Talos v0.8.x series to v0.9.x. Because of the move from self-hosted control plane to static pods, some certificate information has changed that needs to be manually updated. The steps are as follows:

  • Upgrade a single control plane node to the v0.9.x series using the upgrade instructions above.

  • After upgrade, carry out a talosctl convert-k8s to move from the self-hosted control plane to static pods.

  • Targeting the upgraded node, issue talosctl read -n <node-ip> /system/state/config.yaml and copy out the cluster.aggregatorCA and cluster.serviceAccount sections.

  • In the management cluster, issue kubectl edit secret <cluster-name>-talos.

  • While in editing view, copy the data.certs field and decode it with echo '<certs-content>' | base64 -d

Note: It may also be a good idea to copy the secret in its entirety as a backup. This can be done with a simple kubectl get secret <cluster-name>-talos -o yaml.

  • Copy the output above into a text editor, update the aggregator and service account sections with the certs and keys copied previously, and save the file. The resulting file should look like:
admin:
  crt: xxx
  key: xxx
etcd:
  crt: xxx
  key: xxx
k8s:
  crt: xxx
  key: xxx
k8saggregator:
  crt: xxx
  key: xxx
k8sserviceaccount:
  key: xxx
os:
  crt: xxx
  key: xxx
  • Re-encode the data with cat <saved-file> | base64 | tr -d '\n'

  • With the secret still open for editing, update the data.certs field to contain the new base64 data.

  • Edit the cluster’s TalosControlPlane resource with kubectl edit tcp <name-of-control-plane>. Update the spec.controlPlaneConfig.[controlplane,init].talosVersion fields to be v0.9.

  • Edit any TalosConfigTemplate resources and update spec.template.spec.talosVersion to be the same value.

  • At this point, any new controlplane or worker machines should receive the newer machine config format and join the cluster successfully. You can also proceed to upgrade existing nodes.

7 - Provisioning Flow

Diagrams for various flows in Sidero.

graph TD;
    Start(Start);
    End(End);

    %% Decisions

    IsOn{Is server powered on?};
    IsRegistered{Is server registered?};
    IsAccepted{Is server accepted?};
    IsClean{Is server clean?};
    IsAllocated{Is server allocated?};

    %% Actions

    DoPowerOn[Power server on];
    DoPowerOff[Power server off];
    DoBootAgentEnvironment[Boot agent];
    DoBootEnvironment[Boot environment];
    DoRegister[Register server];
    DoWipe[Wipe server];

    %% Chart

    Start-->IsOn;
    IsOn--Yes-->End;
    IsOn--No-->DoPowerOn;

    DoPowerOn--->IsRegistered;

    IsRegistered--Yes--->IsAccepted;
    IsRegistered--No--->DoBootAgentEnvironment-->DoRegister;

    DoRegister-->IsRegistered;

    IsAccepted--Yes--->IsAllocated;
    IsAccepted--No--->End;

    IsAllocated--Yes--->DoBootEnvironment;
    IsAllocated--No--->IsClean;
    IsClean--No--->DoWipe-->DoPowerOff;

    IsClean--Yes--->DoPowerOff;

    DoBootEnvironment-->End;

    DoPowerOff-->End;

Installation Flow

graph TD;
    Start(Start);
    End(End);

    %% Decisions

    IsInstalled{Is installed};

    %% Actions

    DoInstall[Install];
    DoReboot[Reboot];

    %% Chart

    Start-->IsInstalled;
    IsInstalled--Yes-->End;
    IsInstalled--No-->DoInstall;

    DoInstall-->DoReboot;

    DoReboot-->IsInstalled;