Kubernetes at Home: Set Up Your Own Cluster Using Vagrant & Ansible

TL;DR: Learn how to build a lightweight Kubernetes cluster on your local machine using Vagrant, Ansible, and VMware Fusion. Perfect for ARM-based Mac users looking to experiment with Kubernetes in a reproducible environment.
Why This Guide?


Setting up a Kubernetes cluster manually can be complex and time-consuming. But with the power of Vagrant for managing virtual machines and Ansible for automating setup tasks, you can spin up a local cluster with minimal effort and maximum reproducibility. This tutorial walks you through building a two-node cluster on macOS with ARM chips (e.g., M1/M2), but it's adaptable to other setups.

Table of Contents

  • Prerequisites
  • Project Structure
  • Step 1: Configure Vagrant
  • Step 2: Configure Ansible
  • Step 3: Ansible Playbook for Kubernetes Setup
  • Step 4: Test and Deploy the Cluster
  • What's Next?
Prerequisites


Before we start, ensure you have the following installed:

  • Vagrant
  • Ansible
  • VMware Fusion, with the Vagrant VMware provider plugin (the plugin install is covered in Step 1)
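A quick way to confirm the tools are installed and on your PATH (exact versions will vary):


vagrant --version
ansible --version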

Project Structure


Create a project directory and replicate the following structure:


.
├── ansible
│   ├── ansible.cfg
│   ├── inventory.ini
│   └── k8s-cluster-setup.yml
└── Vagrantfile
Step 1: Configure Vagrant


Install the VMware Desktop plugin:


vagrant plugin install vagrant-vmware-desktop
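You can confirm the plugin registered correctly:


vagrant plugin list

The output should include vagrant-vmware-desktop.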

Next, pick two unused static IPs on your LAN for the cluster nodes. Scan your local network to see which addresses are already taken:


nmap -sn 192.168.1.0/24
Replace 192.168.1.0/24 with your LAN's network range.
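Pick two addresses that do not appear in the scan results. As a quick double-check that a candidate address is unused (192.168.1.101 here is just an example):


ping -c 2 192.168.1.101 || echo "address appears to be free"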
Update your Vagrantfile with something like this:


Vagrant.configure("2") do |config|
  config.vm.define "kubmaster" do |kub|
    kub.vm.box = "spox/ubuntu-arm"
    kub.vm.box_version = "1.0.0"
    kub.vm.hostname = 'kubmaster'
    kub.vm.provision "docker"
    kub.vm.network "public_network", ip: "192.168.1.101", bridge: "en0: Wifi"
    kub.vm.provider "vmware_desktop" do |v|
      v.allowlist_verified = true
      v.gui = false
      v.vmx["memsize"] = "4096"
      v.vmx["numvcpus"] = "2"
    end
  end

  config.vm.define "kubnode1" do |kubnode|
    kubnode.vm.box = "spox/ubuntu-arm"
    kubnode.vm.box_version = "1.0.0"
    kubnode.vm.hostname = 'kubnode1'
    kubnode.vm.provision "docker"
    kubnode.vm.network "public_network", ip: "192.168.1.102", bridge: "en0: Wifi"
    kubnode.vm.provider "vmware_desktop" do |v|
      v.allowlist_verified = true
      v.gui = false
      v.vmx["memsize"] = "4096"
      v.vmx["numvcpus"] = "2"
    end
  end
end
Replace the IPs with ones that match your LAN.
Now bring up the VMs:


vagrant up
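Provisioning can take a few minutes. Once it finishes, confirm both VMs are up and reachable:


vagrant status
vagrant ssh kubmaster -c "hostname"

Both machines should report running, and the second command should print kubmaster.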
Step 2: Configure Ansible

ansible/inventory.ini


[master]
kubmaster ansible_host=192.168.1.101 ansible_ssh_private_key_file=.vagrant/machines/kubmaster/vmware_desktop/private_key

[workers]
kubnode1 ansible_host=192.168.1.102 ansible_ssh_private_key_file=.vagrant/machines/kubnode1/vmware_desktop/private_key

[all:vars]
ansible_user=vagrant
Make sure to replace the IP addresses with the ones you assigned in the Vagrantfile.
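The ansible_ssh_private_key_file paths assume you run Ansible from the project root, next to the Vagrantfile; Vagrant generates one private key per machine. You can verify the keys exist with:


ls .vagrant/machines/*/vmware_desktop/private_key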
ansible/ansible.cfg


[defaults]
inventory = inventory.ini
host_key_checking = False
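With both files in place, a quick ad-hoc command (run from the project root) confirms Ansible can reach the nodes; on Apple silicon the guests should report an ARM architecture such as aarch64:


ansible all -m command -a "uname -m" -i ansible/inventory.ini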
Step 3: Ansible Playbook for Kubernetes Setup

ansible/k8s-cluster-setup.yml


This playbook performs the following:

  • Prepares all nodes: disables swap, installs required packages, configures kernel modules, adds K8s repositories, installs kubeadm, kubelet, and kubectl.
  • Initializes the master node and stores the cluster join command.
  • Sets up Flannel CNI for networking.
  • Joins worker nodes using the generated join command.

---
- name: Prepare Kubernetes Nodes
  hosts: all
  become: yes

  tasks:

    - name: Disable swap (runtime)
      command: swapoff -a
      when: ansible_swaptotal_mb > 0

    - name: Comment out swap line in /etc/fstab
      lineinfile:
        path: /etc/fstab
        regexp: '^\s*([^#]\S*\s+\S+\s+swap\s+\S+)\s*$'
        line: '# \1'
        backrefs: yes

    - name: Apply mount changes
      command: mount -a

    - name: Stop AppArmor
      systemd:
        name: apparmor
        state: stopped
        enabled: no

    - name: Restart containerd
      systemd:
        name: containerd
        state: restarted

    - name: Configure sysctl for Kubernetes
      copy:
        dest: /etc/sysctl.d/kubernetes.conf
        content: |
          net.bridge.bridge-nf-call-ip6tables = 1
          net.bridge.bridge-nf-call-iptables = 1

    - name: Apply sysctl settings
      command: sysctl --system

    - name: Install transport and curl
      apt:
        name:
          - apt-transport-https
          - curl
        update_cache: yes
        state: present

    - name: Ensure APT keyrings directory exists
      file:
        path: /etc/apt/keyrings
        state: directory
        mode: 0755

    - name: Add Kubernetes APT key
      # The key URL below is assumed from the upstream Kubernetes install
      # docs (the original link is not recoverable from this post); adjust
      # v1.30 to the minor release you want to install.
      shell: |
        curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
      args:
        creates: /etc/apt/keyrings/kubernetes-apt-keyring.gpg

    - name: Add Kubernetes APT repository
      # Repository URL assumed to match the signing key above.
      apt_repository:
        repo: "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /"
        filename: "kubernetes"
        state: present

    - name: Install Kubernetes components
      apt:
        name:
          - kubelet
          - kubeadm
          - kubectl
          - kubernetes-cni
        update_cache: yes
        state: present

    - name: Enable kubelet service
      systemd:
        name: kubelet
        enabled: yes

- name: Initialize Kubernetes Master
  hosts: master
  become: yes
  vars:
    pod_cidr: "10.244.0.0/16"

  tasks:

    - name: Remove default containerd config
      file:
        path: /etc/containerd/config.toml
        state: absent

    - name: Restart containerd
      systemd:
        name: containerd
        state: restarted
        enabled: yes

    - name: Wait for containerd socket to be available
      wait_for:
        path: /run/containerd/containerd.sock
        state: present
        timeout: 20

    - name: Initialize Kubernetes control plane
      command: kubeadm init --apiserver-advertise-address={{ ansible_host }} --node-name {{ inventory_hostname }} --pod-network-cidr={{ pod_cidr }}
      register: kubeadm_output
      args:
        creates: /etc/kubernetes/admin.conf

    - name: Extract join command
      shell: |
        kubeadm token create --print-join-command
      register: join_command
      changed_when: false

    - name: Set join command fact
      set_fact:
        kube_join_command: "{{ join_command.stdout }}"

    - name: Create .kube directory for vagrant user
      become_user: vagrant
      file:
        path: /home/vagrant/.kube
        state: directory
        mode: 0755

    - name: Copy Kubernetes admin config to vagrant user
      copy:
        src: /etc/kubernetes/admin.conf
        dest: /home/vagrant/.kube/config
        remote_src: yes
        owner: vagrant
        group: vagrant
        mode: 0644

- name: Configure networking
  hosts: all
  become: yes

  tasks:

    - name: Ensure br_netfilter loads at boot
      copy:
        dest: /etc/modules-load.d/k8s.conf
        content: |
          br_netfilter

    - name: Load br_netfilter kernel module now
      command: modprobe br_netfilter

    - name: Configure sysctl for Kubernetes networking
      copy:
        dest: /etc/sysctl.d/k8s.conf
        content: |
          net.bridge.bridge-nf-call-iptables = 1
          net.bridge.bridge-nf-call-ip6tables = 1

    - name: Apply sysctl settings
      command: sysctl --system

- name: Configure flannel
  hosts: master
  become: yes

  tasks:

    - name: Apply Flannel CNI plugin
      become_user: vagrant
      # Manifest URL assumed from the flannel-io GitHub releases page
      # (the original link is not recoverable from this post).
      command: kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
      environment:
        KUBECONFIG: /home/vagrant/.kube/config

- name: Join worker nodes to cluster
  hosts: workers
  become: yes
  vars:
    kube_join_command: "{{ hostvars['kubmaster']['kube_join_command'] }}"

  tasks:

    - name: Remove default containerd config
      file:
        path: /etc/containerd/config.toml
        state: absent

    - name: Restart containerd
      systemd:
        name: containerd
        state: restarted
        enabled: yes

    - name: Wait until the Kubernetes API server is reachable
      wait_for:
        host: "{{ hostvars['kubmaster']['ansible_host'] }}"
        port: 6443
        delay: 10
        timeout: 120
        state: started

    - name: Join the node to Kubernetes cluster
      command: "{{ kube_join_command }}"
      args:
        creates: /etc/kubernetes/kubelet.conf
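Before the first full run, it can help to validate the playbook from the project root (a simple sanity check; it parses the YAML without executing any tasks):


ansible-playbook -i ansible/inventory.ini ansible/k8s-cluster-setup.yml --syntax-check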
Step 4: Test and Deploy the Cluster

Test Ansible SSH connectivity:


ansible all -m ping -i ansible/inventory.ini
Run the full cluster setup:


ansible-playbook -i ansible/inventory.ini ansible/k8s-cluster-setup.yml
Retrieve the kubeconfig file locally:


vagrant ssh kubmaster -c "sudo cat /etc/kubernetes/admin.conf" > ~/kubeconfig-vagrant.yaml
Test your cluster:


KUBECONFIG=~/kubeconfig-vagrant.yaml kubectl get nodes

You should see both the master and worker nodes in Ready status.
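As an optional smoke test, you can schedule a throwaway pod and watch it land on the worker node (nginx-test is an arbitrary name used here for illustration):


KUBECONFIG=~/kubeconfig-vagrant.yaml kubectl run nginx-test --image=nginx
KUBECONFIG=~/kubeconfig-vagrant.yaml kubectl get pods -o wide

Clean up afterwards with kubectl delete pod nginx-test.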

What's Next?


You now have a fully functioning local Kubernetes cluster on ARM-based hardware, with everything automated and reproducible. From here you can:

  • Experiment with Helm charts
  • Try GitOps with ArgoCD
  • Deploy sample apps

Stay tuned for the next part of this series!
