You must perform certain tasks on multiple servers, so you log in to each one and carefully work through them. Perhaps you also want to undertake more complicated operations, like installing software and configuring it to your own criteria. Now imagine you have hundreds or thousands of servers and have to log on to them one by one to carry out these tasks. There is a better way: SaltStack.
SaltStack is a distributed remote execution and configuration management system used to run commands on targeted nodes. It is built for massive-scale deployments and follows a client-server model (master-minion) over a secure, encrypted protocol. It is a command-line tool written in Python and is lightweight as far as resources and requirements are concerned, using the ZeroMQ messaging library (via PyZMQ) to accomplish these tasks at high speed. It is open source under the Apache 2.0 license and boasts a productive, vibrant community.
Goals
Set up a SaltStack master and minions, then provision a target node by installing a sample package on it.
Installation and Configuration
Installation of SaltStack can be done by following this documentation. In this tutorial, I am going to show a set-up of one salt-master node and two salt-minion nodes, all running Oracle Linux 6.5.
Environment
Operating System: Oracle Linux 6.5
Salt Master: salt-master
Salt Minions: minion1.com, minion2.com
[root@salt-master ~]# yum install salt-master
Salt's configuration files are stored in /etc/salt, and its state files in /srv/salt. It is better to keep verbose logging enabled for troubleshooting purposes.
[root@salt-master ~]# vim /etc/salt/master
The default log level is 'warning'; change it to the following:
log_level: debug
log_level_logfile: debug
Restart the salt-master for the changes to take effect (the -d flag starts it as a daemon):
[root@salt-master ~]# salt-master -d
Install salt-minion on both target nodes and configure each one to talk to the salt-master.
[root@minion1 ~]# yum install salt-minion
[root@minion2 ~]# yum install salt-minion
Edit the file /etc/salt/minion. Locate and uncomment the line "#master: salt", replacing "salt" with the FQDN or IP of the salt-master server.
[root@minion1 ~] vim /etc/salt/minion
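For example, assuming the master's hostname resolves as salt-master on your network (an assumption based on this tutorial's hostnames), the edited line would look like this:

master: salt-master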
Restart the salt-minion service:
service salt-minion restart
Do the same on minion node 2 as well.
[root@minion2 ~] vim /etc/salt/minion
Restart the salt-minion service:
service salt-minion restart
Accept each minion's public key on the salt-master. The -L option lists keys, -F shows key fingerprints, and -A accepts all pending keys.
[root@salt-master ~]# salt-key -F master
Local Keys:
master.pem:  44:55:66:77:cc:aa:bb:ff:ee:32:7:83:82:81:63:99
master.pub:  22:76:54:ee:83:99:97:98:ed:cd:bb:ff:85:82:66:77
Accepted Keys:
minion1.com:  aa:bb:cc:dd:1d:75:81:ee:ff:92:22:22:11:xx:yy:zz
Unaccepted Keys:
minion2.com:  zz:yy:xx:dd:1d:75:81:ee:ff:92:22:22:11:aa:bb:cc

[root@salt-master ~]# salt-key --finger minion2.com
Unaccepted Keys:
minion2.com:  zz:yy:xx:dd:1d:75:81:ee:ff:92:22:22:11:aa:bb:cc

[root@salt-master ~]# salt-key -L
Accepted Keys:
minion1.com
Denied Keys:
Unaccepted Keys:
minion2.com
Rejected Keys:
Accept keys by running the following command.
[root@salt-master ~]# salt-key -A
The following keys are going to be accepted:
Unaccepted Keys:
minion2.com
Proceed? [n/Y] Y
Key for minion minion2.com accepted.
You can get json output format as well using the following command:
[root@salt-master ~]# salt-key -L --out=json
{ "minions_rejected": [], "minions_denied": [], "minions_pre": [], "minions": [ "minion2.com", "minion1.com" ] }
[root@salt-master ~]# salt --versions-report
Salt Version:
          Salt: 2015.8.1

Dependency Versions:
        Jinja2: 2.2.1
      M2Crypto: 0.20.2
          Mako: Not Installed
        PyYAML: 3.11
         PyZMQ: 14.5.0
        Python: 2.6.6 (r266:84292, Jul 23 2015, 05:13:40)
          RAET: Not Installed
       Tornado: 4.2.1
           ZMQ: 4.0.5
          cffi: Not Installed
      cherrypy: Not Installed
      dateutil: 1.4.1
         gitdb: Not Installed
     gitpython: Not Installed
         ioflo: Not Installed
       libnacl: Not Installed
  msgpack-pure: Not Installed
msgpack-python: 0.4.6
  mysql-python: Not Installed
     pycparser: Not Installed
      pycrypto: 2.6.1
        pygit2: Not Installed
  python-gnupg: Not Installed
         smmap: Not Installed
       timelib: Not Installed

System Versions:
          dist: oracle 6.5 'salt'
       machine: x86_64
       release: 2.6.32-431.29.2.el6.x86_64
        system: Oracle Linux Server 6.5
Now that you have a salt-master and salt-minions that trust each other, you can check connectivity by issuing the test.ping command. It returns True if the master and minion can communicate.
[root@salt-master ~]# salt minion2.com test.ping
minion2.com:
    True
Salt's command syntax consists of the salt command, a target, and an action (a module function, optionally with arguments).
[root@salt-master ~]# salt '*' cmd.run "service httpd restart"
You can run any available command on any connected and authenticated minion.
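For instance, here are a couple of illustrative one-liners. The module functions (pkg.install, disk.usage) are standard Salt execution modules, and the targets are this tutorial's hostnames; treat these as examples rather than steps from the original walkthrough:

[root@salt-master ~]# salt 'minion1.com' pkg.install httpd
[root@salt-master ~]# salt '*' disk.usage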
Now let's look at the configuration management side of Salt:
Salt's state files and directives are kept under /srv/salt by default. This is where all the configuration files you want to push to target minions reside. Salt will not touch your master's own configuration files, so don't worry. Salt uses YAML syntax for its state files. To enable the configuration management functionality, you need to edit the salt-master configuration once again: open the /etc/salt/master file, locate the line that refers to file_roots, and uncomment it to look like this:
file_roots:
  base:
    - /srv/salt
Depending upon how you installed Salt, you may have to create the /srv/salt directory yourself. The base configuration lives within the /srv/salt directory in a file named top.sls. This file provides mappings to other state files and can be used to set a base configuration for all servers; a minimal example follows.
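For illustration, a minimal /srv/salt/top.sls that applies the user_bob state (written below) to every minion might look like this. This mapping is a sketch of my own, since the original run does not show its top.sls:

base:
  '*':
    - user_bob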
Let us write a simple state that checks whether a user named 'bob' is present on a target node and, if not, creates it.
The user_bob.sls file resides in the /srv/salt/base directory.
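The tutorial does not reproduce the file's contents, but a minimal user_bob.sls consistent with the output below (state ID user_bob, function user.present, name bob) would be:

user_bob:
  user.present:
    - name: bob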
[root@salt-master base]# salt minion1.com state.sls user_bob
minion1.com:
----------
          ID: user_bob
    Function: user.present
        Name: bob
      Result: True
     Comment: User bob is present and up to date
     Started: 07:42:31.339705
    Duration: 25.963 ms
     Changes:

Summary for minion1.com
------------
Succeeded: 1
Failed:    0
------------
Total states run:     1
Total run time:  25.963 ms
Now we will install the PCRE (Perl Compatible Regular Expressions) package on target node 2, i.e. minion2.com. The usual way to install it is configure-compile-install, so we write a state that stages the source tarball, then runs those compile-and-install steps. The base directory contains this state file and a top.sls with the relevant information; a reconstructed sketch of the state follows. With all the .sls files in place and the configuration files ready to go, the last step is to tell Salt to configure your nodes remotely. state.highstate triggers this synchronization. I always run state.show_highstate before state.highstate because it tells you what will happen on the target nodes.
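Reconstructed from the state.show_highstate output below, pcre-8-37.sls might look roughly like this. It is a sketch, not the author's original file: the tarball is assumed to live at /srv/salt/opt/ on the master (matching the salt:// source in the output), and I stage it in the extraction directory so the tar step can find it, whereas the output stages it under /opt:

/root/pcre_binary:
  file.directory:
    - user: root
    - group: root
    - mode: 755
    - makedirs: True

pcre-8-37:
  file.managed:
    - name: /root/pcre_binary/pcre-8.37.tar.gz
    - source: salt://opt/pcre-8.37.tar.gz

extract-pcre:
  cmd.run:
    - cwd: /root/pcre_binary
    - names:
      - tar zxvf pcre-8.37.tar.gz
    - require:
      - file: pcre-8-37

configure-pcre:
  cmd.run:
    - cwd: /root/pcre_binary/pcre-8.37
    - names:
      - ./configure
      - make
      - make install
    - require:
      - cmd: extract-pcre

Note that cmd.run states like these rerun on every highstate; in a production state you would guard them with unless or creates arguments to keep the run idempotent.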
[root@salt-master base]# salt 'minion2.com' state.show_highstate
minion2.com:
    ----------
    /root/pcre_binary:
        ----------
        __env__:
            base
        __sls__:
            pcre-8-37
        file:
            |_
              ----------
              user:
                  root
            |_
              ----------
              group:
                  root
            |_
              ----------
              mode:
                  755
            |_
              ----------
              makedirs:
                  True
            - directory
            |_
              ----------
              order:
                  10000
    configure-pcre:
        ----------
        __env__:
            base
        __sls__:
            pcre-8-37
        cmd:
            |_
              ----------
              cwd:
                  /root/pcre_binary/pcre-8.37
            |_
              ----------
              names:
                  - ./configure
                  - make
                  - make install
            - run
            |_
              ----------
              require:
                  |_
                    ----------
                    cmd:
                        extract-pcre
            |_
              ----------
              order:
                  10003
    extract-pcre:
        ----------
        __env__:
            base
        __sls__:
            pcre-8-37
        cmd:
            |_
              ----------
              cwd:
                  /root/pcre_binary
            |_
              ----------
              names:
                  - tar zxvf pcre-8.37.tar.gz
            - run
            |_
              ----------
              require:
                  |_
                    ----------
                    file:
                        pcre-8.37
            |_
              ----------
              order:
                  10002
    pcre-8-37:
        ----------
        __env__:
            base
        __sls__:
            pcre-8-37
        file:
            |_
              ----------
              name:
                  /opt/pcre-8.37.tar.gz
            |_
              ----------
              source:
                  - salt://opt/pcre-8.37.tar.gz
            - managed
            |_
              ----------
              order:
                  10001
Now execute it with state.highstate:
[root@salt-master base]# salt 'minion2.com' state.highstate
minion2.com:
----------
          ID: pcre
    Function: file.managed
        Name: /pcre-8.37.tar.gz
      Result: True
     Comment: File /pcre-8.37.tar.gz updated
     Started: 01:02:54.117150
    Duration: 116.626 ms
     Changes:
              ----------
              diff:
                  New file
              mode:
                  0644
----------
          ID: extract-pcre
    Function: cmd.run
        Name: cd /
              /bin/tar xvf pcre-8.37.tar.gz
              cd pcre-8.37
              ./configure
              make
              make install
      Result: True
     Comment: Command "cd / /bin/tar xvf pcre-8.37.tar.gz cd pcre-8.37 ./configure make make install" run
     Started: 01:02:54.245660
    Duration: 24757.312 ms
     Changes:
              ----------
              pid:
                  4155
              retcode:
                  0
              stderr:
                  libtool: warning: relinking 'libpcreposix.la'
                  libtool: warning: relinking 'libpcrecpp.la'
              stdout:
                  pcre-8.37/
                  pcre-8.37/pcre_scanner.h
                  < … output truncated ... >

Summary for minion2.com
------------
Succeeded: 2 (changed=2)
Failed:    0
------------
Total states run:     2
Total run time:  24.874 s
This way we see that the PCRE package has been compiled and installed from the staged sources on the target node.
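As a quick sanity check (a hypothetical verification step, not part of the original run), you could query the installed version remotely; pcre-config is installed by PCRE's make install, and this assumes /usr/local/bin is on the minion's PATH:

[root@salt-master base]# salt 'minion2.com' cmd.run 'pcre-config --version'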
Summary
We just walked through the power of storing configurations as text files and executing them remotely on target nodes. Salt runs states against minions in parallel rather than sequentially, which means that a failure on one node does not stop installation from continuing on the others. One can thereby build a robust deployment procedure. The whole system is suitable for any network and is capable of automated software distribution and network provisioning.
There are other IT automation tools, like Chef, Puppet, Ansible, and Fabric, which perform similar tasks. In my own experience, and from what I have heard from people I respect, Chef and Puppet are a bit more complicated in their initial setup and day-to-day running. It is up to you to decide which tool works best for you and your team. You will keep writing configuration details by hand until you turn your infrastructure into code. Visit the Salt project page for more information, and yes, you can post your doubts and queries on the mailing lists or jump onto IRC for a quick, friendly response.

