Discussion:
Using ansible (was: another old box to update)
Alan McKinnon
2015-01-08 18:29:10 UTC
Copy pasted into new thread as subject has changed:


Stefan said:
===
played around with ansible today and managed to get this working:

http://blog.jameskyle.org/2014/08/automated-stage3-gentoo-install-using-ansible/

I even forked his repo and added a load of features for my newly built
gentoo systems (systemd, git, German locale, chrony ...). Nice.

I still have to come up with a proper directory tree ... the docs at

http://docs.ansible.com/playbooks_best_practices.html

show a tree that seems a bit huge for my needs.

One of my next goals: get all my local hosts into it and see how to
manage them.
===


The directory layout in the best practice page is indeed way more than
you need, it lists most of the directories in common use across a wide
array of deployments. In reality you create just the directories you need.

Global stuff goes in the top level (like inventory).
Variables for groups and individual hosts go into suitably named files
inside group_vars and host_vars.
Roles have a definite structure, in practice you'll use tasks/ and
templates/ a lot, everything else only when you need them.
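In practice a small setup often needs no more than something like this
(all names here are illustrative):

```
production            # inventory file listing hosts and groups
group_vars/
    webservers.yml    # variables applied to the [webservers] group
host_vars/
    ipfire-smith.yml  # variables for one specific host
roles/
    common/
        tasks/main.yml
        templates/
site.yml              # top-level playbook tying hosts to roles
```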

This is a good design I feel. If a file describes variables, you don't
have to tag it as such or explicitly include it anywhere. Instead, files
inside a *vars/ directory contain variables, the system knows when to
use them based on the name of the file. It's really stunningly obvious
once you train your brain to stop thinking in terms of complexity :-)
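For example, a file named group_vars/webservers.yml (name illustrative)
is loaded automatically for every host in the [webservers] group, with
no include or tagging needed:

```yaml
# group_vars/webservers.yml -- applies to all hosts in [webservers]
ntp_server: ntp.example.org
apache_listen_port: 80
```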



--
Alan McKinnon
***@gmail.com
Stefan G. Weichinger
2015-01-09 09:25:00 UTC
Am 08.01.2015 um 19:29 schrieb Alan McKinnon:

> The directory layout in the best practice page is indeed way more than
> you need, it lists most of the directories in common use across a wide
> array of deployments. In reality you create just the directories you need.
>
> Global stuff goes in the top level (like inventory).
> Variables for groups and individual hosts go into suitably named files
> inside group_vars and host_vars.
> Roles have a definite structure, in practice you'll use tasks/ and
> templates/ a lot, everything else only when you need them.
>
> This is a good design I feel. If a file describes variables, you don't
> have to tag it as such or explicitly include it anywhere. Instead, files
> inside a *vars/ directory contain variables, the system knows when to
> use them based on the name of the file. It's really stunningly obvious
> once you train your brain to stop thinking in terms of complexity :-)

Thanks a lot ... I've spent some time with it already and am learning to like it ;)

I am nearly done with setting up an inventory file for all the customer
boxes I am responsible for. Just using the ad-hoc-commands is very
useful already!

For example I could store the output of the "setup" module for local
reference ... this gives me loads of basic information.

I know it is not a backup program but I think I could also use it to
rsync all the /etc directories to my ansible host? Or trigger a "git
push" on the remote machines to let them push their configs up to some
central git-repo I provide here (having /etc and the @world-file is
quite a good start here and then ... ).

It is also great to be able to check for, let's say, the shellshock
vulnerability by adding a playbook and running it against all/some
of the servers out there ... I am really starting to come up with lots
of ideas!

My current use case will be more of an inventory to track all the boxes
... deploying stuff out to them seems not so easy in my slightly
heterogeneous "zoo". But this can lead to a more standardized setup, sure.

One question:

As far as I can see, the hostname in the inventory does not have to be
unique? I have some firewalls out there without a proper FQDN, so there
are several "pfsense" lines in various groups (I now have groups in
there with the name of the [customer], and some of them have child groups
like [customer-sambas] ...).

I would like to be able to also access all the ipfires or sambas in
another group ... so I would have to list them again in that group
[ipfires] ?

Thanks for the great hint to ansible, looking great so far!
Stefan
Alan McKinnon
2015-01-09 17:38:33 UTC
On 09/01/2015 11:25, Stefan G. Weichinger wrote:
> Am 08.01.2015 um 19:29 schrieb Alan McKinnon:
>
>> The directory layout in the best practice page is indeed way more than
>> you need, it lists most of the directories in common use across a wide
>> array of deployments. In reality you create just the directories you need.
>>
>> Global stuff goes in the top level (like inventory).
>> Variables for groups and individual hosts go into suitably named files
>> inside group_vars and host_vars.
>> Roles have a definite structure, in practice you'll use tasks/ and
>> templates/ a lot, everything else only when you need them.
>>
>> This is a good design I feel. If a file describes variables, you don't
>> have to tag it as such or explicitly include it anywhere. Instead, files
>> inside a *vars/ directory contain variables, the system knows when to
>> use them based on the name of the file. It's really stunningly obvious
>> once you train your brain to stop thinking in terms of complexity :-)
>
> Thanks a lot ... I spent some time with it already and learn to like it ;)
>
> I am nearly done with setting up an inventory file for all the customer
> boxes I am responsible for. Just using the ad-hoc-commands is very
> useful already!
>
> For example I could store the output of the "setup" module for local
> reference ... this gives me loads of basic information.
>
> I know it is not a backup program but I think I could also use it to
> rsync all the /etc directories to my ansible host? Or trigger a "git
> push" on the remote machines to let them push their configs up to some
> central git-repo I provide here (having /etc and the @world-file is
> quite a good start here and then ... ).


I think that is a perfectly valid approach, I just think of it slightly
differently: I don't use it as a backup program, rather I think of it as
a way to safely run the same command on multiple hosts. Whether you need
to use git, trigger backups or add an arbitrary user doesn't matter;
they are valid commands and ansible gives you a framework to run them
safely on multiple hosts in parallel [1].

And when you find yourself running the same ad-hoc command quite often,
you can always fold it into a playbook proper
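An ad-hoc command of that kind might look like this (the group name and
fork count are examples):

```shell
# Run the same command safely on every host in the 'servers' group,
# 10 hosts in parallel
ansible servers -i hosts -m command -a "uptime" -f 10
```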

>
> It is also great to be able to check for let's say
> shellshock-vulnerability by adding a playbook and running it to all/some
> of the servers out there ... I am really starting to come up with lots
> of ideas!
>
> My current use case will be more of an inventory to track all the boxes
> ... deploying stuff out to them seems not so easy in my slightly
> heterogeneous "zoo". But this can lead to a more standardized setup, sure.

Indeed. It encourages a "cattle not pets" (google it) way of thinking.
So your hosts may all be different, and you may end up with 10% more
packages than you really need, but you do get a model you can keep in
your head and be much more productive. And bill more hours :-)

>
> One question:
>
> As far as I see the hostname in the inventory does not have to be
> unique? I have some firewalls out there without a proper FQDN, so there
> are several "pfsense" lines in various groups (I have now groups in
> there with the name of the [customer] and some of them have child groups
> like [customer-sambas] ...).

An inventory is just an .ini file, so duplicate entries don't really
matter. IIRC later dupes just overwrite earlier ones. Internally, the
inventory is treated as a bunch of key-value pairs and it's the key that
ansible uses to name hosts it works with. The values tell it how to
contact the host.

What I do is list all my hosts at the top level of the inventory in some
sensible order, and just list the names in groups below that (see the
example below). If you don't explicitly provide an FQDN for a host, then
ansible uses the name you gave it and tries to ssh to that name. The
name can be something that resolves in DNS, something in /etc/hosts, or
an IP address, just like using ssh (as it really is ssh doing the hard
work under the hood)


> I would like to be able to also access all the ipfires or sambas in
> another group ... so I would have to list them again in that group
> [ipfires] ?

Yes. If the playbook says to run the play against a group, then you list
each host in the group. You can also make groups of groups so it's quite
easy to come up with a scheme that suits your setup.

Here's a piece of my inventory to illustrate:


# List all workstations here, including the ansible_* variables
# Assign each host to the relevant groups below
aadil-wks ansible_ssh_host=192.168.1.84
brandon-wks ansible_ssh_host=192.168.1.100
carmen-wks ansible_ssh_host=192.168.1.146

# List all servers here, including the ansible_* variables
# Assign each host to the relevant groups below
ppm-db-0 ansible_ssh_host=192.168.0.16
ppm-mail-0 ansible_ssh_host=192.168.0.14
ppm-preprod-0 ansible_ssh_host=192.168.0.12
ppm-www-0 ansible_ssh_host=192.168.0.20

[accounts-workstations]
aadil-wks
carmen-wks

[support-workstations]
brandon-wks

[web-servers]
ppm-www-0

[mysql-servers]
ppm-db-0

[workstations:children]
accounts-workstations
support-workstations

[servers:children]
web-servers
mysql-servers






Basically, you call each host by any name that makes sense and group
them how you want. It's the ansible_ssh_host attribute that tells ssh how
to connect.














>
> Thanks for the great hint to ansible, looking great so far!
> Stefan



[1] I used to use clusterssh for this, but I'm gradually shifting my
headspace over to ansible ad-hoc commands. cssh is always impressive
(fire off 30 xterms across 3 HD monitors, all the newbies are terrified
and your reputation is intact...) but ansible does remove a lot of noise
from your vision.


--
Alan McKinnon
***@gmail.com
Tomas Mozes
2015-01-10 19:40:50 UTC
On 2015-01-09 10:25, Stefan G. Weichinger wrote:
> Am 08.01.2015 um 19:29 schrieb Alan McKinnon:
>
>> The directory layout in the best practice page is indeed way more than
>> you need, it lists most of the directories in common use across a wide
>> array of deployments. In reality you create just the directories you
>> need.
>>
>> Global stuff goes in the top level (like inventory).
>> Variables for groups and individual hosts go into suitably named files
>> inside group_vars and host_vars.
>> Roles have a definite structure, in practice you'll use tasks/ and
>> templates/ a lot, everything else only when you need them.
>>
>> This is a good design I feel. If a file describes variables, you don't
>> have to tag it as such or explicitly include it anywhere. Instead,
>> files
>> inside a *vars/ directory contain variables, the system knows when to
>> use them based on the name of the file. It's really stunningly obvious
>> once you train your brain to stop thinking in terms of complexity :-)
>
> Thanks a lot ... I spent some time with it already and learn to like it
> ;)
>
> I am nearly done with setting up an inventory file for all the customer
> boxes I am responsible for. Just using the ad-hoc-commands is very
> useful already!
>
> For example I could store the output of the "setup" module for local
> reference ... this gives me loads of basic information.
>
> I know it is not a backup program but I think I could also use it to
> rsync all the /etc directories to my ansible host? Or trigger a "git
> push" on the remote machines to let them push their configs up to some
> central git-repo I provide here (having /etc and the @world-file is
> quite a good start here and then ... ).
>
> It is also great to be able to check for let's say
> shellshock-vulnerability by adding a playbook and running it to
> all/some
> of the servers out there ... I am really starting to come up with lots
> of ideas!
>
> My current use case will be more of an inventory to track all the boxes
> ... deploying stuff out to them seems not so easy in my slightly
> heterogeneous "zoo". But this can lead to a more standardized setup,
> sure.
>
> One question:
>
> As far as I see the hostname in the inventory does not have to be
> unique? I have some firewalls out there without a proper FQDN, so there
> are several "pfsense" lines in various groups (I have now groups in
> there with the name of the [customer] and some of them have child
> groups
> like [customer-sambas] ...).
>
> I would like to be able to also access all the ipfires or sambas in
> another group ... so I would have to list them again in that group
> [ipfires] ?
>
> Thanks for the great hint to ansible, looking great so far!
> Stefan

Ansible is not a backup solution. You don't need to download your /etc
from the machines, because you deploy your /etc to the machines via ansible.

I was also thinking about putting /etc in git and then deploying it but:
- on updates, will you update all configurations in all /etc repos?
- do you really want to keep all the information in git, is it
necessary?

In contrast, you can define roles like apache, mysql, common-gentoo,
firewall etc., where you describe how to install that software and do
some basic configuration that is shared among most of your machines.
You can also define "default" values (like the bind address or the
listen port) and then override them in your machine group role (if the
role is used with multiple servers).
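As a sketch of that defaults-and-override pattern (role and variable
names invented):

```yaml
# roles/mysql/defaults/main.yml -- role-wide default values
mysql_bind_address: 127.0.0.1
mysql_listen_port: 3306
```

and then, for one group of machines, a group vars file overrides a
default:

```yaml
# group_vars/customer-dbs.yml
mysql_bind_address: 0.0.0.0
```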

If you have all the software pieces written down in roles and you use
their defaults, a server configuration becomes just a composition of
those roles (plus copying some configuration). That way an apache+php
application server with firewall and centralized logging comes down to
around 20-30 lines.
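A sketch of such a compound playbook (the role names are assumed to
exist in roles/):

```yaml
# site.yml -- an apache+php app server is just a stack of roles
- hosts: app-servers
  roles:
    - common-gentoo
    - firewall
    - apache
    - php
    - central-logging
```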

You don't need to use roles; you can put all this information in task
files and then include those files, but then you don't get the
encapsulation and the default values.

It really doesn't matter how your servers diverge; if you keep the
details split up in roles, you just cherry-pick the roles, overriding
their defaults and copying configuration. However, it is true that
ansible tries to keep your configuration identical among servers (for
example, not installing apache completely differently on two machines,
but using as many common pieces as possible).

Check out ansible galaxy and search for some roles (like apache, cron,
redis etc.). Regarding configuration, you can template the configuration
or just copy it to a server (like when installing postfix, for example),
but you need to keep the template up to date with the shipped
configuration (or ship your own configuration and diverge from
mainstream). An alternative is to just make some changes to the default
configuration (replace a line, add a line). Then you don't need to
update your templates; on updates you can copy over the new
configuration from ._cfgXXX and make the same changes as before, and
most probably it will work. A good trick is to use include directories
where possible (don't edit /etc/sudoers but copy your stuff to
/etc/sudoers.d).
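The "replace a line / drop in a fragment" approach maps to the
lineinfile and copy modules; a hedged sketch (file names and contents
invented):

```yaml
# Change one line in the shipped config instead of templating the whole file
- lineinfile: dest=/etc/conf.d/hostname regexp='^hostname=' line='hostname="box1"'

# Don't edit /etc/sudoers directly; drop a validated fragment into sudoers.d
- copy: src=admins.sudoers dest=/etc/sudoers.d/admins mode=0440 validate='visudo -cf %s'
```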
Alan McKinnon
2015-01-10 22:11:08 UTC
On 10/01/2015 21:40, Tomas Mozes wrote:


> Ansible is not a backup solution. You don't need to download your /etc
> from the machines because you deploy your /etc to machines via ansible.
>
> I was also thinking about putting /etc in git and then deploying it but:
> - on updates, will you update all configurations in all /etc repos?
> - do you really want to keep all the information in git, is it necessary?

The set of files in /etc managed by ansible is always a strict subset
of everything in /etc.

For that reason alone, it's a good idea to back up /etc anyway,
regardless of having a CM system in place. The smallest benefit is
knowing when things changed, by the CM system or otherwise.


--
Alan McKinnon
***@gmail.com
Tomas Mozes
2015-01-11 07:46:54 UTC
On 2015-01-10 23:11, Alan McKinnon wrote:
> On 10/01/2015 21:40, Tomas Mozes wrote:
>
>
>> Ansible is not a backup solution. You don't need to download your
>> /etc
>> from the machines because you deploy your /etc to machines via
>> ansible.
>>
>> I was also thinking about putting /etc in git and then deploying it
>> but:
>> - on updates, will you update all configurations in all /etc repos?
>> - do you really want to keep all the information in git, is it
>> necessary?
>
> The set of files in /etc managed by ansible is always a strict subset
> of everything in /etc
>
> For that reason alone, it's a good idea to back up /etc anyway,
> regardless of having a CM system in place. The smallest benefit is
> knowing when things changed, by the CM system or otherwise

For what reason?

And what does a workflow look like then? You commit changes to your git
repo of ansible. Then you deploy via ansible, check the /etc of each
machine, and commit a message that you changed something via ansible?
Alan McKinnon
2015-01-11 08:22:11 UTC
On 11/01/2015 09:46, Tomas Mozes wrote:
> On 2015-01-10 23:11, Alan McKinnon wrote:
>> On 10/01/2015 21:40, Tomas Mozes wrote:
>>
>>
>>> Ansible is not a backup solution. You don't need to download your /etc
>>> from the machines because you deploy your /etc to machines via ansible.
>>>
>>> I was also thinking about putting /etc in git and then deploying it but:
>>> - on updates, will you update all configurations in all /etc repos?
>>> - do you really want to keep all the information in git, is it
>>> necessary?
>>
>> The set of files in /etc managed by ansible is always a strict subset
>> of everything in /etc
>>
>> For that reason alone, it's a good idea to back up /etc anyway,
>> regardless of having a CM system in place. The smallest benefit is
>> knowing when things changed, by the CM system or otherwise
>
> For what reason?

For the simple reason that ansible is not the only system that can make
changes in /etc

> And what does a workflow look like then? You commit changes to your git
> repo of ansible. Then you deploy via ansible and check the /etc of each
> machine and commit a message that you changed something via ansible?


When you commit to the ansible repo, you are committing and tracking
changes to the *ansible* config. You are not tracking changes to /etc on
the actual destination host, that is a separate problem altogether and
not directly related to the fact that ansible logs in and does various
stuff.

You can make your workflow whatever makes sense to you.

The reason I'm recommending keeping all of /etc in its own repo is that
it's the simplest way to do it. /etc is a large mixture of
ansible-controlled files, sysadmin-controlled files, and other arbitrary
files installed by the package manager. It's also not very big, around
10M or so typically. So you *could* manually add to a repo every file
you change manually, but that is error-prone and easy to forget. Simpler
to just commit everything in /etc, which gives you an independent record
of all changes over time. Have you ever dealt with a compliance auditor?
An independent change record that is separate from the CM itself is a
feature that those fellows really like a lot.
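A minimal sketch of that independent record (shown here against a
scratch directory rather than the real /etc):

```shell
# Sketch: keep a config tree in its own git repo and commit everything in it.
# A scratch directory stands in for the real /etc here.
ETC=$(mktemp -d)
echo 'hostname="box1"' > "$ETC/conf.d.hostname"
git -C "$ETC" init -q
git -C "$ETC" add -A
git -C "$ETC" -c user.name=root -c user.email=root@localhost \
    commit -qm "snapshot: full config state"
git -C "$ETC" log --oneline    # independent record: one commit per change
```

On a real host you'd run the init/add/commit steps in /etc itself,
ideally from cron or an ansible task.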




--
Alan McKinnon
***@gmail.com
Rich Freeman
2015-01-11 12:25:20 UTC
On Sun, Jan 11, 2015 at 3:22 AM, Alan McKinnon <***@gmail.com> wrote:
> The reason I'm recommending keeping all of /etc in its own repo is that
> it's the simplest way to do it. /etc/ is a large mixture of
> ansible-controlled files, sysadmin-controlled files, and other arbitrary
> files installed by the package manager. It's also not very big, around
> 10M or so typically. So you *could* manually add to a repo every file
> you change manually, but that is error-prone and easy to forget. Simpler
> to just commit everything in /etc which gives you an independent record
> of all changes over time. Have you ever dealt with a compliance auditor?
> An independent change record that is separate from the CM itself is a
> feature that those fellows really like a lot.

If you're taking care of individual long-lived hosts this probably
isn't a bad idea.

If you just build a new host anytime you do updates and destroy the
old one then obviously a git repo in /etc won't get you far.

--
Rich
Stefan G. Weichinger
2015-01-11 15:12:43 UTC
Am 11.01.2015 um 13:25 schrieb Rich Freeman:
> On Sun, Jan 11, 2015 at 3:22 AM, Alan McKinnon <***@gmail.com> wrote:
>> The reason I'm recommending keeping all of /etc in its own repo is that
>> it's the simplest way to do it. /etc/ is a large mixture of
>> ansible-controlled files, sysadmin-controlled files, and other arbitrary
>> files installed by the package manager. It's also not very big, around
>> 10M or so typically. So you *could* manually add to a repo every file
>> you change manually, but that is error-prone and easy to forget. Simpler
>> to just commit everything in /etc which gives you an independent record
>> of all changes over time. Have you ever dealt with a compliance auditor?
>> An independent change record that is separate from the CM itself is a
>> feature that those fellows really like a lot.
>
> If you're taking care of individual long-lived hosts this probably
> isn't a bad idea.
>
> If you just build a new host anytime you do updates and destroy the
> old one then obviously a git repo in /etc won't get you far.

I have long-lived hosts out there and with rather individual setups and
a wide range of age (= deployed over many years).

So my first goal is kind of getting an overview:

* what boxes am I responsible for?

* getting some kind of meta-info into my local systems -> /etc, @world,
and maybe something like the facts provided by "facter" module (a nice
kind of profile ... with stuff like MAC addresses and other essential
info) ... [1]

and then, as I learn my steps, I can roll out some homogenization:

* my ssh-keys really *everywhere*
* standardize things for each customer's site (network setup, proxies)

etc etc

I am just cautious: rolling out standardized configs over dozens of
maybe different servers is a bit of a risk. But I think this will come
step by step ... new servers get the roles applied from the start, and
existing ones are maybe adapted to this when I do other update work.

And about keeping /etc in git:

So far I've made it a habit to do that on customer servers. Keeping track
of changes is a good thing and helpful. I still wonder how to centralize
this, as I would like to have these, let's call them "profiles", in my own
LAN as well. People tend to forget their backups etc ... I feel better
with a copy locally.

This leads to finding a structure of managing this.

The /etc-git-repos so far are local to the customer servers.
Sure, I can add remote repos and use ansible to push the content up there.

One remote repo per server machine? I want to run these remote repos on
one of my in-house servers ...

For now I wrote a small playbook that allows me to rsync /etc and the
world file from all the Gentoo boxes out there (and only /etc from
firewalls and other non-Gentoo machines).

As mentioned I don't have FQDNs for all hosts and this leads to the
problem that there are several lines like "ipfire" in several groups.

Rsyncing stuff into a path containing the hostname leads to conflicts:

- name: "sync /etc from remote host to inventory host"
  synchronize: |
      mode=pull
      src=/etc
      dest={{ local_storage_path }}/{{ inventory_hostname }}/etc
      delete=yes
      recursive=yes


So I assume I should just set up some kind of descriptive names like:

[smith]
ipfire_smith ....

[brown]
ipfire_brown ....

... and use these just as "labels" ?
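In the inventory that could look like this (addresses from the
documentation ranges, names invented):

```
[smith]
ipfire_smith ansible_ssh_host=203.0.113.10
samba_smith  ansible_ssh_host=203.0.113.11

[brown]
ipfire_brown ansible_ssh_host=198.51.100.7

# the same hosts listed again in a cross-cutting group
[ipfires]
ipfire_smith
ipfire_brown
```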

Another idea is to generate some kind of UUID for each host and use that?

----

I really like the ansible-approach so far.

Even when I might not yet run the full standardized approach as I have
to slowly get the existing hosts into this growing setup.

Stefan


[1] I haven't yet managed to store the output of the setup-module to
the inventory host. I could run "ansible -i hosts.yml -m setup all" but
I want a named txt-file per host in a separate subdir ...
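Perhaps the --tree option of the ad-hoc command is what I'm looking for;
untested on my side, and the facts/ path is just an example:

```shell
# Writes one JSON file per host, named after the inventory hostname
ansible -i hosts.yml -m setup --tree facts/ all
```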
Alan McKinnon
2015-01-11 16:36:46 UTC
On 11/01/2015 17:12, Stefan G. Weichinger wrote:

> And at keeping /etc in git:
>
> So far I made it a habit to do that on customer servers. Keeping track
> of changes is a good thing and helpful. I still wonder how to centralize
> this as I would like to have these, let's call them "profiles" in my own
> LAN as well. People tend to forget their backups etc ... I feel better
> with a copy locally.
>
> This leads to finding a structure of managing this.
>
> The /etc-git-repos so far are local to the customer servers.
> Sure, I can add remote repos and use ansible to push the content up there.
>
> One remote-repo per server-machine? I want to run these remote-repos on
> one of my inhouse-servers ...
>
> For now I wrote a small playbook that allows me to rsync /etc and
> world-file from all the Gentoo-boxes out there (and only /etc from
> firewalls and other non-gentoo-machines).
>
> As mentioned I don't have FQDNs for all hosts and this leads to the
> problem that there are several lines like "ipfire" in several groups.
>
> Rsyncing stuff into a path containing the hostname leads to conflicts:
>
> - name: "sync /etc from remote host to inventory host"
>   synchronize: |
>       mode=pull
>       src=/etc
>       dest={{ local_storage_path }}/{{ inventory_hostname }}/etc
>       delete=yes
>       recursive=yes
>
>
> So I assume I should just set up some kind of descriptive names like:
>
> [smith]
> ipfire_smith ....
>
> [brown]
> ipfire_brown ....
>
> ... and use these just as "labels" ?
>
> Another idea is to generate some kind of UUID for each host and use that?


The trick is to use a system that guarantees you a unique "label" or
identifier for each host.

Perhaps {{ customer_name }}/{{ hostname }} works?

This would fail if you have two customers with the same company name
(rare, but not impossible) or customers have machines with the same name
(silly, but possible). In that case, you'd probably have to go with
UUIDs or similar.


--
Alan McKinnon
***@gmail.com
Stefan G. Weichinger
2015-01-12 15:10:56 UTC
On 11.01.2015 17:36, Alan McKinnon wrote:

> The trick is to use a system that guarantees you a unique "label" or
> identifier for each host.
>
> Perhaps {{ customer_name }}/{{ hostname }} works?
>
> This would fail if you have two customers with the same company name
> (rare, but not impossible) or customers have machines with the same name
> (silly, but possible). In that case, you'd probably have to go with
> UUIDs or similar.

Where do I get

{{ customer_name }}

from?

Where do I define or set it?

An example?
Alan McKinnon
2015-01-12 16:46:17 UTC
On 12/01/2015 17:10, Stefan G. Weichinger wrote:
> On 11.01.2015 17:36, Alan McKinnon wrote:
>
>> The trick is to use a system that guarantees you a unique "label" or
>> identifier for each host.
>>
>> Perhaps {{ customer_name }}/{{ hostname }} works?
>>
>> This would fail if you have two customers with the same company name
>> (rare, but not impossible) or customers have machines with the same name
>> (silly, but possible). In that case, you'd probably have to go with
>> UUIDs or similar.
>
> Where do I get
>
> {{ customer_name }}
>
> from?
>
> Where to define or set?
>
> example?


You'd have to define it yourself in your plays somewhere

Several ways present themselves:

- Group customers together by customer name and use the group name.

- Define the customer directly in the inventory. Generally it isn't
recommended to define variables there, but I think this is one of the few
cases where it does make sense. Sort of like this:

acme_web_server ansible_ssh_host=1.2.3.4 customer=acme

{{ customer }} then is available for that host whenever the host is in scope
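With such a variable in place, a pull destination can be made unique per
customer and host, roughly like this (local_storage_path assumed to be
defined elsewhere):

```yaml
- name: "sync /etc from remote host to inventory host"
  synchronize: |
      mode=pull
      src=/etc
      dest={{ local_storage_path }}/{{ customer }}/{{ inventory_hostname }}/etc
      delete=yes
      recursive=yes
```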



One thing you'll find with ansible is that there's always a way to do
something, often more than one way (like perl). And all the ways often
make sense (unlike perl).


--
Alan McKinnon
***@gmail.com
Stefan G. Weichinger
2015-01-17 21:29:23 UTC
On 12.01.2015 17:46, Alan McKinnon wrote:

> You'd have to define it yourself in your plays somewhere
>
> Several ways present themselves:
>
> - Group customers together by customer name and use the group name.
>
> - Define the customer directly in the inventory. Generally it isn't
> recommended to define variables there, but I think this is one of the few
> cases where it does make sense. Sort of like this:
>
> acme_web_server ansible_ssh_host=1.2.3.4 customer=acme
>
> {{ customer }} then is available for that host whenever the host is in scope
>
>
>
> One thing you'll find with ansible is there's always a way to do
> something, often more than one way (like perl). And all ways often make
> sense (unlike perl)

sorry for the delay ... busy week

I still haven't wrapped my head around how to properly define my groups
and sets of hosts ... but I am on my way, thanks ;-)

"learning" ... when it is better to have a group or a when-clause (when
OS = Gentoo) ...

Reading/rsyncing all the configs in isn't first priority now, although
it is already nice to have, as a server stopped working this week.

OK, backups on tape etc ... but I like that basic "profile" with /etc
and @world as well.

-

I hesitate to mention it as everything with the term "systemd" in it
seems to trigger not-directly-helpful replies here ... but I see issues
with playbooks/tasks controlling services on hosts running systemd:

I have openrc and systemd installed/merged (installed with openrc, then
migrated to run systemd; openrc is still there, but not active ... I'm
not removing openrc just to keep it as a fallback) ... and when I try to
control services via the service module of ansible I don't always get
valid results. It seems that the module detects that openrc is installed
and doesn't check further (is openrc *active* as well?) ... so I get
misleading replies/states back.

IMO ansible should correctly detect the running PID 1 ... and it tries
to, as far as I understand the code of the service module.

For example I tried to write a task to ensure that ntpd is down/disabled
and chrony is installed/enabled/started ... no real success so far.
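For reference, the intended tasks would look roughly like this (module
usage is standard; whether the service module picks the active init
system on a dual openrc/systemd box is exactly the open question):

```yaml
- name: stop and disable ntpd
  service: name=ntpd state=stopped enabled=no

- name: install chrony
  portage: package=net-misc/chrony

- name: enable and start chrony
  service: name=chronyd state=started enabled=yes
```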

I will provide more info if needed ... saturday night now, so excuse me
stopping here ;-)

Stefan
Alan McKinnon
2015-01-18 08:25:36 UTC
On 17/01/2015 23:29, Stefan G. Weichinger wrote:
> On 12.01.2015 17:46, Alan McKinnon wrote:
>
>> You'd have to define it yourself in your plays somewhere
>>
>> Several ways present themselves:
>>
>> - Group customers together by customer name and use the group name.
>>
>> - Define the customer directly in the inventory. Generally it isn't
>> recommended to define variables there, but I think this is one of the few
>> cases where it does make sense. Sort of like this:
>>
>> acme_web_server ansible_ssh_host=1.2.3.4 customer=acme
>>
>> {{ customer }} then is available for that host whenever the host is in scope
>>
>>
>>
>> One thing you'll find with ansible is there's always a way to do
>> something, often more than one way (like perl). And all ways often make
>> sense (unlike perl)
>
> sorry for the delay ... busy week
>
> I still haven't wrapped my head around how to properly define my groups
> and sets of hosts.. but I am on my way, thanks ;-)
>
> "learning" ... when it is better to have a group or a when-clause (when
> OS = Gentoo) ...

My advice:

Start with groups. If you find you need to have lots of "when" clauses
to make the plays work across more than one distro, and the whens follow
the same format, then you might want to split them into groups.
Make for example a "gentoo-www" group and a "debian-www" group, and
create a super-group "www" that includes both.
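In inventory terms that split looks like this (host names invented):

```
[gentoo-www]
web1.example.com

[debian-www]
web2.example.com

[www:children]
gentoo-www
debian-www
```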

It's one of those questions you can only really answer once you've built
it for yourself and can see what works better in your environment


> Reading/rsyncing all the configs in isn't first priority now. Although
> it already is nice-to-have as a server stopped to work this week.
>
> OK, backups on tape etc ... but I like that basic "profile" with /etc
> and @world as well.
>
> -
>
> I hesitate to mention it as everything with the term "systemd" in it
> seems to trigger not-directly-helpful replies here ... but I see issues
> with playbooks/tasks controlling services on hosts running systemd:
>
> I have openrc and systemd installed/merged (installed with openrc, then
> migrated to run systemd and openrc still there, but not active ... not
> removing openrc just to keep it as some fallback) ... and when I try to
> control services via the service-module of ansible I don't always get
> valid results. It seems that the module detects openrc installed and
> doesn't check further (is openrc *active* as well?) ... so I get
> misleading replies/states in.
>
> IMO ansible should correctly detect the running PID1 .. and it tries to
> as far as I understand the code of the service-module.
>
> For example I tried to write a task to ensure that ntpd is down/disabled
> and chrony is installed/enabled/started ... no real success so far.

If ansible confuses an installed init system with the running init
system, then that's a bug in ansible and should be reported.

>
> I will provide more info if needed ... saturday night now, so excuse me
> stopping here ;-)

:-)




--
Alan McKinnon
***@gmail.com
Stefan G. Weichinger
2015-01-19 10:11:47 UTC
Permalink
On 18.01.2015 09:25, Alan McKinnon wrote:

> My advice:
>
> Start with groups. If you find you need to have lots of "when"
> clauses to make the plays work across more than one distro, and the
> whens follow the same format, then you might want to split them into
> groups. Make for example a "gentoo-www" group and a "debian-www"
> group, and create a super-group "www" that includes both.
>
> It's one of those questions you can only really answer once you've
> built it for yourself and can see what works better in your
> environment

Yes, thanks!

>> IMO ansible should correctly detect the running PID1 .. and it
>> tries to as far as I understand the code of the service-module.
>>
>> For example I tried to write a task to ensure that ntpd is
>> down/disabled and chrony is installed/enabled/started ... no real
>> success so far.
>
> If ansible confuses installed init systems with running init system,
> then that will be a bug in ansible and should be reported

When I read

/usr/lib64/python2.7/site-packages/ansible/modules/core/system/service.py

I understand that it should detect the enabled systemd at line 403ff,

but maybe it detects the wrong tool/binary to start/stop services when
both openrc and systemd are installed (442ff).
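The underlying idea (just a sketch of the detection step, not the actual
service-module code) is to look at what is actually running as PID 1
rather than at what is merely installed:

```python
def detect_init(comm_path="/proc/1/comm"):
    """Guess the *running* init system from PID 1's process name.

    Sketch only: on a box with both openrc and systemd merged, knowing
    which init systems are installed is not enough -- you have to check
    what is actually running as PID 1.
    """
    try:
        with open(comm_path) as f:
            comm = f.read().strip()
    except OSError:
        return "unknown"
    if comm == "systemd":
        return "systemd"
    if comm == "init":
        # Could be sysvinit or openrc's init; needs further probing.
        return "sysv-or-openrc"
    return comm
```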

See this:

# ansible -i inventories/oops_nodes.yml -l hiro.local -m service -a "name=chronyd state=started" all
hiro.local | FAILED >> {
"failed": true,
"msg": " * WARNING: chronyd is already starting\n"
}

That is roughly the same message as:

# /etc/init.d/chronyd start
* WARNING: chronyd is already starting

(the openrc-script answering)

# systemctl status chronyd
● chronyd.service - Chrony Network Time Service
Loaded: loaded (/usr/lib64/systemd/system/chronyd.service; enabled;
vendor preset: enabled)
Active: active (running) since Mo 2015-01-19 10:57:39 CET; 9min ago
Process: 761 ExecStart=/usr/sbin/chronyd (code=exited, status=0/SUCCESS)
Main PID: 764 (chronyd)
CGroup: /system.slice/chronyd.service
└─764 /usr/sbin/chronyd

But for another daemon:

# ansible -i inventories/oops_nodes.yml -l hiro.local -m service -a "name=systemd-networkd state=started" all
hiro.local | success >> {
"changed": false,
"name": "systemd-networkd",
"state": "started"
}


I might file the bug at b.g.o. ... going upstream seems a bit early ;-)

Stefan
Stefan G. Weichinger
2015-01-21 21:03:12 UTC
Permalink
Am 19.01.2015 um 11:11 schrieb Stefan G. Weichinger:

> I might file the bug at b.g.o. .. going upstream seems a bit early ;-)

posted to their Google group and got pointed to the latest devel branch
(the equivalent of **9999 in our Gentoo world).

Works now, even in "mixed mode" (both systemd and openrc installed).

nice!

Stefan
Stefan G. Weichinger
2015-01-29 08:43:19 UTC
Permalink
sorry ... still a bit OT but maybe interesting for others as well:


Yesterday I started to modify the following ansible role to fit my needs
and work with gentoo target hosts:

https://github.com/debops/ansible-dhcpd

I modified tasks/main.yml (use portage ... install iproute2 as well) and
edited defaults/main.yml to reflect the environment of site A at first.


my first testing playbook:

---
- hosts: site-A-dhcpd
  user: root
  roles:
    - ansible-dhcpd

Now I wonder how to use the same role for configuring site B.

defaults/main.yml currently contains the config (vars ... yes) for site
A ...

A copy of the role is way too redundant ...

What is the/a correct and elegant way to do that?

Have a defaults/site-B.conf.yml or something and include that in a 2nd
playbook?

Use some file in the vars/ directory ... ?


I am quite sure that this is just a beginner's problem ... but in these
days my brain is a bit exhausted by my current workload etc

Thanks for any hints, Stefan!
Tomas Mozes
2015-01-29 09:47:46 UTC
Permalink
On 2015-01-29 09:43, Stefan G. Weichinger wrote:
> sorry ... still a bit OT but maybe interesting for others as well:
>
>
> Yesterday I started to modify the following ansible role to fit my
> needs
> and work with gentoo target hosts:
>
> https://github.com/debops/ansible-dhcpd
>
> I modified tasks/main.yml (use portage ... install iproute2 as well)
> and
> edited defaults/main.yml to reflect the environment of site A at first.
>
>
> my first testing playbook:
>
> ---
> - hosts: site-A-dhcpd
>   user: root
>   roles:
>     - ansible-dhcpd
>
> Now I wonder how to use the same role for configuring site B.
>
> defaults/main.yml currently contains the config (vars ... yes) for site
> A ...
>
> A copy of the role is way too redundant ...
>
> What is the/a correct and elegant way to do that?
>
> Have a defaults/site-B.conf.yml or something and include that in a 2nd
> playbook?
>
> Use some file in the vars/ directory ... ?
>
>
> I am quite sure that this is just a beginner's problem ... but in these
> days my brain is a bit exhausted by my current workload etc
>
> Thanks for any hints, Stefan!

Have your IPs listed in hosts-production.

For each site create a file, like:

site_A.yml
- hosts: site_A
  roles:
    - ...

site_B.yml
- hosts: site_B
  roles:
    - ...

Then create site.yml where you include site_A.yml and site_B.yml.
Mostly you will not only include roles but also have something special
done on the server; so either you create a role corresponding to this
file (like a role site_A or site_B) in which you name the tasks, or you
put it directly in the site_A.yml or site_B.yml file. This is the stuff
unique to that server, like creating a specific user or a specific
directory with specific files...
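site.yml itself then stays tiny, something like this (a sketch; file
names as suggested above, using the play-level include syntax of that
Ansible era):

```yaml
# site.yml -- just pulls in the per-site playbooks
- include: site_A.yml
- include: site_B.yml
```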

Then if you want to reconfigure all, just
ansible-playbook -i hosts-production site.yml

Only site_A:
ansible-playbook -i hosts-production site_A.yml

Only configure postfix on site_B:
ansible-playbook -i hosts-production site_B.yml --tags postfix -v

Read:
http://docs.ansible.com/playbooks_roles.html
http://docs.ansible.com/playbooks_best_practices.html
Stefan G. Weichinger
2015-01-29 10:14:04 UTC
Permalink
On 29.01.2015 10:47, Tomas Mozes wrote:

> Have your IPs listed in hosts-production.
>
> For each site create a file, like:
>
> site_A.yml
> - hosts: site_A
>   roles:
>     - ...
>
> site_B.yml
> - hosts: site_B
>   roles:
>     - ...
>
> Then create site.yml where you include site_A.yml and site_B.yml.
> Mostly, you will not only use roles inclusion, but have something
> special done on the server, so either you create a role corresponding to
> this file (like role site_A, site_B) where you name the tasks or you put
> it directly in the site_A.yml, site_B.yml file. This is the stuff unique
> to the server, like creating a specific user, specific directory, with
> specific files...
>
> Then if you want to reconfigure all, just
> ansible-playbook -i hosts-production site.yml
>
> Only site_A:
> ansible-playbook -i hosts-production site_A.yml
>
> Only configure postfix on site_B:
> ansible-playbook -i hosts-production site_B.yml --tags postfix -v
>
> Read:
> http://docs.ansible.com/playbooks_roles.html
> http://docs.ansible.com/playbooks_best_practices.html
>

Thanks, Tomas ... yes .... and no ... ;-)

I wonder if I could also:

cp defaults/main.yml to group_vars/site_[AB].yml ...

adjust the configs to the sites and then use something like:

# playbook 1

- hosts: site_A
  roles:
    - dhcpd

# playbook 2

- hosts: site_B
  roles:
    - dhcpd

... would the group_vars override the vars defined in defaults/main.yml?

I *think* so ... I will try that ...

Stefan
hydra
2015-01-29 10:31:10 UTC
Permalink
I haven't migrated to group_vars yet, so try and let us know ;)

On Thu, Jan 29, 2015 at 11:14 AM, Stefan G. Weichinger <***@xunil.at>
wrote:

> On 29.01.2015 10:47, Tomas Mozes wrote:
>
> > Have your IPs listed in hosts-production.
> >
> > For each site create a file, like:
> >
> > site_A.yml
> > - hosts: site_A
> >   roles:
> >     - ...
> >
> > site_B.yml
> > - hosts: site_B
> >   roles:
> >     - ...
> >
> > Then create site.yml where you include site_A.yml and site_B.yml.
> > Mostly, you will not only use roles inclusion, but have something
> > special done on the server, so either you create a role corresponding to
> > this file (like role site_A, site_B) where you name the tasks or you put
> > it directly in the site_A.yml, site_B.yml file. This is the stuff unique
> > to the server, like creating a specific user, specific directory, with
> > specific files...
> >
> > Then if you want to reconfigure all, just
> > ansible-playbook -i hosts-production site.yml
> >
> > Only site_A:
> > ansible-playbook -i hosts-production site_A.yml
> >
> > Only configure postfix on site_B:
> > ansible-playbook -i hosts-production site_B.yml --tags postfix -v
> >
> > Read:
> > http://docs.ansible.com/playbooks_roles.html
> > http://docs.ansible.com/playbooks_best_practices.html
> >
>
> Thanks, Tomas ... yes .... and no ... ;-)
>
> I wonder if I could also:
>
> cp defaults/main.yml to group_vars/site_[AB].yml ...
>
> adjust the configs to the sites and then use something like:
>
> # playbook 1
>
> - hosts: site_A
>   roles:
>     - dhcpd
>
> # playbook 2
>
> - hosts: site_B
>   roles:
>     - dhcpd
>
> .... would the group_vars override the vars defined in defaults/main.yml ?
>
> I *think* so ... I will try that ...
>
> Stefan
>
>
Stefan G. Weichinger
2015-01-30 16:01:47 UTC
Permalink
On 29.01.2015 11:31, hydra wrote:
> I haven't migrated to group_vars yet, so try and let us know ;)

It took me a bit of fiddling but I think I figured it out.

I had to get the directory structure correct ... now I have

/etc/ansible/inventories/group_vars/

with files like siteA, siteB, siteC ... containing the specific variables.

At first I always had /etc/ansible/group_vars ... and that didn't work
at all!

Now I am able to have such a small playbook for the whole dhcp-config of
one site:

---
- hosts: siteA
  user: root
  roles:
    - dhcpd

and this pulls the group_vars from

/etc/ansible/inventories/group_vars/siteA

and applies them to the dhcpd role, overriding
/etc/ansible/roles/dhcpd/defaults/main.yml ... which was my original
goal!
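As a tree (paths as described above; the dhcpd role internals are
assumed to follow the usual role layout):

```
/etc/ansible/
├── inventories/
│   ├── oops_nodes.yml
│   └── group_vars/
│       ├── siteA
│       ├── siteB
│       └── siteC
└── roles/
    └── dhcpd/
        ├── defaults/
        │   └── main.yml
        ├── tasks/
        │   └── main.yml
        └── templates/
```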

nice!

Stefan
hydra
2015-01-30 18:32:03 UTC
Permalink
On Fri, Jan 30, 2015 at 5:01 PM, Stefan G. Weichinger <***@xunil.at>
wrote:

> On 29.01.2015 11:31, hydra wrote:
> > I haven't migrated to group_vars yet, so try and let us know ;)
>
> It took me a bit of fiddling but I think I figured it out.
>
> I had to get the directory structure correct ... now I have
>
> /etc/ansible/inventories/group_vars/
>
> with files like siteA, siteB, siteC ... containing the specific variables.
>
> At first I always had /etc/ansible/group_vars ... and that didn't work
> at all!
>
> Now I am able to have such a small playbook for the whole dhcp-config of
> one site:
>
> ---
> - hosts: siteA
>   user: root
>   roles:
>     - dhcpd
>
> and this pulls the group_vars from
>
> /etc/ansible/inventories/group_vars/siteA
>
> and applies it to the dhcpd-role
> and overrides /etc/ansible/roles/dhcpd/defaults/main.yml ... which was
> my original goal!
>
> nice!
>
> Stefan
>
>
>
By the way, you don't need to have it in /etc/ansible, feel free to have it
anywhere.
Stefan G. Weichinger
2015-01-30 21:18:08 UTC
Permalink
Am 30.01.2015 um 19:32 schrieb hydra:

> By the way, you don't need to have it in /etc/ansible, feel free to have it
> anywhere.

Thanks for the reminder ... I know already ;)
Alan McKinnon
2015-01-11 16:23:41 UTC
Permalink
On 11/01/2015 14:25, Rich Freeman wrote:
> On Sun, Jan 11, 2015 at 3:22 AM, Alan McKinnon <***@gmail.com> wrote:
>> The reason I'm recommending to keep all of /etc in it's own repo is that
>> it's the simplest way to do it. /etc/ is a large mixture of
>> ansible-controlled files, sysadmin-controlled files, and other arbitrary
>> files installed by the package manager. It's also not very big, around
>> 10M or so typically. So you *could* manually add to a repo every file
>> you change manually, but that is error-prone and easy to forget. Simpler
>> to just commit everything in /etc which gives you an independent record
>> of all changes over time. Have you ever dealt with a compliance auditor?
>> An independent change record that is separate from the CM itself is a
>> feature that those fellows really like a lot.
>
> If you're taking care of individual long-lived hosts this probably
> isn't a bad idea.

Yes, this is what I do.

I do have cattle, not pets. But my cattle are long-production dairy
cows, not beef steers for slaughter. And I have a stud bull or two :-)

> If you just build a new host anytime you do updates and destroy the
> old one then obviously a git repo in /etc won't get you far.


--
Alan McKinnon
***@gmail.com
Tomas Mozes
2015-01-11 17:41:21 UTC
Permalink
On 2015-01-11 09:22, Alan McKinnon wrote:
> On 11/01/2015 09:46, Tomas Mozes wrote:
>> On 2015-01-10 23:11, Alan McKinnon wrote:
>>> On 10/01/2015 21:40, Tomas Mozes wrote:
>>>
>>>
>>>> Ansible is a not a backup solution. You don't need to download your
>>>> /etc
>>>> from the machines because you deploy your /etc to machines via
>>>> ansible.
>>>>
>>>> I was also thinking about putting /etc in git and then deploying it
>>>> but:
>>>> - on updates, will you update all configurations in all /etc repos?
>>>> - do you really want to keep all the information in git, is it
>>>> necessary?
>>>
>>> The set of files in /etc/ managed by ansible is always a strict
>>> subset of everything in /etc
>>>
>>> For that reason alone, it's a good idea to back up /etc anyway,
>>> regardless of having a CM system in place. The smallest benefit is
>>> knowing when things changed, by the CM system or otherwise
>>
>> For what reason?
>
> For the simple reason that ansible is not the only system that can make
> changes in /etc
>
>> And how does a workflow look like then? You commit changes to your git
>> repo of ansible. Then you deploy via ansible and check the /etc of
>> each
>> machine and commit a message that you changed something via ansible?
>
>
> When you commit to the ansible repo, you are committing and tracking
> changes to the *ansible* config. You are not tracking changes to /etc
> on the actual destination host; that is a separate problem altogether
> and not directly related to the fact that ansible logs in and does
> various stuff.
>
> You can make your workflow whatever makes sense to you.
>
> The reason I'm recommending to keep all of /etc in its own repo is
> that it's the simplest way to do it. /etc/ is a large mixture of
> ansible-controlled files, sysadmin-controlled files, and other
> arbitrary files installed by the package manager. It's also not very
> big, around 10M or so typically. So you *could* manually add to a repo
> every file you change manually, but that is error-prone and easy to
> forget. Simpler to just commit everything in /etc, which gives you an
> independent record of all changes over time. Have you ever dealt with
> a compliance auditor? An independent change record that is separate
> from the CM itself is a feature that those fellows really like a lot.

Out of curiosity, "ansible-controlled files, sysadmin-controlled files"
means that something is managed via ansible and something is done
manually?

And then, /etc is not the only directory with changing files, what about
other directories?

Regarding the workflow with /etc in git vs ansible in git I was asking
about your concrete workflow so we can learn from it and maybe apply
some good practices on our servers as well.
Alan McKinnon
2015-01-11 21:06:03 UTC
Permalink
On 11/01/2015 19:41, Tomas Mozes wrote:
> On 2015-01-11 09:22, Alan McKinnon wrote:
>> On 11/01/2015 09:46, Tomas Mozes wrote:
>>> On 2015-01-10 23:11, Alan McKinnon wrote:
>>>> On 10/01/2015 21:40, Tomas Mozes wrote:
>>>>
>>>>
>>>>> Ansible is a not a backup solution. You don't need to download your
>>>>> /etc
>>>>> from the machines because you deploy your /etc to machines via
>>>>> ansible.
>>>>>
>>>>> I was also thinking about putting /etc in git and then deploying it
>>>>> but:
>>>>> - on updates, will you update all configurations in all /etc repos?
>>>>> - do you really want to keep all the information in git, is it
>>>>> necessary?
>>>>
>>>> The set of files in /etc/ managed by ansible is always a strict subset
>>>> of everything in /etc
>>>>
>>>> For that reason alone, it's a good idea to back up /etc anyway,
>>>> regardless of having a CM system in place. The smallest benefit is
>>>> knowing when things changed, by the CM system or otherwise
>>>
>>> For what reason?
>>
>> For the simple reason that ansible is not the only system that can make
>> changes in /etc
>>
>>> And how does a workflow look like then? You commit changes to your git
>>> repo of ansible. Then you deploy via ansible and check the /etc of each
>>> machine and commit a message that you changed something via ansible?
>>
>>
>> When you commit to the ansible repo, you are committing and tracking
>> changes to the *ansible* config. You are not tracking changes to /etc on
>> the actual destination host, that is a separate problem altogether and
>> not directly related to the fact that ansible logs in and does various
>> stuff.
>>
>> You can make your workflow whatever makes sense to you.
>>
>> The reason I'm recommending to keep all of /etc in its own repo is that
>> it's the simplest way to do it. /etc/ is a large mixture of
>> ansible-controlled files, sysadmin-controlled files, and other arbitrary
>> files installed by the package manager. It's also not very big, around
>> 10M or so typically. So you *could* manually add to a repo every file
>> you change manually, but that is error-prone and easy to forget. Simpler
>> to just commit everything in /etc which gives you an independent record
>> of all changes over time. Have you ever dealt with a compliance auditor?
>> An independent change record that is separate from the CM itself is a
>> feature that those fellows really like a lot.
>
> Out of curiosity, "ansible-controlled files, sysadmin-controlled files"
> means that something is managed via ansible and something is done manually?


Yes


> And then, /etc is not the only directory with changing files, what about
> other directories?

Do with them whatever you want, just like /etc

/etc is the canonical example of something you might want to track in
git, as a) it changes and b) it's hard to recreate.

Maybe you have other directories and locations you feel the same about,
so if you think they need tracking in git by all means go ahead and
track them. It's your choice after all; you can do with your servers
whatever you wish.



> Regarding the workflow with /etc in git vs ansible in git I was asking
> about your concrete workflow so we can learn from it and maybe apply
> some good practices on our servers as well.


There isn't any workflow.

Ansible does its thing and sometimes changes stuff.
Changes get committed to a repo however and whenever works best for you.
Maybe it's a regular cron job, maybe it's something you remember to do
every time you quit vi, maybe it's an ansible handler that runs at the
end of every play.
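The cron variant could be as small as this sketch (not my actual setup;
it assumes the directory is already a git repository):

```shell
#!/bin/sh
# Sketch of a cron-driven auto-commit for a config directory like /etc.
# Assumes the directory has already been "git init"-ed.
autocommit_dir() {
    dir="$1"
    cd "$dir" || return 1
    git add -A
    # Commit only when the index actually changed, so cron stays quiet.
    git diff --cached --quiet || \
        git commit -q -m "auto-commit $(date -u +%Y-%m-%dT%H:%M:%SZ)"
}
```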

It will be almost impossible to give advice to someone else on this.


--
Alan McKinnon
***@gmail.com
Tomas Mozes
2015-01-12 07:46:31 UTC
Permalink
On 2015-01-11 22:06, Alan McKinnon wrote:
>> Out of curiosity, "ansible-controlled files, sysadmin-controlled
>> files"
>> means that something is managed via ansible and something is done
>> manually?
>
>
> Yes

Then it's clear why /etc is in git. Ideally one would not make manual
changes to systems managed via ansible.
Stefan G. Weichinger
2015-01-12 11:02:12 UTC
Permalink
On 12.01.2015 08:46, Tomas Mozes wrote:
> On 2015-01-11 22:06, Alan McKinnon wrote:
>>> Out of curiosity, "ansible-controlled files, sysadmin-controlled files"
>>> means that something is managed via ansible and something is done
>>> manually?
>>
>>
>> Yes
>
> Then it's clear why /etc is in git. Ideally one would not make manual
> changes to systems managed via ansible.

I think that it is a clear advantage that it is *possible* to change
files in /etc (or other places) either manually or via rules/playbooks
from ansible. Fits my "workflow" better.

I will take the synced /etc-s as templates to define some roles and
start applying them to test-machines step by step ... this way the
existing configs get migrated to rules/roles/playbooks slowly.

Stefan