Posts Tagged Under: Puppet

Puppet – Identifying dead Puppet code using puppet-ghostbuster

This is a how-to guide on using the puppet-ghostbuster gem to find dead Puppet code:

First, get the following working so that you can access the PuppetDB dashboard GUI:

Install rvm:

$ gpg --keyserver hkp:// --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3 7D2BAF1CF37B13E2069D6956105BD0E739499BDB
$ \curl -sSL | bash -s stable --ruby

Check that the rvm install was successful:

schowdhury@Shers-MacBook-Pro:~$ rvm --version
rvm 1.29.3 (latest) by Michal Papis, Piotr Kuczynski, Wayne E. Seguin []
schowdhury@Shers-MacBook-Pro:~$ rvm list

rvm rubies

=* ruby-2.4.1 [ x86_64 ]

# => - current
# =* - current && default
# * - default

schowdhury@Shers-MacBook-Pro:~$ ruby --version
ruby 2.4.1p111 (2017-03-22 revision 58053) [x86_64-darwin16]

Next install the following gems:

gem install puppet-ghostbuster
gem install r10k

Then clone your repo onto your laptop, switch to the appropriate branch, and from inside the repo run:

$ r10k puppetfile install --verbose

Then, in a separate bash terminal tab, run:

$ ssh -L 8080:localhost:8080 {puppetmaster-fqdn}

Now test this SSH tunnel by opening up a web browser and navigating to:


Puppet – External Facts

External facts are a great way to attach arbitrary metadata to a machine during its launch. E.g. when building a CentOS 7 AWS EC2 instance, you can generate the external facts via userdata.

Puppet can use these external facts in the same way as any other fact. You can create external facts by simply creating a file. This file can be:

  • an executable shell script
  • a static YAML file

This file needs to be created inside either of the following two folders on a CentOS machine:

  • /opt/puppetlabs/facter/facts.d/
  • /etc/facter/facts.d

Note: you might need to create the above folders if they don’t already exist.

For example, let’s say I want to create 3 external facts; the ‘key-name=key-value’ pairs are:


First off, let’s confirm that these facts don’t already exist:

[root@puppetagent1 ~]# facter -p role

[root@puppetagent1 ~]# facter -p pipeline_color
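Since neither fact resolves yet, both can be seeded with a single YAML file. Here is a minimal sketch; the values are made up for illustration, and a temp dir stands in for /etc/facter/facts.d/ so the sketch is self-contained:

```shell
# Hypothetical external facts file; the key names match the facts
# queried above, the values are stubs for illustration.
factsdir=$(mktemp -d)   # stands in for /etc/facter/facts.d/
cat > "$factsdir/ec2_tags.yaml" <<'EOF'
role: webserver
pipeline_color: blue
EOF

# Once this file sits in /etc/facter/facts.d/, `facter -p role`
# would resolve the fact. Here we just show the file contents:
cat "$factsdir/ec2_tags.yaml"
```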


Puppet – Identifying redundant Puppet code using PuppetDB API

You can query the PuppetDB API to identify which Puppet environments and classes are no longer being used. This is handy for housekeeping purposes.

The commands covered in this tutorial need to be run as the root user on the puppetmaster.

Note: I am using version 4 of the PuppetDB API, so you might need to tweak these commands in order for them to work on other versions.

Also note, there is a really cool rubygem for identifying dead Puppet code, puppet-ghostbuster (covered above):

Identifying obsolete control repo environments

The following command lists all the control repo environments that have one or more puppet agents attached to them:

$ curl -s -X GET http://localhost:8080/pdb/query/v4/nodes --data-urlencode 'pretty=true' | grep 'report_environment' | sort | uniq | awk '{print $NF}' | cut -d'"' -f2
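To spot the obsolete environments, the list above can be diffed against the full set of control-repo environment branches. A minimal sketch with stubbed-in environment names (on a real setup the first list would come from `git branch -r` inside the control repo, and the second from the PuppetDB query above, both sorted):

```shell
workdir=$(mktemp -d)
# all_envs.txt: every environment branch in the control repo (stub values)
printf 'development\nfeature_old\nproduction\n' > "$workdir/all_envs.txt"
# in_use_envs.txt: sorted output of the PuppetDB query above (stub values)
printf 'development\nproduction\n' > "$workdir/in_use_envs.txt"

# comm -23 prints lines unique to the first file: candidate obsolete envs
comm -23 "$workdir/all_envs.txt" "$workdir/in_use_envs.txt"
```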


Puppet – Performance tuning

The latest version of PE, 2016.4, ships with Puppet Server 2.6, which includes the ability to monitor heap memory. See this guide:

Thundering herd test:

After you’ve added hundreds of nodes to your deployment you may notice that your agents are running slow or timing out. When hundreds of nodes check in simultaneously to request a catalog, it might cause a so-called thundering herd of processes that causes CPU and memory performance to suffer. To verify that you have a thundering herd condition, you can run a query on the PuppetDB node (the master in a monolithic installation) to show how many nodes check in per minute.

Log into the PuppetDB node (the master in a monolithic installation) as the pe-postgres user.

Open the PostgreSQL command line interface by running: sudo su - pe-postgres -s
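The query itself isn't shown above, so here is a sketch of one that buckets report check-ins per minute; the `reports` table and `end_time` column are assumptions about PuppetDB's schema, so adjust for your version. The snippet only prints the SQL, ready to be piped into psql:

```shell
# Assumed-schema sketch: count catalog reports per minute over the
# last hour, to reveal check-in spikes (a thundering herd).
CHECKINS_SQL="
SELECT date_trunc('minute', end_time) AS minute, count(*) AS checkins
FROM reports
WHERE end_time > now() - interval '1 hour'
GROUP BY 1
ORDER BY 1;"

# From the pe-postgres shell you would run something like:
#   echo "$CHECKINS_SQL" | psql
echo "$CHECKINS_SQL"
```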

Puppet – Using AWS web console as Puppet’s external node classifier (ENC)

This is a script I wrote that queries a node’s EC2 tags via the AWS CLI, in order to figure out which environment the node belongs to and which class to assign to it.



#!/bin/bash
# ENC script: Puppet invokes this with the node's certname as $1.
export AWS_ACCESS_KEY_ID=xxxxxxxxxxxxxxxxxxxxxx
export AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxxxx
export AWS_DEFAULT_REGION=xxxxxxx

# $1 will match the certname; take the instance id from its last '_' field,
# e.g. a (hypothetical) certname 'webserver_i-0123abcd' -> 'i-0123abcd'
instanceid=$(echo "$1" | awk -F"_" '{print $NF}')

# Look up the instance's 'env' and 'role' EC2 tags
env=$(/bin/aws --output text ec2 describe-instances --instance-ids "$instanceid" | grep '^TAGS' | grep 'env' | awk '{print $NF}')
role=$(/bin/aws --output text ec2 describe-instances --instance-ids "$instanceid" | grep '^TAGS' | grep 'role' | awk '{print $NF}')

#aws ec2 describe-instances --instance-ids $instanceid > /tmp/enc-log.txt
#echo "puppet run occured at `date`" > /tmp/enc-log.txt
#echo "The first param value is: $1" >> /tmp/enc-log.txt
#echo "The environment is: $env" >> /tmp/enc-log.txt
#echo "The puppet role is: $role" >> /tmp/enc-log.txt

echo '---'
echo 'classes:'
echo "   - roles::$role"
echo "environment: $env"   
echo "parameters:"
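To see what the node terminus actually receives, here is the script's output-generating part wrapped in a function and run with stub tag values (webserver/production are hypothetical; real values come from the EC2 tags):

```shell
# Reproduces the ENC script's YAML output with stubbed tag lookups.
emit_enc_yaml() {
  role=$1
  env=$2
  echo '---'
  echo 'classes:'
  echo "   - roles::$role"
  echo "environment: $env"
  echo "parameters:"
}

emit_enc_yaml webserver production
```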

Puppet – querying puppetdb with postgres command line (psql)

Exported Resources

To query PuppetDB with the Postgres command line (psql), run:

psql -h localhost -U puppetdb puppetdb

However, to connect to Puppet Enterprise’s PuppetDB, follow:


\c puppetdb
SELECT * FROM certnames;
SELECT * FROM facts;
SELECT * FROM reports;
SELECT * FROM catalog_resources;
SELECT * FROM environments;

Puppet – The Puppet Narrative (learning technique)

The fastest way to learn Puppet is to get a full picture of the Puppet infrastructure. The best way to do this is to have a story (aka narrative) to follow along, one that starts from the Puppet basics and works up to the advanced stuff. When you learn something new about Puppet you can then link it back to the narrative. This will help you understand the bigger picture of using Puppet.

Learning approach checklist

  1. Run a puppet resource command, with the resource definition given on the command line itself. Run this on all demo machines. E.g.:
    [root@puppetagent1 ~]# puppet resource file /tmp/testfile.txt ensure=present content="hello world"
    Notice: /File[/tmp/testfile.txt]/ensure: created
    file { '/tmp/testfile.txt':
      ensure  => 'file',
      content => '{md5}5eb63bbbe01eeed093cb22bb8f5acdc3',
    }
    [root@puppetagent1 ~]# cat /tmp/testfile.txt
    hello world
    [root@puppetagent1 ~]#

    This demonstrates how Puppet standardises the way changes are declared and applied.

Tip: avoid re-downloading fixtures after each rspec run

Rake’s spec task is comprised of the following:

spec = spec_prep + spec_standalone + spec_clean

So we need to avoid running spec_clean. To do this simply run:

$ bundle exec rake spec_prep

After that, always run:

$ bundle exec rake spec_standalone

instead of the “spec” task.