Posts Tagged Under: Puppetlabs

Puppet – Identifying dead puppet code using puppet ghostbuster

This is a how-to guide on using the puppet-ghostbuster gem to identify dead puppet code.

First, get the following set up so that you can access the PuppetDB GUI dashboard:

Install rvm:

$ gpg --keyserver hkp:// --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3 7D2BAF1CF37B13E2069D6956105BD0E739499BDB
$ \curl -sSL | bash -s stable --ruby

Check rvm install is successful:

schowdhury@Shers-MacBook-Pro:~$ rvm --version
rvm 1.29.3 (latest) by Michal Papis, Piotr Kuczynski, Wayne E. Seguin []
schowdhury@Shers-MacBook-Pro:~$ rvm list

rvm rubies

=* ruby-2.4.1 [ x86_64 ]

# => - current
# =* - current && default
# * - default

schowdhury@Shers-MacBook-Pro:~$ ruby --version
ruby 2.4.1p111 (2017-03-22 revision 58053) [x86_64-darwin16]

Next install the following gems:

gem install puppet-ghostbuster
gem install r10k

Then clone your repo onto your laptop, and while in this repo, switch to the appropriate branch, then run:

$ r10k puppetfile install --verbose

then in a separate bash terminal tab, run:

$ ssh -L 8080:localhost:8080 {puppetmaster-fqdn}

Now test this SSH tunnel by opening up a web browser and going to:


Puppet – External Facts

External facts are a great way to attach arbitrary metadata to a machine during its launch. E.g. when building a CentOS 7 AWS EC2 instance, you can generate the external facts via userdata.

Puppet can use these external facts in the same way as any other fact. You create an external fact by simply creating a file. This file can be a:

  • shell script
  • yaml file

This file needs to be created inside any of the following 2 folders on a CentOS machine:

  • /opt/puppetlabs/facter/facts.d/
  • /etc/facter/facts.d

Note: you might need to create the above folders if they don’t already exist.

For example, let’s say I want to create 3 external facts, whose ‘key-name=key-value’ pairs are:


First off, let’s confirm that these facts don’t already exist:

[root@puppetagent1 ~]# facter -p role

[root@puppetagent1 ~]# facter -p pipeline_color
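The original fact values aren’t preserved above, but as a sketch, a YAML external facts file defining the two facts just queried (the values here are invented placeholders) could look like:

```yaml
# /etc/facter/facts.d/external_facts.yaml
# (the filename is arbitrary; the .yaml extension is what matters)
role: webserver
pipeline_color: green
```

After dropping this file into one of the facts.d folders listed above, re-running facter -p role should now resolve the fact.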


Puppet – Identifying redundant Puppet code using PuppetDB API

You can query the PuppetDB API to identify which puppet environments and classes are no longer being used. This is handy for housekeeping purposes.

The commands covered in this tutorial need to be run as the root user on the puppetmaster.

Note: I am using version 4 of the puppetdb api. So you might need to tweak these commands in order for them to work on other versions.

Also note, there is a really cool rubygem for identifying dead puppet code, puppet-ghostbuster, which is covered above.

Identifying obsolete control repo environments

The following command lists all the control repo environments that have one or more puppet agents attached to them:

$ curl -s -X GET http://localhost:8080/pdb/query/v4/nodes --data-urlencode 'pretty=true' | grep 'report_environment' | sort | uniq | awk '{print $NF}' | cut -d'"' -f2
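To see what the grep/awk/cut portion of that pipeline extracts, here is a small demo against a hand-written sample of the JSON the nodes endpoint returns (the sample layout is an assumption, trimmed down to just the relevant field):

```shell
# Hypothetical sample of the JSON from /pdb/query/v4/nodes,
# reduced to the report_environment lines the pipeline keys on
cat > /tmp/nodes_sample.json <<'EOF'
    "report_environment" : "production",
    "report_environment" : "production",
    "report_environment" : "feature_x",
EOF

# Same pipeline as above: dedupe the lines, take the last field, strip quotes
grep 'report_environment' /tmp/nodes_sample.json | sort | uniq | awk '{print $NF}' | cut -d'"' -f2
```

The two identical "production" lines collapse to one, leaving one line per environment still in use.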


Puppet – querying puppetdb with postgres command line (psql)

Exported Resources


psql -h localhost -U puppetdb puppetdb

However, to connect to Puppet Enterprise’s puppetdb, follow:


\c puppetdb
SELECT * FROM certnames;
SELECT * FROM facts;
SELECT * FROM reports;
SELECT * FROM catalog_resources;
SELECT * FROM environments;

Puppet – The Puppet Narrative (learning technique)

The fastest way to learn Puppet is to get a full picture of the puppet infrastructure. The best way to do this is to have a story (aka narrative) to follow along, one that starts from the puppet basics and goes through to the advanced stuff. When you learn something new about puppet you can then link it back to the narrative. This will help you understand the bigger picture of using Puppet.

Learning approach checklist

  1. Run a Puppet resource command, with a resource definition itself on the command line. Run this on all demo machines. E.g.:
    [root@puppetagent1 ~]# puppet resource file /tmp/testfile.txt ensure=present content="hello world"
    Notice: /File[/tmp/testfile.txt]/ensure: created
    file { '/tmp/testfile.txt':
      ensure  => 'file',
      content => '{md5}5eb63bbbe01eeed093cb22bb8f5acdc3',
    }
    [root@puppetagent1 ~]# cat /tmp/testfile.txt
    hello world
    [root@puppetagent1 ~]#

    This demonstrates how Puppet standardises how changes are made: you declare the desired end state, and Puppet converges the machine to it.
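The declaration Puppet printed back can itself be saved as a manifest and applied repeatedly; a sketch (puppet apply only works where the agent is installed, so it is shown as a comment):

```shell
# Save the resource declaration from the transcript above as a manifest
cat > /tmp/testfile.pp <<'EOF'
file { '/tmp/testfile.txt':
  ensure  => file,
  content => "hello world\n",
}
EOF

# On a node with Puppet installed, apply it (idempotent, safe to re-run):
#   puppet apply /tmp/testfile.pp
```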

Tip: avoid redownloading after each rspec run

rake’s “spec” command is comprised of the following:

spec = spec_prep + spec_standalone + spec_clean

So we need to avoid running spec_clean. To do this simply run:

$ bundle exec rake spec_prep

After that, always run

$ bundle exec rake spec_standalone

instead of the “spec” command.

Puppet – r10k and the Puppetfile

The “Puppetfile” is a list of all the puppet modules you want downloaded into your puppet master’s modules folder.

The first line of this file should be:

forge ""

After that any puppet forge modules are listed as:

mod 'puppetlabs/stdlib',  '4.5.1'

Next, any puppet modules hosted on git (e.g. GitHub or any other git server, such as Stash) can also be added, pinned to a git commit id:

mod 'sudo',
  :git => '',
  :ref => '231e15fb9311233ee0fe12f4d9bd6ec978d54a2c'


or a particular branch’s latest commit:

mod 'sudo',
  :git => '',
  :ref => '{branch-name}'

Or even a particular git (version) tag:

mod 'sudo',
  :git => '',
  :tag => '1.2.0'
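Putting the pieces together, a complete minimal Puppetfile might look like this (the forge URL is the standard public Forge API endpoint; the git URL is a made-up example, since the originals are not preserved above):

```ruby
# Puppetfile
forge 'https://forgeapi.puppetlabs.com'   # the public Puppet Forge

# Forge module, pinned to a version
mod 'puppetlabs/stdlib', '4.5.1'

# Git-hosted module, pinned to a tag (URL is a hypothetical example)
mod 'sudo',
  :git => 'https://github.com/example/puppet-sudo.git',
  :tag => '1.2.0'
```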

Note: ensure you include all module dependencies. To check that all dependencies are included, run:

$ puppet module

Puppet – Ordering your classes

This is common; for example, sometimes one class installs the db software, and then the next class creates the db itself.

class mainclass {
  include class1
  include class2
  include class3
  include class4

  Class['class1']
  -> Class['class2']
  -> Class['class3']
  -> Class['class4']
}
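For reference, the same ordering can also be expressed with the require metaparameter instead of chaining arrows; a sketch using the same hypothetical class names:

```puppet
class mainclass {
  class { 'class1': }
  class { 'class2': require => Class['class1'] }
  class { 'class3': require => Class['class2'] }
  class { 'class4': require => Class['class3'] }
}
```

Chaining arrows keep the ordering in one place, whereas require attaches the dependency to each declaration; both produce the same catalog ordering.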