Dataset columns: question (string, 11 to 28.2k chars), answer (string, 26 to 27.7k chars), tag (string, 130 classes), question_id (int64, 935 to 78.4M), score (int64, 10 to 5.49k)
I'm not able to find many examples of what a .dockerignore file should look like. Using puppet to install a few packages on a docker container causes the image to explode from 600MB to 3GB. I'm trying to use a .dockerignore file to keep the size to a minimum $ cat Dockerfile FROM centos:centos6 #Work around selinux problem on cent images RUN yum install -y --enablerepo=centosplus libselinux-devel RUN yum install -y wget git tar openssh-server; yum -y clean all Add Puppetfile / RUN librarian-puppet install RUN puppet apply --modulepath=/modules -e "class { 'buildslave': jenkins_slave => true,}" RUN librarian-puppet clean If I run docker images --tree I can see that the image instantly grows by several GB $ docker images --tree ├─e289570b5555 Virtual Size: 387.7 MB │ └─a7646acf90d0 Virtual Size: 442.5 MB │ └─d7bc6e1fbe43 Virtual Size: 442.5 MB │ └─772e6b204e3b Virtual Size: 627.5 MB │ └─599a7b5226f4 Virtual Size: 627.5 MB │ └─9fbffccda8bd Virtual Size: 2.943 GB │ └─ee46af013f6b Virtual Size: 2.943 GB │ └─3e4fe065fd07 Virtual Size: 2.943 GB │ └─de9ec3eba39e Virtual Size: 2.943 GB │ └─31cba2716a12 Virtual Size: 2.943 GB │ └─52cbc742d3c4 Virtual Size: 2.943 GB │ └─9a857380258c Virtual Size: 2.943 GB │ └─c6d87a343807 Virtual Size: 2.964 GB │ └─f664124e0080 Virtual Size: 2.964 GB │ └─e6cc212038b9 Virtual Size: 2.964 GB Tags: foo/jenkins-centos6-buildslave:latest I believe the reason that the image grows so large is because librarian-puppet clones a puppet module to /modules, which breaks the build cache. I've tried the following .dockerignore files with no luck. $ cat .dockerignore /modules /modules/ /modules/* Is this the correct syntax for a .dockerignore file? Are there any other ways to prevent these containers from growing so large? Additional information: http://kartar.net/2013/12/building-puppet-apps-inside-docker/ http://danielmartins.ninja/posts/a-week-of-docker.html
The .dockerignore file is similar to the .gitignore syntax. Here are some example rules: # Ignore a file or directory in the context root named "modules" modules # Ignore any files or directories within the subdirectory named "modules" # in the context root modules/* # Ignore any files or directories in the context root beginning with "modules" modules* # Ignore any files or directories one level down from the context root named # "modules" */modules # Ignore any files or directories at any level, including the context root, # named modules **/modules # Ignore every file in the entire build context (see next rule for how this # could be used) * # Re-include the file or directory named "src" that may have been previously # excluded. Note that you cannot re-include files in subdirectories that have # been previously excluded at a higher level !src Note that "build context" is the directory you pass at the end of your build command, typically a . to indicate the current directory. This directory is packaged from the docker client, excluding any files you have ignored with .dockerignore, and sent to the docker daemon to perform the build. Even when the daemon is on the same host as your client, the build only works from this context and not directly from the folders. There is only a single .dockerignore for a build, and it must be in the root of the build context. It will not work if it is in your home directory (assuming you build from a subdirectory), and it will not work from a subdirectory of your build context. To test what is in your current build context and verify your .dockerignore file is behaving correctly, you can copy/paste the following (this assumes you do not have an image named test-context, it will be overwritten and then deleted if you do): # create an image that includes the entire build context docker build -t test-context -f - . <<EOF FROM busybox COPY . /context WORKDIR /context CMD find . EOF # run the image which executes the find command docker container run --rm test-context # cleanup the built image docker image rm test-context
Puppet
25,490,911
89
We're planning to use AMI EC2 instances which are not "pre-baked". I.e. when they are spun up, they are bare installs of AWS linux. Our bootstrap process will pull in the various installs that we need e.g. python, tomcat. We'll have min of 3 instances and max of 8. Given these requirements, would using Puppet/Chef be useful rather than using Amazon Cloud Formation (CloudInit)? Best I can see is if we used Puppet, then we'd have declarative programming which is easier to audit to see what's happening versus a script. Also CloudInit has a 16k script size limit which we may or may not run into. Has anyone moved from CloudInit to Puppet or Chef for a specific reason that they can provide here in answer to my question?
Is there an advantage over CloudInit? Yes, absolutely, many of them! Sure, you can write top-to-bottom, run-once CloudInit scripts to provision a server. But what happens when you need to change a configuration file, add a user, update a package, or install a new package? You will end up logging into servers or writing scripts to do so, and inevitably end up with servers in an incongruous state. CloudInit is not configuration management. If you opt to begin using configuration management software, use CloudInit for just one task: to bootstrap the Puppet/Chef/other agent. Puppet doesn't just help you automate installing packages, set up ssh keys, or tune your Tomcat heap. It ensures the state of things. When a developer is troubleshooting a Java app at 3am and changes your Tomcat config, Puppet will change it back. You can rapidly change the version of Python for all or groups of nodes, and if someone installs a different version, Puppet will change it back. When your application stack changes and you start using, say, RabbitMQ, or Jetty, or a new RDBMS, you can easily test and deploy the changes across tens or thousands of servers. There are many other reasons to use configuration management software, such as back-end reporting, auditing, and security compliance.
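As a rough illustration of the "ensures the state of things" point, a minimal Puppet sketch might look like the following; the package name, module file source and service name are assumptions for the example, not part of the original answer.

package { 'tomcat':
  ensure => installed,
}

# Puppet keeps this file identical to the copy shipped in the (assumed) tomcat module;
# a hand edit on the box is reverted on the next agent run and the service is restarted.
file { '/etc/tomcat/server.xml':
  ensure  => file,
  source  => 'puppet:///modules/tomcat/server.xml',
  require => Package['tomcat'],
  notify  => Service['tomcat'],
}

service { 'tomcat':
  ensure => running,
  enable => true,
}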
Puppet
11,995,290
87
Given the option between using GPG and OpenSSL for local encryption before pushing archives to an off-site backup location, what are the benefits and drawbacks of each solution? Background: I currently manage a server infrastructure based on Ubuntu 14.04.1 with all current patches applied as they become available. All of these systems are headless, automatically built using vetted preseeds and automation tools, and run in virtual machines via KVM on uniform Intel-based hardware. We have a preference for Ruby, but a stronger preference for "doing things correctly". Because of both, we've chosen the "backup" gem as the means to create encrypted archives of data we want preserved, since it will create the same encrypted archives for a developer using Vagrant that it would in production, regardless of the mechanism by which it's transmitted. All software and configuration is managed via Puppet, so neither decision will have any impact on "user experience" or convenience. Either option will create relevant scripts to manage, verify, or restore from any backups created. Given that, does either encryption option offer any advantage against the other when used for this purpose?
I would pick GPG for file encryption, it's got decades of secure tested encryption, and is very easy to have multiple "recipients" (backup keys?) or signatures with it's public keys & even servers (if they would be useful). With GPG, all the simple mistakes have been avoided/fixed, it picks a longer "random" key for the actual encryption and does a good number of "rounds" to make it very secure. OpenSSL should be able to do all the same things, (it's been around since 1998, but if version numbers mean anything it reached version 1 in 2010) but it's very easy to make a mistake that could drastically lower the security. And from this post on security.stackexchange.com (from Jan 2013) and another by a 159K reputation user, the openssl enc command might leave something to be desired: The encryption format used by OpenSSL is non-standard: it is "what OpenSSL does", and if all versions of OpenSSL tend to agree with each other, there is still no reference document which describes this format except OpenSSL source code. The header format is rather simple: magic value (8 bytes): the bytes 53 61 6c 74 65 64 5f 5f salt value (8 bytes) Hence a fixed 16-byte header, beginning with the ASCII encoding of the string "Salted__", followed by the salt itself. That's all ! No indication of the encryption algorithm; you are supposed to keep track of that yourself. The process by which the password and salt are turned into the key and IV is not documented, but a look at the source code shows that it calls the OpenSSL-specific EVP_BytesToKey() function, which uses a custom key derivation function with some repeated hashing. This is a non-standard and not-well vetted construct (!) which relies on the MD5 hash function of dubious reputation (!!); that function can be changed on the command-line with the undocumented -md flag (!!!); the "iteration count" is set by the enc command to 1 and cannot be changed (!!!!). This means that the first 16 bytes of the key will be equal to MD5(password||salt), and that's it. This is quite weak ! Anybody who knows how to write code on a PC can try to crack such a scheme and will be able to "try" several dozens of millions of potential passwords per second (hundreds of millions will be achievable with a GPU). If you use "openssl enc", make sure your password has very high entropy ! (i.e. higher than usually recommended; aim for 80 bits, at least). Or, preferably, don't use it at all; instead, go for something more robust (GnuPG, when doing symmetric encryption for a password, uses a stronger KDF with many iterations of the underlying hash function). man enc even has this under "BUGS": There should be an option to allow an iteration count to be included.
Puppet
28,247,821
78
In the dark ages, my usual setup for development of LAMP web applications was to test locally on my machine. PHP (in my case), the database and the web server were all installed natively. The server was set up with standard installs of Apache and MySQL, and I had multiple virtual hosts for different parts of the web application. When I was happy with the results I had on my local machine, I'd logged into the server and did git pull in the staging environment. Assuming everything was working as well on the server as it was on my machine, I'd do the same thing for production. New beginnings… So now I'm starting a brand new web application from scratch, and I want to do it "the proper way". I've read up about Docker, Vagrant and Puppet (and Chef, although I personally prefer Puppet's system of dependencies rather than Chef's iterative process). Despite all the research I've done, there still seem to be several questions I can't seem to find answers for: Should there be separate Docker containers for the web server (such as Apache), the database server (such as MySQL) and each part of the web application? When I talk about parts of the web application, I mean things like mysite.com, controlpanel.mysite.com, etc. These "parts" will share the same database. Since Docker seems to provide ready-made containers for things like the web and database servers, it seems like those things at least should be in separate containers. Should the different parts of my web application be in separate containers, too? Docker containers seem to be designed to be replaceable rather than me having to update the software inside them. What about the data they write that I don't want to lose? The database server will manage files related to the content of my database (that I'll want to be backing up). The web server will be creating logs, and my web applications will be managing various files and caches, etc. All these files need to be written outside of the application's containers (because I might replace them when updating?), so where do they go? Straight into the host machine's file system? Or into a separate "Docker Volume"? If they go into Docker volumes, should I use a separate volume for the database, web server, application, etc.? Can I still easily access the contents using SFTP from my local machine like I do now? I don't want to lose any convenience here! Is it a good idea to use Puppet to create and manage the Docker containers, both for the development server and production server? It seems Puppet has support for managing Docker containers directly, so this seems like a reasonably good way of easily setting up a server or the production environment (using Vagrant) from scratch. Hopefully I've asked some relevant questions; it would be great to get some proper "best practices" for development and production of LAMP-like web applications. it's just there doesn't seem to be much that I've found!
Should there be separate Docker containers for the web server (such as Apache), the database server (such as MySQL) and each part of the web application? There is no correct answer to that question. If you will be using Docker in production, then try to run your Docker containers in your development environment as they will be in production. Else just use the Docker containers the easiest way you can. Docker Hub provides ready to go containers for PHP, databases, etc. and it is easy to use them. On the other hand, you have to link them together to allow them to interact. For a development environment and if you use multiple containers, I would advise to use docker-compose. Another path is to build a Docker image that is the closest to your production machine (assuming you have only one machine) which would run the database, the web server, and PHP. A container from such an image would have to run multiple processes. This can be achieved in different ways. Take a look at supervisor or phusion/baseimage. When I talk about parts of the web application, I mean things like mysite.com, controlpanel.mysite.com, etc. You could have them separated. If those applications need to share sessions, make sure sessions are stored in database or on a docker volume that is accessible to all. Docker containers seem to be designed to be replaceable rather than me having to update the software inside them. What about the data they write that I don't want to lose? Docker has a thing called volume to allow data to be written on a filesystem out of the container. There are different ways to work with volumes: you can mount a directory from the Docker host to a container volume, or you can have data volume containers, or named volumes. Docker volumes are an important concept and it is worthwhile to take the time to master them. If you want to easily access the data used by your containers from your Docker host, mounting a directory on the Docker host is the way to go. Although it may be tricky regarding permissions and ownership of the files. Regarding backups, take a look at the Docker user guide where everything you need to know in regards with volumes is detailed. Is it a good idea to use Puppet to create and manage the Docker containers, both for the development server and production server? The best practice is to operate on your development environment the same way you will operate on your production environment. There is no point going through setting up Puppet correctly for your development environment if all that work won't be used for the production environment. Having a Vagrantfile that provision a VM with Docker is really easy with just shell provisioning ; IMHO puppet/chef/... are overkill. You are asking the right questions, but there isn't any answer that fits all situations. In my view there are two ways to do things: make your development environment replicate exactly your production environment make your development environment different from production, keeping it as simple and straightforward as you can so developers won't feel the friction induced by using new tools
Puppet
26,139,472
71
Here is my naive approach: # puppet/init.pp $x = 'hello ' + 'goodbye' This does not work. How does one concatenate strings in Puppet?
Keyword variable interpolation: $value = "${one}${two}" Source: http://docs.puppetlabs.com/puppet/4.3/reference/lang_variables.html#interpolation Note that although it might work without the curly braces, you should always use them.
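A minimal sketch of that interpolation in a manifest, with placeholder variable names:

$one   = 'hello '
$two   = 'goodbye'
$value = "${one}${two}"

# prints "concatenated: hello goodbye" during the run
notify { "concatenated: ${value}": }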
Puppet
14,885,263
57
I'm trying to create a puppet module which automates installation of Zend Server CE. This is not important here, but the steps are as follows: update /etc/apt/source.list, download the repo key via wget, do apt-get update, do apt-get install zend-server-ce-5.2. I have this init.pp file class zendserverce { # https://github.com/puppetlabs/puppetlabs-stdlib file_line { 'debian_package': path => '/etc/apt/sources.list', line => 'deb http://repos.zend.com/zend-server/deb server non-free' } exec { "wget http://repos.zend.com/zend.key -O- |apt-key add -": path => ["/usr/bin", "/usr/sbin"] } exec { "apt-get update": command => "/usr/bin/apt-get update", onlyif => "/bin/sh -c '[ ! -f /var/cache/apt/pkgcache.bin ] || /usr/bin/find /etc/apt/* -cnewer /var/cache/apt/pkgcache.bin | /bin/grep . > /dev/null'", } package { "zend-server-ce-php-5.2": ensure => "latest" } } It seems that puppet runs the commands in a different order than I need. Is there any way to tell it to run in my desired order? The output of this snippet is notice: /Stage[main]/Mc/Package[mc]/ensure: ensure changed 'purged' to 'latest' err: /Stage[main]/Zendserverce/Package[zend-server-ce-php-5.2]/ensure: change from purged to latest failed: Could not update: Execution of '/usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold install zend-server-ce-php-5.2' returned 100: Reading package lists... Building dependency tree... Reading state information... E: Couldn't find package zend-server-ce-php-5.2 at /tmp/vagrant-puppet/modules 0/zendserverce/manifests/init.pp:28 notice: /Stage[main]/Zendserverce/Exec[wget http://repos.zend.com/zend.key -O- |apt-key add -]/returns: executed successfully notice: /Stage[main]/Zendserverce/File_line[debian_package]/ensure: created notice: Finished catalog run in 6.75 seconds So it says: Couldn't find package zend-server-ce-php-5.2 Can anyone guide me what is wrong?
Since Puppet 2.6.0, a new "relationship syntax" feature is available. An example in Puppet 2.6.0 and above would look like this: exec { "apt-update": command => "/usr/bin/apt-get update" } Exec["apt-update"] -> Package <| |> Every time a package resource is applied, the dependency (in our case 'apt-update') will be triggered first. You can even define longer chains.
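Applied to the question, the same ordering can also be scoped to a single resource with the require metaparameter instead of the global chain; a sketch, assuming the exec is declared as below:

exec { 'apt-update':
  command => '/usr/bin/apt-get update',
}

package { 'zend-server-ce-php-5.2':
  ensure  => latest,
  require => Exec['apt-update'],  # run apt-get update before installing this package
}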
Puppet
10,845,864
52
I'm learning about puppet and trying to experiment with it on a VM at home. I'm not using a puppet server yet, just running things locally. It works okay, but every time I run puppet apply ..., I get a delay of several seconds, after which it displays the message warning: Could not retrieve fact fqdn I assume the message is linked to the delay, and I want to get rid of it (the delay--I can live with the message). Googling for a solution seems to indicate that it's somehow related to DNS lookups, but I can't really find anything else about it, which seems surprising. All I want is to be able to apply manifests in my vm quickly so I can experiment. How can I speed things up? Update: I don't see any extra info in the debug output, but it looks like this: $ puppet apply -dv puppet-1.pp warning: Could not retrieve fact fqdn debug: Failed to load library 'rubygems' for feature 'rubygems' debug: Failed to load library 'selinux' for feature 'selinux' debug: Puppet::Type::File::ProviderMicrosoft_windows: feature microsoft_windows is missing ... Update: I added the "ruby" tag because puppet has so few followers. If this doesn't belong in ruby, or if you know a better tag for it, let me know. Update again: Having learned some more about puppet, I now understand that this message is coming from the component called "Facter" that sniffs out "facts" about the system that Puppet is running on. I found some configuration options and played around with "certname", "node_name" and "node_name_value", but I couldn't get the delay to go away. Does anyone know specifically how to either tell Facter to ignore the fqdn or how to make Facter able to find the fqdn on an Ubuntu 11.10 vm? Progress: $ cat /etc/resolv.conf # Generated by NetworkManager nameserver 192.168.1.1 That's my router, which is running Dnsmasq via Tomato. $ dig -x 192.168.1.129 192.168.1.1 ; <<>> DiG 9.7.3 <<>> -x 192.168.1.129 192.168.1.1 ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 21838 ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;129.1.168.192.in-addr.arpa. IN PTR ;; ANSWER SECTION: 129.1.168.192.in-addr.arpa. 0 IN PTR desk-vm-ubuntu-beta. ;; Query time: 14 msec ;; SERVER: 192.168.1.1#53(192.168.1.1) ;; WHEN: Sun Oct 16 17:47:47 2011 ;; MSG SIZE rcvd: 77 ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 27462 ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;192.168.1.1. IN A ;; ANSWER SECTION: 192.168.1.1. 0 IN A 192.168.1.1 ;; Query time: 11 msec ;; SERVER: 192.168.1.1#53(192.168.1.1) ;; WHEN: Sun Oct 16 17:47:47 2011 ;; MSG SIZE rcvd: 45 strace led me to arp, which was blocking for 5 seconds and called twice for each facter: $ time arp -a ? (10.0.2.2) at 52:54:00:12:35:02 [ether] on eth0 real 0m5.127s user 0m0.004s sys 0m0.016s I changed the VM from NAT networking to bridged, so that it now has an IP on the network, and arp returns immediately now. (I'm no networking guru, so I have no idea why this worked, but it seemed a reasonable thing to try.) But facter still takes about 4-5 seconds total to run and still reports "Could not retrieve fact fqdn". facter -d shows several occurrences of "value for domain is still nil", all the way to the end. I'm thinking something still isn't quite right.
Since puppet uses the fqdn fact to determine which node it is running as, it may not be possible to run if it can't be determined. Given what you're describing, the simplest thing to debug is facter fqdn instead of your puppet command-line. If the "several seconds" is very close to exactly 5 seconds, it's very likely that your DNS configuration is broken with a single bad DNS server listed. What's in /etc/resolv.conf? What happens if you run dig -x $HOSTIP $DNSSERVERIP with the first nameserver listed in resolv.conf? If you look in facter/fqdn.rb you can see what exactly facter is trying to do to resolve the fqdn. In the version I have most handy it's using facter/hostname.rb and facter/domainname.rb which call code from facter/util/resolution.rb. Exactly what happens will depend on what version of facter you have, what OS, and possibly also what exactly you have installed. Calling /bin/hostname, uname (etc) and doing DNS lookups are all quite likely. You can always use strace -t facter fqdn to see what is taking the time (look for the gap in timestamps) From everything you've described, it does sound like the problem is that puppet/facter really wants to have a domain name and you don't have one, you just have a naked hostname. Adding domain example.com to /etc/resolv.conf should do the trick. Running hostname foo.example.com should also do the trick (but will need to be re-applied). Permanent solutions depend on the exact OS setup.
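If the box is managed by Puppet anyway, one hedged workaround is to pin a fully qualified name in /etc/hosts with the built-in host resource, so facter can resolve the fqdn fact without a DNS round trip. The domain below (example.com) is a placeholder and the IP is borrowed from the question, not a recommendation for your network.

host { 'desk-vm-ubuntu-beta.example.com':
  ensure       => present,
  ip           => '192.168.1.129',
  host_aliases => ['desk-vm-ubuntu-beta'],
}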
Puppet
7,780,322
48
I need to create a test user with a password using puppet. I've read that puppet cannot manage user passwords in a generic cross-platform way, which is a pity. I am doing this for Red Hat Enterprise Linux Server release 6.3. I do as follows: user { 'test_user': ensure => present, password => sha1('hello'), } puppet updates the password of the user, but Linux says login/pwd incorrect when I try to log in. It works (I can login) if I set the password manually in Linux with sudo passwd test_user, and then look at /etc/shadow and hardcode that value in puppet. something like: user { 'test_user': ensure => present, password => '$1$zi13KdCr$zJvdWm5h552P8b34AjxO11', } I've tried also by adding $1$ in front of the sha1('hello'), but it does not work either (note, $1$ stands for sha1). How to modify the first example to make it work (using the plaintext password in the puppet file)? P.S.: I am aware that I should use LDAP, or sshkeys, or something else, instead of hardcoding the user passwords in the puppet file. however, I am doing this only for running a puppet vagrant test, so it is ok to hardcode the user password.
Linux users have their passwords stored as a hash in the /etc/shadow file. Puppet puts the password supplied in the user type definition into the /etc/shadow file. Generate your hashed password using the openssl command: #openssl passwd -1 #Enter your password here Password: Verifying - Password: $1$HTQUGYUGYUGwsxQxCp3F/nGc4DCYM/ The previous example generates this hash: $1$HTQUGYUGYUGwsxQxCp3F/nGc4DCYM/ Add this hash password to your class as shown (do not forget the quotes): user { 'test_user': ensure => present, password => '$1$HTQUGYUGYUGwsxQxCp3F/nGc4DCYM/', }
Puppet
19,114,328
38
I'm using puppet to provision a vagrant (ubuntu based) virtual machine. In my script I need to: sudo apt-get build-dep python-lxml I know I can install the apt puppet module so I can use: apt::builddep { 'python-lxml': } But I can't find any reference about installing a module from the script and how to include/require it. Seems to me that the puppet docs refer only to installing from the command line puppet tool I also tried doing something like: define build_dep($pkgname){ exec { "builddepend_$pkgname": commmand => "sudo apt-get build-dep $pkgname"; } } build_dep{ "python-imaging": pkgname => "python-imaging"; "python-lxml": pkgname => "python-lxml"; } But puppet exited with an error on this. And also: exec{"install apt module": command => "puppet module install puppetlabs/apt" } class { 'apt': require => Exec["install apt module"]} include apt apt::builddep { 'python-imaging': } but got could not find declared class apt at.. any ideas? directions? I know I'm missing something obvious but can't figure this out. EDIT: If I pre-install (with puppet module install from the commandline) the apt:builddep works fine. But I need puppet to handle the module downloading and installation. Some of the other work arounds also work for the basic use case but won't answer my main question.
I ran into this problem as well. The trick is to download the modules using a vagrant shell command before the puppet provisioner runs. config.vm.provision :shell do |shell| shell.inline = "mkdir -p /etc/puppet/modules; puppet module install puppetlabs/nodejs; puppet module install puppetlabs/apache" end config.vm.provision :puppet do |puppet| puppet.manifests_path = "puppet/manifests" puppet.manifest_file = "site.pp" end Order is important here, and since the puppet provisioner hasn't run the folder /etc/puppet/modules does not exist yet. The reason I decided, like alonisser, to install the modules using the puppet module tool instead of using a module folder with the vagrant puppet provisioner was because I didn't want to have to download all of the dependencies of the modules I was going to use and store all of those modules in my source control. Running these two commands results in 5 dependencies that would otherwise sit in my git repository taking up space.
Puppet
17,508,081
38
I have the following setup: Many different projects which are separate git repositories, but all have mostly the same server configuration Each project in turn depends on many other projects and we use the composer dependency manager to get them together (PHP language here). I want to use Vagrant and include a Vagrant file in each repository, so my team members can clone a repository, run vagrant up and be ready to go. My question is now directed towards the provisioning. I need to install several tools and packages like apache, git, mysql and several php packages, then download some files (like a recent development db dump), set everything up in /var/www and run the composer install command. So one option to do this is using a manager using recipes like chef or puppet. The alternative would be to write a bash file and use shell provisioning. I have not much experience with chef / puppet, so naturally, it seems easier to use the shell option, but I want to understand if this is not a good / viable option in the long run. Why to me it seems a bad approach to go with puppet / chef: I understand that I will have to use several different recipes and will almost always use the same recipes for my different repositories, so I would have to include all of them in all the repositories. Consider having 20 repos and needing 10 recipes, that means that I will need to add 200 recipes as a git-submodule or alike (also each team member needs to clone the repository, then clone 10 recipe repositories and only then run vagrant up for each project). In contrast, I would just need to have a small repo with my shell script and clone it 20 times. I am probably missing something, so please advice whether I should opt for chef / puppet and why it makes sense even if my repositories all have a very similar server setup.
The following article concerns yet another CM tool (ansible), but I think the author does an excellent job of explaining the benefits of transitioning away from shell scripts. http://devopsu.com/blog/ansible-vs-shell-scripts/ quote 1: What really surprised me was the response from some of these more famous devs. They basically said, "This is really cool, but I probably won't read it since my manual-install/shell-script workflow is fine for now." I was a little shocked, but once I thought about it for a few minutes, I realized that their choice was perfectly sane and rational given what they knew about CM tools. quote 2: For them, using a CM tool meant weeks of effort learning complex concepts, struggling with a complex installation process, and maintaining that complex system over time. They were somewhat aware of the benefits, but the costs of using a CM tool just seemed too high to make it worth the effort. The benefits over shell scripts are summarized at the end and I think they apply to all CM tools, puppet, chef, salt, ansible... Which method is most likely to end up in source control? Which method can be run multiple times safely with confidence? Which method can easily be run against multiple servers? Which method actually verifies (tests) your server for correctness? Which method can target certain servers easily (web, db, etc)? Which method supports easily templating your configuration files? Which method will grow to easily support your whole stack? Hope this helps.
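To make the "can be run multiple times safely" point concrete, here is a small, hedged Puppet sketch of the declarative style; the user, command and schedule are placeholders.

user { 'deploy':
  ensure     => present,
  managehome => true,
}

cron { 'cleanup_tmp':
  ensure  => present,
  command => '/usr/bin/find /tmp -mtime +7 -delete',
  hour    => 3,
  minute  => 0,
  user    => 'deploy',
  require => User['deploy'],
}

Unlike a shell script that calls useradd and appends to a crontab, applying this twice changes nothing the second time.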
Puppet
19,879,747
37
When I define(?) a resource e.g. to ensure dir structure, are there any loops available? Like that: for X in [app1,app2] do: file { '/opt/app/' + X: ensure => directory, owner => 'root', group => 'root', mode => '0644', } I have tens of directories and I am really tired with declaring it in puppet.. It would take 15 LOC of bash. Any ideas?
Older versions of the puppet language have no support for loops. But you can use an array instead of a simple string for the title and declare several resources at the same time with the same params: $b = '/opt/app' file { [ "$b/app1", "$b/app2" ]: ensure => directory, owner => 'root', group => 'root', mode => 0644, } You can also declare many resources of the same type with different params by ending each resource with a ;, which is a bit more compact than repeating the file and the {s and }s: file { [ "$b/app1", "$b/app2" ]: ensure => directory, owner => 'root', group => 'root', mode => 0755; [ "$b/app1/secret", "$b/app2/secret" ]: ensure => directory, owner => 'root', group => 'root', mode => 0700; } In the specific case of files, you can set up a source and use recursion: file { "/opt/app": source => "puppet:///appsmodule/appsdir", recurse => true; } (that would require having a source of that directory structure for puppet to use as the source) You can define a new resource type to reuse a portion of the param multiple times: define foo { file { "/tmp/app/${title}": ensure => directory, owner => 'root', mode => 0755; "/tmp/otherapp/${title}": ensure => link, target => "/tmp/app/${title}", require => File["/tmp/app/${title}"] } } foo { ["app1", "app2", "app3", "app4"]: } Starting with Puppet 2.6, there's a Ruby DSL available that has all the looping functionality you could ask for: http://www.puppetlabs.com/blog/ruby-dsl/ (I've never used it, however). In Puppet 3.2, they introduced some experimental loops, however those features may change or go away in later releases.
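For reference, on Puppet 4 and later (and 3.x with the future parser) iteration is no longer experimental; a sketch of the original example using each would be:

['app1', 'app2', 'app3'].each |String $app| {
  file { "/opt/app/${app}":
    ensure => directory,
    owner  => 'root',
    group  => 'root',
    mode   => '0644',
  }
}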
Puppet
6,399,922
37
Recently I started to read about building development environments with virtualization software (I am a beginner) and it seems that 'infrastructure as a code' is a really powerful concept. I really like the workflow structure described here: The same base VirtualBox image is used around the team Vagrant is used to quickly 'build up' and 'provision' such an image to a needed configuration with the help of Chef (or Puppet) recipes which is the only piece of code needed to be put under version control. However, I still do not quite understand how the code is transferred and deployed on Production servers. As I understand, the common way of keeping DEV and PROD environments identical is to manage the Production server instance as just another virtual image to be provisioned with Chef. I can have exactly the same OS installed on the Production server as I (and the team) use daily with VirtualBox-Vagrant-Chef. But the Production server can have hardware which differs from that in the virtual guest OS and this might lead to inconsistencies again. So, here is the question: What is the known and common best practice to transfer and deploy code to a Production server from a development environment which is managed with the VirtualBox-Vagrant-Chef toolchain? Does this practice allow any continuous deployment? [Edit]: Note: Is there any practice of running the same VM instance provisioned with Chef/Vagrant on the Production server, like it is depicted on this diagram?
I'm the author of the article you linked, so my 0.02 If I understood correctly your question, you don't move the vms from dev to production, you create a repeatable process that allows you to create the same end state (OS + config + app) over and over again, no matter where the destination is. By using vagrant you guarantee that your devs use the same OS that your production servers use no matter what OS they use for development. Using Puppet/Chef you guarantee that the OS is configured the same whether it is running in a vm with Vagrant, a vm in production, a cloud vm, or bare metal hardware. It doesn't need to be virtual.
Puppet
24,183,977
36
I know about puppet agent --disable "my message" --verbose but I would like to know at some point on a given machine, what is its puppet agent status. I don't see how to do it from man puppet-agent Is there an command that would tell me if the agent is enabled or disabled ? Thank you. - ------------------- EDIT CentOS release 6.6 (Final) bash-4.1$ puppet --version 3.7.4 bash-4.1$ file /usr/bin/puppet /usr/bin/puppet: a /usr/bin/ruby script text executable ------------------- EDIT2 Whether it is enabled or disabled, I always get this: [root@p1al25 ~]# cat `sudo puppet agent --configprint agent_catalog_run_lockfile` cat: /var/lib/puppet/state/agent_catalog_run.lock: No such file or directory [root@p1al25 ~]# puppet agent --disable "my message" [root@p1al25 ~]# cat `sudo puppet agent --configprint agent_catalog_run_lockfile` cat: /var/lib/puppet/state/agent_catalog_run.lock: No such file or directory [root@p1al25 ~]# service puppet status puppet (pid 4387) is running... ------------------- EDIT3 This one worked, thanks daxlerod [root@p1al25 ~]# service puppet status puppet (pid 4387) is running... [root@p1al25 ~]# puppet agent --disable "my message" --verbose Notice: Disabling Puppet. [root@p1al25 ~]# cat `puppet agent --configprint agent_disabled_lockfile` {"disabled_message":"reason not specified"}
A one-liner to get the current status is: cat `puppet agent --configprint agent_disabled_lockfile` Generally, this must be run as root, so I use: sudo cat `sudo puppet agent --configprint agent_disabled_lockfile` There are a number of possible results. cat: \path\to\lock: No such file or directory Puppet is not disabled. Any other text means that puppet is disabled, and the text is the reason provided when puppet was disabled by puppet agent --disable 'reason'
Puppet
29,350,031
34
file { 'leiningen': path => '/home/vagrant/bin/lein', ensure => 'file', mode => 'a+x', source => 'https://raw.github.com/technomancy/leiningen/stable/bin/lein', } was my idea, but Puppet doesn’t know http://. Is there something about puppet:// I have missed? Or if not, is there a way to declaratively fetch the file first and then use it as a local source?
Before Puppet 4.4, as per http://docs.puppetlabs.com/references/latest/type.html#file, the file source only accepts puppet:// or file:// URIs. As of Puppet 4.4+, your original code would be possible. If you're using an older version, one way to achieve what you want to do without pulling down the entire Git repository would be to use the exec resource to fetch the file. exec{'retrieve_leiningen': command => "/usr/bin/wget -q https://raw.github.com/technomancy/leiningen/stable/bin/lein -O /home/vagrant/bin/lein", creates => "/home/vagrant/bin/lein", } file{'/home/vagrant/bin/lein': mode => 0755, require => Exec["retrieve_leiningen"], } Although the use of exec is somewhat frowned upon, it can be used effectively to create your own types. For example, you could take the snippet above and create your own resource type. define remote_file($remote_location=undef, $mode='0644'){ exec{"retrieve_${title}": command => "/usr/bin/wget -q ${remote_location} -O ${title}", creates => $title, } file{$title: mode => $mode, require => Exec["retrieve_${title}"], } } remote_file{'/home/vagrant/bin/lein': remote_location => 'https://raw.github.com/technomancy/leiningen/stable/bin/lein', mode => '0755', }
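For completeness, on Puppet 4.4 and later the https source mentioned above works directly, so the resource from the question reduces to:

file { '/home/vagrant/bin/lein':
  ensure => file,
  mode   => '0755',
  source => 'https://raw.github.com/technomancy/leiningen/stable/bin/lein',
}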
Puppet
18,844,199
33
I'm trying to set up a non-default URL as part of a puppet script that installs Jenkins. I know how to edit the value via the web UI but I can't seem to find where the value is actually stored. I've looked through the jenkins_home folder and apache and have yet to find it.
It stores it in a rather unlikely place: hudson.tasks.Mailer.xml in Jenkins home folder.
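If you want Puppet to manage that value as well, a heavily hedged sketch using file_line from puppetlabs-stdlib might look like this. It assumes stdlib is available, that Jenkins home is /var/lib/jenkins, and that the URL lives in a <hudsonUrl> element of that XML file; verify all three on your own install before relying on it.

file_line { 'jenkins_url':
  path   => '/var/lib/jenkins/hudson.tasks.Mailer.xml',
  line   => '  <hudsonUrl>https://ci.example.com/</hudsonUrl>',  # placeholder URL
  match  => 'hudsonUrl',
  notify => Service['jenkins'],
}

service { 'jenkins':
  ensure => running,
}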
Puppet
11,723,735
31
In puppet, if a command in a resource is > 80 characters, how can I wrap it onto two lines? exec { 'create_domain': command => "some command exceed 80 character...........................................................how to do how to do?.......", }
It's sort of ugly, but if the last character in a string is a \ followed by a newline, then the string is continued on the next line. My sample.pp manifest is below: exec { 'wrapped_string_example': command => "/bin/echo 12345678901234567890123456789012345678901234567890\ wrapped > /var/tmp/test.txt"; } Running this with puppet apply sample.pp gives the following output $ puppet apply sample.pp notice: /Stage[main]/Exec[wrapped_string_example]/returns: executed successfully notice: Finished catalog run in 0.10 seconds And catting the created file shows the lines have wrapped: $ cat /var/tmp/test.txt 12345678901234567890123456789012345678901234567890wrapped See https://github.com/puppetlabs/puppet/blob/9fbb36de/lib/puppet/parser/lexer.rb#L537 (as of Puppet v2.7.0) Also this is sort of a known issue: http://projects.puppetlabs.com/issues/5022
Puppet
11,406,234
29
Question Can anyone explain why it would be better to choose the puppet or chef vagrant provisioners, rather than the shell provisioner? Background I'm in the process of getting started with Vagrant. One of the things I'm having trouble with is deciding which provisioner to use. So far, I've had some success using the shell provisioner, but it has been more work than I expected to get it to run reliably. At the moment, I'm not familar with ruby, puppet or chef, but I'm happy to learn any or all of them if I have to. My early experience playing with puppet and chef is that if someone else has a recipe that does exactly what you want, it works really well, but doing something non-standard means falling back coding up solution in ruby. I'm aware of articles comparing puppet and chef, and I'm less worried about which of them to use, rather than knowing when and why I should use them at all.
Full disclosure: I'm a Puppet Labs employee. But I chose Puppet as a product over 2 years before joining them. I would recommend that you use Puppet or Chef over shell if your configurations are going to a) have any degree of complexity and b) change over time - or you expect your installation environment itself to change in a way that might alter the way your deployment performs. Your scripts may be very good, but ultimately, unless you are following terrific programming practices around them, testing and QA'ing them, etc., they are going to fail at some point. There's an entire body of literature around DevOps discussing this notion, but it comes down to the principle of "technical debt" - we tend to do things the easy way now, and thus perceive them as simpler, at the cost of increasing complexity and difficulty later. One of Puppet's strengths is its deterministic nature - the manifest you write must be able to be programmatically transformed by Puppet into a model of the server you are building. This is perceived by people as being more "difficult" but I would argue that the difficulty is lessened if you average it out along the curve of your technology's lifecycle. In other words, Puppet forces you to do your thinking now, but then deploy to scale with ease, rather than thinking later and re-engineering as you go. Pay in cash now, rather than by credit, with interest, later. If you're purely pulling down other peoples' manifests, you're going to run into trouble at some point - although we would like it not to be so, working with Puppet today that's certainly the case, because they are writing them to address the general case, and not your particular system. Many general-purpose manifests become useful only when you reach a better understanding of Puppet. So rather than start there, I'd work my way through the excellent Learning Puppet guide to start to grasp the basics. Puppet's learning curve is steep, but it levels off after a short while. There are other reasons to use other provisioners or tools, but I'd surely argue that you are better off with Puppet or Chef than trying to ensure that your shell scripts are doing exactly what you think they are supposed to do, for as long as you need to spawn new environments.
Puppet
7,747,328
29
I'm using puppet to configure servers. I want to print the current machine (node) name in a *.erb template. There is a hostname variable, but it holds the puppetmaster hostname. Is there any good reference/list regarding this topic?
It seems I had overlooked something. I can get (print) the node hostname simply by invoking the following code in an *.erb template: <%= @hostname %> Edit: As of Puppet 3, using local variables (i.e. hostname) is deprecated; Puppet 4 will remove support for them entirely. The recommended way is to use instance variables (prefixed with @), so in this case, @hostname. Source
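A small sketch of using that fact from a manifest via inline_template; the target file is only an example:

file { '/etc/motd':
  ensure  => file,
  content => inline_template('This node is <%= @hostname %>'),
}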
Puppet
4,428,153
29
Is it possible to do a string substitution/transformation in Puppet using a regular expression? If $hostname is "web1", I want $hostname_without_number to be "web". The following isn't valid Puppet syntax, but I think I need something like this: $hostname_without_number = $hostname.gsub(/\d+$/, '')
Yes, it is possible. Check the puppet function reference: http://docs.puppetlabs.com/references/2.7.3/function.html There's a regular expression substitution function built in. It probably calls the same underlying gsub function. $hostname_without_number = regsubst($hostname, '\d+$', '') Or if you prefer to actually call out to Ruby, you can use an inline ERB template: $hostname_without_number = inline_template('<%= hostname.gsub(/\d+$/, "") %>')
Puppet
10,418,104
26
I would like to iterate over an array that is stored as a Facter fact, and for each element of the array create a new system user and a directory, and finally make API calls to AWS. Example of the fact: my_env => [shared1,shared2,shared3] How can I iterate over an array in Puppet?
This might work, depending on what you are doing # Assuming fact my_env => [ shared1, shared2, shared3 ] define my_resource { file { "/var/tmp/$name": ensure => directory, mode => '0600', } user { $name: ensure => present, } } my_resource { $my_env: } It will work if your requirements are simple, if not, Puppet makes this very hard to do. The Puppet developers have irrational prejudices against iteration based on a misunderstanding about how declarative languages work. If this kind of resource doesn't work for you, perhaps you could give a better idea of which resource properties you are trying to set from your array? EDIT: With Puppet 4, this lamentable flaw was finally fixed. Current state of affairs documented here. As the documentation says, you'll find examples of the above solution in a lot of old code.
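On Puppet 4 and later (the fixed state of affairs mentioned in the edit), a sketch of the same idea with each, assuming the fact really does return an array, would be:

$facts['my_env'].each |String $env| {
  user { $env:
    ensure => present,
  }
  file { "/var/tmp/${env}":
    ensure  => directory,
    mode    => '0600',
    require => User[$env],
  }
}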
Puppet
12,958,114
24
I'm new to puppet, but picking it up quickly. Today, I'm running into an issue when trying to run the following: $ puppet agent --no-daemonize --verbose --onetime **err: Could not request certificate: getaddrinfo: Name or service not known Exiting; failed to retrieve certificate and waitforcert is disabled** It would appear the agent doesn't know what server to connect to. I could just specify --server on the command line, but that will be of no use to me when this runs as a daemon in production, so instead, I specify the server name in /etc/puppet/puppet.conf like so: [main] server = puppet.<my domain> I do have a DNS entry for puppet.<my domain> and if I dig puppet.<my domain>, I see that the name resolves correctly. All puppet documentation I have read states that the agent tries to connect to a puppet master at puppet by default and your options are host file trickery or do the right thing, create a CNAME in DNS, and edit the puppet.conf accordingly, which I have done. So what am I missing? Any help is greatly appreciated!
D'oh! Need to sudo to do this! Then everything works.
Puppet
10,729,379
24
I don't understand even the basic difference between the services in the title. Do these services just provide software to help you configure/organize/manage your VM's, or do they also provide physical infrastructure for your VM's to run on? In other words, are they just convenient interfaces between developers and AWS, Rackspace, and Azure?
Not exactly. Chef/Puppet are the "same", they are configuration management. While you can use them to manage virtual machines or public/private clouds, most people don't tend to use them that way. They are configuration management. They typically come into play after a virtual machine is fired up to get them in a desired state. That is to say, what software is needed on the virtual machine, what users need to be added, what configuration is needed, etc. Thus, it tends to be used for scaling infrastructure. Vagrant, while also can be used to manage virtual machines and public/private clouds, is usually only used for one off environments. It provides a cohesive file for creating a virtual machine. It is similar to chef/puppet that way, but doesn't tend to be used at scale. Docker is a separate beast. It has several components, but primarily it is used for "bundling" (Note: it does much more than that, but that is an ELI5 answer) software and requires a host system (or infrastructure) to run on. It adds a little security to applications but mostly provides a consistent "OS" for an application to run on. In practice, all of these can be utilized in an environment. Here is an example: Say you have application FunTime. You have eight developers who contribute to this, and FunTime is designed to be ran on a scale-able infrastructure on AWS. It is designed to have a front-end (FunTime-Front) and a back-end (FunTime-API), and requires postgres. 4 developers work on the front end, four developers work on the backend. I would do the following (there are many ways to skin this cat, but this is one example): I would use Docker for FunTime-Front and FunTime-API. I would use Vagrant to set up a dev environment for the developers (so that they can tweak various components). Vagrant would: start up the VM locally (or on a cloud if needed), Install docker, pull down the docker images for FunTime-Front and FunTime-API, install postgres, and populate postgres with dummy data, configure network ports to the various components. Now the developer has the full FunTime stack on their local machine and doesn't have to screw around with configuring anything themselves: they can just type "vagrant up". On the infrastructure side, I would use chef (or puppet) to configure the Environments: Production, Stage, and Development (or whatever is needed), then chef would install docker on the "application" servers, "postgres" on the postgres servers, apply security settings, etc. In this way all the related servers are the same. If I needed to update a server or add a patch, it would be trivial with configuration management. In all cases Docker would be used so that there is no application difference between environments, including the developers work station. This would make sure that you don't hear the excuse "Well, it works on my local machine!" very often. In addition, if there is a bungled deployment, rolling back the application would be VERY easy with Docker. I hope that provides a little more insight into how they could be used.
Puppet
41,471,832
23
I am using vagrant with puppet to set up virtual machines for development environments. I would like to simply set a few environment variables in the .pp file. Using virtual box and a vagrant base box for Ubuntu 64 bit. I have this currently. $bar = 'bar' class foobar { exec { 'foobar': command => "export Foo=${bar}", } } but when provisioning I get an error: Could not find command 'export'. This seems like it should be simple enough am I missing some sort of require or path for the exec type? I noticed in the documentation there is an environment option to set up environment variables, should I be using that?
If you only need the variables available in the puppet run for all exec resources, what's wrong with: Exec { environment => [ "foo=$bar" ] } ?
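If you only want the variable for one specific command, environment can also be set per resource; a minimal sketch with placeholder command and file names:

$bar = 'bar'

exec { 'show_foo':
  command     => '/bin/sh -c "/usr/bin/env | /bin/grep ^Foo= > /tmp/foo.txt"',
  environment => ["Foo=${bar}"],
}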
Puppet
18,411,795
23
What is the proper way to check if a variable is undef in a puppet template? In the manifest the variable is defined as follows $myvar = undef How is this checked in the template? Is saw the following two variants <% if @myvar -%> <% end -%> and <% if not @myvar.nil? and @myvar -%> <% end -%> They both seem to work in my case, but I wonder if the first approach fails in on certain cases?
The Puppet documentation (at the time of writing this answer) explains it very well: https://puppet.com/docs/puppet/latest/lang_template_erb.html#concept-5365 Since undef is not the same as false, just using an if is not a good way to check for it. Also when a variable is defined, but has a value of false or nil it is also impossible to check with a simple if. This is why you want to use scope.lookupvar('variable') and check its return value for :undef or :undefined (or nil) to know if it was set to undef, or never set at all.
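A hedged, self-contained sketch of that lookupvar check; which sentinel you actually get back (:undef, :undefined or nil) depends on the Puppet version, as noted above.

$myvar = undef

notify { 'myvar_state':
  message => inline_template('<% v = scope.lookupvar("myvar") %><%= (v.nil? || v == :undef || v == :undefined) ? "myvar is unset" : "myvar is set" %>'),
}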
Puppet
17,096,934
22
I am using powershell script to set some environment variable-- $env:FACTER_Variable_Name = $Variable_Value FACTER is for using these in the puppet scripts. My problem is - the variable name and variable value both are dynamic and getting read from a text file. I am trying to use $env:FACTER_$Variable_Name = $Variable_Value But $ is not acceptable syntax. When I enclose it in double quotes, the variable value is not getting passed. Any suggestion how to use it dynamically. Thanks in Advance
On Powershell 5, to set dynamically an environment variable in the current shell I use Set-Item: >$VarName="hello" >Set-Item "env:$VarName" world once the variable is set, you can get its value like so: >$env:hello world and of course to persist the variable I use C# [Environment]::SetEnvironmentVariable("$VarName", "world", "User")
Puppet
30,911,306
21
I am having a weird issue with having puppet enforce the package nc. I installed it manually in the end via: yum install nc I see puppet does it via: /usr/bin/yum -d 0 -e 0 -y list nc Returns: Error: No matching Packages to list I have tested this by command line as well: yum list nc Returns Error: No matching Packages to list Yet, when I do: yum install nc Returns: Package 2:nmap-ncat-6.40-4.el7.x86_64 already installed and latest version What am I missing?
nc is a link to nmap-ncat. It would be better to use nmap-ncat in your puppet manifest, because nc is just a virtual name for nmap-ncat. Puppet cannot resolve these links/virtual names, so your puppet should be: package { 'nmap-ncat': ensure => installed; }
Puppet
29,045,877
21
I want to create the directory structure /var/www/apps/example/current/public if it doesn't exist using puppet. If it already exists I don't want to purge the contents of the directories. How do I do this? Below is what I have so far: file { "/var/www/apps/example/current/public": owner => 'deploy', group => 'users', ensure => "directory", purge => false, recurse => true } This gives me Cannot create /var/www/apps/example/current/public; parent directory /var/www/apps/example/current does not exist
The recurse parameter does not allow you to create parent directories. It is used to enforce property values such as owner, mode etc. on directory contents and subdirectories recursively. file { '/var/www': owner => 'www-data', recurse => true, } As a matter of fact, Puppet currently cannot automatically create all parent directories. You should add all relevant directories as resources instead. file { [ '/var/www/apps', '/var/www/apps/example', '/var/www/apps/example/current', '/var/www/apps/example/current/public', ]: ensure => directory, ... } Existing content will remain unmolested. There is no need to pass the purge parameter.
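If you would rather not list every parent directory as a separate resource, a common hedged workaround is an exec running mkdir -p, guarded by creates so it only fires when the path is missing:

exec { 'make_example_public_dir':
  command => '/bin/mkdir -p /var/www/apps/example/current/public',
  creates => '/var/www/apps/example/current/public',
}

file { '/var/www/apps/example/current/public':
  ensure  => directory,
  owner   => 'deploy',
  group   => 'users',
  require => Exec['make_example_public_dir'],
}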
Puppet
26,433,115
21
I'm writing some puppet modules and have a package defined in two modules hence get the following error: err: Could not retrieve catalog from remote server: Error 400 on SERVER: Duplicate definition: Package[gnome-session-fallback] is already defined in file /etc/puppet/modules/vnc4server/manifests/init.pp at line 3; cannot redefine at /etc/puppet/modules/vino/manifests/init.pp:7 on node l Hence want to ensure that the package has not already been defined but the following does not work: if ! defined ('gnome-session-fallback') { package { 'gnome-session-fallback': ensure => installed, } } Can anyone suggest how to fix this, and on the broader scale, what is the "proper" approach to avoiding clashes such as this in modules?
You are missing Package[] inside defined(). The correct way to do it: if ! defined(Package['gnome-session-fallback']) { package { 'gnome-session-fallback': ensure => installed, } }
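If the puppetlabs-stdlib module is on your modulepath, its ensure_packages function wraps that defined() guard for you; a short sketch:

# requires puppetlabs-stdlib
ensure_packages(['gnome-session-fallback'])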
Puppet
15,266,347
20
I am trying to source files from local modules in a puppet manifest (using puppet in standalone mode): file { '/home/repowt/.crontab': ensure => present, source => 'puppet:///modules/site/crontab'; } but I get: Could not evaluate: Could not retrieve information from source(s) ... The file is in: config/puppet/modules/site/files/crontab (puppet is called via vagrant provision and the Vagrantfile specifies module_path='config/puppet/modules' and is clearly ok since puppet does load modules with import from there.) I also tried: source => 'puppet:///site/crontab' source => 'site/crontab' source => 'config/puppet/modules/site/files/crontab' source => '/modules/site/crontab' of no avail. I found nothing illuminating on the web, seems like something very simple. your help is appreciated.
There are a couple of things going on here. First, as pwan notes, the fileserver.conf needs to be setup correctly. Keeping in mind that /vagrant contains the directory where Vagrantfile is (and therefore all of it content), that meant for me doing: vm_config.vm.provision :puppet, :module_path => "modules", :options => ["--fileserverconfig=/vagrant/fileserver.conf", ] My fileserver.conf specifies that /etc/puppet/files is to be used. Whilst I could have specified a different fileserver.conf, just for Vagrant, I wanted pretty much everything to be the same as normal. So, I also mounted /etc/puppet/files too, with vm_config.vm.share_folder "files", "/etc/puppet/files", "files" Which got things working for me.
Puppet
7,216,375
19
I run some containers with the option --restart always. It works well, so well that I now have difficulty stopping these containers :) I tried: sudo docker stop container && sudo docker rm -f container But the container still restarts. The docker documentation explains the restart policies, but I didn't find anything to resolve this issue.
Just sudo docker rm -f container will kill the process if it is running and remove the container, in one step. That said, I couldn't replicate the symptoms you described. If I run with --restart=always, docker stop will stop the process and it remains stopped. I am using Docker version 1.3.1.
Puppet
27,283,131
18
What is the best way to store and handle sensitive information with puppet and safely distribute it to your nodes? The version I am using is 2.7. One example would be database passwords. Plain text passwords are needed on your application servers. How can one store these without leaving them lying around inside of the puppet scripts?
Using Hiera for external data lookups and encrypting that data via eyaml or GPG is a good start. https://docs.puppet.com/hiera/ https://puppet.com/blog/encrypt-your-data-using-hiera-eyaml http://leebriggs.co.uk/blog/2016/11/15/using-hiera-eyaml-gpg.html
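A minimal sketch of what a hiera-eyaml setup can look like; all paths and key locations here are assumptions and need to match your own install:

# hiera.yaml
---
:backends:
  - eyaml
  - yaml
:hierarchy:
  - "%{::clientcert}"
  - common
:yaml:
  :datadir: /etc/puppet/hieradata
:eyaml:
  :datadir: /etc/puppet/hieradata
  :pkcs7_private_key: /etc/puppet/keys/private_key.pkcs7.pem
  :pkcs7_public_key: /etc/puppet/keys/public_key.pkcs7.pem

# encrypt a value and paste the resulting ENC[...] block into your yaml data
eyaml encrypt -l 'db_password' -s 'supersecret'

# then in a manifest the lookup is an ordinary hiera call
$db_password = hiera('db_password')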
Puppet
11,171,472
18
I just came across puppet inheritance lately. A few questions around it: is it a good practice to use puppet inheritance? I've been told by some of the experienced puppet colleagues that inheritance in puppet is not very good; I was not quite convinced. Coming from the OO world, I really want to understand, under the cover, how puppet inheritance works, and how overriding works as well.
That depends, as there are two types of inheritance and you don't mention which you mean. Node inheritance: inheriting from one node fqdn { } definition to another. This in particular is strongly recommended against, because it tends to fail the principle of least surprise. The classic example that catches people out is this: node base { $mta_config = "main.cf.normal" include mta::postfix # uses $mta_config internally } node mailserver inherits base { $mta_config = "main.cf.mailserver" } The $mta_config variable is evaluated in the base scope, so the "override" that is being attempted in the mailserver doesn't work. There's no way to directly influence what's in the parent node, so there's little benefit over composition. This example would be fixed by removing the inheritance and including mta::postfix (or another "common"/"base" class) from both. You could then use parameterised classes too. Class inheritance: the use for class inheritance is that you can override parameters on resources defined in a parent class. Reimplementing the above example this way, we get: class mta::postfix { file { "/etc/postfix/main.cf": source => "puppet:///modules/mta/main.cf.normal", } service { ... } } class mta::postfix::server inherits mta::postfix { File["/etc/postfix/main.cf"]: source => "puppet:///modules/mta/main.cf.server", } # other config... } This does work, but I'd avoid going more than one level of inheritance deep as it becomes a headache to maintain. In both of these examples though, they're easily improved by specifying the data ahead of time (via an ENC) or querying data inline via extlookup or hiera. Hopefully the above examples help. Class inheritance allows for overriding of parameters only - you can't remove previously defined resources (a common question). Always refer to the resource with a capitalised type name (file { ..: } would become File[..]). Also useful is that you can also define parameters to be undef, effectively unsetting them.
Puppet
11,154,090
18
I'd like to ask about when and in what circumstances you'd use puppet and when you'd use chef. I've also found rump, which is a puppet-solo type of thing where you iterate a single server to its configuration and then push that to a series of servers, allowing you to see changes directly. My question: which of the above should I use and in what ways? Could someone help me? My aim is in the context of continuous integration, continuous deployment in a mono/.Net environment with rake and git. I would like to package, version and deploy web applications easily and would like to use recipes for a load balancer for multiple web servers, being able to take these down quickly and not have any downtime in between upgrades.
Having used both, I would say that it depends on what you look for. In my opinion: Chef is more developer-oriented. If you're a Ruby guru, you'll love it. Puppet is more sysadmin-oriented. It has a non-ruby DSL so it's more difficult to propagate mistakes to your machines (imho). Puppet creates more readable and stable code but it's also slow to deploy new features. That's probably what you'll want in a big enterprise structure which strongly believes in your DevOps work. With Chef you can achieve complex tasks with less code, time effort. You can use all the ruby magic without having to create a Puppet construct. This is good, for instance, when your Company doesn't truly believes in DevOps value and you're constantly struggling against time to prove your manager wrong :-) I personally find Puppet a bit slower to execute when you develop new features, which can be a bit of a pain. My suggestion is: if you're a sysadmin with some development skills, go for Puppet. If you're good with Ruby (or Python), go for Chef. I also tried rump and I'm playing with it. It helps, it's cool, but I still don't see a huge value except lazy typing of rump go instead of puppet apply -vd --modulepath=. module/manifests/init.pp. :)
Puppet
6,403,894
18
I am new to Puppet and I am writing a module to set up configuration files. The problem is that when multiple clients use my module they will have to edit it according to their system. I have heard that templates are the way to solve this problem, but I am not able to work out how to use a template for setting up a configuration file. If any of you can give me a simple-to-follow example of using templates to configure files, it would be really helpful. For example, how can I set up the Apache sites-available default configuration file using a template? Or give any other example you feel will help a new puppet user. BTW I am on an Ubuntu machine.
The PuppetLabs docs on Using templates has an example of an Apache configuration for a Trac site. This should be enough to get you started. Per OP's request, here's a simple example. I'm using NTP rather than the Apache default config since that's a fairly large and complex file. NTP is much simpler. Directory looks like this: /etc/puppet/modules/ntp/manifests /templates Partial contents /etc/puppet/modules/ntp/manifests/init.pp (just the portion defining the template): $ntp_server_suffix = ".ubuntu.pool.ntp.org" file { '/etc/ntp.conf': content => template('ntp/ntp.conf.erb'), owner => root, group => root, mode => 644, } Contents of /etc/puppet/modules/ntp/templates/ntp.conf.erb: driftfile /var/lib/ntp/drift <% [1,2].each do |n| -%> server <%=n-%><%=@ntp_server_suffix%> <% end -%> restrict -4 default kod notrap nomodify nopeer noquery restrict -6 default kod notrap nomodify nopeer noquery restrict 127.0.0.1 When run with puppet this will result in an /etc/ntp.conf that looks like: driftfile /var/lib/ntp/drift server 1.ubuntu.pool.ntp.org server 2.ubuntu.pool.ntp.org restrict -4 default kod notrap nomodify nopeer noquery restrict -6 default kod notrap nomodify nopeer noquery restrict 127.0.0.1 This demonstrates a few different concepts: Variables defined in the puppet manifest (such as $ntp_server_suffix can be accessed as instance variables (@ntp_server_suffix) in the template Loops and other ruby code can be used in erb templates Code between <% and %> is executed by ruby Code between <%= and %> is executed and output by ruby Code between <%= and -%> is executed and output by ruby and the trailing newline character is suppressed. Hope this helps you understand templates.
Puppet
22,948,509
17
I'm just getting started with Puppet. The example walkthroughs and tutorials were good at helping me understand Puppet's usefulness and the basic toolset, but I'm having a hard time conceptualizing a full stack. Even the advanced tutorial didn't seem to give me a clear picture of what needs to happen. Are there any full examples of a rails stack somewhere that I could learn from?
Examples of a full stack are hard to come by. You should be able to find examples of modules that manage some of those specific examples, however. One problem is that it can be a lot of extra work to create a module that has abstracted away all site-specific assumptions and that is truly cross-platform. http://forge.puppetlabs.com/ is the canonical location for modules that people wish to share. With a quick scan I found modules for nginx, varnish, and postgres. You'll want to start with the Puppet Best Practices for the basic setup. From there, you're going to (at least), want a module for nginx, varnish, thin, postgres, memcached, redis, and a site module (probably named after your site). In your nodes.pp, each system will have a fairly simple assignment to a role. ("include role") In your "site" module, you'll want a sub-class for each system role (I'm assuming you'll have multiple sets of servers, and that within a set, they are intended to be basically identical to each other. I'm also assuming that you're likely to have more than one of the above included). You may also want a site::commonvariables class (or something like that) for things (such as lists of servers in a role, passwords, etc) that you may need across multiple other modules or classes. The best practices seem to have these site::role things in a /services secondary module area with names more like s_role, so you may want to follow that naming/placement scheme instead. These role classes will include the classes for the actual components that are needed on those roles, call defines, etc. For each of the 6 components you mention, you'll have a module. Within that module, you're likely to want to have something like a "server" and "client" subclass. And possibly a third class included by client and server for things needed by both (common libraries, etc). And within the server subclass, a define that sets up specific instances (virtualhosts, databases, etc). (if it's absolutely only ever a server, maybe skip that level of subclassing). So, for example: postgres module (manifests, files, templates, etc) postgres class (in init.pp): maybe empty class, maybe things needed by client and server postgres::client class: install postgres client libraries postgres::server class: install postgres server code, make sure postgres service is running, configure it, set up backups, etc postgres::server::database define: inside the server class, a define that takes parameters such as database name, username, password, and creates the database and user and gives the user access to the DB. Maybe this is two or three separate defines, depending on how you prefer to model things. It's best if the component modules are kept fairly independent (and reusable) and your role classes is where all the more site-specific configuration happens, but it's not the end of the world if your component modules include some site-specific stuff.
Puppet
5,784,264
17
I have a class definition which requires the build-essential package: class erlang($version = '17.3') { package { "build-essential": ensure => installed } ... } Another class in a different module also requires the build-essential package: class icu { package { "build-essential": ensure => installed } ... } However, when I try to perform puppet apply, the error I receive is: Error: Duplicate declaration: Package[build-essential] is already declared in file /vagrant/modules/erlang/manifests/init.pp:18; cannot redeclare at /vagrant/modules/libicu/manifests/init.pp:17 on node vagrant-ubuntu-trusty-64.home I was expecting classes to encapsulate the resources they use but this doesn't seem to be the case? How can I resolve this clash?
This is common question when dealing with multiple modules. There's a number of ways of doing this, the best practise is to modularise and allow the installation of build essential as a parameter: class icu ($manage_buildessential = false){ if ($manage_buildessential == true) { package { "build-essential": ensure => installed } } } Then, where you want to include your ICU class: class {'icu': manage_buildessential => 'false', } However, for a quick and dirty fix: if ! defined(Package['build-essential']) { package { 'build-essential': ensure => installed } } Or if you have puppetlabs-stdlib module: ensure_packages('build-essential')
Puppet
26,205,727
16
I am trying to install a particular rpm using puppet; my init.pp is: class nmap { package {'nmap': provider => 'rpm', source => "<Local PATH to the RPM>", } } and the rpm is in ...modules/nmap/files If I move the rpm to manifests and provide the rpm name in source => '' class nmap { package {'nmap': provider => 'rpm', source => "rpm-name.rpm", } } it works, but how can I specify the source path with ../files/ and have puppet apply succeed? When I use: source => 'puppet:///files/nmap-6.45-1.x86_64.rpm', I get an error: Debug: Executing '/bin/rpm -i puppet:///files/nmap-6.45-1.x86_64.rpm' Error: Execution of '/bin/rpm -i puppet:///files/nmap-6.45-1.x86_64.rpm' returned 1: error: open of puppet:///files/nmap-6.45-1.x86_64.rpm failed: No such file or directory Error: /Stage[main]/Nmap/Package[nmap]/ensure: change from absent to present failed: Execution of '/bin/rpm -i puppet:///files/nmap-6.45-1.x86_64.rpm' returned 1: error: open of puppet:///files/nmap-6.45-1.x86_64.rpm failed: No such file or directory when running the command: sudo puppet apply --modulepath=/home/user1/qa/puppet_qa/modules/ -e "include nmap" --debug
Unlike the file resource type, the package type has no support for Puppet fileserver URLs. You will need to use a file resource to download the rpm prior to installing it. If this is a recurring problem for you, make a defined type that does those in one go (think macros), e.g. define fileserver_package($source, $ensure='installed') { file { "/my/tmp/dir/$name.rpm": source => $source } package { $name: ensure => $ensure, provider => 'rpm', source => "/my/tmp/dir/$name.rpm", require => File["/my/tmp/dir/$name.rpm"], } } Edit: it is generally advisable to use a local yum repo instead, see also the first comment by @rojs below.
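Using the defined type above would then look something like this (the module-relative source path is an assumption based on the layout described in the question):

fileserver_package { 'nmap':
  source => 'puppet:///modules/nmap/nmap-6.45-1.x86_64.rpm',
}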
Puppet
23,179,577
16
This is hopefully a quick one to answer, I'm trying to provision a box on AWS with puppet and one of the steps involves a pip install from a requirements file. Something like this: - /usr/local/venv/ostcms/bin/pip install -r /vagrant/requirements.txt The step basically fails because it can't find any of the packages in the requirements file, but when I open the AWS box's security group up to allow "All Traffic" the pip step works. I'm trying to find the port that pip uses so I can basically have that port, http and ssh open on the box and live happily ever after.
Pip runs on 3128 so make sure you have that open in your AWS console. Otherwise pip will get blocked when attempting to talk to PyPi (or anywhere else it cares to download from).
Puppet
22,377,175
16
I have this in my Vagrantfile: Vagrant.configure("2") do |config| config.vm.provision "puppet" end Yet, when I run puppet --version I get : [vagrant@vagrant-centos65 ~]$ puppet --version -bash: puppet: command not found Do I need to manually install puppet?
No, (at the moment) Vagrant doesn't install it automatically. So you either need to use a basebox which already has it installed (Puppet Labs provides boxes too), or you need to install it yourself. Probably the easiest way to install is to use shell provisioner before the puppet provisioner(s).
Puppet
21,589,388
16
I need some help with erb templates; I can't seem to get my head around passing an array and then iterating over it. My problem is this. I want to pass a few arrays: device => ["eth0", "br0"], ipaddr => ["192.168.12.166", "192.168.12.199"], netmask => ["255.255.255.0", "255.255.255.0"], hwaddr => '', network => '', gateway => ["192.168.12.254", "192.168.12.204"], to a template that iterates over each item in the array and prints it out: auto <%= device %> inet static address <%= ipaddr %> netmask <%= netmask %> broadcast <%= broadcast %> gateway <%= gateway %> As far as I've gotten is figuring out that I need to do something with device.each |device| puts device, but I don't know what the syntax is supposed to look like. I believe you can tell what I'm trying to do from these snippets, and then you might understand that the entries need to be separate, and not interpolated. Any help you can offer would be appreciated. I know I should be trying things out in irb and figuring them out from there, which is what I'm reading up on now. Thanks!
the basic syntax for using each in ruby is something like this: array.each do |item_from_array| BLOCK so if you only had one array then you could just do something like this: (I would use a different name inside the vertical bars for clarity) <% device.each do |dev| %> auto <%= dev %> inet static <% end %> However that would iterate over all of your devices first, before moving on to your ipaddr array. I'm guessing you want them each in turn auto, address, netmask, etc. In that case you'd be better off using a more 'traditional' index and looping through N times, like this: <% for idx in (0..1) %> auto <%= device[idx] %> inet static address <%= address[idx] %> netmask <%= netmask[idx] %> broadcast <%= broadcast[idx] %> <% end %> Of course you need to think about what your maximum size of array is, and what to do if an array contains less entries than the others. You can find the maximum size of all the arrays by doing something like this: [device,address,netmask,broadcast].map{|a| a.length}.max and you can skip over a particular array like this: <% if idx < address.length %> address <%= address[idx] %><% end %>
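A slightly more idiomatic variant of the same loop is each_with_index, which avoids hard-coding the upper bound. This is only a sketch; it assumes all the arrays have the same length, and it drops the empty broadcast entry from the question:

<% device.each_with_index do |dev, idx| -%>
auto <%= dev %> inet static
address <%= ipaddr[idx] %>
netmask <%= netmask[idx] %>
gateway <%= gateway[idx] %>
<% end -%>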
Puppet
7,079,857
16
this puppet manifest will remove the file /etc/file.txt if it exists: file { "/etc/file.txt": ensure => absent, } how to tell puppet to remove all files /etc/*.txt? according to the reference, it seems that puppet file does not allow wildcards. https://puppet.com/docs/puppet/latest/types/file.html ps: I am aware that I could execute a script from puppet, but I would prefer another more elegant way.
There's a built-in type for this called 'tidy', which allows you to specify a file glob pattern of files to remove. Check it out at https://puppet.com/docs/puppet/latest/types/tidy.html.
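A minimal sketch for the case in the question; limiting recurse to 1 so only files directly under /etc are matched is an assumption you may want to change:

tidy { '/etc':
  matches => [ '*.txt' ],
  recurse => 1,
}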
Puppet
18,874,716
15
I have a private network VM for developing on my mac. I'd like for my android device to be able to communicate with the VM on my mac. Currently I can visit the IP defined in my Vagrantfile, 10.10.10.10, on my mac and access it just fine but I can't access it via my phone on the same wifi. What do I need to do to make it available across my local network and visible to my phone over wifi? Here's my Vagrantfile Vagrant.configure("2") do |config| config.vm.box = "precise64" config.vm.box_url = "http://files.vagrantup.com/precise64.box" config.vm.network :private_network, ip: "10.10.10.10" config.ssh.forward_agent = true config.vm.provider :virtualbox do |v| v.customize ["modifyvm", :id, "--natdnshostresolver1", "on"] v.customize ["modifyvm", :id, "--memory", 1024] v.customize ["modifyvm", :id, "--name", "PHPBoxWith54"] end nfs_setting = RUBY_PLATFORM =~ /darwin/ || RUBY_PLATFORM =~ /linux/ config.vm.synced_folder "./", "/var/www", id: "vagrant-root" , :nfs => nfs_setting config.vm.provision :shell, :inline => "if [[ ! -f /apt-get-run ]]; then sudo apt-get update && sudo touch /apt-get-run; fi" config.vm.provision :shell, :inline => 'echo -e "mysql_root_password=root controluser_password=awesome" > /etc/phpmyadmin.facts;' config.vm.provision :puppet do |puppet| puppet.manifests_path = "manifests" puppet.module_path = "modules" puppet.options = ['--verbose'] end end
You are using a Private Network IP which is only accessible by the Host machine (NOT visible to other machines even they are in the same WLAN). In your case, the best choice is to use Public Network (bridged) so that your Android device can access it. add config.vm.network "public_network" in your Vagrant file in the config block. BTW: the default NAT mode is fine but you'll have to set proper port forwarding rules for each service you want to access (e.g. SSH, HTTP, HTTPS etc...).
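If you would rather keep the default NAT network, a sketch of a forwarding rule (the port numbers are just examples) looks like this; your phone would then browse to your Mac's WLAN IP on port 8080:

config.vm.network "forwarded_port", guest: 80, host: 8080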
Puppet
18,689,014
15
Primary goal is to add all puppet modules automatically, so that all dev-env's and prod-env could be started with one command. How can I install puppet modules through puppet manifest?
We've been happily using librarian-puppet to sync all 3rd party modules, it supports setting the modules' locations and versions. So production and dev run the exact same code. The usage is one liner librarian-puppet install In other cases we have a shell script that runs puppet two times, one time a minimal module that is only responsible for fetching the required modules, and then the full blown puppet flow when all modules are available.
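A minimal Puppetfile sketch for librarian-puppet; the module names, versions and git ref below are only examples:

forge "https://forge.puppetlabs.com"

mod 'puppetlabs-stdlib', '4.6.0'
mod 'puppetlabs-apache',
  :git => 'https://github.com/puppetlabs/puppetlabs-apache.git',
  :ref => '1.4.0'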
Puppet
16,774,980
15
I'm just starting with Puppet, really new to this world. I have a CentOS 6 Puppet Master and a CentOS 6 Puppet Client. On the Master I have one module: puppet module list /etc/puppet/modules └── mstanislav-yum (v1.0.0) So I want to apply the same module to my puppet client, but I can't, or I don't know why it isn't applied: root@puppetclient: puppet agent --test Info: Retrieving plugin Info: Caching catalog for puppetclient Info: Applying configuration version '1355737643' Finished catalog run in 0.10 seconds but there are not any changes on the client :-/ Any idea?
You haven't declared the module (assigned it to your node) yet... Add this to site.pp: node 'fqdn of client' { include yum } Then, you can run puppet agent -t to see it in action.
Puppet
13,911,798
15
2021 update Today I'm using Ansible for this and other devops tasks. Along the way I've experimented with Chef, Puppet, Saltstack and Docker images, but I've found that for me, as a solo developer working in smaller projects, a lightweight tool like Ansible is a great fit. Original question (from 2011) I'm starting out as an independent web developer and I recently scared away a client by giving a quote for a rather simple site that was quite a bit higher than they expected. It was basically brochureware that they could have done themselves in a hosted solution like Wordpress.com or Google sites. Except for one critical feature, that made me think that Django would be a nice fit, and that some proper web development could be motivated. When looking at the tasks I've listed in my rough estimate it's pretty obvious to me that most of the hours in there are stuff that is not specific to this clients website. This got me thinking that I should have a script to automate the process of setting up and deploying new Django projects. A couple of things would still have to be done manually like: Set up a new VPS (or in most cases just go with virtualhosts on an existing VPS). Edit DNS settings to point clientsdomain.com and test.clientsdomain.com to the new VPS. Set up SSH-keys Edit a config file. The script would then do stuff like: Install some required packages on the new VPS Install default a iptables firewall on the new VPS Add nginx/apache config for clientsdomain.com and test.clientsdomain.com Set up new up databases for production and test on the VPS and for development on my local machine. Create a new Django project and put it on Bitbucket. Set up settings.py and local_settings.py for production, test and development Set up offsite backuproutines for production database and uploaded files directory. Enable some default apps: flatpages, admin Add some boilerplate templates, a couple of boilerplate flatpages and a navbar. Add the 960.gs CSS-framework Add jQuery Set up djapian or Haystack (including cron job for automatic updating) for search. Set up a new Google analytics profile. Include possibility to replicate databases between production, test and development Set up Pingdom monitoring I've used Rake and Fabric for somewhat similar (but less ambitious) stuff in the past and I'm thinking that Fabric might be a pretty good fit for this task as well, but I'd still like your input. Are there other tools I should look into? I've heard good things about Puppet but just looking at their site (it contains the word Enterprise ) gives me the feeling that it might be overkill for a one man operation.
Puppet may seem daunting and overkill for small projects since it's so often used for huge deployments, but I use it to manage just one machine in standalone mode without a client server setup so that I don't have to deal with SSL certs and multiple machines, which keeps things a lot simpler, but still gives me the benefit that I can do really fast disaster recovery or move my hosting without a lot of effort. There's some great reasons (idempotency, cross platform support, full lifecycle management, abstraction, concise DSLs) for using modern configuration management systems over systems that are essentially scripts that do ssh in a loop or relying on platforms that lock you in. Check out learning puppet for a quick ramp up including examples and a VM playground. You can get really useful things done with simple Puppet scripts (manifests) that run standalone, and then start learning all the advanced features once you need them. Another nice thing is that a lot of Puppet manifests and modules have already been written by others, and they're shared on the Puppet Forge and by many other advanced Puppet users.
Puppet
5,485,352
15
I have been using http://www.puphpet.com successfully to generate vagrant+puppet environments for a number of projects. Then this week I got tasked with writing a prototype for a project using Laravel 4. Since I'm not going to be the one working on the project full time, I figured it would be best to make a VM environment for it that the next person can just clone for the repo. Not having much experience with Laravel 4 I got everything to run in the dev environment just fine. Then I tried to run the first migration and here the problems start with the app/storage file permissions. 1. app/storage must be writable by the web user Fine, took out id: vagrant from the synced folder provisioning and set the owner & group to www-data like so: config.vm.synced_folder "./www", "/var/www", owner: "www-data", group: "www-data" 2. Artisan can only be run from inside the vagrant box to have access to the DB Fine, vagrant ssh and run artisan from the www folder. 3. app/storage & app/database have to be writable by the vagrant user in order to use migrations Grrr, ok, added the following awful piece of code to the vagrant file (note, tried to do this in Puppet first and it didn't take): config.vm.provision :shell, :inline => "usermod -a -G www-data vagrant" 4. app/storage & app/database are not writeable by the group Argh!!! Ok, let's try this Puppet directive: file { "/var/www/app/storage": source => "/var/www/app/storage/", mode => 0775, ensure => 'directory', owner => 'www-data', group => 'www-data', recurse => true } Nope, doesn't work. Tried to do the same with the Puppet exec {} directive to no effect. It seems that permissions for the vagrant synced folder are set by the host machine, not the guest. Finally ended up manually changing the permissions for the folder in the host machine. Is there any simpler way to do this? I would really just like to be able to give the next dev a worry free environment they can clone from the repo, not have them re-setup everything after cloning. UPDATE We've figured out that if we change the Apache run user, vagrant doesn't override it on reload. So we've done that manually and it's working better than changing the synced folder's permissions & owner. Now we're just trying to figure out how to make that change manually in Puppet.
After some discussion on Twitter, figured out the following: There's a constraint from VirtualBox on vagrant that does not allow you to set permissions for the synced folder from inside the guest OS. See this issue on github. You can use the following code to set the synced folder permissions from the vagrant file: config.vm.synced_folder ".", "/vagrant", :mount_options => ["dmode=777","fmode=666"] Or you can change the Apache runtime user to vagrant from the puppet manifest like so: exec { "change_httpd_user": command => "sed -i 's/www-data/vagrant/g' /etc/apache2/envvars", onlyif => "/bin/grep -q 'www-data' '/etc/apache2/envvars'", notify => Service['apache2'], require => Package['apache2'], } file { "/var/lock/apache2": ensure => "directory", owner => "vagrant", group => "vagrant", require => Exec['change_httpd_user'], } Or any combination of the above
Puppet
18,648,547
14
I am using Puppet 2.7 and I need to convert an array to comma separated list. $hosts_fqdn= ['host1','host2','host3'] And I need to convert it to desired result: 'host1,host2,host3' I guess that Puppet 3.2 offers lambda expression - reduce. But unfortunately that is not possible with 2.7.
Function join from puppetlabs/stdlib: join($hosts_fqdn,',')
Puppet
18,526,456
14
Is there a way to check in manifest files if a given class exists? I want to do something like this: class foo { if exists( Class["foo::${lsbdistcodename}"] ) { include foo::${lsbdistcodename} } } So I can easily add distrubution / version specific classes which are then automatically included.
You should use defined instead of exists statement. The following snippet works for me: class foo { if defined( "foo::${lsbdistcodename}") { notify {'defined':} include "foo::${lsbdistcodename}" } } class foo::precise { notify{'precise':} } [assuming you're running puppet version > 2.6.0]
Puppet
15,096,706
14
I have a cluster of 500 linux boxes which now need to use the mount resource with the bind option (see man 8 mount) to support a chroot jail. The mount points need to be enforced and maintained after boot. I am unsure how to describe this state with puppet. Is it like this? mount { "/gpfs20/home": ensure => mounted, name => "/chroot/centos5/home", fstype => "none", options => "(rw,bind)", } TIA -- Charles
For the record it is done this way: mount { '/chroot/centos5/home': ensure => mounted, device => '/gpfs20/home', fstype => 'none', options => 'rw,bind', } ~Charles~
Puppet
11,139,569
14
I am creating a script which needs to parse the yaml output that puppet outputs. When I do a request against, for example, https://puppet:8140/production/catalog/my.testserver.no I will get some yaml back that looks something like: --- &id001 !ruby/object:Puppet::Resource::Catalog aliases: {} applying: false classes: - s_baseconfig ... edges: - &id111 !ruby/object:Puppet::Relationship source: &id047 !ruby/object:Puppet::Resource catalog: *id001 exported: and so on... The problem is when I do a yaml.load(yamlstream), I will get an error like: yaml.constructor.ConstructorError: could not determine a constructor for the tag '!ruby/object:Puppet::Resource::Catalog' in "<string>", line 1, column 5: --- &id001 !ruby/object:Puppet::Reso ... ^ As far as I know, this &id001 part is supported in yaml. Is there any way around this? Can I tell the yaml parser to ignore them? I only need a couple of lines from the yaml stream; maybe regex is my friend here? Anyone done any yaml cleanup regexes before? You can get the yaml output with curl like: curl --cert /var/lib/puppet/ssl/certs/$(hostname).pem --key /var/lib/puppet/ssl/private_keys/$(hostname).pem --cacert /var/lib/puppet/ssl/certs/ca.pem -H 'Accept: yaml' https://puppet:8140/production/catalog/$(hostname) I also found some info about this in the puppet mailinglist @ http://www.mail-archive.com/[email protected]/msg24143.html. But I can't get it to work correctly...
I have emailed Kirill Simonov, the creator of PyYAML, to get help to parse Puppet YAML file. He gladly helped with the following code. This code is for parsing Puppet log, but I'm sure you can modify it to parse other Puppet YAML file. The idea is to create the correct loader for the Ruby object, then PyYAML can read the data after that. Here goes: #!/usr/bin/env python import yaml def construct_ruby_object(loader, suffix, node): return loader.construct_yaml_map(node) def construct_ruby_sym(loader, node): return loader.construct_yaml_str(node) yaml.add_multi_constructor(u"!ruby/object:", construct_ruby_object) yaml.add_constructor(u"!ruby/sym", construct_ruby_sym) stream = file('201203130939.yaml','r') mydata = yaml.load(stream) print mydata
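On newer PyYAML versions, calling yaml.load without an explicit Loader is deprecated. The same idea can be written against SafeLoader; this is an untested sketch of the adaptation, keeping the filename from the example above:

import yaml

def construct_ruby_object(loader, suffix, node):
    return loader.construct_yaml_map(node)

def construct_ruby_sym(loader, node):
    return loader.construct_yaml_str(node)

# register the constructors on SafeLoader instead of the default loader
yaml.add_multi_constructor(u"!ruby/object:", construct_ruby_object, Loader=yaml.SafeLoader)
yaml.add_constructor(u"!ruby/sym", construct_ruby_sym, Loader=yaml.SafeLoader)

with open('201203130939.yaml') as stream:
    mydata = yaml.load(stream, Loader=yaml.SafeLoader)
print(mydata)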
Puppet
8,357,650
14
I know the basics of ordering in puppet to run apt-get update before a specific package, but I would like to specify that apt-get update run only once and then execute the rest of the puppet file. Is that possible? All of the ways listed here need to either run apt-get before every package or use arrows or requires to specify each package.
This would be my recommendation from that list: exec { "apt-update": command => "/usr/bin/apt-get update" } Exec["apt-update"] -> Package <| |> This will ensure that the exec is run before any package, not that the exec is run before each package. In fact, any resource in puppet will only ever be executed at most once per puppet run. But if you're wanting the exec to occur before ANY type of resource I guess you could do something like: exec { "apt-update": command => "/usr/bin/apt-get update", before => Stage["main"], } The "main" Stage is the default stage for each resource, so this would make the exec occur before anything else. I hope that this helps.
Puppet
17,689,180
13
I am new to puppet, and while going through puppet courses I found one person using the 'puppet agent -t' command to configure an agent node, while in another course the instructor used the 'puppet apply' command. What is the difference between these two commands?
These are: puppet apply - applies or "executes" Puppet code on the local machine. puppet agent -t also sometimes written puppet agent --test - calls the Puppet Agent to retrieve a catalog (compiled Puppet code) from a Puppet Master, and then applies it locally and immediately. Note that -t is badly-named, and it may originally have been intended for "testing" but in fact it is not a "test" mode at all, but will make changes to your machine. See also puppet agent --noop for the real "test" (dry-run) mode.
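Concretely, the two look like this on the command line (the manifest path in the first example is an assumption):

# masterless: compile and apply the local code right here
puppet apply /etc/puppet/manifests/site.pp

# agent/master: fetch the compiled catalog from the master and apply it now
puppet agent -t

# same, but as a dry run that only reports what would change
puppet agent -t --noop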
Puppet
53,371,289
12
Anaconda python is installed (in linux) via a bash script. I am trying to use Vagrant provisioning to get Anacaonda Python installed. In the bash script (following the documentation bootstrap.sh example) I have a bootstrap.sh script that: wget the install script chmod +x to make it executable ./<script>.sh to install. Installing this way fails as the installation has a few prompts, one of which requires the non-default answer. Is it possible to automate the installation via a bash script? If not, is it necessary to use something like Puppet? I do not know Puppet at all, so have tried to avoid using...perhaps it is time to dig in? The end goal is to ship the Vagrantfile and not host a Vagrant box. P.S. My initial, feeble attempts made use of the linux yes command, but a better way has to exist!
In your bootstrap.sh just include something like: miniconda=Miniconda3-3.7.4-Linux-x86_64.sh cd /vagrant if [[ ! -f $miniconda ]]; then wget --quiet http://repo.continuum.io/miniconda/$miniconda fi chmod +x $miniconda ./$miniconda -b -p /opt/anaconda cat >> /home/vagrant/.bashrc << END # add for anaconda install PATH=/opt/anaconda/bin:\$PATH END The -b option runs in batch mode and is what you are looking for: >>>> ./Miniconda-3.7.0-Linux-x86_64.sh -h usage: ./Miniconda-3.7.0-Linux-x86_64.sh [options] Installs Miniconda 3.7.0 -b run install in batch mode (without manual intervention), it is expected the license terms are agreed upon -f no error if install prefix already exists -h print this help message and exit -p PREFIX install prefix, defaults to /Users/phil/miniconda I also typically put Miniconda (or a link to it) directly in the "vagrant" where the bootstrap.sh is. That way, you are not downloading from the web during each vagrant up (after init or destroy).
Puppet
25,321,139
12
I am using https://github.com/puphpet/puppetlabs-mysql to set up mysql configuration and I need to change bind-address variable to 0.0.0.0. I am trying to do that as mysql::config::override_options { 'mysqld' : 'bind-address' => '0.0.0.0' } but it doesn't work. Can you help me to advise how this should be done? Thank you in advance!
The answers don't seem to comply to the latest version of the module (> 3.1). You can use: class { '::mysql::server': override_options => { mysqld => { bind-address => '0.0.0.0'} #Allow remote connections }, # ... other class options }
Puppet
20,245,225
12
Hi, I'm new to puppet and trying to work on a sample that copies files from one location to another. Any sample script to do that? Ex: I have my file at d:\temp\test.txt and I want to copy this file to the E:\mycopy\ folder.
You can "ensure" that the file at target location exists and provide the file to be copied as source in file type. A partial code snippet only showing relevant parts: file { 'E:\mycopy\folder\filename': ensure => present, source => "d:\temp\test.txt", } Check the documentation of file type here and how source attribute behaves here. Now this will work with a few caveats : If you are using absolute file path as source - then the file should be present on agent machine If you are serving file from Puppet's file server then the source file should be in appropriate location in puppet's file server. But what is your exact purpose? Similar thing can be achieved with content attribute of file type or other attributes
Puppet
19,995,172
12
I'm creating a virtual machine to mimic our production web server so that I can share it with new developers to get them up to speed as quickly as possible. I've been through the Vagrant docs however I do not understand the advantage of using a generic base box and provisioning everything with Puppet versus packaging a custom box with everything already installed and configured. All I can think of is; Advantages of using Puppet vs custom packaged box Easy to keep everyone up to date - Ability to put manifests under version control and share the repo so that other developers can simply pull new updates and re-run puppet i.e. 'vagrant provision'. Environment is documented in the manifests. Ability to use puppet modules defined in production environment to ensure identical environments. Disadvantages of using Puppet vs custom packaged box Takes longer to write the manifests than to simply install and configure a custom packaged box. Building the virtual machine the first time would take longer using puppet than simply downloading a custom packaged box. I feel like I must be missing some important details, can you think of any more?
Advantages: As dependencies may change over time, building a new box from scratch will involve either manually removing packages, or throwing the box away and repeating the installation process by hand all over again. You could obviously automate the installation with a bash or some other type of script, but you'd be making calls to the native OS package manager, meaning it will only run on the operating system of your choice. In other words, you're boxed in ;) As far as I know, Puppet (like Chef) contains a generic and operating system agnostic way to install packages, meaning manifests can be run on different operating systems without modification. Additionally, those same scripts can be used to provision the production machine, meaning that the development machine and production will be practically identical. Disadvantages: Having to learn another DSL, when you may not be planning on ever switching your OS or production environment. You'll have to decide if the advantages are worth the time you'll spend setting it up. Personally, I think that having an abstract and repeatable package management/configuration strategy will save me lots of time in the future, but YMMV.
Puppet
12,873,128
12
I'm taking my first steps in Puppet and ran into a problem. I've installed PHP on a Linux server and I want to make some slight changes to the php.ini file. I don't want to overwrite the whole ini file with one from the repository, just change/create one simple config value. I want to ensure that the property upload_max_filesize in php.ini has the value of 10M. How can I achieve this?
My preferred option would be to leave php.ini alone, and have puppet create a file in php's conf.d directory to override the values you want to change. The less changes you make to php.ini, the easier it is to see what's going on when you need to merge your changes with the package providers changes when you upgrade php.ini in future. file {'/etc/php5/conf.d/upload_limits.conf': ensure => present, owner => root, group => root, mode => 444, content => "post_max_size = 10M \nupload_max_filesize = 10M \n", }
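If you really must edit php.ini in place instead of dropping a conf.d override, Puppet's built-in augeas type can change a single key. This is a sketch only; the ini path and the PHP section name are assumptions that depend on your distro and SAPI layout:

augeas { 'php-upload-max-filesize':
  context => '/files/etc/php5/apache2/php.ini',
  changes => [
    'set PHP/upload_max_filesize 10M',
  ],
}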
Puppet
10,800,199
12
In a puppet class how should I test if a variable has been set or not? Right now I am just checking if a variable is undefined: if $http_port != undef { $run_command = "$run_command --http-port $http_port" } Is there a better way to check if a variable has been declared or not?
If you are testing if an variable is undef, your way is correct. Writing if $http_port { $run_command = "$run_command --http-port $http_port" } would accomplish almost the same. If $http_port is undef or false, it will not run the command. If you want to test if the var has been defined you should do: if defined('$http_port') { $run_command = "$run_command --http-port $http_port" } See https://docs.puppet.com/puppet/4.10/function.html#defined. If the var is a class variable you could do something like: class your_class ( Optional[Integer[0, 65535]] $http_port = undef, ) { if $http_port { notify { "got here with http_port=${http_port}": } } } It will then only run the notify if the class is declared with http_port set as an integer between 0 and 65535.
Puppet
45,310,419
11
hiera.yaml --- :hierarchy: - node/%{host_fqdn} - site_config/%{host_site_name} - site_config/perf_%{host_performance_class} - site_config/%{host_type}_v%{host_type_version} - site/%{host_site_name} - environments/%{site_environment} - types/%{host_type}_v%{host_type_version} - hosts - sites - users - common # options are native, deep, deeper :merge_behavior: deeper We currently have this hiera config. So the config gets merged in the following sequence common.yaml > users.yaml > sites.yaml > hosts.yaml > types/xxx_vxxx.yaml > etc. For the variable top hierarchies, it gets overwritten only if that file exists. eg: common.yaml server: instance_type: m3.medium site_config/mysite.yaml server: instance_type: m4.large So for all other sites, the instance type will be m3.medium, but only for mysite it will be m4.large. How can I achieve the same in Ansible?
I think that @Xiong is right that you should go the variables way in Ansible. You can set up flexible inventory with vars precedence from general to specific. But you can try this snippet if it helps: --- - hosts: loc-test tasks: - include_vars: hiera/{{ item }} with_items: - common.yml - "node/{{ ansible_fqdn }}/users.yml" - "node/{{ ansible_fqdn }}/sites.yml" - "node/{{ ansible_fqdn }}/types/{{ host_type }}_v{{ host_type_version }}.yml" failed_when: false - debug: var=server This will try to load variables from files with structure similar to your question. Nonexistent files are ignored (because of failed_when: false). Files are loaded in order of this list (from top to bottom), overwriting previous values. Gotchas: all variables that you use in the list must be defined (e.g. host_type in this example can't be defined in common.yml), because list of items to iterate is templated before the whole loop is executed (see update for workaround). Ansible overwrite(replace) dicts by default, I guess your use case expects merging behavior. This can be achieved with hash_behavior setting – but this is unusual for Ansible playbooks. P.S. You may alter top-to-bottom-merge behavior by changing with_items to with_first_found and reverse the list (from specific to general). In this case Ansible will load variables from first file found. Update: use variables from previous includes in file path. You can split the loop into multiple tasks, so Ansible will evaluate each task's result before templating next file's include path. Make hiera_inc.yml: - include_vars: hiera/common.yml failed_when: false - include_vars: hiera/node/{{ ansible_fqdn }}/users.yml failed_when: false - include_vars: hiera/node/{{ ansible_fqdn }}/sites.yml failed_when: false - include_vars: hiera/node/{{ ansible_fqdn }}/types/{{ host_type | default('none') }}_v{{ host_type_version | default('none') }}.yml failed_when: false And in your main playbook: - include: hiera_inc.yml This looks a bit clumsy, but this way you can define host_type in common.yaml and it will be honored in the path templating for next tasks. With Ansible 2.2 it will be possible to include_vars into named variable (not global host space), so you can include_vars into hiera_facts and use combine filter to merge them without altering global hash behavior.
Puppet
39,473,719
11
I'm trying to build a puppet managed infrastructure (non-enterprise) in AWS with EC2 instances. Using puppetlabs-aws module I'm able to create the machines by convenient means. Next up is to make local settings on each node, most importantly setting a unique hostname. How can I do this? One way I know of is to provide a script via the user_data parameter. That would be great, but to be usable I need to be able to parameterize that script in order to avoid duplicating the script once for each agent. Does it make sense? I'd really appreciate a convenient way of achieving this, as I want to launch new instances programmatically. Any suggestion will be considered. Update To give an example of my problem, consider this snippet of my provisioning puppet manifest: ec2_instance { 'backend': ensure => present, name => 'backend', region => 'us-west-2', image_id => 'ami-f0091d91', instance_type => 't2.micro', key_name => 'mykey', security_groups => ['provision-sg'], user_data => template('configure.erb'), } ec2_instance { 'webfront': ensure => present, name => 'webfront', region => 'us-west-2', image_id => 'ami-f0091d91', instance_type => 't2.micro', key_name => 'mykey', security_groups => ['provision-sg'], user_data => template('configure.erb'), } This will ensure the two instances are up and running. Please notice the user_data => template('configure.erb') referring to a template script which is executed on the instance once it is created. Here I would be able to set the hostname (or whatever I wanted to) if I only knew what data to base the decision on. I can add tags to the instance descriptions, but that is not readable from the configure.erb script as far at I know. Anyway, setting the hostname is just my idea of solving the root problem. There might be other more convenient methods. What I want is simply a way of having these two instances representing different node types to the puppet master.
The problem is how to set up a new instance with so that it will load it's config from a particular class Let me try and explain the problem I think you are trying to address What I am trying to answer here You have an existing script that sets up EC2 virtual hosts on AWS using the aws-puppet module. This module calls AWS API to actually make EC2 virtual hosts. But they only contain configuration that is "built in" to the AMI file that is used in the API call. A typical AMI file might be a Centos base image. Further configuration is possible at this phase via a "user data script". But let's assume this a shell script, difficult to test and maintain and so not containing complex setup So further configuration, install of packages and setup is needed. In order to make this setup happen, there is a second phase of activity from puppet, using entirely different manifests (that are not detailed in the question) This second phase is controlled by the new EC2 virtual hosts attaching to the puppet master in their own right. So what I am assuming you are doing is: phase 1, making EC2 hosts phase 2, when they are up config themselves from puppet Basic Answer using roles Here some ideas of how to make this scenario with two phase configuration of the EC2 hosts work At create time make a custom fact "role". Make a file in /etc/facter/facts.d/role.yaml like this role: webserver This can be setup as the instance is made by adding a command like this to a User Data script echo 'role: webserver' > /etc/facter/facts.d/role.yaml As long as this "role" is setup before puppet starts up it will work fine. I am assuming that you have a set of modules with manifests and maybe files subdirectories in the module path with the same name as the role Next, alter your site.pp to say something like include "$role" And the init.pp from the module will kick in and do the right thing, install packages, configure files etc! This idea is explained in more detail here https://puppetlabs.com/presentations/designing-puppet-rolesprofiles-pattern Another Approach The above is a really crude way of doing it which I haven't tested! Our setup has roles but loads them via hiera configuration. The heira configuration looks somewhat like this --- :backends: - yaml :hierarchy: - role/%{::role} - global :yaml: :datadir: /etc/puppet/environments/production/hiera Then I might have a /etc/puppet/environments/production/hiera/role/webserver.yaml file which says classes: - webserver - yum_repos - logstash - java8 And the end of the site.pp says hiera_include('classes') Which loads all the relevant "classes" definitions from the modules_include files This has the advantage that multiple classes can be loaded by each role with much less duplication of code The "global" part of the yaml configuration is intended for classes that are loaded by everything in your environment, for example admin user ssh keys defined type example Here is an example of how you might use a defined type as a wrapper around ec2_instance to pass the "myrole" into the template. 
I have not tested this, I don't have the aws puppet stuff installed define my_instance( $ensure = present, $region = 'us-west-2', $image_id = 'ami-f0091d91', $instance_type = 't2.micro', $key_name= 'mykey', $security_groups = ['provision-sg'], $myrole = 'webserver' ) { ec2_instance { $title : ensure => $ensure, name => $title, region => $region, image_id => $image_id, instance_type => $instance_type, key_name => $key, security_groups => $security_groups, user_data => template('configure.erb'), } } $instance_data={ 'backend' => { ensure => present, name => 'backend', region => 'us-west-2', image_id => 'ami-f0091d91', instance_type => 't2.micro', key_name => 'mykey', security_groups => ['provision-sg'], myrole => 'voodooswamp' }, 'webfront'=> { ensure => present, region => 'us-west-2', image_id => 'ami-f0091d91', instance_type => 't2.micro', key_name => 'mykey', security_groups => ['provision-sg'], myrole => 'humanfly' } } create_resources(my_instance, $instance_data)
Puppet
34,226,131
11
This is happening in Puppet's bundle. The Gemfile specifies gem "puppet", :path => File.dirname(__FILE__), :require => false But one of the gems I installed in $GEM_HOME appears in $: after all. $ bundle exec ruby -e 'puts $:' ... /home/puppy/puppet-git-clone/lib ... /usr/lib/ruby/vendor_ruby ... /home/puppy/gems/gems/puppet-3.7.5/lib ... This is not a problem in and of itself, but apparently Ruby will load Puppet 3.7.5 instead of the 3.7.3 I checked out of the git repo. $ bundle exec irb irb(main):001:0> require 'puppet' => true irb(main):002:0> Facter.value(:puppetversion) => "3.7.5" Why is Puppet not loaded from the git tree and how can I debug this further? Update Puppets .gemspec might be involved. It's clever about specifying the version. I now worry that Rubygems does in fact load the installed 3.7.5 gem so that Puppet.version would truthfully report a wrong value, throwing off bundler. Could that be what's happening? Update 2 As suggested in the comments, I tried settings the path and version statically in the Gemfile. gem "puppet", "3.4.2", :path => "/home/puppy/puppet-git-clone", :require => false As for the result, well - at least bundler is consistent in its views ;-) Could not find gem 'puppet (= 3.4.2) ruby' in source at /home/ffrank/git/puppet. Source contains 'puppet' at: 3.7.3 Run `bundle install` to install missing gems.
The quick fix is to add -Ilib to your ruby command: $ bundle exec ruby -e "require 'puppet'; puts Facter.value(:puppetversion)" 3.7.5 $ bundle exec ruby -Ilib -e "require 'puppet'; puts Facter.value(:puppetversion)" 3.7.3 If we compare the load paths, you can see that adding -Ilib results in 3.7.5 not being present in the second load path: $ diff <(bundle exec ruby -e 'puts $:') <(bundle exec ruby -Ilib -e 'puts $:') | grep 'puppet-' < /Library/Ruby/Gems/2.0.0/gems/puppet-3.7.5/lib It seems like this should be the default behavior, so there may be a bug in bundler.
Puppet
29,709,146
11
In the official Puppet docs it says that there are two chaining arrows: https://docs.puppetlabs.com/puppet/latest/reference/lang_relationships.html -> (ordering arrow) Causes the resource on the left to be applied before the resource on the right. Written with a hyphen and a greater-than sign. ~> (notification arrow) Causes the resource on the left to be applied first, and sends a refresh event to the resource on the right if the left resource changes. Written with a tilde and a greater-than sign. Can someone clarify the difference between these two?
The document you mentioned gives the best explanation. To understand it in a simple way, use the existing sample: Package['ntp'] -> File['/etc/ntp.conf'] ~> Service['ntpd'] For File['/etc/ntp.conf'], puppet needs to make sure that the package ntp has been installed before it creates or updates the file ntp.conf. There is no restart request. For Service['ntpd'], ntp.conf needs to exist first - the same ordering as ->. But if puppet finds that the file ntp.conf has any changes (whether it is created or updated), the ntpd service needs to be restarted. That's the difference. For more reading about ordering in puppet, please see this document: Learning Puppet β€” Resource Ordering And do some testing by yourself to understand how it works: set Package['ntp'], File['/etc/ntp.conf'], Service['ntpd'] with that order. Run puppet apply to make sure Package/File/Service are ready on the system. Make a change in the file ntp.conf. Enable the --debug option with the puppet apply command. The debug log will give you detail about what happens in the background - for example, you should see that the file gets updated and the ntpd service gets restarted.
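The same relationships can also be written with metaparameters instead of chaining arrows: -> corresponds to require/before and ~> corresponds to subscribe/notify. For example:

service { 'ntpd':
  ensure    => running,
  subscribe => File['/etc/ntp.conf'],
}

file { '/etc/ntp.conf':
  require => Package['ntp'],
  # source/content omitted here
}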
Puppet
27,803,107
11
I'm using puppet as my provisioner in one of my vagrant project. I'm trying to add a module for a custom bash_profile. The module_path for puppet is set to: puppet.module_path = "puppet/modules" The class for my bash_profile module looks like this: class bash_profile { file { "/home/vagrant/bash_profile": ensure => present, source => "puppet:///modules/bash_profile/files/bash_profile" } } Here's the file structure for my puppet structure: puppet | manifests | | phpbase.pp // my main manifest file that has includes for modules | modules | | bash_profile | | | files | | | | bash_profile // the actual bash_profile file I want to ensure is present on my VM | | | manifests | | | | init.pp // the init file included for the bash_profile class When I run the provisioning for vagrant, I get the error err: /Stage[main]/Bash_profile/File[/home/vagrant/bash_profile]: Could not evaluate: Could not retrieve information from environment production source(s) puppet:///modules/bash_profile/files/bash_profile at /tmp/vagrant-puppet-1/modules-0/bash_profile/manifests/init.pp:8 I'm not sure why it can't retrieve the information. The path seems to be correct. Can anyone see what I'm missing?
Yes, you are not supposed to include the literal files/ in the URL. Instead, it should just be puppet:///modules/bash_profile/bash_profile
Puppet
24,213,407
11
I have worked out how to fire up a vagrant box and apt-get install packages. I now want to git clone a node.js repo from github before running npm install and starting the app with node app.js. I expected to be able to achieve this by issuing BASH commands, but I see now that puppet requires this to be done in a puppety way. The results from Google on this issue are confusing and inconsistent. So, how do you tell Puppet to git clone a node.js package from github and then install it with npm install?
You can use the puppet-nodejs module to manage npm packages. Take a look at https://forge.puppetlabs.com/puppetlabs/nodejs Otherwise, this article should explain how to clone a git repo. http://livecipher.blogspot.com.au/2013/01/deploy-code-from-git-using-puppet.html More info can be found at https://github.com/puppetlabs/puppetlabs-vcsrepo. Once installed you should be able to do something like: vcsrepo { "/path/to/repo": ensure => present, provider => git, source => 'git://example.com/repo.git', revision => 'master' }
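To then run npm install after the clone, one option is an exec that is refreshed by the vcsrepo resource. A rough sketch (the paths are placeholders, and it assumes the puppetlabs/nodejs module mentioned above provides node and npm):

vcsrepo { '/opt/myapp':
  ensure   => present,
  provider => git,
  source   => 'git://example.com/repo.git',
  revision => 'master',
}

exec { 'npm_install_myapp':
  command     => 'npm install',
  cwd         => '/opt/myapp',
  path        => '/usr/local/bin:/usr/bin:/bin',
  refreshonly => true,                 # only run when the repo actually changes
  subscribe   => Vcsrepo['/opt/myapp'],
  require     => Class['nodejs'],      # assumes the nodejs class from puppetlabs/nodejs is declared
}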
Puppet
18,329,821
11
Puppet 2.7.19, Vagrant version 1.0.6, VM OS Ubuntu 12.04.

I am attempting to set the puppet module path from vagrant, which seems like it should be very simple. In my Vagrantfile I have:

Vagrant::Config.run do |config|
  config.vm.provision :puppet, :module_path => "my_modules"
  config.vm.provision :puppet, :options => ["--modulepath", "my_modules"]
end

When I change the value of the modulepath it seems to have no effect (after vagrant reload). Here is a snippet from vagrant up:

[default] -- v-root: /vagrant
[default] -- manifests: /tmp/vagrant-puppet/manifests
[default] -- v-pp-m0: /tmp/vagrant-puppet/modules-0

Notice the /tmp/vagrant-puppet/modules-0? What is this about?

Then from inside vagrant:

vagrant@precise64:~$ puppet apply --configprint modulepath
/home/vagrant/.puppet/modules:/usr/share/puppet/modules

So when I do:

puppet module install puppetlabs/mysql

I get this error:

Preparing to install into /home/vagrant/.puppet/modules ...
Error: Could not install module 'puppetlabs-mysql' (latest)
  Directory /home/vagrant/.puppet/modules does not exist

So I have to:

vagrant@precise64:~/.puppet$ mkdir /home/vagrant/.puppet/modules
vagrant@precise64:~/.puppet$ puppet module install puppetlabs/mysql
Preparing to install into /home/vagrant/.puppet/modules ...
Downloading from http://forge.puppetlabs.com ...
Installing -- do not interrupt ...
/home/vagrant/.puppet/modules
└─┬ puppetlabs-mysql (v0.6.1)
  └── puppetlabs-stdlib (v3.2.0)

And then I have to move the modules into place where vagrant can see them...

mv /home/vagrant/.puppet/modules/mysql /tmp/vagrant-puppet/modules-0

Seems like maybe this is a bug or I am really missing something. Seems pretty basic, so I would like to hear how others solved this. Thanks!
You're effectively specifying module_path twice: Vagrant::Config.run do |config| config.vm.provision :puppet, :module_path => "my_modules" config.vm.provision :puppet, :options => ["--modulepath", "my_modules"] end I'm not sure which would wind up overriding the other, but you shouldn't be specifying the module path in both ways. I think it's better to use vagrant's support for module_path in preference to the :options array, as in your first line. I like the following style better still: Vagrant::Config.run do |config| ... config.vm.provision :puppet do |puppet| puppet.manifests_path = "manifests" puppet.module_path = ["modules-contrib","modules-custom"] puppet.manifest_file = "site.pp" end # puppet end # config You asked about /tmp/vagrant-puppet/modules-0. That's the first item in the modulepath array, where the 0 is the array index. Ie in my example above, the modules-contrib and modules-custom directories from my vagrant project get mounted at /tmp/vagrant-puppet/modules-0 and /tmp/vagrant-puppet/modules-1 respectively. You shouldn't be installing puppet modules within the vagrant box. Instead, install them in a modules directory in your vagrant project in the host environment. Rather than installing them one by one, I'd recommend using librarian puppet (gem install librarian-puppet), and putting a Puppetfile in your vagrant project, which lists all the third party modules you want, and tells librarian-puppet to put them in a separate modules directory from the one you use for your custom puppet modules. I use the modules-contrib directory for third party modules, and put my own in modules-custom. Tell the librarian where to put its modules: librarian-puppet config --local path modules-contrib See https://github.com/rodjek/librarian-puppet for the layout of the Puppetfile. It's pretty simple, and lets you mix up puppet-forge and git sources as you like. You should add the modules-contrib folder to your .gitignore file (assuming you use git), and rely on version control of the Puppetfile file.
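For reference, a minimal Puppetfile for librarian-puppet might look like this (module names and versions are only examples):

forge "https://forgeapi.puppetlabs.com"

# Modules from the Puppet Forge
mod "puppetlabs/mysql", "0.6.1"
mod "puppetlabs/stdlib"

# A module straight from git
mod "custom_app",
  :git => "git://github.com/example/puppet-custom_app.git",
  :ref => "master"

Running librarian-puppet install then populates the configured path (modules-contrib in the setup above), which you list in module_path alongside your own modules directory.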
Puppet
15,507,506
11
I'm trying to work out the best way to set some environment variables with puppet. I could use exec and just do export VAR=blah; however, that would only last for the current session. I also thought about just adding it onto the end of a file such as .bashrc, but then I don't think there is a reliable method to check if it is already there, so it would end up getting added with every run of puppet.
I would take a look at this related question. *.sh scripts in /etc/profile.d are read at user-login time (as the post says, at the same time /etc/profile is sourced) Variables export-ed in any script placed in /etc/profile.d will therefore be available to your users. You can then use a file resource to ensure this action is idempotent. For example: file { "/etc/profile.d/my_test.sh": content => 'export MYVAR="123"' }
Puppet
15,440,972
11
I would like to run a web service and then wait a few seconds before fetching the result. What is the best way to achieve a wait in puppet?
You could use the Linux sleep command with exec and stage it to run after the web service. Something like:

exec { 'wait_for_my_web_service' :
  require => Service["my_web_service"],
  command => "sleep 10 && /run/my/command/to/get/results/from/the/web/service",
  path    => "/usr/bin:/bin",
}
Puppet
14,470,724
11
Directory and file layout as follows:

app_test/
app_test/manifests
app_test/manifests/init.pp
app_test/manifests/test.pp

Contents of init.pp:

class app_test {
  include app_test::test
}

Contents of test.pp:

class app_test::test {
  exec { 'hello world':
    command => "/bin/echo Hello World >> /tmp/are-you-there.txt"
  }
}

Puppet v2.7.11 is installed.

$ puppet apply init.pp
notice: Finished catalog run in 0.01 seconds

Could someone please indicate why this doesn't generate the file /tmp/are-you-there.txt?
You are only defining classes, not declaring them. Create a file modules/[module_name]/tests/init.pp with the contents:

include app_test

Then test your class with:

puppet apply tests/init.pp

That should do the trick!
Puppet
13,143,929
11
I have a pip-style requirements.txt file I use for keeping track of my python dependencies. I'm moving my dev environment over to vagrant + puppet. So far I've been using the pip provider built into puppet to install individual packages like this:

package { ["django", "nose"]:
  ensure   => present,
  provider => pip,
}

Is it possible to pass in my requirements.txt instead and have puppet keep the packages up to date whenever that file changes?
Yes, it is possible. Instead of defining a package resource, define an exec resource that takes the path to requirements.txt as a parameter and runs the pip install command. E.g.:

class pip_install (
  $path_requirements_file,
) {
  exec { "pip_requirements_install":
    command     => "pip install -r ${path_requirements_file}",
    path        => "/usr/local/bin:/usr/bin:/bin",
    refreshonly => true,
  }
}
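Note that with refreshonly => true the exec only runs when it receives a refresh event, so to re-run pip whenever requirements.txt changes you would typically manage the file with Puppet too and subscribe the exec to it. A rough sketch (the default path and the file source are assumptions; adjust to your layout):

class pip_install (
  $path_requirements_file = '/opt/app/requirements.txt',
) {
  file { $path_requirements_file:
    ensure => file,
    source => 'puppet:///modules/pip_install/requirements.txt',  # hypothetical source
  }

  exec { 'pip_requirements_install':
    command     => "pip install -r ${path_requirements_file}",
    path        => '/usr/local/bin:/usr/bin:/bin',
    refreshonly => true,
    subscribe   => File[$path_requirements_file],  # re-run only when the file changes
  }
}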
Puppet
21,420,205
10
I would like to know if there is any way of checking whether a string exists inside another string (i.e. a contains function). I have taken a look at http://forge.puppetlabs.com/puppetlabs/stdlib but I haven't found this specific function. Maybe this is possible through a regexp, but I am not really sure how to do it. Can anybody help me with this one?
There is an "in" operator in Puppet: # Right operand is a string: 'eat' in 'eaten' # resolves to true 'Eat' in 'eaten' # resolves to true # Right operand is an array: 'eat' in ['eat', 'ate', 'eating'] # resolves to true 'Eat' in ['eat', 'ate', 'eating'] # resolves to true # Right operand is a hash: 'eat' in { 'eat' => 'present tense', 'ate' => 'past tense'} # resolves to true 'eat' in { 'present' => 'eat', 'past' => 'ate' } # resolves to false # Left operand is a regular expression (with the case-insensitive option "?i") /(?i:EAT)/ in ['eat', 'ate', 'eating'] # resolves to true # Left operand is a data type (matching integers between 100-199) Integer[100, 199] in [1, 2, 125] # resolves to true Integer[100, 199] in [1, 2, 25] # resolves to false
Puppet
19,283,190
10
I'm trying to get TeamCity to trigger a deployment with puppet via the commandline using puppet.bat on Windows. In Teamcity I'm calling this using a Command Line runner, with Command executable: C:\Program Files (x86)\Puppet Labs\Puppet\bin\puppet.bat Command parameters: apply myexample.pp What I would like to do is also pass the build number from TeamCity as well so I can use this within myexample.pp Is this possible? UPDATE: Code used for Custom Fact which was the accepted answer below. require 'open-uri' $uri = URI.parse("http://teamcity/guestAuth/app/rest/buildTypes/id: <BUILDID>/builds/status:SUCCESS/number") $version = $uri.read Facter.add("latestbuildversion") do setcode do $version end end
To pass a value through the command line it needs to be an environment variable, prefixed by FACTER_. So, FACTER_foo will turn into $::foo.
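In this TeamCity setup that could mean setting an environment variable on the build configuration and reading it as a fact in the manifest. A sketch, assuming the standard TeamCity env. parameter mechanism and an arbitrary fact name:

# In TeamCity, add a build parameter such as:
#   env.FACTER_build_number = %build.number%
# The fact is then available inside myexample.pp:

notify { "deploying build ${::build_number}": }

file { 'C:/app/version.txt':             # hypothetical path, for illustration only
  ensure  => file,
  content => "${::build_number}\n",
}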
Puppet
15,901,850
10
From my current knowledge, there is no reason .terraform.lock.hcl should be included in the .gitignore. Nothing about this file is private, or is there?
Per the Terraform documentation on the Dependency Lock File: Terraform automatically creates or updates the dependency lock file each time you run the terraform init command. You should include this file in your version control repository so that you can discuss potential changes to your external dependencies via code review, just as you would discuss potential changes to your configuration itself. The key to understanding why you should commit that file is found in the following section on Dependency Installation Behavior: When terraform init is working on installing all of the providers needed for a configuration, Terraform considers both the version constraints in the configuration and the version selections recorded in the lock file. If a particular provider has no existing recorded selection, Terraform will select the newest available version that matches the given version constraint, and then update the lock file to include that selection. If a particular provider already has a selection recorded in the lock file, Terraform will always re-select that version for installation, even if a newer version has become available. You can override that behavior by adding the -upgrade option when you run terraform init, in which case Terraform will disregard the existing selections and once again select the newest available version matching the version constraint. Essentially this is intended to have Terraform continue to use the version of the provider selected when you added it. If you do not checkin the lock file, you will always be automatically upgraded to the latest version that obeys the constraint in code, which could lead to unintended consequences. Note: You can force Terraform to upgrade when doing the init call by passing the -upgrade flag. terraform init -upgrade Update for Cross-Platform Development From the Terraform documentation on the providers lock command: Specifying Target Platforms In your environment you may, for example, have both developers who work with your Terraform configuration on their Windows or macOS workstations and automated systems that apply the configuration while running on Linux. In that situation, you could choose to verify that all of your providers support all of those platforms, and to pre-populate the lock file with the necessary checksums, by running terraform providers lock and specifying those three platforms: terraform providers lock \ -platform=windows_amd64 \ -platform=darwin_amd64 \ -platform=linux_amd64 \ -platform=darwin_arm64 \ -platform=linux_arm64 The above example uses Unix-style shell wrapping syntax for readability. If you are running the command on Windows then you will need to replace the backslashes with carets (for cmd) or backticks (for PowerShell). So you should still check the lock file into version control, but you should ensure the lock file contains the checksums for providers on all platforms.
Terraform
67,963,719
212
I have resources defined in .tf files that are generic to several applications. I populate many of the fields via a .tfvars file. I need to omit some of the resources entirely based on variables in the .tfvars. For example if I have a resource like: resource "cloudflare_record" "record" { zone_id = "${data.cloudflare_zones.domain.zones[0].id}" name = "${var.subdomain}" value = "${var.origin_server}" type = "CNAME" ttl = 1 proxied = true } But then I declare something like cloudflare = false in my .tfvars file I'd like to be able to do something like this: if var.cloudflare { resource "cloudflare_record" "record" { zone_id = "${data.cloudflare_zones.domain.zones[0].id}" name = "${var.subdomain}" value = "${var.origin_server}" type = "CNAME" ttl = 1 proxied = true } } I've looked at dynamic blocks but that looks like you can only use those to edit fields and blocks within a resource. I need to be able to ignore an entire resource.
Add a count parameter with a ternary conditional using the variable declared in .tfvars, like this:

resource "cloudflare_record" "record" {
  count   = var.cloudflare ? 1 : 0
  zone_id = "${data.cloudflare_zones.domain.zones[0].id}"
  name    = "${var.subdomain}"
  value   = "${var.origin_server}"
  type    = "CNAME"
  ttl     = 1
  proxied = true
}

In this example var.cloudflare is a boolean declared in the .tfvars file. If it is true, a count of 1 record is created; if it is false, a count of 0 records is created. Once a resource uses count it becomes a list of instances, so later references must use the 0 index:

cloudflare_record.record[0].some_field
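One consequence of the 0 index is that the reference fails while the resource is disabled, so outputs that depend on it usually need the same condition. A small sketch (the attribute name is illustrative):

output "record_hostname" {
  # Guard the reference so it also works when var.cloudflare = false
  value = var.cloudflare ? cloudflare_record.record[0].hostname : null
}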
Terraform
60,231,309
205
When I enter any code in this function (e.g. console.log();) and click "Save", an error occurs:

The provided execution role does not have permissions to call DescribeNetworkInterfaces on EC2

exports.handler = (event, context, callback) => {
    callback(null, 'Hello from Lambda');
    console.log(); // here is my code
};

The function uses the role lambda_excute_execution (policy: AmazonElasticTranscoderFullAccess), and it is not bound to any triggers at the moment. If I instead give the role the AdministratorAccess policy, I can save my source code correctly. This role could run functions successfully before today. Does anyone know what causes this error?
This error is common if you try to deploy a Lambda in a VPC without giving it the required network interface related permissions ec2:DescribeNetworkInterfaces, ec2:CreateNetworkInterface, and ec2:DeleteNetworkInterface (see AWS Forum). For example, this a policy that allows to deploy a Lambda into a VPC: { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "ec2:DescribeNetworkInterfaces", "ec2:CreateNetworkInterface", "ec2:DeleteNetworkInterface", "ec2:DescribeInstances", "ec2:AttachNetworkInterface" ], "Resource": "*" } ] }
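Since this question is tagged Terraform: if the role is managed in Terraform, one way to grant these permissions is to attach the AWS-managed AWSLambdaVPCAccessExecutionRole policy (which contains the network-interface actions above) to the Lambda's role. A sketch, assuming a role resource named lambda_role:

resource "aws_iam_role_policy_attachment" "lambda_vpc_access" {
  role       = aws_iam_role.lambda_role.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole"
}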
Terraform
41,177,965
194
I got the following error during a terraform plan which occured in my pipeline: Error: Error locking state: Error acquiring the state lock: ConditionalCheckFailedException: The conditional request failed Lock Info: ID: 9db590f1-b6fe-c5f2-2678-8804f089deba Path: ... Operation: OperationTypePlan Who: ... Version: 0.12.25 Created: 2020-05-29 12:52:25.690864752 +0000 UTC Info: Terraform acquires a state lock to protect the state from being written by multiple users at the same time. Please resolve the issue above and try again. For most commands, you can disable locking with the "-lock=false" flag, but this is not recommended. It is weird because I'm sure there is no other concurrent plan. Is there a way to deal with this? How should I remove this lock?
Cause of Error This error usually appears when one process fails running terraform plan or terraform apply. For example if your network connection interrupts or the process is terminated before finishing. Then Terraform "thinks" that this process is still working on the infrastructure and blocks other processes from working with the same infrastructure and state at the same time in order to avoid conflicts. As stated in the error message, you should make sure that there is really no other process still running (e.g. from another developer or from some build-automation). If you force-unlock in such a situation you might screw up your terraform state, making it hard to recover. Resolution If there is no other process still running: run this command terraform force-unlock 9db590f1-b6fe-c5f2-2678-8804f089deba (where the numerical id should be replace by the one mentioned in the error message) if you are not sure if there is another process running and you are worried that you might make things worse, I would recommend waiting for some time (like 1h), try again, then try again after maybe 30 min. If the error still persists it is likely that there really is no other process and it's safe to unlock as described above
Terraform
62,189,825
194
Running terraform and waiting for it takes a long time, so I would like either to run it while excluding the RDS resource, which takes the longest time to execute, or to run it for only the EC2 resource. Is there a way to do such things in terraform?
You can use -target=resource like this:

terraform plan -target=module.mymodule.aws_instance.myinstance
terraform apply -target=module.mymodule.aws_instance.myinstance

or

terraform plan -target=aws_instance.myinstance
terraform apply -target=aws_instance.myinstance

Note: the question asks either to exclude a resource or to run only one resource, and -target only covers the second case; at the time of writing, an "exclude" flag was still an open feature request in the Terraform repo.
Terraform
46,762,047
170
I need to deploy a list of GCP compute instances. How do I loop for_each through the "vms" in a list of objects like this:

{
  "gcp_zone": "us-central1-a",
  "image_name": "centos-cloud/centos-7",
  "vms": [
    {
      "hostname": "test1-srfe",
      "cpu": 1,
      "ram": 4,
      "hdd": 15,
      "log_drive": 300,
      "template": "Template-New",
      "service_types": [ "sql", "db01", "db02" ]
    },
    {
      "hostname": "test1-second",
      "cpu": 1,
      "ram": 4,
      "hdd": 15,
      "template": "APPs-Template",
      "service_types": [ "configs" ]
    }
  ]
}
I work a lot with iterators in Terraform; they have always given me headaches. Therefore I identified five of the most common iterator patterns (code examples are given below), which helped me construct a lot of nice modules:

1. Using for_each on a list of strings
2. Using for_each on a list of objects
3. Using for_each to combine two lists
4. Using for_each in a nested block
5. Using for_each as a conditional

Using for_each on a list of strings is the easiest to understand: you can always use the toset() function. When working with a list of objects you need to convert it to a map where the key is a unique value. The alternative is to put a map inside your Terraform configuration. Personally, I think it looks cleaner to have a list of objects instead of a map in your configuration; the key usually doesn't have a purpose other than to identify unique items in a map, and it can be constructed dynamically. I also use iterators to conditionally deploy a resource or a resource block, especially when constructing more complex modules.

1. Using for_each on a list of strings

locals {
  ip_addresses = ["10.0.0.1", "10.0.0.2"]
}

resource "example" "example" {
  for_each   = toset(local.ip_addresses)
  ip_address = each.key
}

2. Using for_each on a list of objects

locals {
  virtual_machines = [
    {
      ip_address = "10.0.0.1"
      name       = "vm-1"
    },
    {
      ip_address = "10.0.0.2"
      name       = "vm-2"
    }
  ]
}

resource "example" "example" {
  for_each = {
    for index, vm in local.virtual_machines :
    vm.name => vm # Perfect, since VM names also need to be unique
    # OR: index => vm (unique but not perfect, since the index will change frequently)
    # OR: uuid() => vm (do NOT do this! gets recreated every time)
  }
  name       = each.value.name
  ip_address = each.value.ip_address
}

3. Using for_each to make the Cartesian product of two lists

locals {
  domains = [
    "https://example.com",
    "https://stackoverflow.com"
  ]
  paths = [
    "/one",
    "/two",
    "/three"
  ]
}

resource "example" "example" {
  # Loop over both lists and flatten the result
  urls = flatten([
    for domain in local.domains : [
      for path in local.paths : {
        domain = domain
        path   = path
      }
    ]
  ])
}

4. Using for_each on a nested block

# Using the optional() keyword makes fields null if not present
variable "routes" {
  type = list(object({
    name = string
    path = string
    config = optional(object({
      cache_enabled = bool
      https_only    = bool
    }))
  }))
  default = []
}

resource "example" "example" {
  name = ...

  dynamic "route" {
    for_each = { for route in var.routes : route.name => route }
    content {
      # Note: <top_level_block>.value.<object_key>
      name = route.value.name

      dynamic "configuration" {
        # Note: <top_level_block>.value.<optional_object_key>
        for_each = route.value.config != null ? [1] : []
        content {
          cache_enabled = route.value.config.cache_enabled
          https_only    = route.value.config.https_only
        }
      }
    }
  }
}

5. Using for_each as a conditional (particularly for dynamic blocks)

variable "deploy_example" {
  type        = bool
  description = "Indicates whether to deploy something."
  default     = true
}

# Using count and a conditional; for_each is also possible here.
# See the next example, which uses for_each with a conditional.
resource "example" "example" {
  count      = var.deploy_example ? 1 : 0
  name       = ...
  ip_address = ...
}

variable "enable_logs" {
  type        = bool
  description = "Indicates whether to enable something."
  default     = false
}

resource "example" "example" {
  name       = ...
  ip_address = ...

  # Note: dynamic blocks cannot use count!
  # Using for_each with an empty list and a one-element list as a readable alternative.
  dynamic "logs" {
    for_each = var.enable_logs ? [1] : []
    content {
      name = "logging"
    }
  }
}
Terraform
58,594,506
166
Use case

I have installed Terraform v0.11.13 via homebrew, and as recommended by terraform I want to upgrade to version v0.11.14 before doing the major upgrade to v0.12.0.

The problem

When I run brew upgrade terraform or download the Mac package from the terraform website, it immediately updates my terraform version to v0.12.0. So how can I upgrade to v0.11.14 instead?
Especially when playing around with Terraform 0.12 betas, I learned to love tfenv. After installation (via brew install tfenv on MacOS), this allows you to easily discover, install and activate any Terraform version: $ tfenv list-remote 0.12.0 0.12.0-rc1 0.12.0-beta2 0.12.0-beta1 0.12.0 0.11.14 ... $ tfenv install 0.11.14 [INFO] Installing Terraform v0.11.14 [INFO] Downloading release tarball from https://releases.hashicorp.com/terraform/0.11.14/terraform_0.11.14_darwin_amd64.zip ... [INFO] Installation of terraform v0.11.14 successful [INFO] Switching to v0.11.14 [INFO] Switching completed If you want to manually switch to a different version: $ tfenv use 0.12.0 [INFO] Switching to v0.12.0 [INFO] Switching completed Alternatively, adding .terraform-version file makes tfenv automatically switch to the right version for a given directory and it will even take care of auto-installing the correct version if not already installed.
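The .terraform-version file mentioned above is just a one-line file in the project root, for example:

# in the project root; tfenv switches to (and can auto-install) this version
echo "0.11.14" > .terraform-version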
Terraform
56,283,424
164
I want to attach one of the pre-existing AWS managed roles to a policy, here's my current code: resource "aws_iam_role_policy_attachment" "sto-readonly-role-policy-attach" { role = "${aws_iam_role.sto-test-role.name}" policy_arn = "arn:aws:iam::aws:policy/ReadOnlyAccess" } Is there a better way to model the managed policy and then reference it instead of hardcoding the ARN? It just seems like whenever I hardcode ARNs / paths or other stuff like this, I usually find out later there was a better way. Is there something already existing in Terraform that models managed policies? Or is hardcoding the ARN the "right" way to do it?
The IAM Policy data source is great for this. A data resource is used to describe data or resources that are not actively managed by Terraform, but are referenced by Terraform. For your example, you would create a data resource for the managed policy as follows: data "aws_iam_policy" "ReadOnlyAccess" { arn = "arn:aws:iam::aws:policy/ReadOnlyAccess" } The name of the data source, ReadOnlyAccess in this case, is entirely up to you. For managed policies I use the same name as the policy name for the sake of consistency, but you could just as easily name it readonly if that suits you. You would then attach the IAM policy to your role as follows: resource "aws_iam_role_policy_attachment" "sto-readonly-role-policy-attach" { role = "${aws_iam_role.sto-test-role.name}" policy_arn = "${data.aws_iam_policy.ReadOnlyAccess.arn}" }
Terraform
45,002,292
149
I need to spin up a bunch of EC2 boxes for different users. Each user should be sandboxed from all the others, so each EC2 box needs its own SSH key. What's the best way to accomplish this in Terraform? Almost all of the instructions I've found want me to manually create an SSH key and paste it into a terraform script. (Bad) Examples: https://github.com/hashicorp/terraform/issues/1243, http://2ninjas1blog.com/terraform-assigning-an-aws-key-pair-to-your-ec2-instance-resource/ Terraform fails to import key pair with Amazon EC2) Since I need to programmatically generate unique keys for many users, this is impractical. This doesn't seem like a difficult use case, but I can't find docs on it anywhere. In a pinch, I could generate Terraform scripts and inject SSH keys on the fly using Bash. But that seems like exactly the kind of thing that Terraform is supposed to do in the first place.
Terraform can generate SSL/SSH private keys using the tls_private_key resource. So if you wanted to generate SSH keys on the fly you could do something like this: variable "key_name" {} resource "tls_private_key" "example" { algorithm = "RSA" rsa_bits = 4096 } resource "aws_key_pair" "generated_key" { key_name = var.key_name public_key = tls_private_key.example.public_key_openssh } data "aws_ami" "ubuntu" { most_recent = true filter { name = "name" values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"] } filter { name = "virtualization-type" values = ["hvm"] } owners = ["099720109477"] # Canonical } resource "aws_instance" "web" { ami = data.aws_ami.ubuntu.id instance_type = "t2.micro" key_name = aws_key_pair.generated_key.key_name tags { Name = "HelloWorld" } } output "private_key" { value = tls_private_key.example.private_key_pem sensitive = true } This will create an SSH key pair that lives in the Terraform state (it is not written to disk in files other than what might be done for the Terraform state itself when not using remote state), creates an AWS key pair based on the public key and then creates an Ubuntu 20.04 instance where the ubuntu user is accessible with the private key that was generated. You would then have to extract the private key from the state file and provide that to the users. You could use an output to spit this straight out to stdout when Terraform is applied. Getting the output from private key is via this command below: terraform output -raw private_key Security caveats I should point out here that passing private keys around is generally a bad idea and you'd be much better having developers create their own key pairs and provide you with the public key that you (or them) can use to generate an AWS key pair (potentially using the aws_key_pair resource as used in the above example) that can then be specified when creating instances. In general I would only use something like the above way of generating SSH keys for very temporary dev environments that you are controlling so you don't need to pass private keys to anyone. If you do need to pass private keys to people you will need to make sure that you do this in a secure channel and that you make sure the Terraform state (which contains the private key in plain text) is also secured appropriately.
Terraform
49,743,220
143
I am a little bit puzzled on the question whether to commit .tfstate files to Git or not. The Terraform documentation states: Terraform also put some state into the terraform.tfstate file by default. This state file is extremely important; it maps various resource metadata to actual resource IDs so that Terraform knows what it is managing. This file must be saved and distributed to anyone who might run Terraform. We recommend simply putting it into version control, since it generally isn't too large. Now, on the other hand, the accepted and upvoted answer on Best practices when using Terraform states: Terraform config can be used to provision many boxes on different infrastructure, each of which could have a different state. As it can also be run by multiple people this state should be in a centralised location (like S3) but not git. (Emphasis by the original author, not by me) Who is right, and if so, why?
There are a few reasons not to store your .tfstate files in Git:

1. You are likely to forget to commit and push your changes after running terraform apply, so your teammates will have out-of-date .tfstate files. Also, without any locking on these state files, if two team members run Terraform at the same time on the same .tfstate files, you may overwrite each other's changes. You can solve both problems by a) storing .tfstate files in an S3 bucket using Terraform remote state, which will push/pull the .tfstate files automatically every time you run terraform apply, and b) using a tool like terragrunt to provide locking for your .tfstate files.

2. The .tfstate files may contain secrets. For example, if you use the aws_db_instance resource, you have to specify a database password, and Terraform will store that, in plaintext, in the .tfstate file. This is a bad practice on Terraform's behalf to begin with, and storing unencrypted secrets in version control only makes it worse. At least if you store .tfstate files in S3, you can enable encryption at rest (SSL provides encryption while in motion) and configure IAM policies to limit who has access. It's very far from ideal, and we'll have to see if the open issue discussing this problem ever gets fixed.

For more info, check out How to manage Terraform state and Terraform: Up & Running, both of which I wrote.
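For reference, a minimal sketch of such a remote state setup (bucket and table names are placeholders; the dynamodb_table argument of the native s3 backend nowadays provides the state locking this answer attributes to terragrunt):

terraform {
  backend "s3" {
    bucket         = "my-terraform-state"        # placeholder bucket name
    key            = "myapp/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true                        # encryption at rest
    dynamodb_table = "terraform-locks"           # placeholder table used for state locking
  }
}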
Terraform
38,486,335
142
I must be being incredibly stupid but I can't figure out how to do simple string concatenation in Terraform. I have the following data null_data_source: data "null_data_source" "api_gw_url" { inputs = { main_api_gw = "app.api.${var.env_name == "prod" ? "" : var.env_name}mydomain.com" } } So when env_name="prod" I want the output app.api.mydomain.com and for anything else - let's say env_name="staging" I want app.api.staging.mydomain.com. But the above will output app.api.stagingmydomain.com <-- notice the missing dot after staging. I tried concating the "." if the env_name was anything but "prod" but Terraform errors: data "null_data_source" "api_gw_url" { inputs = { main_api_gw = "app.api.${var.env_name == "prod" ? "" : var.env_name + "."}mydomain.com" } } The error is __builtin_StringToInt: strconv.ParseInt: parsing "" The concat() function in TF appears to be for lists not strings. So as the title says: How do you do simple string concatenation in Terraform? I can't believe I'm asking how to concat 2 strings together XD Update: For anyone that has a similar issue I did this horrific workaround for the time being: main_api_gw = "app.api.${var.env_name == "prod" ? "" : var.env_name}${var.env_name == "prod" ? "" : "."}mydomain.com"
I know this was already answered, but I wanted to share my favorite: format("%s/%s",var.string,"string2") Real world example: locals { documents_path = "${var.documents_path == "" ? format("%s/%s",path.module,"documents") : var.documents_path}" } More info: https://www.terraform.io/docs/configuration/functions/format.html
Terraform
55,312,783
141
I'm in the process of moving our infrastructure over to terraform. What's the best practice for actually managing the terraform files and state? I realize it's infrastructure as code, and I'll commit my .tf files into git, but do I commit tfstate as well? Should that reside somewhere like S3? I would eventually like CI to manage all of this, but that's a stretch for now and requires me to figure out the moving pieces for the files. I'm really just looking to see how people out there actually utilize this type of stuff in production.
I am also in a state of migrating existing AWS infrastructure to Terraform so shall aim to update the answer as I develop. I have been relying heavily on the official Terraform examples and multiple trial and error to flesh out areas that I have been uncertain in. .tfstate files Terraform config can be used to provision many boxes on different infrastructure, each of which could have a different state. As it can also be run by multiple people this state should be in a centralised location (like S3) but not git. This can be confirmed looking at the Terraform .gitignore. Developer control Our aim is to provide more control of the infrastructure to developers whilst maintaining a full audit (git log) and the ability to sanity check changes (pull requests). With that in mind the new infrastructure workflow I am aiming towards is: Base foundation of common AMI's that include reusable modules e.g. puppet. Core infrastructure provisioned by DevOps using Terraform. Developers change Terraform configuration in Git as needed (number of instances; new VPC; addition of region/availability zone etc). Git configuration pushed and a pull request submitted to be sanity checked by a member of DevOps squad. If approved, calls webhook to CI to build and deploy (unsure how to partition multiple environments at this time) Edit 1 - Update on current state Since starting this answer I have written a lot of TF code and feel more comfortable in our state of affairs. We have hit bugs and restrictions along the way but I accept this is a characteristic of using new, rapidly changing software. Layout We have a complicated AWS infrastructure with multiple VPC's each with multiple subnets. Key to easily managing this was to define a flexible taxonomy that encompasses region, environment, service and owner which we can use to organise our infrastructure code (both terraform and puppet). Modules Next step was to create a single git repository to store our terraform modules. Our top level dir structure for the modules looks like this: tree -L 1 . Result: β”œβ”€β”€ README.md β”œβ”€β”€ aws-asg β”œβ”€β”€ aws-ec2 β”œβ”€β”€ aws-elb β”œβ”€β”€ aws-rds β”œβ”€β”€ aws-sg β”œβ”€β”€ aws-vpc └── templates Each one sets some sane defaults but exposes them as variables that can be overwritten by our "glue". Glue We have a second repository with our glue that makes use of the modules mentioned above. It is laid out in line with our taxonomy document: . β”œβ”€β”€ README.md β”œβ”€β”€ clientA β”‚Β Β  β”œβ”€β”€ eu-west-1 β”‚Β Β  β”‚Β Β  └── dev β”‚Β Β  └── us-east-1 β”‚Β Β  └── dev β”œβ”€β”€ clientB β”‚Β Β  β”œβ”€β”€ eu-west-1 β”‚Β Β  β”‚Β Β  β”œβ”€β”€ dev β”‚Β Β  β”‚Β Β  β”œβ”€β”€ ec2-keys.tf β”‚Β Β  β”‚Β Β  β”œβ”€β”€ prod β”‚Β Β  β”‚Β Β  └── terraform.tfstate β”‚Β Β  β”œβ”€β”€ iam.tf β”‚Β Β  β”œβ”€β”€ terraform.tfstate β”‚Β Β  └── terraform.tfstate.backup └── clientC β”œβ”€β”€ eu-west-1 β”‚Β Β  β”œβ”€β”€ aws.tf β”‚Β Β  β”œβ”€β”€ dev β”‚Β Β  β”œβ”€β”€ iam-roles.tf β”‚Β Β  β”œβ”€β”€ ec2-keys.tf β”‚Β Β  β”œβ”€β”€ prod β”‚Β Β  β”œβ”€β”€ stg β”‚Β Β  └── terraform.tfstate └── iam.tf Inside the client level we have AWS account specific .tf files that provision global resources (like IAM roles); next is region level with EC2 SSH public keys; Finally in our environment (dev, stg, prod etc) are our VPC setups, instance creation and peering connections etc. are stored. Side Note: As you can see I'm going against my own advice above keeping terraform.tfstate in git. 
This is a temporary measure until I move to S3 but suits me as I'm currently the only developer. Next Steps This is still a manual process and not in Jenkins yet but we're porting a rather large, complicated infrastructure and so far so good. Like I said, few bugs but going well! Edit 2 - Changes It's been almost a year since I wrote this initial answer and the state of both Terraform and myself have changed significantly. I am now at a new position using Terraform to manage an Azure cluster and Terraform is now v0.10.7. State People have repeatedly told me state should not go in Git - and they are correct. We used this as an interim measure with a two person team that relied on developer communication and discipline. With a larger, distributed team we are now fully leveraging remote state in S3 with locking provided by DynamoDB. Ideally this will be migrated to consul now it is v1.0 to cut cross cloud providers. Modules Previously we created and used internal modules. This is still the case but with the advent and growth of the Terraform registry we try to use these as at least a base. File structure The new position has a much simpler taxonomy with only two infx environments - dev and prod. Each has their own variables and outputs, reusing our modules created above. The remote_state provider also helps in sharing outputs of created resources between environments. Our scenario is subdomains in different Azure resource groups to a globally managed TLD. β”œβ”€β”€ main.tf β”œβ”€β”€ dev β”‚Β Β  β”œβ”€β”€ main.tf β”‚Β Β  β”œβ”€β”€ output.tf β”‚Β Β  └── variables.tf └── prod β”œβ”€β”€ main.tf β”œβ”€β”€ output.tf └── variables.tf Planning Again with extra challenges of a distributed team, we now always save our output of the terraform plan command. We can inspect and know what will be run without the risk of some changes between the plan and apply stage (although locking helps with this). Remember to delete this plan file as it could potentially contain plain text "secret" variables. Overall we are very happy with Terraform and continue to learn and improve with the new features added.
Terraform
33,157,516
135
I would like to run an AWS lambda function every five minutes. In the AWS Management Console this is easy to set up, under the lambda function's "Event Sources" tab, but how do I set it up with Terraform? I tried to use an aws_lambda_event_source_mapping resource, but it turns out that the API it uses only supports events from Kinesis and DynamoDB. When I try to use it with a scheduled event source, creation times out.
You can use an aws_cloudwatch_event_target resource to tie the scheduled event source (event rule) to your lambda function. You need to grant it permission to invoke your lambda function; you can use an aws_lambda_permission resource for this. Example: resource "aws_lambda_function" "check_foo" { filename = "check_foo.zip" function_name = "checkFoo" role = "arn:aws:iam::424242:role/something" handler = "index.handler" } resource "aws_cloudwatch_event_rule" "every_five_minutes" { name = "every-five-minutes" description = "Fires every five minutes" schedule_expression = "rate(5 minutes)" } resource "aws_cloudwatch_event_target" "check_foo_every_five_minutes" { rule = aws_cloudwatch_event_rule.every_five_minutes.name target_id = "check_foo" arn = aws_lambda_function.check_foo.arn } resource "aws_lambda_permission" "allow_cloudwatch_to_call_check_foo" { statement_id = "AllowExecutionFromCloudWatch" action = "lambda:InvokeFunction" function_name = aws_lambda_function.check_foo.function_name principal = "events.amazonaws.com" source_arn = aws_cloudwatch_event_rule.every_five_minutes.arn }
Terraform
35,895,315
133
The Terraform Data Sources documentation tells me what a data source is, but I do not quite understand it. Can somebody give me a use case of data source? What is the difference between it and configuring something using variables?
Data sources can be used for a number of reasons, but their goal is to do something and then give you data. Let's take the example from the documentation:

# Find the latest available AMI that is tagged with Component = web
data "aws_ami" "web" {
  filter {
    name   = "state"
    values = ["available"]
  }

  filter {
    name   = "tag:Component"
    values = ["web"]
  }

  most_recent = true
}

This uses the aws_ami data source - this is different from a resource! It will just give you information, and not create anything. This particular example calls out to the describe-images AWS API, passes in a few --filter options as specified, and returns an object that you can read information from - take a look at its attributes: name, owner_id, description, image_id, and so on.

This is really useful if, let's say, I always want to pull the latest AMI matching some tags and keep a launch configuration up to date with it. I can use this data source rather than always having to update a variable or hard-code the ID. Data sources can be used for other reasons as well; one of my favorites is the template provider. Good luck!
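To make the "keep a launch configuration up to date" idea concrete, here is a rough sketch of how the data source result might be referenced (the instance type and names are placeholders):

resource "aws_launch_configuration" "web" {
  name_prefix   = "web-"
  image_id      = data.aws_ami.web.id   # always the latest AMI matching the filters
  instance_type = "t3.micro"            # placeholder instance type

  lifecycle {
    create_before_destroy = true
  }
}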
Terraform
47,721,602
106
I am using terraform to manage IaC on AWS. There was a s3 bucket created by my terraform project and later I moved the s3 bucket terraform to a different project. So I deleted all s3 related code in my project. When I run terraform apply I get an error saying Error: error deleting S3 Bucket (xxxx): BucketNotEmpty: The bucket you tried to delete is not empty I understand that terraform tries to delete the bucket since I removed the code from there. I tried to use terraform refresh but got forbidden error: Error: Forbidden: Forbidden status code: 403, request id: 8351F9C3663AF8FB, host id:. I know I can delete the terraform state file from local but this requires me to import all resources. How can I solve this issue? I am using local state not remote state.
You can remove any resource added to your Terraform like this: List all state: terraform state list Remove desired resource from state: terraform state rm <name>
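In this case that would look something like the following (the resource address is hypothetical; take the exact one from the state list output):

terraform state list | grep aws_s3_bucket
terraform state rm aws_s3_bucket.my_bucket   # hypothetical resource address

After the removal Terraform no longer tracks the bucket, so terraform apply will not try to delete it.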
Terraform
61,297,480
101
I have a Terraform script using modules. I want to create multiple resources so I'm using the for_each method. Below is my variable configuration: variable bridge_domains { description = "Bridge Domain" type = map default = { bd1 = { name = "BD1", }, bd2 = { name = "BD2" } } } In the root main.tf file, I'm looping over that variable using for_each: module "schema_template_bd" { source = "./modules/schema_template_bd" for_each = var.bridge_domains schema = module.tenant.mso_schema.id template = var.template bd = each.value.name } Then in the modules/schema_template_bd file I have the following: resource "mso_schema_template_bd" "bd" { schema_id = var.schema template_name = var.template name = var.bd } The module has an output where I have defined the following: output "mso_bd" { value = mso_schema_template_bd.bd[*] } The idea is to output the names from all the objects that were created. So I have defined an output.tf file (at root level) containing the following code: output "bd_name" { value = module.schema_template_bd.mso_bd.*.name } I always get: This object does not have an attribute named "name". Normally the bd object has a name so the error has to do with a wrong syntax in my view.
The [*] and .* operators are intended for use with lists only. Because this resource uses for_each rather than count, its value in other expressions is a map, not a list. To make your configuration work you'll need to decide whether it's better to return a map of names where the keys are the var.bridge_domains keys, or to return just a set of ids where the caller then couldn't determine which name belongs to which of the elements of var.bridge_domains: output "bd_name" { value = { for k, bd in mso_schema_template_bd.bd : k => bd.name } } OR output "bd_name" { value = [ for bd in mso_schema_template_bd.bd : bd.name ] } If only unique results are desirable in the second example, the function toset can be used: output "bd_name" { value = toset([ for bd in mso_schema_template_bd.bd : bd.name ]) } This uses for expressions, which are the more general counterpart of splat expressions that work with collections of any type and which can produce both sequences and mappings as their result, whereas splat expressions work only with lists.
Terraform
64,989,080
91
I have a GCP infrastructure deployed through Terraform: buckets, service accounts, Compute Engine, VPC, Cloud SQL, Bigtable, BigQuery, Composer, etc.

Terraform v0.11.10, provider "google" (2.15.0).

Recently the client asked me to split our single terraform file (e.g. main.tf) into several files, e.g. one file for buckets, another for service accounts, another for database services, etc. I only have one terraform state file, located in a GCP bucket.

How could I do this with the lowest impact? What about the terraform state? (Will there also be multiple state files, or is the idea to keep only one state file even if we split the code out?)

NOTE: This has nothing to do with Terraform modules; it's just about dividing a single terraform file (.tf) into several files (.tf).
Terraform does not ascribe any special meaning to which filenames you use and how many files you have. Terraform instead reads all of the .tf files and considers their contents together. Therefore you can freely move the blocks from your main.tf file into as many separate .tf files in the same directory as you like, and Terraform will consider the configuration to be exactly equivalent as long as you change nothing in the contents of those blocks while you do it. (There is a special case for Override Files that makes the above not strictly true. As long as you avoid naming any of your files override.tf or with an _override.tf suffix that special case will not apply, but I'm mentioning it just for completeness.)
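As an illustration, a layout like the following (the file names are arbitrary, just one possible grouping for the resources listed in the question) behaves exactly the same as a single main.tf, as long as the blocks themselves are unchanged:

provider.tf          # provider and backend configuration
buckets.tf           # google_storage_bucket resources
service_accounts.tf  # google_service_account resources
network.tf           # VPC, subnets, firewall rules
databases.tf         # Cloud SQL, Bigtable, BigQuery resources
variables.tf         # variable and output definitions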
Terraform
58,001,764
87
How am I able to execute the following command: terraform apply #=> . . . Do you want to perform these actions? Terraform will perform the actions described above. Only 'yes' will be accepted to approve. Enter a value: without the interactive prompt that follows?
terraform apply -auto-approve https://www.terraform.io/docs/commands/apply.html#auto-approve
Terraform
59,958,294
81
I'm getting confused when using Terraform to provision an auto-scaling group. Should I use a launch configuration or a launch template for EC2 properties such as the AMI, instance type, and so on? I don't know what the difference between them is, which one we should use, or why both exist.
Launch templates (LTs) are newer than launch configurations (LCs) and provide more options to work with. Thus, the AWS documentation recommends the use of launch templates (LTs) over launch configurations (LCs):

We recommend that you create Auto Scaling groups from launch templates to ensure that you're getting the latest features from Amazon EC2.

One of the practical key differences between LTs and LCs is the fact that an LC is immutable. Once you define it, you can't edit it; only a replacement is an option. However, a single LT can have multiple versions:

defining a launch template instead of a launch configuration allows you to have multiple versions of a template. With versioning, you can create a subset of the full set of parameters and then reuse it to create other templates or template versions.

Also, LTs provide more EC2 options for you to configure. For example, dedicated hosting can be set only using an LT, and the ability to use the T2 unlimited burst credit option is only available in an LT. Thus, if you can, it's better to follow the AWS recommendation and use an LT.
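In Terraform terms, the switch is roughly from an aws_launch_configuration to an aws_launch_template referenced by the autoscaling group. A minimal sketch (AMI, instance type, sizes and zone are placeholders):

resource "aws_launch_template" "web" {
  name_prefix   = "web-"
  image_id      = "ami-12345678"   # placeholder AMI
  instance_type = "t3.micro"
}

resource "aws_autoscaling_group" "web" {
  min_size           = 1
  max_size           = 3
  availability_zones = ["us-east-1a"]

  launch_template {
    id      = aws_launch_template.web.id
    version = "$Latest"   # always use the newest template version
  }
}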
Terraform
61,981,663
74
I want to attach multiple IAM policy ARNs to a single IAM role. One method is to create a new policy with the privileges of all the policies combined. But AWS provides some predefined IAM policies like AmazonEC2FullAccess, AmazonS3FullAccess, etc., and I want to use a combination of these for my role. I could not find a way to do so in the Terraform documentation. As per the documentation we can use aws_iam_role_policy_attachment to attach a policy to a role, but not multiple policies to a role, even though this is available via the AWS console. Please let me know if there is a method to do this, or whether it is still a feature to be added. The Terraform version I use is v0.9.5.
For Terraform versions >= 0.12 the cleanest way to add multiple policies is probably something like this: resource "aws_iam_role_policy_attachment" "role-policy-attachment" { for_each = toset([ "arn:aws:iam::aws:policy/AmazonEC2FullAccess", "arn:aws:iam::aws:policy/AmazonS3FullAccess" ]) role = var.iam_role_name policy_arn = each.value } As described in Pranshu Verma's answer, the list of policies can also be put into a variable. Using for_each in favor of count has the advantage, that insertions to the list are properly recognized by terraform so that it would really only add one policy, while with count all policies after the insertion would be changed (this is described in detail in this blog post)
Terraform
45,486,041
73
I want to create a Terraform configuration for a DynamoDB table with multiple (> 10) attributes, and I have no need to add all attributes as an index to global_secondary_index or local_secondary_index. But when I run the terraform plan command I get the following error:

All attributes must be indexed. Unused attributes: ...

I found the validation check in the Terraform repository in the validateDynamoDbTableAttributes function. Also, as far as I know, each table in DynamoDB is limited to a maximum of five global secondary indexes and five local secondary indexes (from General Guidelines for Secondary Indexes in DynamoDB), and since I have more than 10 attributes that looks like a problem to me.

What I would like to understand is why all attributes must be indexed and what to do if you have a large number of attributes. Thanks!
You do not have to define every attribute you want to use up front when creating your table. attribute blocks inside aws_dynamodb_table resources are not defining which attributes you can use in your application. They are defining the key schema for the table and indexes. For example, the following Terraform defines a table with only a hash key: resource "aws_dynamodb_table" "test" { name = "test-table-name" read_capacity = 10 write_capacity = 10 hash_key = "Attribute1" attribute { name = "Attribute1" type = "S" } } Every item in this table has Attribute1, but you can create additional attributes with your application This means that you can have your 10+ attributes as long as you don't need to define them in an AttributeDefinition, and since you say you don't need them to be indexed, you'll be fine. For some discussion of the confusion (attribute is confusing and doesn't match the DynamoDB API), see this pull request.
Terraform
50,006,885
69
I have a Terraform configuration targeting deployment on AWS. It applies beautifully when using an IAM user that has permission to do anything (i.e. {actions: ["*"], resources: ["*"]}). In pursuit of automating the application of this Terraform configuration, I want to determine the minimum set of permissions necessary to apply the configuration initially and effect subsequent changes. I specifically want to avoid granting overbroad permissions in the policy, e.g. {actions: ["s3:*"], resources: ["*"]}.

So far, I'm simply running terraform apply until an error occurs, looking at the output or at the terraform log to see which API call failed, and then adding it to the deployment user's policy. EC2 and S3 are particularly frustrating because the action names do not necessarily align with the API method names. I'm several hours into this with no easy way to tell how far along I am.

Is there a more efficient way to do this? It'd be really nice if Terraform advised me which permissions/actions I need, but that's a product enhancement best left to Hashicorp.
Here is another approach, similar to what was said above, but without getting into CloudTrail:

1. Give full permissions to your IAM user.
2. Run TF_LOG=trace terraform apply --auto-approve &> log.log
3. Run cat log.log | grep "DEBUG: Request"

You will get a list of all AWS actions used.
Terraform
51,273,227
69
First off - apologies - I'm extremely new (3 hours in!) to using terraform. I am looking to try and use the value of a variable inside the declaration of another variable. Below is my code - what am I doing wrong?

variables.tf:

variable "EnvironmentName" {
  type = "string"
}

variable "tags" {
  type = "map"
  default = {
    Environment = "${var.EnvironmentName}"
    CostCentre  = "C1234"
    Project     = "TerraformTest"
    Department  = "Systems"
  }
}

Variables-dev.tfvars:

EnvShortName    = "Dev"
EnvironmentName = "Development1"

#Location
Location = "westeurope"

main.tf:

resource "azurerm_resource_group" "TestAppRG" {
  name     = "EUW-RGs-${var.EnvShortName}"
  location = "${var.Location}"
  tags     = "${var.tags}"
}

I am getting the following error:

Error: Variables not allowed

  on variables.tf line 18, in variable "tags":
  18:     Environment = "${var.EnvironmentName}"

Variables may not be used here.

I understand that the error message is fairly self-explanatory and it is probably my approach that is wrong - but how do I use a variable in the definition of another variable map? Is this even possible? I will be standing up multiple resources, so I want the tags to be built as a map and passed into each resource - but I also want to recycle the map with other tfvars files to deploy multiple instances for different teams to work on.
Terraform does not support variables inside a variable. If you want to generate a value based on two or more variables then you can try Terraform locals. You can define the locals like this:

locals {
  tags = {
    Environment = "${var.EnvironmentName}"
    CostCentre  = "C1234"
    Project     = "TerraformTest"
    Department  = "Systems"
  }
}

And then you can access them using local.tags:

resource "azurerm_resource_group" "TestAppRG" {
  name     = "EUW-RGs-${var.EnvShortName}"
  location = "${var.Location}"
  tags     = local.tags
}

(Since tags is a map, it can be assigned directly as local.tags.)
Terraform
58,841,060
66
I was able to create a bucket in AWS S3 using this link. I used the following code to create the bucket:

resource "aws_s3_bucket" "b" {
  bucket = "my_tf_test_bucket"
  acl    = "private"
}

Now I want to create folders inside the bucket, say Folder1. I found the link for creating an S3 object, but it has a mandatory parameter source. I am not sure what this value has to be, since my intent is just to create a folder inside the S3 bucket.
For running Terraform on Mac or Linux, the following will do what you want: resource "aws_s3_bucket_object" "folder1" { bucket = "${aws_s3_bucket.b.id}" acl = "private" key = "Folder1/" source = "/dev/null" } If you're on Windows you can use an empty file. While folks will be pedantic about S3 not having folders, there are a number of operations where having an object placeholder for a key prefix (otherwise called a folder) make life easier. Like S3 sync for example.
Terraform
37,491,893
62
I want access to my AWS Account ID in terraform. I am able to get at it with aws_caller_identity per the documentation. How do I then use the variable I created? In the below case I am trying to use it in an S3 bucket name: data "aws_caller_identity" "current" {} output "account_id" { value = data.aws_caller_identity.current.account_id } resource "aws_s3_bucket" "test-bucket" { bucket = "test-bucket-${account_id}" } Trying to use the account_id variable in this way gives me the error A reference to a resource type must be followed by at least one attribute access, specifying the resource name. I expect I'm not calling it correctly?
If you have a data "aws_caller_identity" "current" {} then you need to define a local for that value: locals { account_id = data.aws_caller_identity.current.account_id } and then use it like output "account_id" { value = local.account_id } resource "aws_s3_bucket" "test-bucket" { bucket = "test-bucket-${local.account_id}" } Terraform resolves the locals based on their dependencies so you can create locals that depend on other locals, on resources, on data blocks, etc.
Terraform
68,397,972
61
What is the least painful way to migrate state of resources from one project (i.e., move a module invocation) to another, particularly when using remote state storage? While refactoring is relatively straightforward within the same state file (i.e., take this resource and move it to a submodule or vice-versa), I don't see an alternative to JSON surgery for refactoring into different state files, particularly if we use remote (S3) state (i.e., take this submodule and move it to another project).
The least painful way I’ve found is to pull both remote states local, move the modules/resources between the two, then push back up. Also remember, if you’re moving a module, don’t move the individual resources; move the whole module. For example: cd dirA terraform state pull > ../dirA.tfstate cd ../dirB terraform state pull > ../dirB.tfstate terraform state mv -state=../dirA.tfstate -state-out=../dirB.tfstate module.foo module.foo terraform state push ../dirB.tfstate # verify state was moved terraform state list | grep foo cd ../dirA terraform state push ../dirA.tfstate Unfortunately, the terraform state mv command doesn’t support specifying two remote backends, so this is the easiest way I’ve found to move state between multiple remotes.
Terraform
50,400,007
60
I have the following condition:

resource "aws_elastic_beanstalk_application" "service" {
  appversion_lifecycle {
    service_role          = "service-role"
    delete_source_from_s3 = "${var.env == "production" ? false : true}"
  }
}

If var.env is set to production, I get the result I want. However if var.env is not defined, terraform plan will fail because the variable was never defined. How can I get this to work, without ever having to define that variable?
Seems these days you can also use try to check if something is set:

try(var.env, false)

After that your code will work, since the expression falls back to false even if var.env was never defined anywhere.

https://www.terraform.io/docs/configuration/functions/try.html
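A hedged sketch of how that fallback might be wired into the resource from the question (illustrative only; behaviour around undeclared variables can differ between Terraform versions):

resource "aws_elastic_beanstalk_application" "service" {
  appversion_lifecycle {
    service_role = "service-role"

    # fall back to "" when var.env cannot be evaluated, so the comparison still works
    delete_source_from_s3 = try(var.env, "") == "production" ? false : true
  }
}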
Terraform
53,200,585
59
I've been using Terraform to build my AWS stack and have been enjoying it. If it were to be used in a commercial setting, the configuration would need to be reused for different environments (e.g. QA, STAGING, PROD). How would I be able to achieve this? Would I need to create a wrapper script that makes calls to terraform's cli while passing in different state files per environment, like below?

terraform apply -state=qa.tfstate

I'm wondering if there's a more native solution provided by Terraform.
I suggest you take a look at the hashicorp best-practices repo, which has quite a nice setup for dealing with different environments (similar to what James Woolfenden suggested). We're using a similar setup, and it works quite nicely. However, this best-practices repo assumes you're using Atlas, which we're not. We've created quite an elaborate Rakefile, which basically (going by the best-practices repo again) gets all the subfolders of /terraform/providers/aws, and exposes them as different builds using namespaces. So our rake -T output would list the following tasks:

us_east_1_prod:init
us_east_1_prod:plan
us_east_1_prod:apply
us_east_1_staging:init
us_east_1_staging:plan
us_east_1_staging:apply

This separation prevents changes which might be exclusive to dev from accidentally affecting (or worse, destroying) something in prod, as it's a different state file. It also allows testing a change in dev/staging before actually applying it to prod. Also, I recently stumbled upon this little write-up, which basically shows what might happen if you keep everything together: https://charity.wtf/2016/03/30/terraform-vpc-and-why-you-want-a-tfstate-file-per-env/
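Not part of the original answer, but a minimal sketch of the per-environment layout this implies, with assumed file paths, bucket names, and module names; each environment keeps its own backend configuration and therefore its own state file:

# environments/prod/main.tf  (hypothetical path)
terraform {
  backend "s3" {
    bucket = "my-terraform-state"      # assumed state bucket name
    key    = "prod/terraform.tfstate"  # one state file per environment
    region = "us-east-1"
  }
}

module "stack" {
  source      = "../../modules/stack"  # shared modules reused across environments
  environment = "prod"
}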
Terraform
37,005,303
58
In AWS API Gateway, I have an endpoint defined as /users/{userId}/someAction, and I'm trying to recreate this with terraform. I would start having some sort of linked gateway_resource chain like so...

resource "aws_api_gateway_resource" "Users" {
  rest_api_id = "${var.rest_api_id}"
  parent_id   = "${var.parent_id}"
  path_part   = "users"
}

//{userId} here?

resource "aws_api_gateway_resource" "SomeAction" {
  rest_api_id = "${var.rest_api_id}"
  parent_id   = "${aws_api_gateway_resource.UserIdReference.id}"
  path_part   = "someAction"
}

In which I then define the aws_api_gateway_method and everything else. How do I define this endpoint in terraform? The terraform documentation and examples don't cover this use case.
You need to define a resource whose path_part is the parameter you want to use:

// List
resource "aws_api_gateway_resource" "accounts" {
  rest_api_id = var.gateway_id
  parent_id   = aws_api_gateway_resource.finance.id
  path_part   = "accounts"
}

// Unit
resource "aws_api_gateway_resource" "account" {
  rest_api_id = var.gateway_id
  parent_id   = aws_api_gateway_resource.accounts.id
  path_part   = "{accountId}"
}

Then you create the method and enable the path parameter:

resource "aws_api_gateway_method" "get-account" {
  rest_api_id   = var.gateway_id
  resource_id   = var.resource_id
  http_method   = "GET"
  authorization = "NONE"

  request_parameters = {
    "method.request.path.accountId" = true
  }
}

And finally you can successfully create the mapping within the integration:

resource "aws_api_gateway_integration" "get-account-integration" {
  rest_api_id             = var.gateway_id
  resource_id             = var.resource_id
  http_method             = aws_api_gateway_method.get-account.http_method
  type                    = "HTTP"
  integration_http_method = "GET"
  uri                     = "/integration/accounts/{id}"
  passthrough_behavior    = "WHEN_NO_MATCH"

  request_parameters = {
    "integration.request.path.id" = "method.request.path.accountId"
  }
}

The method needs to be there - and with the parameter enabled - in order for the integration mapping to work.
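Not covered by the original answer, but as a hedged reminder sketch: the new method and integration only become callable once a deployment is created, so something like the following (resource name and stage name are assumptions) is usually also needed:

resource "aws_api_gateway_deployment" "example" {
  rest_api_id = var.gateway_id
  stage_name  = "dev"  # assumed stage name

  # make sure the integration exists before the deployment is created
  depends_on = [aws_api_gateway_integration.get-account-integration]
}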
Terraform
39,040,739
58