cwebber.net

Random thoughts about life and systems administration

I Am Not a Coder


This last week at PuppetConf was an absolute blast. It was great to catch up with all the amazing folks doing amazing work. But, there was one thing that bothered me quite a bit. In many different contexts I heard people more or less say, “I can’t write Ruby, I’m not a coder.” Or, just as bad, “I use Puppet because I am not a coder.”

These sentiments bother me in so many ways that I could probably sit and rant for half the day. But really, after thinking about it quite a bit, there are a few key reasons why this idea just doesn’t sit well with me. I really think that if you are someone who says something similar to the above, you should rethink things.

The Technical

Puppet is why I know Ruby

I feel like an old man in saying this, but back in my day, we didn’t have this fancy facter.d stuff. We had to write our custom facts in Ruby and we liked it. Ok, so maybe the liking it part is a bit of a stretch, but I definitely liked the results. When I wrote those facts, I didn’t have a clue what a method was or how objects worked; I just knew that when I pasted the right things in and tweaked them a bit, I was able to get the information I needed, and that was awesome.
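For anyone who has never seen one, a custom fact really is just a small bit of Ruby. The sketch below shows the classic `Facter.add` / `setcode` shape; the fact name `:ruby_major` is made up for illustration, and the stub at the top is only there so the sketch runs on its own (inside Puppet, Facter is already loaded and you would delete it):

```ruby
# Minimal stand-in for Facter so this sketch runs standalone; a real
# custom fact file gets the actual Facter API from Puppet for free.
unless defined?(Facter)
  module Facter
    class Resolution
      attr_reader :code
      def setcode(&block)
        @code = block
      end
    end

    @facts = {}

    def self.add(name, &block)
      resolution = Resolution.new
      resolution.instance_eval(&block)   # the block calls setcode
      @facts[name] = resolution
    end

    def self.value(name)
      @facts[name].code.call
    end
  end
end

# The custom fact itself: a name plus a setcode block that computes
# the value. Real facts often shell out to system commands here.
Facter.add(:ruby_major) do
  setcode do
    RUBY_VERSION.split('.').first
  end
end

puts Facter.value(:ruby_major)
```

That is the whole trick: paste the skeleton in, swap the `setcode` body for whatever you need to collect, and you have a new fact.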

I don’t actually know Ruby

So, anyone who actually thinks I know Ruby probably has never looked at my code. While I am not scared to fire up irb or pry and copy pasta some code in to solve problems, the idea of a large-scale Rails or Sinatra app does not sit well with me. I am absolutely a coder, but I am by no means much more than a junior Ruby developer when it comes to my Ruby skills. Like most good SysAdmins, I just happen to be good at Google.

Have I ever mentioned how much I dislike CPAN?

Whenever I went to do anything in Perl, I found myself reaching for a module from CPAN. The CPAN topic is probably worth its own discussion and, to be fair, things have probably gotten a bit better since the last time I tackled this. But, between the pain of CPAN and the nature of the environment I was working in at the time, installing new Perl modules wasn’t an option, and even when it was, it wasn’t really feasible.

Enter Ruby. As a result of running Puppet everywhere, I had Ruby everywhere for free. Combine that with the fact that the Ruby Standard Library is kinda awesome, and you get the ability to do some phenomenal things. Since most of my job involved ALL of the systems, I needed to be able to write scripts that would work everywhere in our infrastructure. With the power of Ruby’s Standard Library, and the fact that the code was actually readable even a week later in most cases, it made a great way to do systems stuff, everywhere.
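To give a flavor of what I mean, here is a sketch of the kind of everywhere-script the stdlib makes easy: report the largest files under a directory with nothing but core Ruby, no gems required:

```ruby
# Return the `count` largest files under `dir` as [path, bytes] pairs,
# using only core Ruby (Dir and File) so it runs anywhere Puppet does.
def largest_files(dir, count = 5)
  Dir.glob(File.join(dir, '**', '*'))
     .select { |path| File.file?(path) }
     .sort_by { |path| -File.size(path) }
     .first(count)
     .map { |path| [path, File.size(path)] }
end

# Report the biggest files under the current directory
largest_files('.').each do |path, bytes|
  puts format('%10d  %s', bytes, path)
end
```

Nothing fancy, but it is readable a week later, and it runs on every box that already has Ruby because of Puppet.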

Level up your Puppet game

The real power in Puppet can be found in writing custom functions and custom types and providers. Custom functions let you move some of the crazy that usually gets done inside of templates, hidden from plain view, back into the manifest where it can be seen in context with everything else going on. Custom types and providers give you the ability to manage ALL THE THINGS in Puppet. Do you want to manage DNS records for your systems inside of Puppet? How about adding systems to that fancy new monitoring SaaS? All of that gets done in custom types and providers.
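A custom function is less scary than it sounds. The sketch below uses the classic Puppet 3.x-era `newfunction` API; `mount_opts` is a hypothetical function name, and the stub at the top exists only so the sketch runs outside Puppet (in a real module the file would live under `lib/puppet/parser/functions/` and Puppet provides the API):

```ruby
# Stand-in for the Puppet function API so this sketch runs standalone;
# delete the stub when dropping this into a real Puppet module.
unless defined?(Puppet)
  module Puppet
    module Parser
      module Functions
        FUNCTIONS = {}
        def self.newfunction(name, _opts = {}, &block)
          FUNCTIONS[name] = block
        end
      end
    end
  end
end

# A hypothetical custom function: build an NFS mount options string in
# one visible place instead of burying the logic in an ERB template.
Puppet::Parser::Functions.newfunction(:mount_opts, :type => :rvalue) do |args|
  extra = Array(args.first)
  (%w[rw hard intr] + extra).uniq.join(',')
end

# In a manifest this would be called as mount_opts(['noatime']); here we
# invoke the registered block directly to show the result.
puts Puppet::Parser::Functions::FUNCTIONS[:mount_opts].call([['noatime']])
```

The point is that the logic now lives in the manifest’s world, where it can be read alongside the resources that use it.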

The Squishy Stuff

You are already a coder

Are you writing Puppet code? Are you using ERB? If so, you are already a coder. You are already making decisions about the APIs you present or don’t present. You are dealing with control structures and code organization. You are a coder.
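To drive the point home, here is a trivial ERB snippet of the sort that lives in countless Puppet modules (the server names are made up for illustration, and `trim_mode:` assumes a reasonably recent Ruby):

```ruby
require 'erb'

# A tiny ERB template: the loop and the conditional below are code by
# any definition, even though they live "in a template".
template = <<~TEMPLATE
  <% servers.each do |server| -%>
  server <%= server %><%= ' backup' if server.end_with?('b') %>
  <% end -%>
TEMPLATE

servers = %w[web1a web1b]
puts ERB.new(template, trim_mode: '-').result(binding)
```

If you can write and debug that, you are making the same kinds of decisions any programmer makes.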

Code is eating the world

So I am not going to actually dig into this except to say: code is becoming an integral part of our lives as Operations and Systems Administration professionals. Whether it is Chef, Puppet, Ansible, or the next thing, being a coder is going to be at the center of the jobs that we do.

Future of HangOps


Back in August of 2012, Brandon Burton (@solarce) and Jordan Sissel (@jordansissel) started an amazing thing: HangOps. The idea behind HangOps was simple: get ops folks together for an hour or so a week to grab coffee remotely and talk shop. Between the number of us who were remote or on ops teams of one, it was a great way to talk to others in the field without waiting for the one or two conferences a year most of us are able to attend.

Over the last two years it has grown into something amazing. We have had a chance not only to connect with the luminaries of our community but to work together to really understand the successes and failures of our peers. It has been a place where you can come ask questions and get honest answers.

But, just as our systems change over time, so must HangOps. After two years of making HangOps a thing, Brandon has handed me the reins so he can focus on some new job responsibilities and his family. While he will still join us more or less regularly, the responsibility of keeping HangOps going is now mine. I am super grateful for the guidance Brandon has provided and for getting this all started.

So, where to from here? First, let’s get back on a regular cadence. The next HangOps will be at our usual time, 11:00 AM PDT (18:00 UTC), on Friday, August 29th, 2014. Let’s talk monitoring and testing; whether that ends up being a discussion of test-kitchen, chefspec, and puppet-rspec type things or more focused on sensu and nagios type things is up to the panel.

Thanks again to everyone that has been part of HangOps over the last two years. Let’s make it another awesome two years!

Joining Chef, the Hard Parts


Anybody who has spent any time with me recently has probably heard about how much I love working for Chef. For the last month I have been meaning to write a post about how great the transition to Chef has been, but I just never seem to find the time. The people are amazing, the company is amazing and the product is amazing.

But, as with any good job, there come new challenges. My new role at Chef has meant a lot more writing. I have probably written more in an official capacity since joining Chef than I did on my personal blog the entire time I was at Demand Media. The context switch is hard. Active voice is hard.

It has been much harder than I expected to get in the frame of mind to write at any length. I find that I frequently sit down to write a blog post or an email that needs to go out to the mailing list and am easily distracted by technical things. Even as I write this post, I am cmd-tabbing back and forth to a chef-client run (CCR). Finding that place of focus for me is wicked hard when there are so many fun distractions around.

To add to the adventure, the amazing critique of my coworkers has made me realize I still write like I am in an academic program instead of doing business writing. What that means is that I tend to use 50 words to explain something when 10 will do. I am also super guilty of using passive voice in my writing. I am thankful to @sethvargo, @jtimberman, and @btmspox for their willingness to provide honest and constructive feedback.

What I want to do right now is commit to writing daily, or something akin to that, but I know that life over the next month will be too busy to even begin a regular writing exercise. So from now until vacation starts in two weeks, I am going to actively pay attention to when I am avoiding writing and work to do it more frequently. I truly believe that the only way to improve at this sort of thing is to do it more often.

Kitchen-tmux


The more I work with test-kitchen, the more I have wanted a different workflow. Essentially, I really liked the idea of concurrency, but I struggled to parse the output. As a result, I found myself opening a number of windows in tmux and running kitchen test <OS>.

This idea combined with some Saturday night hacking has resulted in kitchen-tmux. Instead of going through each OS manually, I create a new session in tmux and then a window for each of the instances in test-kitchen. From there, each window kicks off kitchen test for that instance. I haven’t used it as part of my workflow yet, but after playing with it a bit, it seems like it is going to be a huge win.

You can find the code on GitHub or below:

#!/bin/bash

# Name the session after the current working directory
SESSION=${PWD##*/}

# Stash and clear $TMUX so we can create a new session from inside tmux
tmp=$TMUX
unset TMUX

tmux -2 new-session -d -s "$SESSION"

# One window per test-kitchen instance, each running its own kitchen test
for x in $(kitchen list -b); do
  tmux new-window -t "$SESSION" -n "$x"
  tmux send-keys -t "$SESSION" "kitchen test $x" C-m
done

tmux select-window -t "$SESSION":1

export TMUX=$tmp
tmux switch-client -t "$SESSION"

Shaving the Dev/Working Environment Yak


So one of the more interesting things about starting to work at a new company is getting the development environment right. For me, what gets really interesting is understanding all of the things that this involves. I am going to skip discussing things like iTerm2 and my tmux config and really focus in on what I have been doing to work with Chef and adapt to the way we do things as we deal with Cookbooks.

Starting with Chef DK

I lucked out that Chef DK was released right before I started at Chef. As you can probably guess, that is what I decided to use as my starting point. I have to say, one of the things that has impressed me the most since moving to doing Chef regularly is all the various extension points. As I started work on things with Supermarket, the ability to quickly build something like knife-supermarket was a huge win.

So to that end, one of the first things I learned to use as part of Chef DK was chef gem install. In general, I have installed things as I have needed them. The biggest thing I started with is the stove gem by Seth Vargo. This gem gives me an easy way to set up all the things around publishing new versions of cookbooks to the community site. In addition to that, I am a huge fan of guard, which I use to watch for changes to files and run my tests automatically. To that end, I have installed the following guard plugins:

  • guard-foodcritic
  • guard-rubocop
  • guard-rspec

All of these were installed by running chef gem install to make sure they were installed into the same world that my Chef DK environment uses.
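Wiring those plugins together happens in a Guardfile at the root of the cookbook. This is a sketch rather than my exact config; the watch patterns are illustrative and will need adjusting for your own cookbook layout:

```ruby
# Guardfile — re-run linters and specs automatically on file changes.
guard :foodcritic do
  watch(%r{^attributes/.+\.rb$})
  watch(%r{^recipes/.+\.rb$})
end

guard :rubocop do
  watch(%r{.+\.rb$})
end

guard :rspec do
  watch(%r{^spec/.+_spec\.rb$})
end
```

With that in place, a single `guard` in a tmux pane keeps the feedback loop tight while I edit.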

To The Cloud

So one of the more interesting parts of my job at this point is that multi-platform support is not just a nice-to-have; it is a basic necessity for many of the cookbooks we support. While this is insanely cool, it does make for an interesting adventure getting set up to test across the gamut of operating systems. For starters, there isn’t really a single cloud provider that offers all the OS variants, so you have to use more than one. Second, it is actually beneficial for us to use all of the major cloud providers so we see when things change and break for our users.

What that translates to is a lot of stuff to install and configure to get going. In many (hopefully most, very soon) of the Chef-supported cookbooks you will find a .kitchen.cloud.yml that is set up to do testing across cloud platforms and OS distributions. If you take a look at a typical .kitchen.cloud.yml file, like the one found in the opscode-cookbooks/mysql repo on GitHub, you will see a myriad of different providers and environment variables. For now, I am going to gloss over the environment variables. It is enough to say that there are a lot of them and they are not always intuitive, so please hit me up on freenode if you have any questions about a particular one. To enable all the different providers, I have installed a number of test-kitchen plugins; with the cloud YAML file in use, you can’t even run kitchen list until they exist. The test-kitchen plugins I have installed are:

As mentioned before, I installed the above using chef gem install to ensure that all of those kitchen plugins work with the test-kitchen install that comes with Chef DK. The icing on the cake that makes it all work is setting KITCHEN_YAML=.kitchen.cloud.yml so that test-kitchen takes advantage of the awesome config we have set up.

Rub a Little RVM on It

So… I have a little confession to make. While I still don’t consider myself a developer, I have found over the years that writing Ruby has been a lot of fun for those pesky web interfaces and complicated scripts that I write. Most of my Ruby code would make a good dev cry, but it is still an easy language to get work done in. To that end, having a sane Ruby dev environment has also been important for various reasons. Even in the time I have worked at Chef I have found myself writing code to automate things. (For example, I wrote a tool to grab all the neat things going on with Supermarket. That code is on GitHub as cwebberOps/supermarket-report.) While I get that bundler is awesome, I still like to be able to separate out my rubies and my gemsets.

So I looked at chruby and chgems, and a little at rbenv. All of them left me wanting to go back to rvm, so that is what I did. The interesting thing is that if you have a Ruby selected via rvm, you lose out on the awesome that is Chef DK. So to get things the way I want them, I have set my default Ruby to the system Ruby in rvm by running rvm use system --default. For whatever reason, I then have to load a new shell for my prompt to stop complaining about rvm-prompt being missing. This setup has worked well. My general workflow is to open a new session in tmux and, when I switch into a project, have the .rvmrc change me to the right Ruby. An alternative approach can be found over on Joshua Timberman’s blog, where he describes how he uses Chef DK.

All things equal, the setup of the software has been smooth. The only real place I have run into issues is getting all the environment variables set up and accounts created. While having two workstations I use regularly has added to the confusion at times, as I have gotten things squared away, things just tend to work. A huge shout out to the folks working hard on Chef DK; it has for sure made my experience delightful.

Finding the Edge


As part of some of the work I am doing with Supermarket, Chef’s new community site, we needed to make some updates to the way knife worked, specifically giving it the ability to talk to more than one site. So… I set about contacting the folks that do dev work on knife to figure out how the functionality would get added.

Just as soon as I asked in the Dev room on HipChat who I should contact from the Dev team, my boss hit me up. A few lines of chat later and he suggested I open a pull request with the code that adds the needed functionality. Holy crap. What did I just get myself into? My gut reaction was something along the lines of, “Really, me, write Ruby? Have you seen my code?” Well, a few minutes later I had snapped out of it and realized I just needed to level up and, as Opbeat would put it, “Fuck it, Ship it”. After a few more lines of discussion in HipChat, we decided that a knife plugin was the easiest way to get things out and usable quickly.

Digging In

So off I went to create my first knife plugin. Having dabbled in Ruby a bit over the years, I recognized pretty quickly that I could just inherit the functionality from the current knife cookbook site commands and make the few changes needed to get a new option set up. Between the following two docs, I was able to get the download command extended to do what I wanted:

So cool, I had code. For testing, I just symlinked each file into ~/.chef/plugins/knife/. By doing this I was able to just run knife and test that everything worked. An interesting note for those who go look at the code: to verify things worked, I first pointed at the staging site for Supermarket, which is located at http://supermarket-staging.getchef.com, and then at the current place knife is set to look, http://cookbooks.opscode.com. From there, it didn’t take much to override each of the various methods that mentioned the current cookbooks site. I was off to the races pretty quickly.
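The pattern itself is plain Ruby inheritance. The classes below are simplified stand-ins, not the real Chef::Knife subcommands, but they show the shape of it: inherit everything, override only the method that names the site:

```ruby
# Simplified stand-in for knife's existing subcommand; the real class
# lives in Chef and does actual HTTP, argument parsing, and so on.
class CookbookSiteDownload
  def site_url
    'http://cookbooks.opscode.com/api/v1'
  end

  def download(cookbook)
    # Stand-in for the real request logic, which subclasses inherit unchanged.
    "GET #{site_url}/cookbooks/#{cookbook}"
  end
end

# The plugin: everything is inherited; only the site URL changes.
class SupermarketDownload < CookbookSiteDownload
  def site_url
    'http://supermarket-staging.getchef.com/api/v1'
  end
end

puts SupermarketDownload.new.download('mysql')
```

Because `download` calls `site_url`, the subclass picks up new behavior everywhere the URL is used without copying any of the surrounding logic.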

Interestingly enough, the most confusing part of the whole adventure was dealing with copyright. More specifically, deciding who to list as authors was a bit confusing. The license was already chosen for me, which made that part easy, but the authorship question was kind of weird. In reality, most of the code was copy-pasta, so did that mean I should do a git blame and figure out who was responsible for the lines of code that came from the other subcommands? And if I did that, should I update their email addresses since we were now Chef and not Opscode? In the end, I decided to leave the other authors off, given the amount of code involved and because, really, I didn’t want them to get random questions about a project they had likely never heard of.

Not only was this my first knife plugin, this was my first Ruby gem. After some boilerplate borrowed from the knife-openstack plugin and creating an account on rubygems.org, it was super simple to get a gem uploaded and working. A quick chef gem install knife-supermarket and I was able to find my first bug. The gem process was super smooth, and pretty quickly I was able to get a few fixes in place and out the door.

The real take away from all of this was the reminder that I enjoy life the most when living out on the edge of my comfort zone. As I approach that edge, it means that I am learning more and able to experience new things. As my time at Chef begins, I am looking forward to more opportunities to get pushed to the edge and over the edge of my comfort zone.

The Results

If you are running Chef DK you can install the plugin by running:

$ chef gem install knife-supermarket

Otherwise, you can install the gem by running:

$ gem install knife-supermarket

The code can be found at:

Finally, the gem can be found at:

Cooking Up Something New


After an amazing adventure with the great people at Demand Media, I am on my way to cook up something new. If the cheesy pun didn’t give it away, I will be joining the community team at Chef on April 28th. While I will miss the amazing folks at Demand, I am super excited about the new adventures ahead.

At Chef I will be working as a Community Software Engineer, helping to make the experience of using and contributing to community cookbooks delightful. I am looking forward to working with all the amazing people in our community.

I am looking forward to the new adventures with Chef and the chance to focus on making the lives of others better by making it easier to solve problems with cookbooks. Focusing on shared and open-source infrastructure code is something I have wanted to do for a long time. This will give me the chance to help encapsulate the years of yak shaving into code that will hopefully let others do more awesome.

Just because I want to state it explicitly: I still have lots of love for Puppet and the folks at Puppet Labs, and I am excited to see where the Puppet ecosystem goes. While I certainly have my reasons for preferring Chef, Puppet is still an amazing tool.

If you are going to be at #ChefConf I will be around all day on Wednesday so hit me up.

A Year Remote


In early December of last year, I hired on at Demand Media as an 80% remote employee. While, in general, the in-office culture at Demand is strong, there are actually a fair number of remote employees. For me personally, I live about 90 miles from our office in Santa Monica and usually come in on Tuesdays.

The journey has definitely had its ups and downs, but it has been amazingly positive and enjoyable. When I first considered the idea of working for Demand, I reached out to a few friends I knew who were also remote. The interesting thing is that they all advised against taking the position, not only because it was partially remote, but because I was going to be “the remote guy” on a team that was in the office. After having spent a year in that world, I can definitely see why they had concerns and how quickly those situations could turn sour.

Because I am a little on the crazy side, I took the job. It was a chance to get some experience working remote, and the team seemed like a good fit. Needless to say, I made the right decision. I feel like I have been super successful at Demand and owe a huge part of that to the team I work with. While tools help a lot, it is the culture of my team and the other teams I work with that allows me to be effective.

My team, the Media Infrastructure group, takes an active role in making sure I am not left out of things. They always make sure the conference calls are set up and that I am there via video chat when possible. Not only that, there is an active effort to make sure conversations happen over IM so that I can be a part of them and am not left out. On the occasions where I am not able to be part of whatever is going on in the office, they make a concerted effort to follow up with me and make sure I am aware of what happened.

In addition, the teams I interact with have adapted to the fact that I am remote and do a great job of keeping me in the loop. There are plenty of screenshots of teams on a video chat with me as we launched a new site, and you can walk into one of the weekly lunch meetings to see my gigantic head on the video display. Most of the developers I work with have made a point of keeping Jabber up so I can reach out to them when there are issues.

The impact of tools in this space cannot be overstated. The conference bridges, Jabber and the Vidyo system are what make all of this possible. I want to take a second to thank the amazing teams in my IT group that made the Vidyo system happen and keep things like Jabber up and running. While tools like Google Hangouts and other instant messaging systems exist, the tools that IT provides work better and have allowed for a much more seamless environment. In particular, when we were in a previous office location, I used to dread meetings where I had to call in on the conference phone. Half the time I couldn’t hear what was being said and the other half I couldn’t chime in when I needed to. The Vidyo systems that are now in the big conference rooms have made that problem go away almost completely, and I am now one of the loudest people in the room.

It is interesting to look at some of the benefits of being remote. Most people focus on what the remote employee gets out of it, but I think the employer gets the most benefit, especially in our industry. As a result of my being remote, my team regularly practices the same techniques we use during incident response. We all know how to get on the conference bridges and are comfortable working and IMing at the same time. (Ok, so that is a bit of a stretch… I suck at multi-tasking.) Even more interesting, when I get a call in the middle of the night about a problem, I generally walk into my normal work environment, so there is less need to adapt to the new situation. I turn on the display I have set up as an information radiator, which shows dashboards and logs, and am able to quickly start getting a feel for what is wrong, even if I still have to set up my laptop.

Another interesting thing that has happened on a few occasions is that, because I am in a different location, I am not affected by local issues. There was a situation recently where the power failed in the office, and because I was remote and one of my teammates was still at home, we were able to address issues that couldn’t be handled by the rest of our team in the office. Being remote means I am occasionally able to route around problems or see things differently because I am not in the office.

I would be doing a disservice if I didn’t talk a little about the challenges I have faced over the last year. Most of the problems are not ones that can be overcome by my team or the company; they are the reason that remote work isn’t for everyone.

To start, 90 miles is a long way. Without stopping and without traffic I can do the drive in approximately 1 hour and 20 minutes. For those who have ever been to LA, the “without traffic” part is, by and large, hard to achieve. To help with that I have adjusted my schedule a bit and try to be out of the house by 4:00 am on days I am in the office, resulting in about a 1 hour, 30 minute drive. And, because traffic in the evening is bad, I don’t head for home any earlier than 6:00 pm, and it is usually closer to 7:00 pm by the time I leave, resulting in about a 2 hour drive home. While on very rare occasions I do come in twice a week, by Friday the toll it takes on my body makes it obvious that I have done so.

To add to the theme of long days, I tend to spend a lot of time working. This is partially because I just love what I do so much and partially because I really don’t ever leave work. I try to be good about getting up from the computer no later than 6:00 pm, but that doesn’t always happen, and it isn’t out of the ordinary for me to sit down to dinner with the family and then be right back at it. My wife has been amazingly understanding and has done a great job of helping me set those boundaries and not get sucked too far into the work I enjoy so much.

While my team does a great job of keeping me included, there are still plenty of things I miss out on because I am remote. I don’t get to participate in things like the monthly birthday and anniversary goodies most months, and I am almost never in the office for lunch on Friday.

Additionally, there have been a few occasions where a major incident or a burst of brainstorming and discussion was happening in the office, and I sat at my computer wondering what was going on because everyone had stopped responding. This has more or less gone away because of Vidyo, but it is a horrible feeling when you are sitting at your computer, the site is down or heavily impacted, and you feel helpless, not knowing what to do next.

All in all, it has been an amazing experience. My team, the teams I interact with and everyone at Demand have always been supportive. While the drive is a bit crazy and the day is long, I love my one day a week in the office and am looking forward to the next few years at Demand.

Puppetbestpractices.com Hurdle Number One


As I start to get things in order for puppetbestpractices.com, I am stuck on a relatively big decision: how do I set up the site? I basically see these two options:

  1. I setup Wordpress and grab a pretty theme and some plugins.
  2. I take a bit more time and setup the site using a static page generator like Jekyll.

The first aspect that makes the decision hard is collaboration. If I use Jekyll or some other static site generator, I can throw the site up on GitHub and anyone who wants to contribute can just create a pull request. This makes contributions super simple. Wordpress has the opposite problem. The only way I can accept contributions is to get emails and comments, or to hand out accounts. Neither of those options is appealing.

From there, the question of design comes into play. I almost titled this post “Design Eye for the SysAdmin Guy” because I suck so hard at it. IMHO, Wordpress is the clear winner in this battle. It has tons of free and easy-to-use themes. Jekyll, or maybe even Octopress, has a much more limited selection of themes, and by and large requires me to actually know what I am doing, which, as I mentioned before, is not the case. If the themes from something like Themeforest made sense from a license standpoint, I would gladly pay the roughly $15 for a starting point.

So yeah, this has been my frustration since PuppetConf: how to build the site. I guess this is just a reminder that anything worth doing is going to be hard.

That Sinking Feeling


On the Friday of PuppetConf, I jumped on a video call with my coworker back at the office to get the news that the president of engineering had asked for me to join the team supporting the new site we had recently acquired. Everything about this was super exciting. It brought with it the chance to finally do something production-grade in AWS and the potential to be a little more focused on a single property.

While I have generally kept up with how AWS works and have dabbled a bit here and there, I have never done anything that would constitute production use. I had heard about the occasional shit show that is EBS and was able to grok how all the components came together. In my excitement, I spent the weekend using CloudFormation to spin up simple HA setups and then tying in New Relic for monitoring.

Then Monday hit. Like a ton of bricks. So many unanswered questions and so much new infrastructure to learn. More technologies to contend with that I have never gotten super good at, like MySQL and MongoDB. The uncertainty around things like team makeup and on-call added to the overwhelming pile of technical things in front of me.

The rest of the week was a blur. I made a few changes here and there, but in general avoided doing so. Even basic changes like setting up nscd seemed to cause problems. I really started questioning whether I was in over my head.

This all got me thinking about my own fears and apprehension about things. First of all, I realized the emotions I go through with these kinds of new technical changes are standard for me. I tend to be overly cautious, hesitant to make changes, and easily spooked when the metrics don’t look right.

But why? Why do these things have such an effect on me? In many ways, experience is to blame. Not just a lack of experience, but the bad experiences along the way. I know what the command or action is “supposed” to do, but I have come to question whether that is what is actually going to happen. While I have faith in reading the docs, you just don’t know what to expect until you have run the command a few times.

While I understand security groups and the like, there is always the question of environment separation. I know that as long as I do the right things, nothing bad is going to happen and the worlds are going to stay separate. That said, my API keys have just as much power to kill prod instances as they do dev instances. And, as I have already discussed, I have a hard time trusting the tools.

Finally, the known unknowns kill me. The more I learn about MySQL the more I understand how little I actually know. While I will learn and become more familiar with its inner workings, I continue to be a bit hesitant to make large changes because, almost without fail, the database is the thing that breaks everything.

So what now? How do I work through these inner demons? I guess the simplest answer is to experiment and move forward. To remind myself that this is no different than any of the other times I have waded knee-deep into new infrastructure. To set up mock production environments and get a feel for what is actually going on. To do some reading and reach out to my amazing friends for help and advice.

That said, taking the weekend to step away and reflect has helped tremendously. So here is to an amazing new week ahead of me.