Secure Function: Because security isn't optional.

Project release: AWS Okta Keyman

Available today: AWS Okta Keyman. This package is a fork of previous work by Nextdoor, Inc. that adds Duo Auth support, with more features already on the roadmap.

This package lets users who log in to AWS through Okta authenticate with Okta, use that session to authenticate to AWS, and then pull down temporary credentials (access key and secret key) for interacting with the AWS APIs. This gives users access to the AWS APIs without long-lived API keys stored on their dev systems. It helps protect AWS resources because the keys are valid for at most an hour, so an unintended disclosure or leak has a very short window of risk before the keys become invalid. It also enforces the idea of continually rotating keys; not so different from the on-box EC2 experience when using IAM Roles for EC2.

If you are using Okta to log in to AWS, give it a try today: pip install aws-okta-keyman

The source is available under the Apache 2.0 license.


On Interviewing

Recently I made the decision to look at other opportunities, and I completed a number of interview loops and many more phone screens and technical phone screens. I thought it might help others a bit to share some of the lessons I took away from more than 40 hours of technical interviewing. Due to NDAs there won't be any interview specifics here, just some feedback and tips that I hope can be useful for both sides of the table.

As an interviewee-

  1. Have questions ready. Write them down if it will help them stick in your head. Now come up with more. Seriously. Do even more. Now come up with some you can ask every interviewer you meet. During one of the longer loops I completed, every single person on the loop did a great job of leaving Q&A time. This meant they quickly burned through my specific questions about the company. I adapted by coming up with more questions the interviewer could answer on a more personal level. Here are a few of my favorites:
    • What is your favorite part of the role/team (if applicable) or company?
    • What is your least favorite part?
    • What is the biggest challenge the organization is facing today?
    • What can I do on my first day or in my first week to have a major impact for you (as my peer, manager, customer)?
    • Tell me (as much as you can) about how your team or org manages its technical debt.
  2. Bring copies of your resume. Even in this so-very-digital age of cloud-based HR and PDF resumes sometimes the person interviewing you hasn’t been given a copy or may not have looked at it yet. Your preparedness will pay off.
  3. Be polite and professional, of course, but don’t hold back being you. I had a great time chatting with one of my interviewers about technology we both found exciting even though it was a little off topic. It’s OK to talk about cases where your hobbies drove you to learn about something and you may find your interviewer shares your passion for motorcycles or video games. You may find that you end up spending a few minutes on that side topic but you’ll both leave the room smiling because you got to chat about something you really care about.
  4. Study. Do the tedious practice of refreshing yourself on algorithms you haven’t looked at in ages. Even though everyone in the room will know you’ll likely never need to worry about things like searching binary trees or sorting linked lists as part of your work, you will probably get asked anyway. Unfortunately that’s still common throughout the industry even though we all know it doesn’t really mean as much as we pretend it does. To that end:


As an interviewer-

  1. When it comes to the coding questions, have them solve something you’ve actually had to solve at work that can be finished in a reasonable time period.
    • Absolutely one of the best experiences of the loops I completed was with a company that sent me a coding challenge to start with. They sent me a library and asked me to add functionality to it. This let them see how I work with existing code and demonstrated my ability to read what’s there, understand it, and build on it. During the loop we actually did a code review of my submission and talked through the design. This was great! We were able to discuss the merits and weaknesses of the design I chose in the context of future feature growth and refactoring. No learned-this-in-school algorithms. As someone who learned to write software by doing it, and not via four years of computer science classes, this was a great experience and allowed me to show my skills and knowledge in a very real-world way.
    • Consider using code reviews or a paired programming or debugging session rather than writing out a method from scratch. Doing code reviews for each other, and doing them well, is an important skill and one often skipped. You may find out that the candidate has never done code reviews and may not even believe in them.
  2. Have some code questions (and solutions) ready before things get started.
    • As the interviewer this isn’t your chance to show off to the candidate how clever you are or to push them until they’re lost. A well designed question should have a couple of workable solutions that you’ve already recorded in advance.
    • Adding arbitrary specifications as you go to make the question harder can make the experience more confusing, especially if the specs end up describing something completely different by the time you’re done. One question I was given started as a simple counting exercise that quickly turned into a multi-parameter search instead. The last bit of added functionality was a pretty strong pivot from something as simple as counting the occurrences of a thing in a string.
  3. While scheduling sometimes makes things hard, and tech companies can be the worst about randomization, please read the resume before getting into the room.
    • We’ve only got a short amount of time to talk and having you spend 5 of those minutes reading me the resume I wrote and sent in isn’t the best use of that time.
    • If you really can’t squeeze it in just ask me to tell you what I did. You’ll get more out of it and it’s much less awkward for me as the candidate.
  4. Try to be language agnostic even if your workplace isn’t.
    • Expecting a candidate to pick up Golang over the weekend before the interview is just unrealistic. Even if they mostly pull it off they won’t be comfortable or fluent.
    • If you know the candidate is strong in Java and Python but you happen to be a Go or Ruby shop, try to find someone on the team who knows one of those languages to run the coding questions. This can help the candidate feel more comfortable and makes it easier for you to let them use their preferred language while still having someone familiar with it look it over.
  5. Focus on solutions and structure rather than memorization.
    • The candidate is going to be nervous. They’re going to blank on the exact method or library name that would make the problem you’ve asked easier. If the code doesn’t compile, or would stack trace due to a missed character somewhere, that’s something they’d find through their own local testing anyway. Pay more attention to their ability to solve the problem reasonably, even if they have to pseudocode some pieces they can’t recall at the moment.

Project release: resque-state gem

I’m posting this late, as the code has been available for a bit now, but I’ve published my first Ruby gem (fork) on GitHub: resque-state. It adds more features to the original gem (resque-status), including interactive-style controls that allow you to run semi-interactive jobs via Resque. The biggest addition was pause and revert functionality.

This project came from something I built (and hope to eventually publish) that runs automated rolling deployments to AWS. What the pause functionality gave me was the ability to let a user do a one-box or canary ahead of a full roll as well as the ability to pause a job that might be having troubles. This lets an engineer launch a deployment to an Auto Scaling Group (ASG) and initially add just a single machine. Once that instance is healthy the job then pauses and waits for the engineer to give the deployment the green light to continue. The pause/unpause functionality became one of the critical features to enable safer production releases.

Just added is a revert feature. You could accomplish something similar with on_failure, but I thought that might be overloading that functionality a bit; I believe these are two different cases. If a job fails you may not want to undo it, because the failure may have been fatal for the job process but not something that actually needs reverting. Maybe there was a network blip the automation didn’t handle well, or perhaps you’re able to course-correct without actually pulling back what was done. Revert gives you a separate path for cases where you specifically want to pull back what was done. This can be done from the paused state (for example, a deployment one-box that is no good) as well as while the job is running.

PRs and constructive feedback are welcome. 🙂

Consistency matters; even when you disagree

No matter where you fit into any of the great developer debates (vim vs Emacs, tabs vs spaces, 80 vs 120 vs no column limit), the most critical point is consistency. Switching between tabs and spaces inside your own project would certainly prove annoying for others should you share the code, even if it won’t affect function. When you’re working on a team, though, or as part of a large organization, consistency has to exist not just in your project but across many.

Maybe you’re a tab person. Cool. Go nuts; at home. If your company prefers spaces, the reality is that your preference doesn’t matter; conforming to the shared norm is more important. When you have to share code with others, the bottom line is that consistency is bigger than your preference. This can even extend beyond your own company should you end up open-sourcing your work for others to benefit from.

This is why I prefer to lean on community-driven style guides. I may not agree that Ruby should be written with an 80-character line limit, but the community-driven Ruby Style Guide says otherwise and RuboCop defaults to that as well. Given that, it makes sense that in any project I plan on making public I should follow these norms as much as possible.
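Sticking with the default also means the decision lives in the tooling rather than in anyone's taste. A minimal .rubocop.yml sketch (note: the cop was named Metrics/LineLength in RuboCop releases of this era; newer releases renamed it Layout/LineLength):

```yaml
# .rubocop.yml — keep the community default rather than fighting it
Metrics/LineLength:
  Max: 80
```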

When it comes to Python we have PEP 8. The old Sun Java style guide works, but there’s a newer offering from Google that could be used. Older languages aren’t left out; even Perl has guidelines.

It’s hard to get consensus when opinions are strong and nobody’s preference is really wrong (unless they prefer tabs, in which case they’re wrong). Rather than infighting and company-specific styles that lead to Python that looks like Java, and Perl that looks like, well, chaos, we should be able to agree to disagree and stick to accepted solutions. I’m going to link a few below. Mostly these are the top or near-top hits on Google if you look for style guides for these languages. Google is the source of some, but as the massive engineering powerhouse they are, that makes sense.

Decide what to use, commit to move forward with it, and stop wasting time arguing over the right column to end the line at.


Automation is wonderful. People… not so much.

I’ve been working on automating a previously manual process riddled with potential for human error, and recently I’ve found myself referencing this article from Doug Seven as the poster child for how quickly things can go wrong when the process isn’t very good or your understanding of the system isn’t complete.

Thorough change management practices and peer-reviewed automation can not only save your time but sometimes your job too. If you can automate something and remove potential human error, that’s nearly always the right path. Software can still fail, but you can test software and do peer reviews. It’s much harder to peer-review every decision and click someone has to make when deploying something into production.

If you haven’t read this before… this is a level of failure you don’t get to see too often.

Knightmare: A DevOps Cautionary Tale

It’s time to say goodbye to Bitly

Don’t get this backwards; they’re still very much alive… but they are dead to me. Today marked the second time in the last few months that I’ve emailed them to notify them that their service was being used to facilitate a phishing campaign. Both times they have simply ignored me.

No response.


I notified them on April 5th of one of these links. They never replied. That link still exists today, though my emails to the hosting provider and the site owner seem to have landed on listening ears, as the compromised Drupal install was fixed and updated soon after. The phishing link just takes you to the previously-compromised site’s homepage now, but it should have been taken down by Bitly.

Today? Well, I got a nice auto-emailer response this time at least, saying thanks for the email, but the link continues to be clicked by unsuspecting users. At the time of writing more than 2,200 clicks have been registered.

They have nothing listed in their knowledge base about phishing, though spam is mentioned. Their support auto-emailer happily lets you know it may take two business days for them to get back to you. More than enough time for a phisher to gather tens of thousands of people’s information.

I’ve once again emailed the hosting provider and the domain registrar. No responses from anyone yet but I happen to use the same registrar for a couple of my domains so I’m hoping to get… something… back.

The phishing site is of lower quality this time, at least. Hopefully some people notice that the ‘Finish’ button is instead labeled ‘Finnish’, but if they actually clicked on some random Bitly link sent to them via SMS… chances are low.

Phishing site

Hopefully the hosting provider takes notice.

To my original point: it’s time to let Bitly die on the vine if they can’t even acknowledge their part in the theft of users’ personal information and do something to thwart the thieves taking advantage of their service. They have been given the opportunity to stop a phisher in their tracks and they chose to look the other way.

Just stop trusting Bitly links. If the RickRolls weren’t enough to convince you, this should be.

Bitfail logo

Update 23 June 2016: The only company involved that has responded was the registrar who did so not long after the hosted site stopped responding. Better than silence.

Pritunl for AWS VPC with Replicated Servers

After doing some research on VPN alternatives to AWS’ provided VPN options, I recently settled on testing the software Pritunl, an open-source GUI frontend for OpenVPN. It does a nice job of simplifying the management and configuration of the VPN endpoints and, when you pay for Pritunl Enterprise, includes some other nifty features.

There are several features that are unlocked by paying for the Enterprise license and one of those is Replicated Servers. Replicated Servers gives you a unified backend database (using MongoDB) that stores configuration and user information. This lets you run multiple Pritunl hosts for your users to provide extra endpoints in the event of a failure.

The setup is pretty simple, but since I didn’t see any articles or posts covering it I thought it would be good to go ahead and put something together. In this case I’ll be using AWS, but the principles are the same no matter where the hosts are.

For this example we’ll be using Ubuntu Server to keep things more provider-agnostic. It’s possible to use Arch Linux or Amazon Linux instead if you prefer that.

First, of course, you’ll need to be logged into the AWS console or have the CLI set up on your machine.

Go ahead and allocate 3 Elastic IPs in your VPC. We’ll use two for the internet-facing hosts and the last as a way to provide a static IP for the database host. You can use other methods to make the IP stick, but this one is the simplest and it allows the host to update from the web.

Set up two EC2 security groups:

  • VPN:
    • TCP/22 open to your IP for SSH
    • TCP/9700 open to your IP for access to the web UI (you may want to open this to /0 later)
    • UDP/25000 open to 0.0.0.0/0 for the VPN tunnel (clients will connect from anywhere)
  • VPN DB:
    • TCP/22 open to your IP for SSH (you may want to remove this or limit it later on)
    • TCP/27017 open to the VPN SG for the database connection
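If you prefer the CLI to the console, the two groups above could be sketched like this (the VPC ID and admin IP are placeholders; substitute your own):

```shell
#!/bin/bash
set -euo pipefail

VPC_ID="vpc-0123456789abcdef0"   # hypothetical VPC ID
MY_IP="203.0.113.10/32"          # hypothetical admin IP

# VPN group: SSH and web UI from the admin IP, VPN tunnel from anywhere
VPN_SG=$(aws ec2 create-security-group --group-name vpn \
  --description "Pritunl VPN" --vpc-id "$VPC_ID" \
  --query GroupId --output text)
aws ec2 authorize-security-group-ingress --group-id "$VPN_SG" \
  --protocol tcp --port 22 --cidr "$MY_IP"
aws ec2 authorize-security-group-ingress --group-id "$VPN_SG" \
  --protocol tcp --port 9700 --cidr "$MY_IP"
aws ec2 authorize-security-group-ingress --group-id "$VPN_SG" \
  --protocol udp --port 25000 --cidr 0.0.0.0/0

# VPN DB group: SSH from the admin IP, MongoDB from the VPN group only
DB_SG=$(aws ec2 create-security-group --group-name vpn-db \
  --description "Pritunl DB" --vpc-id "$VPC_ID" \
  --query GroupId --output text)
aws ec2 authorize-security-group-ingress --group-id "$DB_SG" \
  --protocol tcp --port 22 --cidr "$MY_IP"
aws ec2 authorize-security-group-ingress --group-id "$DB_SG" \
  --protocol tcp --port 27017 --source-group "$VPN_SG"
```

This is just a sketch of the rules described above; it assumes the AWS CLI is installed and credentialed.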

We’ll start with the database host. Go ahead and launch an instance of a size that suits your deployment, using Ubuntu Server 14.04. You’ll probably want to put this in your private subnet if you have one. Make sure you have set up the security group as above. Assign one of the EIPs to the host and go ahead and connect over SSH.

Once logged in you’ll want to execute the following commands:
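(A sketch, assuming the standard MongoDB apt install for Ubuntu 14.04 of that era; verify the repository line and key against MongoDB’s current docs before using:)

```shell
# Add the MongoDB apt repository and install the server
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10
echo "deb http://repo.mongodb.org/apt/ubuntu trusty/mongodb-org/3.0 multiverse" | \
  sudo tee /etc/apt/sources.list.d/mongodb-org-3.0.list
sudo apt-get update
sudo apt-get install -y mongodb-org

# Let mongod listen beyond localhost so the Pritunl hosts can reach it;
# the VPN DB security group restricts who can actually connect
sudo sed -i 's/bindIp: 127.0.0.1/bindIp: 0.0.0.0/' /etc/mongod.conf
sudo service mongod restart
```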

Woohoo! Our database lives!

Next let’s start the Pritunl VPN hosts. You can start as many or as few as you need, at whatever size suits best, but again use Ubuntu Server 14.04. These should be in your public subnet. Assign EIPs to each, then go ahead and connect over SSH.

For each machine do the following:
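(A sketch based on Pritunl’s Ubuntu 14.04 install instructions of that era; the repository line and signing key should be verified against the current Pritunl docs:)

```shell
# Add the Pritunl apt repository and install the package
echo "deb http://repo.pritunl.com/stable/apt trusty main" | \
  sudo tee /etc/apt/sources.list.d/pritunl.list
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 \
  --recv-keys 7568D9BB55FF9E5287D586017AE645C0CF8E292A
sudo apt-get update
sudo apt-get install -y pritunl
sudo service pritunl start
```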

You’ll then need to open your browser to https://<host IP>:9700/ and fill in the MongoDB host information. On the first host you’ll then log in with the default user/password of pritunl/pritunl, change the login, and then enter your Enterprise license key. After that log in to each of the rest of the hosts and enter the MongoDB host information.

Now with that finished we have replicated Pritunl configured. Not including bandwidth or reserved instance discounts, it can be about $90/month for this setup using two t2.micro instances for the VPN endpoints and a t2.small for the MongoDB host. So far in testing I’ve successfully pushed more than 25 Mbps through a connection on the t2.micro host without any fuss.

The next step involves a little planning. Consider what parts of the network different user groups will need to access. If everyone will have access to everything in your VPC this is easy but otherwise you’ll need to plan for which subnets people need access to. This could mean splitting things up by teams or maybe just production access accounts versus nonproduction access.

Go to the Users tab. You’ll want to create one or more Organizations for users to be grouped into. For each different group that needs access to different subnets you’ll want different organizations. After that go ahead and put the users in their correct organizations. With Pritunl Enterprise you can use SSO to handle part of this but that’s something to cover another time.

At this point you can set up your first Pritunl server. In the web interface on one of the hosts, go to the Servers tab. There, click on the Add Server button and fill out the form. For the UDP port enter 25000 (which we added to the EC2 SG earlier). Make sure to click on the Advanced button and enter the number of replicated hosts you’ll use for the connections. You’ll also want to change the server mode to Local Traffic Only and specify which subnets the VPN server should give access to. Once satisfied with the config, click Add and then watch the UI. The server will generate its DH parameters in the background. If you’ve selected parameters more complex than the default 1536 bits it’s going to take a little while to finish. I’d recommend you use at least 2048.

You can start making additional servers for each set of subnets people might need to access. Associate the organizations to each based on their needs.

Once that’s all done and all of the DH parameters have been generated you can start the servers. After they’ve started you can download the credentials for your user and confirm that the VPN responds as it should.

You should now be all set; distribute the credentials needed to all of your users and enjoy.

Below is the script I run on the Mongo box to back it up each night and drop the results into S3.
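(The bucket name and paths here are placeholders; the sketch assumes mongodump and a credentialed AWS CLI are installed on the host:)

```shell
#!/bin/bash
# Nightly MongoDB backup shipped to S3; run from cron, e.g.
#   0 3 * * * /usr/local/bin/mongo-backup.sh
set -euo pipefail

BUCKET="s3://example-pritunl-backups"   # hypothetical bucket name
STAMP=$(date +%Y-%m-%d)
WORKDIR=$(mktemp -d)

# Dump all databases, compress the dump, and copy it to S3
mongodump --out "$WORKDIR/dump"
tar -czf "$WORKDIR/mongo-backup-$STAMP.tar.gz" -C "$WORKDIR" dump
aws s3 cp "$WORKDIR/mongo-backup-$STAMP.tar.gz" \
  "$BUCKET/mongo-backup-$STAMP.tar.gz"

# Clean up the local copy
rm -rf "$WORKDIR"
```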

Bring on the code

I suppose it’s now been two years since I last noted I was going to be updating things. If only I had known then what was coming… ah well.

I’ve been developing and writing code in a devops role for a while now and unfortunately this role came along with some restrictions in what I could share and post online. It has been a great time and has helped me grow significantly both as an engineer and as a developer. I’ve worked with some brilliant people and made some great friends.

That chapter is now coming to a close and it’s time for something new. I’m moving to a new role with new challenges and in a completely different market. A welcome change, along with the challenges, is the ability to contribute and share once again. As I find, modify, or create new tools or toys they should start appearing here rather than sitting quietly in ~/code/ on my machine. I can’t wait to show you some of the neat things that have been lurking in that folder.


Bring on the code.

Quick update

A few people have asked what’s been going on since I’ve been quiet for a couple of months. I’ve been very busy and haven’t gotten any new full releases finished, but the two big ones I’m working on now are, well, big.

The one that will probably be ready sooner is a massive Bash script I’ve been working on for some time; the other is a PHP project I’ve just recently started. Both are functional but not completely bug-free or feature-complete just yet. The curse of developing in free time.

Stay tuned.

Batch Script: Purge Reader

Going along with the batch theme, this one is designed to take Reader/Acrobat off the target system. If I missed any GUIDs or you have any suggestions, please feel free to email or comment here.

-Nathan V


© 2018 Secure() All Rights Reserved