Announcing the new ISDA.org

I’m the Director of Web Development at ISDA, a trade organization whose goal is to make the global derivatives markets safer and more efficient. I started there a little over 6 years ago, just after they had launched a new site.

The design was contemporary for its time. The codebase was built on top of a custom CMS developed by three separate development firms in Ukraine. There were some interesting design decisions in the codebase, and a lot of hacks that had been implemented to "just make it work." My first task on the job was to figure out how the system worked and add features. At the time, I was the entire web development team.

Over the past six years, as I rolled out more projects, I was given the opportunity to build a web team. First there was just me, then I added three more developers, now we’re six plus a content strategist.

My Moby Dick, the elusive goal, was to rebuild our main site on top of an open-source platform. I'm a fan of open source, and I was finally given the green light this year.

These are the fruits of my team’s labors: The new ISDA.org design.

This project is basically a rebuild of all the projects I've developed and/or presided over during my six years at ISDA, crammed into 10 months. We did this while maintaining and developing crucial features for the old site. We also incorporated pieces that had never been migrated from the even older site — the classic ASP (no .NET) site from before my time.

The analogy I've been using to describe this launch is swapping a train traveling at speed with a new train, while it's in motion, and making sure the passengers keep moving the whole time. A good friend of mine, a project manager, uses a similar analogy: laying down tracks in front of a train traveling at full speed. I think the latter description is apt for where we are now, post launch.

My personal goal was to bring consistency to the site. The design had been hacked in so many different ways to do things it had never been designed to do, and the same went for the codebase. It was getting harder and harder to fulfill requests for features. The further we developed, the more the codebase resembled an Italian pasta dinner.

In this post, I want to outline some of the features from the site:

The site has three separate e-commerce checkouts, and there's a good reason to keep them separate. On the old site their codebases were completely separate as well; this time we have brought together as much of the codebase as we could.

We completely restructured the content, and how users interact with it. Most of what you see now as individual pieces of content used to be rows in tables on pages. When content needed to be in two tables, it was replicated in both, and over time the copies grew out of sync. The tables weren't searchable, nor could you promote content from them.

We changed all that: we liberated the content from the tables, restructured it into categories, and cross-referenced it using tags. I have a very smart content strategist who worked with all our departments to reorganize thousands of pieces of content.

We’ve incorporated ReactJS searches and modules throughout the site. It’s a new technology for my team, so it meant training while we built. Mentoring is one of my favorite parts of my job, so that was a pleasure. It was important for me to use a JavaScript framework; when you don’t have any, it’s nearly impossible to not end up with a mess of code. I’ve seen it happen with the best intentions. The only alternative is to spin your own framework, but then you have your own framework to architect and maintain.

We have over 30 different forms, each with drastically different requirements. As with the e-commerce checkouts, we developed modules that we could share across the forms, so customization became more a process of definitions than development. Now when we need a new form, or to expand an existing one, it'll be easy.

The slick design was originally commissioned. As my team has a wealth of design skills already, we used that design as a base for our needs and built from there. The illustrations were also specially commissioned. We wanted to give our new site a consistently high-quality look and feel.

We also, finally, became responsive. I know, hello 2012! That’s what happens whenever you launch something: Immediately some of your technology becomes ancient. The last site had launched in 2011, as responsive design was emerging. So another goal for us was to make the site as future-proof as possible.

There’s a lot more to do with the site, there are features to add, and others to complete. But it’s live, a great feat in itself, and fully functional. As I always say: make it work, then make it work well.

In future posts I'll dig into our codebase and share some of the solutions we came up with and things we learned. There are a lot of great solutions to mine from our new codebase, and a big part of the open-source community is about sharing. We may release plugins from components we built, but at the very least we'll be sharing solutions to problems we encountered.

Play To Your Strengths (The Leader’s Serenity Prayer)

Originally published on forbes.com.

The role of stretching is often overlooked in the process of growth.

When you’re strength training and you lift weights, it stresses your muscles, which triggers the growth process. But if you neglect to stretch afterward, your muscles shorten and become tight, which leads to them becoming weaker, not stronger, and causes damage to your joints and muscles.

When leading, it is inevitable that your teams will be stressed at one point or another. Leveraging your team through those stresses, and stretching properly between stresses, is as important to your team’s success as stressing your muscles and stretching them are in strength training.

There is a lot of literature about how to move away from leading while you’re perpetually under stress. I’m not going to talk about that here. I’d like to discuss how to get the most out of your team when you are functioning under stress and how to stretch in between stresses.

When I’m in stretch mode, that’s when I explore — be it a new technology or way of running things — and let my team explore. To be completely honest, this is what I do with most of my evenings and weekends as well. Exploring doesn’t mean wasting time. Rather, it means giving people the opportunity to get out of their comfort zones, which, as they say, is where all the good stuff happens.

By stretching, I was able to implement version control with our codebase (you have to start somewhere). I was able to build private development environments instead of having our developers share a development server. I updated our servers to run better, faster and more secure infrastructure. As a team, we’ve trained in and implemented all the best parts of ES6 and PHP7+. We’ve explored better project management processes. We’ve waged a battle on spaghetti code, implemented unit testing and added new frameworks like ReactJS.

If we had allowed ourselves to be pushed into perpetual stress, we wouldn’t have any of this, and our company would be worse off for it.

When you can stretch, that is the time for encouraging growth, exploring new open source libraries, and making sure all your foundations are solid. Does everyone on your team know how classes work in ES6? This is what you can look forward to when you start your campaign to move away from living in a state where you're perpetually chasing urgent issues.

Then there’s the stress…

When you’re in stress mode, your team will grow, but your team needs to perform at its highest levels and play to its strengths. All hands on deck.

What do I mean by that?

If one member of your team is much better at CSS — even if you need all your team members to be proficient in CSS — now is the time to rely on that team member specifically for all your CSS needs. Now is not the time to cultivate weak or latent skills.

During your stressful periods, you’ll see that some of your team members grow into roles. But just as you don’t try a new powerlifting stance when you’re competing to break a record at the gym, you shouldn’t be seeking out hidden talents when you’re in stress mode at work. If these talents reveal themselves, so be it — otherwise, tap from the existing wells.

A React + Redux WordPress Theme — Version 2.0

It's that time of year again. I've updated this theme; this site runs off my open-source theme plus a few minor style tweaks.

What’s new in v2.0?

It’s all completely under the hood. So you’ll see nothing different here… But I’ll know.

First, since I built the theme, react-router hit version 4 and changed everything. There are a whole lot of changes in this theme due to that. One huge benefit of this update is that it is now much easier to integrate state into your redux flow. You can see how to implement react-router 4 in the theme's index.js; a minimal sketch of the idea follows.
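The sketch below shows react-router v4 wired up alongside a redux store; the reducer, components, and mount point are placeholder names for illustration, not the theme's actual modules.

// index.js: a minimal react-router v4 + redux sketch (names are placeholders)
import React from 'react';
import ReactDOM from 'react-dom';
import { createStore } from 'redux';
import { Provider } from 'react-redux';
import { BrowserRouter, Switch, Route } from 'react-router-dom';
import rootReducer from './reducers';
import Page from './components/Page';
import Single from './components/Single';

const store = createStore(rootReducer);

ReactDOM.render(
  <Provider store={store}>
    {/* in v4, routes are just components you render wherever you need them */}
    <BrowserRouter>
      <Switch>
        <Route exact path="/" component={Page} />
        <Route path="/:slug" component={Single} />
      </Switch>
    </BrowserRouter>
  </Provider>,
  document.getElementById('root')
);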

Also, since building the original I've started using ReactJS at work and learned a lot about how data flows and how to keep things clean. So many of the changes have to do with removing duplicate code and cleaning up the flow of data. There's still much that can be done, but it's a nice step in the right direction.

Bootstrap finally went from alpha to beta, so I've implemented that, along with a few minor tweaks to make everything work. I almost lost the header for a minute…

Another piece I started implementing is a proper PHP fallback. When I first built the theme I thought it would be cool for it not to rely on PHP at all. But let's be real: sometimes browsers hiccup, and people don't use the tools they should, so I thought it would be prudent to build the simplest functioning theme in PHP to fall back to. It's not done, so for now those visitors will see a list of titles…
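The fallback idea, roughly, is a bare-bones index.php that just loops over posts and prints their titles. A sketch of the concept, not the theme's actual code:

<?php
// simplest possible fallback: print a linked list of post titles
get_header();
if ( have_posts() ) {
	echo '<ul>';
	while ( have_posts() ) {
		the_post();
		echo '<li><a href="' . esc_url( get_permalink() ) . '">' . esc_html( get_the_title() ) . '</a></li>';
	}
	echo '</ul>';
}
get_footer();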

Go check it out and I’d love to hear your thoughts.

As always, questions? comments? shoot me a line!

Open Source: To Use Or Not To Use (And How To Choose)

Originally published on forbes.com.

You’d like to use open source software, but you’re not sure what criteria you should use when deciding whether to rely on it for a specific project or not.

I have a long, complicated history with open source software. I use open source libraries every day in my work, and I’ve developed several criteria for evaluating projects.

I got my professional start in tech as the technical co-founder of a news startup. My co-founder had chosen to use WordPress. I volunteered to maintain the tech since I had a strong background in HTML and a high school-level understanding of Pascal, and figured that would be enough.

WordPress, as it is for many, was my gateway drug into tech — and open source. When I had any questions, I quickly found solutions from the robust and supportive WordPress community.

That’s the ideal open source — a ubiquitous platform with a helpful and supportive community.

Several years later, at another startup, I got entangled with a different open source community. Having used WordPress during my initial experience with open source, I trusted open source. I was tasked with managing an online community, and I chose this open source solution without fully understanding what had made the WordPress choice so successful with my previous endeavor.

Customizations were expensive, if they were possible at all. Updates didn't go smoothly, nor were they timely. The community using and supporting the software was very small at the time and not very helpful.

That’s where open source can go wrong — a project that’s not properly maintained or supported with an unhelpful community. Even when open source goes wrong, you are not necessarily going to do better with a proprietary solution.

Many development firms try to sell their proprietary systems so they can lock in clients. Every new feature the client needs costs an additional work fee. That’s clearly in the interest of the development firm — not the client.

If you opt out of working with them for future developments, then you are responsible for developing the core codebase. You have to constantly monitor the core codebase for security risks because you have purchased a proprietary system. You are on your own.

With open source systems, you get all the infrastructure for your site from the community. That was true of WordPress, and the same goes for Flask, Drupal or ExpressJS — all projects I've leveraged at one time or another. User management, community plugins, security and data structures are all taken care of, leaving you to focus on the features your company needs.

How To Vet An Open Source Project

Knowing which open source projects you can rely on is an acquired skill. If you choose wrong, it will cost you. I’ve thought a lot about this topic over the years and have come up with the following criteria for evaluating a project:

1. Who is developing and maintaining the project?

Does the company have a good track record for keeping open source projects going? Sometimes a company will open up a tool it uses in-house. This is a good sign that it is likely to keep developing it further. Other companies don’t want to actually kill projects, so they offload them to an open source platform and cease development. You can adopt such a project, but just assume that its maintenance will most likely be entirely on you.

For example, Facebook has a fairly good track record for supporting its open source projects. It has a department dedicated to open source tools. I can’t vouch for its non-open source services, because each one is a different case. But I happily incorporate projects like ReactJS into my site, knowing that it will be maintained.

2. How popular is the project?

The more people rely on the project, the more likely it is to be maintained — if not by the original development team, then by others who need it and take it over.

The popularity of a platform is an argument that can be used against incorporating some open source projects. WordPress powers roughly 28% of the internet; some see that as a security risk. But any systems administrator worth their salt knows how to mask and lock down WordPress. Not to mention, because of its ubiquity, security issues in WordPress are detected and patched quickly. In contrast, if you run a stagnant system, do you really know what security skeletons are hiding in there?

3. How often is it updated?

When a project stops attracting regular contributors, it's a strong indication that the project is going to die. Similarly, if there are a lot of open issues on its GitHub repository, it means that the team behind the project is neither active nor responsive to the needs of the community.

4. What does the codebase look like?

Code that is clean and well-thought-out is a good indication that professionals are behind the project. Even if it was left to die, it might be a project a company would happily take in-house and maintain further for its needs.

If you are debating whether you can incorporate a project into your codebase, remember the following: A good open source project is maintained by a core group of people who rely upon it. It will likely be used by a lot of people, and it will be updated often. Finally, a good project, open source or not, will have clean code that is well-maintained. If you do incorporate it into your codebase, you will benefit from the expertise of the entire community using that project.

On Assholes and Leaders

“If you run into an asshole in the morning, you ran into an asshole. If you run into assholes all day, you’re the asshole.”

― Raylan Givens, Justified

The asshole doesn’t see that he is one — that is the true nature of being an asshole. Ultimately being one is truly just a manifestation of selfishness. If you don’t care how your actions affect the people around you, the people around you will see you as an asshole.

If the actions of everyone around you are pissing you off, you’re only thinking of yourself. When we start life, we can only think of our own needs. We’re not capable of doing otherwise. As we grow older, our ability to think of the needs of others grows. That’s why kids on the playground can be so brutal. Part of growing up is learning to see past our own needs.

Assholes are the people who never truly grow up.

Caring about how your actions affect the people around you does not make you a pansy, or weak. Sometimes you might know that what you need to do will have adverse effects on people. When that happens, the only way to avoid being an asshole is to first consider the effects of your actions. At that point, depending on your considerations, you may still be an asshole, but you might be a leader.

Therein lies the paradox. To not inadvertently be an asshole, you have to be self-aware enough to know that what you are doing is affecting others adversely. Assholes are insensitive and therefore detestable.

“The measure of a leader is not the number of people who serve him but the number of people he serves.”

– John C. Maxwell

Traditional leadership, as we think of it, is when you’re the boss. You command, and your minions listen. But the average serf doesn’t do great work. They have no reason to do great work. Why should they?

Commanding and expecting it to get done, threatening, pressuring, having no sense of the needs of the person you are asking — all these are classic actions of an asshole.

The antithesis of being an asshole is being a leader.

A good leader knows that the buck stops with them; that they are ultimately responsible for what needs to get done. From raising the next round, to making sure the servers are running, to sweeping the floor — it is all up to them.

A good leader has to be aware of the state of the entire company. A good leader looks outward to see how they can serve better. Good leaders learn to look past themselves, past the people immediately around them, and to see as much of the big picture as they can.

An asshole stands in front of a subway door, oblivious to the people who can’t get on.

If you want to rise above being an asshole and become more of a leader, take time to think about the people around you, the people you interact with, and care a little bit. If you do this you’ll start to see people turning to you to get things done.

How to set up a local WordPress Vagrant development environment

Setting up a Vagrant box can be painstaking.

Here is the process:

  1. Install a basic box.
  2. SSH into said box.
  3. Run a command.
  4. If it works, add the command to a provision file.
  5. Destroy your box.
  6. Run the box again and see if the command works via a provisioning file too.
  7. Whether it works or not, go back to step 2 and try a new command, or try the same command another way, depending on the result.

This is a really good way to get to know what your systems administrator does every day. It involves a lot of reading manuals and playing with configurations.

If you want to understand your server better, there is no better (and safer) way. An added benefit is that doing this will also give you confidence in your development skills, as you’ll understand more of what goes on beneath the surface.

Warning: This process will take you days. At least at first.

VVV is great if you don't want to think about what you're running. Its footprint as a local environment is a bit heavy, though.

Since I’m partially responsible for running the server at work (together with our security professional), and I run the server that this site (and a few others) is hosted on, I do like to think about my server. I think about it a lot.

Disclosure

Please don't treat the contents of this post as best practice for running a production server. There's a whole lot more security and configuration involved there.

I just updated my local development vagrant box. I thought I’d share what I learned upgrading it so that you don’t have to go through steps 2-7 above 80 gazillion times.

Current Versions:

  • Ubuntu 16.04 (Xenial)
  • Python 3
  • Nginx 1.13
  • PHP 7.1
  • Percona 5.7
  • NVM

How do I get started with Vagrant?

Vagrant automates setting up a server. What this means is that you can clone a git repository with the settings to run a specific environment, type vagrant up and you don’t have to know any more than whether you trust the person who designed that environment.

It's also free, unless you need to use it with VMware, which makes it very popular among us developers who love free software.

To get started with Vagrant all you need to do is download the latest version of Vagrant, and VirtualBox. Make a folder somewhere and go to it in your terminal. Then type vagrant init and then vagrant up.
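In other words, the whole bootstrap is something like this (the folder name is just an example):

mkdir vagrant-playground && cd vagrant-playground
vagrant init    # writes a default Vagrantfile into the folder
vagrant up      # downloads the box on first run, then boots it
vagrant ssh     # log into the running box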

How do I use your repo? (I don't care how it works.)

Simple.

  1. Download and install Vagrant and VirtualBox.
  2. Clone the GitHub repo.
  3. Run vagrant up from the directory.
  4. Add 192.168.33.10    play.lcl to your hosts file. (See “What is a hosts file?” below for details.)
  5. Go to http://play.lcl in your browser.

Why should I run Ubuntu Xenial on my Vagrant box?

Ubuntu is one of the easier, and more stable, Linux distributions to maintain. The apt package manager is simple to use and has a great community contributing to its upkeep. In addition, it tends to offer more recent releases of tools than most other distributions.

One alternative, CentOS, uses yum as its package manager, which is also pretty good, but doesn't offer as many recent releases of packages as apt does. I've used it a lot. One great benefit is that CentOS has an enterprise counterpart (Red Hat). If your company requires you to use enterprise software, so that you have someone to blame if something goes wrong, CentOS/Red Hat is not a bad way to go.

I've had to use SUSE as well, for a while. Pity me.

As of the last time I used it, there was no meaningful package manager. That meant that if you wanted something that wasn't already packaged with SUSE, you had to hope that someone (whom you had to trust) had compiled a version that would work… or you had to compile it yourself. Not safe, or fun.

So let’s start with the box itself. Hashicorp, the creator of Vagrant, provides a basic Ubuntu box for most major releases. Other people release boxes as well, but I like mine clean so that I know what’s on the box when I begin with it.

Your basic install will go like this:  vagrant init ubuntu/xenial64.

This will create your most basic configuration file. I recommend reading that file. It will give you an idea of what you can do with your configurations. If you type vagrant up you’ll have Ubuntu 16.04 running.

Ubuntu alone, though, won't help your development much. You'll need git to pull down other tools, a web server to serve files to your browser, a language runtime to execute your code and hand it to that server, and a database… But a basic OS is a good start.

What is a hosts file? How do I run my local site from a URL and not an IP address?

When your browser tries to load a domain, it goes out to a DNS (Domain Name System) server, which is a database of domains pointing to IP addresses; your browser then goes to that IP address to load the content from the server hosting the site you're looking for.

Before it goes out to DNS, your browser checks a file on your computer called the hosts file. In your hosts file you can map any domain to any IP address, and your browser will go to that IP address when you type the domain into the address bar. So if you want to use a domain that doesn't exist, you can edit your hosts file and add the domain you want.
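The hosts file itself is just a plain text list of IP/domain pairs. On macOS and Linux it lives at /etc/hosts (on Windows, C:\Windows\System32\drivers\etc\hosts), and an entry looks like this:

# /etc/hosts
127.0.0.1       localhost
192.168.33.10   play.lcl    # our local vagrant box (set up below)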

You can also override existing domains this way. This can be used nefariously, as I’m sure you can imagine. But it can also be helpful. I’ve used this several times when I wanted to set up a new server. I pointed my local hosts file to the new server’s IP address and set everything up there. Once it was all good, I told my domain hosting provider the new location of my site. BOOM. That’s it.

So if you want to develop locally on google.com, you can point google.com in your hosts file to your virtual machine's local IP address. It will get confusing if you need to look something up, though; you won't be able to access the real google.com until you change it back.

Note: If you play around with this, some browsers (ahem, Chrome) cache the IP addresses, and even after you change your hosts file back you'll need to flush the browser's various caches to reach the proper servers again.

I personally like to use something like play.lcl. I'd recommend using a TLD (top-level domain, e.g. .com/.net) that doesn't exist, so you never accidentally try to reach a real site whose domain you've overridden locally.

In order for your hosts file entry to work, though, you need to point the local domain at an IP address. You can start up your Vagrant server, SSH in, run ifconfig and get the IP address that way. But if you reload your local server, that IP address might change, and it will from time to time.

Enter the static IP: config.vm.network "private_network", ip: "192.168.33.10"

If you add this line to your Vagrantfile, you're telling Vagrant to always set up the server with that IP address. You can choose anything in the private address space. This makes sure your box runs off the same IP address each time: when you reload your box, or shut it down for a week, it will have the same IP address when you vagrant up again.

Note: If you are running multiple boxes simultaneously, make sure you use a different IP address, or shut down the other boxes.

Now edit your hosts file and add the line: 192.168.33.10   play.lcl

I like to use Gas Mask to manage my hosts file on my mac.

Once your server is up and running you'll be able to access it in your browser at http://play.lcl. Sometimes the first time you type it in you need to include the http://, otherwise your browser will send the domain to your default search engine instead of looking it up in the hosts file.

How do I not lose my code files when I destroy my Vagrant box?

One of the benefits of Vagrant is the built-in functionality for synchronizing folders. Back before I was using Vagrant I had to jump through hoops to back up the files in my local environment; now it's baked in.

In your Vagrant file add config.vm.synced_folder "./html", "/var/www/html", create: true

This will synchronize the files in the html folder on your computer to /var/www/html on the box, which is usually where servers run their code from. If the folder doesn't exist, it will be created for you.

How do I customize the amount of memory my vagrant box has?

If you're running a local server with programs that need more than the default amount of memory, or less for that matter, you can add the following to your Vagrantfile…

config.vm.provider "virtualbox" do |vb|
  vb.memory = "1024"
end

This will set aside 1 GB of your computer's memory for your box.

How do I tell Vagrant to run install scripts when it starts up?

There are two ways to do this:
The first is inline.

config.vm.provision "shell", inline: <<-SHELL
  # run some code here like...
  apt-get update
SHELL

The other way is in separate files.

config.vm.provision :shell, path: "assets/clean-update.sh"

What this does is let you keep all your code neat and easy to find. You won’t have to sort through files that are hundreds of lines long to reconfigure one little thing.
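Putting the pieces from this post together, a minimal Vagrantfile ends up looking something like this (the script paths under assets/ are just examples):

Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/xenial64"
  config.vm.network "private_network", ip: "192.168.33.10"
  config.vm.synced_folder "./html", "/var/www/html", create: true

  config.vm.provider "virtualbox" do |vb|
    vb.memory = "1024"
  end

  # provisioning scripts, kept in separate files
  config.vm.provision :shell, path: "assets/clean-update.sh"
  config.vm.provision :shell, path: "assets/node.sh", privileged: false # runs as the vagrant user (see below)
end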

Why does my Vagrant box break when I upgrade everything?

I like to keep things up-to-date. It’s a good way to protect yourself from security issues, or use the latest and greatest features. It’s also a good way to break your code or server. But if you’re running a local server, then there’s no problem there. Test it locally, if it works, apply on production. The beauty of a local server.

Here’s how you play:

  • Copy the entire vagrant folder somewhere else.
  • Halt all other boxes.
  • Run vagrant up in the new copy.
  • Play…
apt-get clean all
apt-get update
apt-get -y upgrade --with-new-pkgs
apt-get -y dist-upgrade
apt-get -y autoremove

This updates and cleans up pretty much everything. Some of these commands are redundant; I keep them in there so I can comment one or another out and re-provision.

The problem with many Vagrant base boxes is that if you run this, you're likely to blow away some local configuration and your box won't keep running as it had been. That's why I like using the basic boxes as my base… vagrant init ubuntu/xenial64. Other people and companies provide preconfigured boxes, but if you want to play around with Linux configurations you'll want to start with something clean.

Note: If you’ve cloned this repo and are using it locally to develop your code you don’t want to run vagrant reload --provision without first testing your code elsewhere.

How do I install the latest version of Git on Ubuntu Xenial?

If you're going to install other packages on your box, you're going to want Git to get started. By default Ubuntu doesn't come with the latest version of Git. It's pretty recent, but not the latest. If you want the latest you can use the git-core PPA.

apt-add-repository ppa:git-core/ppa
apt-get update
apt-get install -y git

You'll notice there's no sudo. That's because Vagrant provision scripts run as a privileged user by default. If you don't want that, for instance when you're installing node packages, you'll need to add privileged: false to the provision line in your Vagrantfile, like so:

config.vm.provision :shell, path: "assets/node.sh", privileged: false

Why isn’t Python 3 the default Python for Ubuntu?

Python 3 is a pretty awesome update over Python 2. There are a lot of new things under the hood, but also features that are not 100% backwards compatible. Since so many tools were built in Python 2, most default installations are still Python 2. Nonetheless, the transition is on the roadmap for future releases of Ubuntu. If you want it now, you can easily run Python 3 on your server.

apt-get install -y python3-software-properties python3-pip python3-dev

As in life, you really only need to know what you're looking for in order to find the answer. Versions of most of the Python tools exist with a 3 appended to the install command.
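For example, assuming the apt-get line above has run:

python3 --version     # confirm the interpreter is installed
pip3 --version        # pip for Python 3
pip3 install requests # "requests" here is just an illustration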

How do I install the latest version of Nginx on Ubuntu Xenial?

First, why nginx?

Nginx is ridiculously fast at serving static files. It was built from the ground up to do so, and at its inception was basically trying to solve the issues that apache had.

If you do go with Nginx, you'll need to run something for your dynamic content, like PHP. This isn't a big deal: you can run php-fpm and you're good to go. PHP traditionally runs as a module on top of Apache; php-fpm runs alongside the web server instead of on top of it. The benefit is that PHP makes a smaller footprint on your resources.

Ok, how do I install it?

Like git, the latest nginx does not come with apt out of the box. But Chris Lea has compiled it for you all to use.

add-apt-repository -y ppa:chris-lea/nginx-devel
apt-get update
apt-get install -y nginx

How do I install a self-signed cert on Vagrant? How do I get ssl/https on Vagrant?

With Google giving preference to SSL-secured sites, it's a good idea to be able to develop and test your code under similar circumstances. You can automatically create a self-signed SSL certificate like so:

mkdir -p /etc/pki/ssl
cd /etc/pki/ssl
openssl genrsa -out play.lcl.key 2048
openssl req -new -x509 -key play.lcl.key -out play.lcl.cert -days 3650 -subj /CN=play.lcl

Then you can add the following to your nginx site .conf file:

listen 443 ssl;
ssl_certificate /etc/pki/ssl/play.lcl.cert;
ssl_certificate_key /etc/pki/ssl/play.lcl.key;

Just keep in mind, your browser is smarter than that: you'll have to disable warnings and affirm away your kitchen sink and first-born child in the settings in order for it to load your local site over SSL. This is fine. Do you really want insecure certs to be easily circumvented? Think of the non-technical people in your life…

How do I install php-fpm on my Vagrant box?

Now that you have Nginx running, you can't install the Apache module for PHP and expect it to work. Ondřej Surý provides a PPA with the latest php-fpm releases.

add-apt-repository -y ppa:ondrej/php
apt-get update
apt-get install -y php7.1 php7.1-bcmath php7.1-cli php7.1-common php7.1-curl php7.1-dev php7.1-fpm php7.1-gd php7.1-json php7.1-mbstring php7.1-mcrypt php7.1-mysql php7.1-tidy php7.1-xml php7.1-xmlrpc php7.1-zip

The line above will give you everything under the sun along with PHP, as well as a kitchen sink, which is fine for a development server. When it comes to a production server it's best to install the minimum and add only what you need; otherwise you'll have a lot to keep an eye on when you audit your server and code for security issues.
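Once php-fpm is installed, you still have to tell Nginx to hand .php requests to it. Here's a minimal sketch of a site .conf for this box; the socket path is an assumption, so check /etc/php/7.1/fpm/pool.d/www.conf for the one your install actually listens on.

server {
    listen 80;
    server_name play.lcl;
    root /var/www/html;
    index index.php index.html;

    location / {
        # send pretty permalinks to WordPress
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        # pass PHP requests to the php-fpm socket
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/run/php/php7.1-fpm.sock;
    }
}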

What is Percona and why would I use it instead of MySQL?

From their site:

Percona Server for MySQL® is a free, fully compatible, enhanced, open source drop-in replacement for MySQL that provides superior performance, scalability and instrumentation.

I met some of the good people who work there. The back story is basically this: a number of developers working on MySQL were frustrated with its performance, and frustrated that the company was not implementing their ideas. So they forked it and improved it.

If you benchmark it, you can confirm that it truly is superior.

How do I install Percona on a Vagrant box?

This is the tricky part. Percona has some configuration prompts you need to answer as you're installing it, but you don't want to do that by hand when provisioning a Vagrant box. I would vehemently recommend against most of the practices needed to automate these installations when setting up a production server. But for the convenience of a local development server, here you go.

cd ~/
wget https://repo.percona.com/apt/percona-release_0.1-4.$(lsb_release -sc)_all.deb
dpkg -i percona-release_0.1-4.$(lsb_release -sc)_all.deb
apt-get -y update

echo "percona-server-server-5.7 mysql-server/root_password password root" | debconf-set-selections
echo "percona-server-server-5.7 mysql-server/root_password_again password root" | debconf-set-selections
DEBIAN_FRONTEND=noninteractive apt-get -y install percona-server-server-5.7

This will download the package to your home folder, add it to apt, then update apt. Next, you're pre-seeding the answers to the configuration prompts. Finally, you're installing the Percona server with the noninteractive flag.

Note: This one took a long while to figure out, so thank me.

If you follow the installation script while it’s happening you’ll notice that there are commands you need to run to get the full benefits of Percona. Here’s how to run them in your provisioning script.

# restart after reconfig
service mysql restart
mysql -e "CREATE FUNCTION fnv1a_64 RETURNS INTEGER SONAME 'libfnv1a_udf.so';"
mysql -e "CREATE FUNCTION fnv_64 RETURNS INTEGER SONAME 'libfnv_udf.so';"
mysql -e "CREATE FUNCTION murmur_hash RETURNS INTEGER SONAME 'libmurmur_udf.so';"
sudo mysql -D mysql -e"update user set plugin='mysql_native_password';"
sudo mysql -D mysql -e"flush privileges;"

How do I install NVM on my Vagrant box?

This is another one that took some time to figure out.

What is NVM?

NVM stands for Node Version Manager. It's a really simple tool for installing and jumping between different versions of Node. If you want to use the latest version of Node, you just type nvm install node then nvm use node and boom, you're using the latest Node. You can also specify a version. This was more helpful back when Node had been forked and you wanted to test io.js. Node has matured since then, so there's less of a need, but it's still helpful for building packages or troubleshooting.

cd ~/
curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.33.2/install.sh | bash
source ~/.nvm/nvm.sh
echo "source ~/.nvm/nvm.sh" >> ~/.bashrc
nvm install node
nvm use node

This runs the NVM install script, then loads NVM into your bash profile so that you can use it immediately in the rest of your provisioning.

When installing Node packages you should make sure to install them as the Unix user you'll be developing with. So when I run this file, I run it with privileged: false, as I explained above.

How do I automatically install WordPress on my Vagrant box?

You have a web server (Nginx), a PHP processor (php-fpm) and a database (Percona) running. You're missing WordPress. Enter wp-cli.

wp-cli is a command-line interface for maintaining WordPress. With it, you can install and update WordPress core, plugins, and themes.

Let's start by installing wp-cli…

cd ~/
curl -O https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar
chmod +x wp-cli.phar
sudo mv wp-cli.phar /usr/local/bin/wp
wp --info

This will install wp-cli on your box. Again, so that you can use it with your vagrant user run this in a non-privileged provision.

Next install WordPress:

cd /var/www/html
wp core download
wp core config --dbname=play_lcl --dbuser=root --dbpass=root --dbhost=localhost
wp db create play_lcl
wp core install --url=play.lcl --title=Playground --admin_user=admin --admin_password=admin --admin_email=admin@play.lcl
wp theme update --all
wp plugin update --all

This downloads WordPress core into your html folder (yes, the one we synchronized before). It then creates the configuration file. Next it installs WordPress with the user admin and password admin. Finally, it updates the themes and plugins.

You no longer have to log into your dashboard and click update on all your plugins. Now one simple command does the trick.

In summary

If you’ve stuck around till here, thanks! Tweet: “Ooogaah Boogah” @jackreichert to let me know and show your appreciation…

Well, that was a long brain dump. I update this project from time to time. I probably won't come back and update this blog post, but I will push my changes, so you can star the repo for future reference.

If you have ideas on how to make this better, please feel free to comment below, or submit your thoughts on my contact page or even open an issue in the repo.

Debugging your code in WordPress: Tools of the Trade

As this goes live I am giving a talk at @WPNYC, the WordPress New York Meetup group.

There are two slides I skimmed over due to lack of time, but they are an essential part of the talk. In order to fulfill my promise to provide the complete story, I am publishing this post with the full details.

Warning: this blog post is a bit of a brain dump. It is intended as notes to explore further. If you need something clarified, don't hesitate to comment.

The first slide is titled:

Tools of the Trade

With the following content:

  • Separate development environment
  • console.log() / error_log() / var_dump($var); die();
  • debug_backtrace() / Xdebug
  • Inspect Element / Developer Tools
  • A good IDE
  • Simulators
  • Unit Testing

Let’s dig in…

Separate Development Environment

Never go commando. There are exceptions to the rule — when you cannot reproduce a bug on your development server while it's glaring at you from production — but you should avoid coding live on production at all costs. Sooner or later you'll break something, your site will be down, and it will have been perfectly preventable.

This is as important as not ever modifying core WordPress files.

There are many ways you can implement a development server. Keep in mind that the closer your development environment is to production in how it functions, the fewer bugs you’ll have due to discrepancies in implementation of your code.

  • XAMPP, MAMP and WAMP are all easy-to-use, out-of-the-box server solutions. I started out with them myself.
  • Vagrant is a great way to channel your inner sysadmin, but you don't need to be one to use Vagrant. The good people at 10up support VVV. Or you can use the Vagrant box I built and use myself.
  • Get another account on the server that hosts your live site. It'll be worth the money; just make sure it's not accessible from the "outside." Members-only plugins can help with that, or use the htaccess or nginx.conf files to limit access by IP address.

console.log() / error_log() / var_dump($var); die();

THE key to effective debugging is to know what you’re dealing with. These tools help you peek under the hood, test your assumptions, and understand what’s happening.

  • Console is a VERY effective tool; its most-used method is .log, but it has a lot more. It'll help you see what's going on in your JavaScript code. We've come far from using alert().
  • error_log is similar to console, but server-side. Find out where your error logs are written on your host; a server isn't worth its salt if it doesn't provide that. If you enjoy console.log, check out console_log().
  • While the error log is cleaner, it limits its output. Sometimes I like to see things in the browser, so I use this pattern a lot when debugging: echo '<pre>'; var_dump($var); die();
    It will "dump" the variable inside a preformatted HTML tag, then die(). (A small helper version of this follows the list.)
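If you find yourself typing that pattern often, you can wrap it in a throwaway helper for your development environment. A sketch (the function name is made up, and this is not something to ship to production):

<?php
// drop into a dev-only plugin or your theme's functions.php while debugging
function pre_dump_die( $var ) {
	echo '<pre>';
	var_dump( $var );
	die();
}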

debug_backtrace() / Xdebug

Xdebug provides a whole suite of extended debugging methods. You don’t need to start here, but if you’re looking to up your game, here’s where to go.

Inspect Element / Developer Tools

Volumes of ink have been spilled talking about devtools, so I won't go in depth here.

Firebug changed our lives. Read up on it, use it. Personally, I enjoy designing in it too. It’s not just for debugging.

I used devtools to build this mockup to nag @Dropbox to request a feature.

A good IDE

I use PHPStorm. A lot of people swear by Sublime. One thing I like about PHPStorm is that it has deep integration with WordPress.

The reason why this is essential is that it does a lot in the background to ensure good code.

  • It can autoformat, which means you’ll reduce your syntactical bugs.
  • Deep link to method sources, so you can more easily see what’s happening under the hood.
  • Code completion will help you make sure you're using the right method, and give you a reason to shout "DAMN YOU AUTOCORRECT" every so often.

Simulators

If you're expected to support a specific device, you'd better test on it. Unfortunately we can't all afford every device under the sun, but there are a lot of tools out there to help us with that.

  • Modern.ie provides virtual machines for testing different versions of Internet Explorer.
  • Use a mobile emulator; sometimes making your screen narrower won't cut it.

Unit Testing

This is the single most important thing you can do to prevent bugs. It’s hard to get started. It’s overwhelming. But as the proverb says: the best time to plant a tree is 20 years ago, the second best time is now.

Protip: Think about testing BEFORE you write your code… Look up TDD.


The second slide is titled:

WordPress Specific Debugging Tools

With the following content:

  • define(‘WP_DEBUG’, true);
  • Debug Bar plugin
  • All the Debug bar addons!!!
  • Style Guides

define(‘WP_DEBUG’, true);

There are a number of constants you can define in your wp-config.php file that can help your debugging in WordPress.
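WP_DEBUG is the one on the slide; these are the companions I'd typically set alongside it on a development install:

// in wp-config.php, above the "That's all, stop editing!" line
define( 'WP_DEBUG', true );          // turn debug mode on
define( 'WP_DEBUG_LOG', true );      // write notices to wp-content/debug.log
define( 'WP_DEBUG_DISPLAY', false ); // keep notices out of the rendered page
define( 'SCRIPT_DEBUG', true );      // load unminified core CSS/JS
define( 'SAVEQUERIES', true );       // record queries for tools like the Debug Bar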

Debug Bar plugin

The Debug Bar is the equivalent of devtools, but for WordPress. If you enable it, you'll get a window into how WordPress runs that you can't easily get otherwise.

All the Debug Bar addons!!!

Expanding on the previous tool: there are a ton of plugins that hook into the Debug Bar and extend it.

Style Guides

This isn't necessarily a WordPress-specific tip, but if you're using PHPStorm, it comes with an auto-formatting preset for WordPress' code style.

In any case, it doesn’t matter how you style your code (tabs vs. spaces) but only that you are consistent. If you are consistent with your coding style you’ll prevent a whole lot of bugs. You’ll prevent a whole lot more if you’re working with other people and you make sure you’re all styling your code the same way.

Our brains evolved to see patterns; if your code styling isn't consistent, you'll miss details. You'll have bugs.

Conclusion

I hope this was helpful. There's a lot I didn't cover so as not to overwhelm. What's really most important, and what I covered in my talk, is the following:

Make it work, then make it work well.

If you do this, your code will always be improving.

How to see if a class was added to an element using JavaScript

If you need to see whether a class has been added to an element, the easiest way is to trigger a custom event when you add the class.

$(mySelector).addClass('someClass');
$(mySelector).trigger('cssClassChanged'); // anything listening for this event can now react

The problem with this solution is what happens when you don't have control over the function where the class is added, for example if it happens in WordPress core, and you know that you should never, ever change code in the core.

I needed this for a plugin I was building. I know that when I click on something an ajax call is fired, and I want to do something when the response comes back. The ajax call is encapsulated so I can't hook into it, but when the response comes back it adds a class to an element.

This is what I did:

const checkForAddedClass = function (observedElement, className, callback) {
    var count = 0;
    // poll every 10ms; give up (and still fire the callback) after 500 tries, about 5 seconds
    const observer = setInterval(function () {
        count++;
        if (500 < count || $(observedElement).hasClass(className)) {
            clearInterval(observer);
            callback();
        }
    }, 10);
};

When I click on that something, I call this function. It sets an interval that watches the observedElement selector to see if the className has been added. If the class gets added, or 5 seconds go by, it runs the callback.

This is how it would be implemented:

checkForAddedClass('.attachment-details', 'save-ready', getUpdatedSettings);
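If you'd rather not poll at all, a MutationObserver can do the same job. This is a different technique from what the plugin shipped with, sketched here for comparison:

// fires the callback as soon as the class shows up on the element, no interval needed
function onClassAdded(selector, className, callback) {
    var target = document.querySelector(selector);
    var observer = new MutationObserver(function () {
        if (target.classList.contains(className)) {
            observer.disconnect();
            callback();
        }
    });
    observer.observe(target, { attributes: true, attributeFilter: ['class'] });
}

onClassAdded('.attachment-details', 'save-ready', getUpdatedSettings);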

Hope this helps if you’re dealing with a similar issue.