Blog

How to install Unison 2.48 on Ubuntu

For developing on remote servers while using a local IDE, I prefer Unison over other methods that rely on syncing files via rsync or SFTP.

But, one issue with Unison is that both computers must run the same version to sync. And since Homebrew installs Unison 2.48.4 while apt-get install unison installs something like 2.0.x, this meant I couldn’t sync between my computer and a development machine if I installed Unison via apt-get.

No worries, by following the documentation, and a bit more searching, I was able to figure out how to build Unison 2.48.4 on my development server!

Note: I did run into a warning at the end of the build. But, from what I can tell, the build actually succeeded. The second-to-last step below helps you test if the build succeeded.

  • apt-get install ocaml
  • apt-get install make
  • curl -O https://www.seas.upenn.edu/~bcpierce/unison//download/releases/stable/unison-2.48.4.tar.gz
  • tar -xvzf unison-2.48.4.tar.gz
  • cd unison-2.48.4/src
  • make UISTYLE=text
  • ./unison to make sure it built correctly. You should see something like this:
    Usage: unison [options]
    or unison root1 root2 [options]
    or unison profilename [options]
    
    For a list of options, type "unison -help".
    For a tutorial on basic usage, type "unison -doc tutorial".
    For other documentation, type "unison -doc topics".
    
  • mv unison /usr/local/bin

After going through these commands, unison should be in your path, so you should be able to use unison from any directory without specifying the location of the binary.

24 Hours of Driving for 2.5 Minutes

This past Sunday, I set out on an adventure to go see the total solar eclipse along with my kids and a friend, Scott Sutter. And while we had a successful trip, driving 24 hours over 2.5 days, we did hit a few speed bumps along the way.

Plans cancelled

You see, on August 10th, I finally chose Fairmont, NE as the city we’d watch from and booked a hotel about 2.5 hours away in McPherson, KS. McPherson and Salina were the closest cities where I could find reasonably priced hotels. I remember seeing a Motel 6 in Fairmont going for $1,000+…

From August 11th to August 18th (or so), I kept checking the weather, and it actually seemed to get better. So, I relaxed a bit and mentally prepared to take the kids on a road trip without my wife.

On August 19th though, Colt West pinged me to let me know the weather was going to be bad in Fairmont, NE on the day of the solar eclipse. Sure enough, when I looked, the forecast predicted very cloudy conditions and a high chance of severe thunderstorms.

Needless to say, our chances of seeing the solar eclipse at this point weren’t very great.

A bit of weather couldn’t stop us

But, since we were planning on heading out at 7 AM the next morning, I immediately started to look into other options.

Thanks to Colt West’s help, we figured that somewhere in the Tennessee valley would be the best option for seeing the total solar eclipse. But, remember how that Motel 6 was $1,000+ 9 days prior? How the hell was I going to find a hotel in or near Tennessee that would be reasonable?!

Thankfully, Colt also suggested Jonesboro, AR as a stopping point, where I was able to find a Best Western for ~$120.

This let us stick with the same plan to leave at 7 AM on August 20th. But, then we were left with one more issue.

Where the heck do we park and view?

After stopping in Jonesboro, AR, we still weren’t 100% sure where we were going to go. We had a good idea that we’d go to Carbondale, IL, but realistically, it all depended on the weather and whether we could find a good place to actually view the solar eclipse.

At some point, we decided that Hopkinsville would be a good spot for weather conditions, so I set about finding us a place to view. After a short visit to the city’s website, I saw that all public viewing areas had been reserved. Where the heck were we supposed to go then? Surely, larger cities like Nashville, TN would have similar issues, right?

It wasn’t until I visited Hopkinsville’s Facebook page that it clicked. There was a family with private property that was allowing people to park and view for just $20. One call later, and we had a spot reserved for us in Crofton, KY, about 12 miles north of Hopkinsville and a bit out of the way so we were able to miss much of the traffic!

Day of

Since we were still in Jonesboro, AR on the morning of August 21st, we had roughly a 4 hour drive to get to Crofton, KY, which meant we needed to leave by about 5 AM to allow plenty of time for traffic and stops.

Heading out at 5AM from Jonesboro, AR to see the eclipse this morning

24 Hours for one picture

After all was said and done, we ended up driving nearly 24 hours, 12 hours each way, to see an eclipse for 2.5 minutes, and it was totally worth it! Here’s the picture that I was able to get with my Sony a6000 with 55-210mm lens and a 1.7x teleconverter from Olympus.

Solar Eclipse in Crofton, KY 8-21-2017

Photos from the MyZeikl Building

While walking around Frankfurt, I found a very interesting building named MyZeikl. Go ahead, Google it. It’s got some interesting architecture.

What struck me the most while there was the mashup of red, metal, and glass in the ceiling. A close second was the view looking down from the top floor.

How to apply a filter to an aggregation in Elasticsearch

When using Elasticsearch for reporting efforts, I’ve found aggregations invaluable. Writing my first aggregation was pretty awesome. But, pretty soon after, I needed to figure out a way to run an aggregation over a filtered data set.

As with all new things, I was clueless about how to do this. Turns out, it’s quite easy. Within a few minutes, I came across some articles that recommended using a top-level query with a filtered argument, which seemed cool because I could just copy my filter up.

That’d look something like:

{
    "query": {
        "filtered": {}
    }
}

But, one of my coworkers pointed out that filtered queries have been deprecated and removed in 5.x. Womp womp. So, the alternative was to just convert the filter to a bool must query.

Example

You can find the Shakespeare data set that I’m using, as well as instructions on how to install it here. Using real data and actually running the query seems to help me learn better, so hopefully you’ll find it helpful.

Once you’ve got the data, let’s run a simple aggregation to get the list of unique plays.

GET shakespeare/_search
{
    "aggs": {
        "play_name": {
            "terms": {
                "field": "play_name",
                "size": 200
            }
        },
        "play_count": {
            "cardinality": {
                "field": "play_name"
            }
        }
    },
    "size": 0
}

Based on this query, we can see that there are 36 plays in the dataset, which is one off from what a Google search suggested. I’ll chalk that up to slightly off data, perhaps.

Now, if we were to dig through the buckets, we could list out every single play that Shakespeare wrote, without having to iterate over every single doc in the dataset. Pretty cool, eh?
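For instance, digging through the buckets in JavaScript might look like the sketch below, using a shortened, hypothetical response in the same shape as the real output (the play names and counts here are made up for illustration):

```javascript
// A shortened, hypothetical response in the same shape as a real one.
var response = {
    aggregations: {
        play_name: {
            buckets: [
                { key: 'Hamlet', doc_count: 4244 },
                { key: 'Macbeth', doc_count: 2638 }
            ]
        }
    }
};

// Each bucket's key is a unique play name; doc_count is how many
// docs (lines) belong to that play.
var plays = response.aggregations.play_name.buckets.map( function( bucket ) {
    return bucket.key;
} );

console.log( plays ); // [ 'Hamlet', 'Macbeth' ]
```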

But, what if we wanted to see all plays that Falstaff was a speaker in? We could easily update the query to be something like the following:

GET shakespeare/_search
{
    "query": {
        "bool": {
            "must": {
                "term": {
                    "speaker": "FALSTAFF"
                }
            }
        }
    },
    "aggs": {
        "play_name": {
            "terms": {
                "field": "play_name",
                "size": 200
            }
        }
    },
    "size": 0
}

In this case, we’ve simply added a top-level query that returns only docs where FALSTAFF is the speaker. Then, we take those docs and run the aggregation. This gives us results like this:

{
   "took": 5,
   "timed_out": false,
   "_shards": {
      "total": 5,
      "successful": 5,
      "failed": 0
   },
   "hits": {
      "total": 1117,
      "max_score": 0,
      "hits": []
   },
   "aggregations": {
      "play_name": {
         "doc_count_error_upper_bound": 0,
         "sum_other_doc_count": 0,
         "buckets": [
            {
               "key": "Henry IV",
               "doc_count": 654
            },
            {
               "key": "Merry Wives of Windsor",
               "doc_count": 463
            }
         ]
      }
   }
}

And based on that, we can see that FALSTAFF was in “Henry IV” and “Merry Wives of Windsor”.

Comments

Feel free to leave a comment below if you have critical feedback or if this helped you!

How to retry Selenium Webdriver tests in Mocha

While working on some functional tests for a hosting provider, I kept running into an issue where the login test was failing due to a 500 error. It appeared as if the site hadn’t been fully provisioned by the time my test was trying to login.

Initially, I attempted adding timeouts to give the installation process more time, but that seemed error-prone as well since the delay was variable. Also, with a timeout, I would’ve had to set it to the longest expected time, and waiting a minute or so in a test suite didn’t seem like a good idea.

Getting it done

You think it’d be a quick fix, right? If this errors, do it again.

Within minutes, I had found a setting in Mocha that allowed retrying a test. So, I happily plugged that in, ran the test suite again, and it failed…

The issue? The JS bindings for Selenium Webdriver work off of promises, so they don’t quite mesh with Mocha’s built-in test retry logic. And not having dug into promises much yet, it definitely took me a bit to wrap my head around a solution.

That being said, there are plenty of articles out there about retries with JavaScript promises, which helped bring me up to speed. But, I didn’t find any specifically about retrying promises with Selenium Webdriver in a Mocha test suite.

So, I learned from a couple of examples, and came up with a solution that’d work in my Selenium Webdriver Mocha tests.

The Code

You can find a repo with the code and dependencies here, but for convenience, I’m also copying the relevant snippets below:

The retry logic

This function below recursively calls itself, fetching a promise with the test assertions, and decrementing the number of tries each time.

Each time the function is called, a new promise is created. In that promise, we use catch so that we can hook into the errors and decide whether to retry the test or throw the error.

Note: The syntax looks a bit cleaner in ES6 syntax, but I didn’t want to set that up.

var handleRetries = function ( browser, fetchPromise, numRetries ) {
    // Default to a single retry when no count is passed in.
    numRetries = 'undefined' === typeof numRetries
        ? 1
        : numRetries;

    // fetchPromise is a callback that returns a fresh promise on each call,
    // so every retry re-runs the Selenium commands from scratch.
    return fetchPromise().catch( function( err ) {
        if ( numRetries > 0 ) {
            return handleRetries( browser, fetchPromise, numRetries - 1 );
        }

        // Out of retries: surface the last error to Mocha.
        throw err;
    } );
};

The test

The original test, without retries, looked something like this:

test.describe( 'Can fetch URL', function() {
    test.it( 'page contains something', function() {
        var selector = webdriver.By.name( 'ebinnion' );
        browser.get( 'https://google.com' );
        return browser.findElement( selector );
    } );
} );

After integrating with the retry logic, it now looks like this:

test.describe( 'Can fetch URL', function() {
    test.it( 'page contains something', function() {
        var selector = webdriver.By.name( 'ebinnion' ),
            i = 1;
        return handleRetries( browser, function() {
            console.log( 'Trying: ' + i++ );
            browser.get( 'https://google.com' );
            return browser.findElement( selector );
        }, 3 );
    } );
} );

Note that the only thing we did differently in the test was to put the Selenium Webdriver calls (which return promises) inside a callback that gets called from handleRetries. Putting the calls inside this callback gives us a new promise each time we retry.
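The same pattern works with plain promises, no Selenium required. Here’s a minimal, self-contained sketch where a hypothetical flaky operation fails twice before succeeding:

```javascript
var attempts = 0;

// A hypothetical flaky operation: rejects on the first two calls, then resolves.
function flakyOperation() {
    attempts++;
    return attempts < 3
        ? Promise.reject( new Error( 'not ready yet' ) )
        : Promise.resolve( 'done' );
}

// Same retry shape as above: the callback returns a fresh promise per attempt.
function retry( fetchPromise, numRetries ) {
    return fetchPromise().catch( function( err ) {
        if ( numRetries > 0 ) {
            return retry( fetchPromise, numRetries - 1 );
        }
        throw err;
    } );
}

retry( flakyOperation, 3 ).then( function( result ) {
    console.log( result + ' after ' + attempts + ' attempts' ); // done after 3 attempts
} );
```

Because the callback builds a brand-new promise on every attempt, the earlier rejections don’t poison the retries.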

Comments?

Feel free to leave a comment if you have input or questions. Admittedly, I may not be too much help if it’s a very technical testing question, but I can try.

I’m also glad to accept critical feedback if there’s a better approach, particularly one that doesn’t require an external module, although I’m glad to hear of those as well.