Blog

How to download a gif from Giphy

Recently, when I was trying to download a gif file from Giphy, I noticed that going directly to the file, https://media.giphy.com/media/14kdiJUblbWBXy/giphy.gif for example, actually loaded a web page instead of the gif file.

Now, on this page, you could right click on the image and then click “Save Image”. But, this will download the image with a .webp extension. From there, you can change the extension to .gif if you’d like. I’ll be honest and tell you that I didn’t consider switching the extension at first, so I dug further.

I figured that Giphy was probably serving the web page based on who/where the request was coming from, so I tried downloading the gif file by running a cURL command. This worked, but it’s not convenient to open a terminal window every time just to run a cURL command.

Luckily, a kind person left a very helpful comment below with an even simpler approach, which I think is the simplest approach overall.

Simple approach

When we go to a standard Giphy source URL, like https://media.giphy.com/media/14kdiJUblbWBXy/giphy.gif, a web page is loaded instead of the gif that we want. The only thing we have to do to load the actual gif is change media.giphy.com to i.giphy.com.

So, if we take the above example, we could load the actual gif by going to https://i.giphy.com/media/14kdiJUblbWBXy/giphy.gif

From here, we can right click to download the gif with the correct extension and go on about our day.
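If you’d rather not edit the URL by hand, a quick shell one-liner can do the rewrite for you (the URL here is just the example from above):

```shell
# Rewrite a media.giphy.com URL to its i.giphy.com equivalent
url='https://media.giphy.com/media/14kdiJUblbWBXy/giphy.gif'
echo "$url" | sed 's/media\.giphy\.com/i.giphy.com/'
# https://i.giphy.com/media/14kdiJUblbWBXy/giphy.gif
```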

Downloading via cURL

curl https://media.giphy.com/media/KXgJsSeOfvSgg/giphy.gif --output ~/Desktop/download.gif

This resulted in the actual gif file that I wanted being placed on my Desktop as download.gif.

Recursively cast to array in PHP

I recently ran into an issue where JSON encoding some objects in my code wasn’t working properly. After experimenting, I realized that casting everything to an array before JSON encoding magically fixed things. 

Casting an object to an array is simple enough:

$variable_to_array = (array) $object_var;

But, what happens when an object or array contains references to other objects or arrays? The answer is that we then need to recursively cast a given input to an array. But, we don’t necessarily want to recursively cast everything to an array. For example, this is what happens when we cast 1 to an array:

return (array) 1;
=> array(1) {
  [0]=>
  int(1)
}

A simple fix is to recursively cast non-scalar values to an array. Here’s an example of how we would do that:

/**
 * Given mixed input, will recursively cast to an array if the input is an array or object.
 *
 * @param mixed $input Any input to possibly cast to array.
 * @return mixed
 */ 
function recursive_cast_to_array( $input ) {
	if ( is_scalar( $input ) ) {
		return $input;
	}

	return array_map( 'recursive_cast_to_array', (array) $input );
}
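To see the recursion in action, here’s a quick sketch that builds a nested stdClass object and casts it. The object shape ($post with a nested meta object) is made up purely for illustration:

```php
<?php
function recursive_cast_to_array( $input ) {
	if ( is_scalar( $input ) ) {
		return $input;
	}

	return array_map( 'recursive_cast_to_array', (array) $input );
}

// A hypothetical object with another object nested inside.
$post              = new stdClass();
$post->title       = 'Hello';
$post->meta        = new stdClass();
$post->meta->views = 42;

$result = recursive_cast_to_array( $post );
var_dump( $result['meta']['views'] ); // int(42)
```

One thing to watch: null isn’t scalar, so this function turns null into an empty array ((array) null === array()); add an is_null() check to the early return if that matters for your data.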

How to remove files not tracked in SVN

At Automattic, we use SVN and Phabricator for much of our source control needs. One issue that I often run into is a warning about untracked files when creating a Phabricator differential:

You have untracked files in this working copy.

  Working copy: ~/public_html

  Untracked changes in working copy:
  (To ignore this change, add it to "svn:ignore".)
    test.txt

    Ignore this untracked file and continue? [y/N]

This warning’s purpose is to make sure that the differential being created has ALL of the changes so that a file isn’t forgotten when a commit is made. 

But, what if the untracked file(s) are from previously checking out and testing a patch? In that case, this warning is actually a bit annoying. 

The simple fix is to clear out the untracked file(s), which just means deleting them, since SVN isn’t tracking them anyway. For a single file, that might look like:

rm test.txt

But, what if there are dozens or hundreds of files? I know I certainly wouldn’t want to run the command above dozens or hundreds of times to remove all of the files that aren’t tracked in SVN. Of course, we can automate all of the work by running something like the following ONCE:

svn st | grep '^?' | awk '{print $2}' | xargs rm -rf

Simply run the above from the root of the project and the untracked files should be removed. The above command is a bit much, so I’d recommend throwing it in an alias, which would look something like this:

alias clearuntracked='svn st | grep '\''^?'\'' | awk '\''{print $2}'\'' | xargs rm -rf'
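If you want to convince yourself of what each stage does before pointing the pipeline at rm -rf, you can feed it some simulated svn st output (the file names here are made up):

```shell
# Simulated `svn st` output: two untracked files and one modified file.
# grep keeps only the untracked (?) lines; awk prints the file name column.
printf '?       a.txt\n?       b.txt\nM       c.txt\n' \
  | grep '^?' | awk '{print $2}'
# a.txt
# b.txt
```

Note that the awk/xargs combo will mangle file names that contain spaces, so double-check the svn st output first if that’s a possibility in your working copy.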

Get unique values in file with shell command

Over the past year, there have been a couple of times where I've needed to sort some large lists of values, more than 100 million lines in one case.

In each case, I was dealing with a data source where there were surely duplicate entries. For example, duplicate usernames, emails, or URLs. To address this, I decided to get the unique values from the file before I ran a final processing script over them. This would require sorting all of the values in the given file and then deduping the resulting groups of values.

This sorting and deduping can be a bit challenging. There are various algorithms to consider and if the dataset is large enough, we also need to ensure that we're handling the data in a way that we don't run out of memory. 

Shell commands to the rescue 🙂

Luckily, there are shell commands that make it quite simple to get the unique values in a file. Here's what I ended up using to get the unique values in a file:

cat $file | sort | uniq

In this example, we are:

  • Opening the file at $file
  • Sorting the file so that duplicates end up in a contiguous block
  • Deduping so that only one value remains from each contiguous block
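As a side note, sort can do the deduping itself via its -u flag, which skips the extra uniq process. Here's a tiny demonstration with some made-up input:

```shell
# -u keeps only the first of each run of equal lines after sorting
printf 'banana\napple\nbanana\n' | sort -u
# apple
# banana
```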

Here's another example of this command with piped input:

php -r 'for ( $i = 0; $i < 1000000; $i++ ) { echo sprintf( "%d\n", random_int( 0, 100 ) ); }' | sort -n | uniq

In this example, we are:

  • Generating 1,000,000 random numbers between 0 and 100, each on its own line
  • Sorting that output so that like numbers are together
    • Note that we're using -n here to do an integer sort.
  • Deduping that so that we end up with a unique number on each line

If we wanted to know how often each number occurred in the file, we could simply add -c to the end of the command above. The resulting command would be php -r 'for ( $i = 0; $i < 1000000; $i++ ) { echo sprintf( "%d\n", random_int( 0, 100 ) ); }' | sort -n | uniq -c and we would get some output that looked like this:

9880 0
10179 1
9725 2
10024 3
9921 4
...
9935 97
9979 98
9922 99
9789 100
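And if we want the most common values first, we can pipe the uniq -c output back through sort -rn. Here's a small self-contained example (the input values are made up, and the final awk just normalizes uniq's padding):

```shell
# Count occurrences, then sort by count, highest first
printf 'a\nb\nb\nc\nb\nc\n' | sort | uniq -c | sort -rn | awk '{print $1, $2}'
# 3 b
# 2 c
# 1 a
```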

What is the JavaScript event loop?

I remember the first time I saw a setTimeout( fn, 0 ) call in some React code. Luckily, there was a comment with the code, so I had some idea of why it was there. Even with the comment though, it was still confusing.

Since then, I’ve read several articles about the event loop and got to a point where I was fairly comfortable with my understanding. But, after watching this JSConf talk by Philip Roberts, I feel like I’ve got a much better understanding.

In the talk, Philip uses a slowed down demonstration of the event loop to explain what’s going on to his audience. Philip also demonstrates a tool that he built which allows users to type in code and visualize all of the parts that make JavaScript asynchronous actions work.

You can check out the tool at http://latentflip.com/loupe, but I’d recommend doing it after watching the video.
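The setTimeout( fn, 0 ) behavior that confused me can be demonstrated in a few lines: even with a 0ms delay, the callback only runs after the current call stack is empty, because it has to travel through the task queue and the event loop first.

```javascript
const order = [];

// Queued on the task queue; runs only after the current stack empties.
setTimeout( () => order.push( 'timeout' ), 0 );

order.push( 'sync' );

console.log( order ); // [ 'sync' ] — the timeout callback hasn't run yet
setTimeout( () => console.log( order ), 0 ); // [ 'sync', 'timeout' ]
```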

How to install Unison 2.48 on Ubuntu

When developing on remote servers with a local IDE, I prefer Unison over other methods that rely on syncing files via rsync or SFTP.

But, one issue with Unison is that the two computers must run the same version to sync. And since Homebrew installs Unison 2.48.4 while apt-get install unison installs something like 2.0.x, this meant I couldn’t sync between my computer and a development machine if I installed Unison via apt-get.

No worries, by following the documentation, and a bit more searching, I was able to figure out how to build Unison 2.48.4 on my development server!

Note: I did run into a warning at the end of the build. But, from what I can tell, the build actually succeeded. The second-to-last step below helps you test if the build succeeded.

  • apt-get install ocaml
  • apt-get install make
  • curl -O https://www.seas.upenn.edu/~bcpierce/unison//download/releases/stable/unison-2.48.4.tar.gz
  • tar -xvzf unison-2.48.4.tar.gz
  • cd src
  • make UISTYLE=text
  • ./unison to make sure it built correctly. You should see something like this:
    Usage: unison [options]
    or unison root1 root2 [options]
    or unison profilename [options]
    
    For a list of options, type "unison -help".
    For a tutorial on basic usage, type "unison -doc tutorial".
    For other documentation, type "unison -doc topics".
    
  • mv unison /usr/local/bin

After going through these commands, unison should be in your path, so you should be able to use unison from any directory without specifying the location of the binary.
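With matching versions on both ends, a minimal profile is enough to get syncing going. This is just a sketch — the paths, hostname, and ignore patterns below are placeholders you’d replace with your own:

```text
# ~/.unison/dev.prf — hypothetical profile syncing a local
# project to the same path on a remote dev server
root = /Users/me/projects/site
root = ssh://devserver//home/me/projects/site

# Skip dependency and VCS directories
ignore = Name node_modules
ignore = Name .svn

# Accept non-conflicting changes without prompting
auto = true
```

Running unison dev would then sync the two roots according to this profile.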

24 Hours of Driving for 2.5 Minutes

This past Sunday, I set out on an adventure to go see the total solar eclipse along with my kids and a friend, Scott Sutter. And while we had a successful trip, driving 24 hours over 2.5 days, we did hit a few speed bumps along the way.

Plans cancelled

You see, on August 10th, I finally chose Fairmont, NE as the city we’d watch from and booked a hotel about 2.5 hours away in McPherson, KS. McPherson and Salina were the closest cities where I could find reasonably priced hotels. I remember seeing a Motel 6 in Fairmont going for $1,000+…

From August 11th to August 18th (or so), I kept checking the weather, and it seemed to actually get better. So, I relaxed a bit and mentally prepared to take the kids on a road trip without my wife.

On August 19th though, Colt West pinged me to let me know the weather was going to be bad in Fairmont, NE on the day of the solar eclipse. Sure enough, when I looked, the forecast predicted very cloudy conditions and a high chance of severe thunderstorms.

Needless to say, our chances of seeing the solar eclipse at this point weren’t very great.

A bit of weather couldn’t stop us

But, since we were planning on heading out at 7 AM the next morning, I immediately started to look into other options.

Thanks to Colt West’s help, we figured that somewhere in the Tennessee valley would be the best option for seeing the total solar eclipse. But, remember how that Motel 6 was $1,000+ 9 days prior? How the hell was I going to find a hotel in or near Tennessee that would be reasonable?!

Thankfully, Colt also suggested Jonesboro, AR as a stopping point, where I was able to find a Best Western for ~$120.

This let us stick with the same plan to leave at 7 AM on August 20th. But, then we were left with one more issue.

Where the heck do we park and view?

After stopping in Jonesboro, AR, we still weren’t 100% sure where we were going to go. We had a good idea that we’d go to Carbondale, IL, but realistically, it all depended on the weather and whether we could find a good place to actually view the solar eclipse.

At some point, we decided that Hopkinsville would be a good spot for weather conditions, so I set upon finding us a place to view. After a short visit to the city’s website, I saw that all public viewing areas had been reserved. Where the heck were we supposed to go then? Surely, larger cities like Nashville, TN would also have similar issues, right?

It wasn’t until I visited Hopkinsville’s Facebook page that it clicked. There was a family with private property that was allowing people to park and view for just $20. One call later, and we had a spot reserved for us in Crofton, KY, about 12 miles north of Hopkinsville and a bit out of the way so we were able to miss much of the traffic!

Day of

Since we were still in Jonesboro, AR on the morning of August 21st, we had roughly a 4 hour drive to get to Crofton, KY, which meant we needed to leave by about 5 AM to allow plenty of time for traffic and stops.

Heading out at 5AM from Jonesboro, AR to see the eclipse this morning

24 Hours for one picture

After all was said and done, we ended up driving nearly 24 hours, 12 hours each way, to see an eclipse for 2.5 minutes, and it was totally worth it! Here’s the picture that I was able to get with my Sony a6000 with 55-210mm lens and a 1.7x teleconverter from Olympus.

Solar Eclipse in Crofton, KY 8-21-2017