Photoshop does this annoying thing where purely vertical gradients have some horizontal variation. Yes, it’s usually only plus or minus one bit of color, but it offends! I’ve battled Photoshop for a while on this, but I just can’t seem to get exactly what I want out of it. So to make a perfect gradient, I decided to write some code. The requirements are simple: given a starting color and a set of deltas, output a perfect gradient.

Here are some quick examples:

4, 1, 0.25
-2.2, -1, -0.3
-1, 0, 1

If we zoom in on example #1, which starts with black (#000000) and has deltas of 4, 1, 0.25, we see the following:

zoomed gradient

The diagram shows the first ten rows of the gradient. The delta values are accumulated with each row, and only the whole part of the resulting color value is used (aka I take the floor of each color channel). So in this example, the fractional delta of 0.25 results in exactly one additional blue bit every four rows. Ahhh, perfect!
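To make the accumulation concrete, here's a quick standalone sketch (my own illustration, not part of the gradient tool itself) of how a 0.25 blue delta floors out to one extra unit of blue every four rows:

```php
<?php
// Accumulate the fractional delta row by row and floor it:
// with a blue delta of 0.25, blue gains exactly one unit every four rows.
$blue  = 0;     // starting blue channel (black)
$delta = 0.25;  // per-row blue delta
$rows  = array();
for ($y = 0; $y < 8; $y++) {
    $rows[] = (int)floor($blue + $y * $delta);
}
echo implode(',', $rows), "\n"; // 0,0,0,0,1,1,1,1
```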

The Code

No need to use some fancy new language: I wrote a simple PHP program that handles commandline input and outputs a perfect PNG gradient. The interesting part is the function that generates and saves the gradient:

function clamp($v) {
  //keep each channel inside the valid 0-255 range
  return max(0, min(255, $v));
}

function build_image($filename, $w, $h, $color, $delta) {
  $img = imagecreatetruecolor($w, $h);
  $c = imagecolorallocate($img, $color[0], $color[1], $color[2]);
  $d = $delta;
  for ($y = 0; $y < $h; $y++) {
    //draw a one pixel tall rectangle for this row
    imagefilledrectangle($img, 0, $y, $w - 1, $y, $c);
    //next row's color: starting color plus the accumulated delta, floored
    $c = imagecolorallocate($img,
      clamp(floor($color[0] + $d[0])),
      clamp(floor($color[1] + $d[1])),
      clamp(floor($color[2] + $d[2])));
    $d = array($d[0] + $delta[0], $d[1] + $delta[1], $d[2] + $delta[2]);
  }
  imagepng($img, $filename);
  imagedestroy($img);
}

The code is straightforward. First, create the image via imagecreatetruecolor(). Then, beginning with the starting color, draw a one-pixel-tall rectangle for each row of the image. The next row's color is computed in each iteration by adding the accumulated delta to the starting color. Finally, output the image as a PNG via imagepng() and free the memory. The complete PHP source can be downloaded here.

Button Time

Once we have our perfect gradient engine in place, it’s time to make some perfect buttons. To achieve the standard glass button look-and-feel, I typically fuse two gradients together: light on the top, dark on the bottom.

Here are the two halves of a pretty red button, along with their starting color and deltas:


And the two commandline invocations of gradient.php to create the gradients:

php gradient.php 100x16 ff8080 -3,-3,-3 top.png
php gradient.php 100x16 d23c3c -3,-3,-3 bottom.png
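As a sanity check (my own arithmetic, not from the original post), you can compute where the top half ends up: starting at ff8080 with a -3,-3,-3 delta over 16 rows, the last row lands on d25353, whose red channel lines up with the bottom half's d23c3c starting point:

```php
<?php
// Top half starts at ff8080 with delta -3,-3,-3.
// After 16 rows (rows 0..15), the last row is start + 15 * delta per channel.
$color = array(0xff, 0x80, 0x80);
$delta = array(-3, -3, -3);
$last  = array();
foreach ($color as $i => $c) {
    $last[] = max(0, min(255, (int)floor($c + 15 * $delta[$i])));
}
printf("%02x%02x%02x\n", $last[0], $last[1], $last[2]); // d25353
```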

If I want my buttons to be sexy, rounded corners are a must. My favorite Photoshop trick for creating multiple rounded buttons is to use a rounded alpha-transparent button with each gradient as a clipping mask. Using a clipping mask is a simple way to guarantee the button geometry remains fixed while the colors are changed.

Here is the layers pane showing the two gradients fused together and used as a clipping mask for the rounded alpha-transparent button:

clip mask

The result is a horizontally stretchable gradient button that doesn’t look half bad. See for yourself:


Custom UIButton

The final button asset can be used as desired, but here is a simple Objective-C example since I’ve been in iPhone world lately:

UIButton *btn = [UIButton buttonWithType:UIButtonTypeCustom];
[btn setFrame:CGRectMake(20, 20, 140, 32)];
[btn setBackgroundImage:[[UIImage imageNamed:@"btn-red.png"]
      stretchableImageWithLeftCapWidth:8.0 // cap width restored; exact value illustrative
      topCapHeight:0.0] forState:UIControlStateNormal];
[btn setTitle:@"BUTTON" forState:UIControlStateNormal];
[btn setTitleColor:[UIColor whiteColor] forState:UIControlStateNormal];
[btn.titleLabel setFont:[UIFont boldSystemFontOfSize:14]];

Create a new UIButton of type UIButtonTypeCustom and then set the button skin as the backgroundImage. The horizontal stretchability is due to the stretchableImageWithLeftCapWidth and topCapHeight.

Here is a screenshot from the iPhone simulator showing the button in action:

Downloads:

  • gradient.php – the perfect gradient engine
  • gradient-button.psd – the photoshop source for the red button image, including the rounded alpha-transparent button and fused red gradients
  • btn-red.png – the red button image


When we launched the new and improved Gorilla Logic website, we decided to bring all our open source projects together under one roof. In order to migrate all things FlexMonkey back to our website, we need to get our forum data migrated out of Google Groups. Alas, Google doesn’t provide any way to export data from Google Groups. The only way to preserve the amazing contributions from the FlexMonkey community was to scrape Google Groups. So that’s just what we did.

With a very minimal amount of PHP, I was able to walk the entire FlexMonkey Google Group and scrape all the topics (aka threads) and all the posts inside each thread. The first step was to build a generic scraper class that grabs an html page (using cURL) and parses out all unique outbound links.

Here’s the code for the Scraper class:

class Scraper {
    private $url = '';
    public $html = '';
    public $links = array();

    public function __construct($url) {
        $this->url = $url;
    }

    public function run() {
        $this->html = '';
        $this->links = array();
        //scrape url & store html
        $ch = curl_init();
        curl_setopt($ch, CURLOPT_URL, $this->url);
        curl_setopt($ch, CURLOPT_HEADER, 0);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        $this->html = curl_exec($ch);
        curl_close($ch);
        //parse html for all links
        $matches = array();
        preg_match_all('#<a.*?href\s*=\s*"(.*?)".*?>(.*?)</a>#i', $this->html, $matches);
        if ($matches !== false && count($matches) == 3) {
            for ($i = 0; $i < count($matches[1]); $i++) {
                $href = $matches[1][$i];
                $val = $matches[2][$i];
                //store unique links only
                if (!array_key_exists($href, $this->links)) {
                    $this->links[$href] = $val;
                }
            }
        }
    }
}
In the run() method, cURL is used to grab the html. Next, a regular expression is used to match all outbound links. The links are stored in a hash keyed by url, so each unique url is recorded only once.
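To see the link-matching behavior in isolation, here is a standalone sketch (no network involved) that runs the same regular expression and uniqueness check over a static snippet of html:

```php
<?php
// The Scraper's link regex applied to static html: duplicate links collapse
// because the href is used as the hash key.
$html = '<p><a href="/topics">Topics</a> <a href="/topics">Topics</a> '
      . '<a href="/about">About</a></p>';
$links = array();
preg_match_all('#<a.*?href\s*=\s*"(.*?)".*?>(.*?)</a>#i', $html, $matches);
for ($i = 0; $i < count($matches[1]); $i++) {
    if (!array_key_exists($matches[1][$i], $links)) {
        $links[$matches[1][$i]] = $matches[2][$i];
    }
}
echo count($links), "\n"; // 2
```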

Built on top of the generic Scraper class is a specialized Google Groups scraper class, aptly named GoogleGroupsScraper. For a given Google Group, the url of the main page (containing a list of the most recent topics) is:[GROUP]/topics

And the url of a single topic (aka thread) is:[GROUP]/browse_thread/thread/[THREAD_ID]

Where [GROUP] is the name of the Google Group, and [THREAD_ID] is some alphanumeric id. Most importantly, at the bottom of the main page is an Older » link that points to the next page of topics. The GoogleGroupsScraper exploits this to spider the entire group, recording topic title and topic url as it walks each page.
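The paging idea boils down to something like the following sketch (my own illustration with names of my choosing, not the actual GoogleGroupsScraper code): keep following the Older » link until there isn't one. Here $fetch stands in for the real cURL request so the loop can be exercised without the network.

```php
<?php
// Minimal sketch of the spidering loop: collect each page url, then follow
// the "Older »" link to the next page of topics until none remains.
function spider_topics($url, $fetch) {
    $pages = array();
    while ($url !== null) {
        $pages[] = $url;
        $html = $fetch($url);
        if (preg_match('#<a href="([^"]+)">Older\s*&raquo;</a>#', $html, $m)) {
            $url = $m[1]; // next page of topics
        } else {
            $url = null;  // reached the oldest page
        }
    }
    return $pages;
}

// a fake two-page "group" just to exercise the loop
$fetch = function ($url) {
    return ($url === '/topics')
        ? '<a href="/topics?start=10">Older &raquo;</a>'
        : '<p>no older link</p>';
};
$pages = spider_topics('/topics', $fetch);
echo implode(' ', $pages), "\n"; // /topics /topics?start=10
```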

Next, each individual topic page is scraped by the GoogleGroupsTopicScraper class and parsed into a list of posts with author name, date, timestamp, etc. The topic scraper uses various regular expressions to extract and massage the different parts of each post from the html. In particular, the post body needs a lot of work to strip out any Google Groups specific links and code.

Lastly, the topics and their posts are assembled into an XML document with a nice big CDATA block around the post body to preserve the html content.
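The CDATA wrapping itself is simple; something like this hypothetical helper (names are mine, not the scraper's) shows why it matters:

```php
<?php
// Hypothetical helper: wrapping the post body in CDATA lets the raw html
// survive inside the XML without entity-escaping.
function post_to_xml($idx, $date, $body) {
    return "<post idx=\"$idx\">\n"
         . "  <date>$date</date>\n"
         . "  <body><![CDATA[$body]]></body>\n"
         . "</post>\n";
}

$xml = post_to_xml(0, 'February 10, 2010 21:17:52 UTC', '<p>Hello <b>world</b>');
echo $xml;
```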

Here’s some sample output from the scraper:

<?xml version="1.0" encoding="UTF-8"?>
<scrape group="flexmonkey">
  <topic>
    <title>FlexMonkey User Group is now located at!</title>
    <post idx="0">
      <date>February 10, 2010 21:17:52 UTC</date>
      <body><![CDATA[<p>People of FlexMonkey, <p>We have migrated the FlexMonkey discussion forum to <a href=""></a>. Please note that you will need to re-subscribe to the new forum to continue receiving FlexMonkey discussion messages. <p>-Stu <br>]]></body>
    </post>
  </topic>
  <topic>
    <title>Record button clicks based on Ids instead of names?</title>
    <post idx="0">
      <date>February 9, 2010 23:44:44 UTC</date>
    </post>
    <post idx="1">
      <date>February 10, 2010 00:05:44 UTC</date>
    </post>
    <post idx="2">
      <author>Gokuldas K Pillai</author>
      <date>February 10, 2010 00:16:34 UTC</date>
    </post>
    <post idx="3">
      <date>February 10, 2010 01:18:42 UTC</date>
    </post>
  </topic>
</scrape>

Finally, there is a very simple PHP driver for the scraper that runs the scraping process:

$scraper = new GoogleGroupsScraper('[GROUP]');
print $scraper->getXML();

And you run it as usual:

php scrape.php > output.xml

Just enter the name of the Google Group you wish to scrape, and away you go. Here are a couple of notes to help you along:

  1. [GROUP] is the group name as it appears in the url, so no spaces, etc.
  2. It’s not fast, so be patient, or modify the scraper code to generate some intermediate output.
  3. Via a browser, Google Groups displays 30 topics per page, but via PHP & cURL you only get 10. Probably some Cookie or User Agent magic going on.
  4. Not much error handling. The error handling that exists isn’t very good. It will break.
  5. Good luck!

Please download the code and use it however you wish. Hopefully, putting the code online and writing this post will save someone else some time when migrating data off Google Groups.



I needed a very simple Twitter cache for a project I’m working on. And I was very happy to trade off some realtime accuracy for reliability. In addition to caching the tweets, I also needed to pre-process them into css-able html with clickable links, usernames, and hashtags. The web had a few nice examples of how to use regular expressions to parse the raw tweet text, but I decided to take what I liked and do the rest myself.


Here’s the PHP code for parsing links out of the raw tweet text:

$text = preg_replace(
    '@(https?://([-\w\.]+)+(/([\w/_\.]*(\?\S+)?(#\S+)?)?)?)@',
    '<a href="$1">$1</a>',
    $text);

I only wanted http and https links, with an optional query part (\?\S+)? and an optional anchor part (#\S+)?. The conversion of a text link into an html link is done using back references, which in PHP are $1, $2, etc. In the expression above, I use $1 twice to put the matched link into both the href attribute and the link text.
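For a quick check, here's the full pattern (as it also appears in the cache code later in this post) applied to a sample string:

```php
<?php
// The link pattern in action: the matched url lands in both the href
// attribute and the link text via the $1 back reference.
$text = 'See today';
$html = preg_replace(
    '@(https?://([-\w\.]+)+(/([\w/_\.]*(\?\S+)?(#\S+)?)?)?)@',
    '<a href="$1">$1</a>',
    $text);
echo $html, "\n"; // See <a href=""></a> today
```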


Here’s the PHP code for parsing Twitter usernames:

$text = preg_replace(
    '/@(\w+)/',
    '<a href="$1">@$1</a>',
    $text);

Nothing special, just take the @ and all following word characters (letters, digits, and underscores), and turn it into a user link.


Here’s the PHP code for parsing Twitter hashtags:

$text = preg_replace(
    '/\s#(\w+)/',
    ' <a href="$1">#$1</a>',
    $text);

Getting the hashtags right was the trickiest of the three. I decided to only grab hashtags that were preceded by one or more spaces. The real magic is the %23 in the query string, which forces a search on the complete hashtag, including the # part. For example, compare a search for #flex to a search for flex.
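Putting the username and hashtag rules together on one sample tweet (using the same replacement strings as the snippets above, whose href targets had their urls omitted):

```php
<?php
// Username and hashtag rules applied in sequence to a sample tweet.
// Note the hashtag rule requires leading whitespace, so a "#flex" at the
// very start of the string would be skipped by design.
$text = 'hey @stu check #flex';
$text = preg_replace('/@(\w+)/', '<a href="$1">@$1</a>', $text);
$text = preg_replace('/\s#(\w+)/', ' <a href="$1">#$1</a>', $text);
echo $text, "\n"; // hey <a href="stu">@stu</a> check <a href="flex">#flex</a>
```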

The Cache

The cache is just a simple cron job that periodically queries Twitter and retrieves the latest tweets. Most importantly, the cache fails gracefully if Twitter is inaccessible, which it does by doing exactly nothing if Twitter is down. This guarantees that my app always has valid data (when my server is up, the cache is up too), but with the possibility that the data is a little old.
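The failure behavior boils down to something like this sketch (names are my own, not the actual cron script): only overwrite the cached file when a fetch succeeds, otherwise keep serving the stale copy.

```php
<?php
// Sketch of the fail-gracefully rule: a failed fetch leaves the old
// cached file untouched, so the app always has valid (if stale) data.
function refresh_cache($file, $fetch) {
    $html = $fetch();
    if ($html === false) {
        return false; // Twitter is down: do exactly nothing
    }
    file_put_contents($file, $html);
    return true;
}

$file = sys_get_temp_dir() . '/tweets.html';
file_put_contents($file, 'old tweets');
refresh_cache($file, function () { return false; }); // simulated outage
echo file_get_contents($file), "\n"; // old tweets
```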

Here’s the notable function in the cache:

function getTweets($user, $num = 3) {
    //first, get the user's timeline
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, "$user.json?count=$num");
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    $json = curl_exec($ch);
    curl_close($ch);
    if ($json === false) { return false; } //abort on error
    //second, convert the resulting json into PHP
    $result = json_decode($json);
    //third, build up the html output
    $s = '';
    foreach ($result as $item) {
        //handle any special characters
        $text = htmlentities($item->text, ENT_QUOTES, 'utf-8');
        //build the metadata part
        $meta = date('g:ia M jS', strtotime($item->created_at)) . ' from ' . $item->source;
        //parse the tweet text into html
        $text = preg_replace('@(https?://([-\w\.]+)+(/([\w/_\.]*(\?\S+)?(#\S+)?)?)?)@', '<a href="$1">$1</a>', $text);
        $text = preg_replace('/@(\w+)/', '<a href="$1">@$1</a>', $text);
        $text = preg_replace('/\s#(\w+)/', ' <a href="$1">#$1</a>', $text);
        //assemble everything
        $s .= '<p class="tweet">' . $text . "<br />\n" . '<span class="tweet-meta">' . $meta . "</span></p>\n";
    }
    return $s;
}

First, we query the user’s JSON timeline using cURL. Second, we use PHP’s awesome json_decode function to convert the JSON into objects. And lastly, we iterate over the tweets and parse everything into our desired HTML output.

Here is some sample output from my twitter feed:

<p class="tweet">Been reading Programming Goggle App Engine. Actually feeling dumber now than before I started. Too much to learn.<br /> 
<span class="tweet-meta">2:58pm Feb 14th from <a href="" rel="nofollow">TweetDeck</a></span></p>
<p class="tweet">Blog Post :: Async Testing with FlexUnit 4 :: <a href=""></a><br /> 
<span class="tweet-meta">3:33pm Feb 11th from <a href="" rel="nofollow">TweetDeck</a></span></p>
<p class="tweet">Blog Post :: A Better HTML Template for Flex 4 :: <a href=""></a><br /> 
<span class="tweet-meta">12:55pm Jan 25th from <a href="" rel="nofollow">TweetDeck</a></span></p>

Once I have the output, I can do whatever I want with it: save to disk, stick it in the database, keep it in memory, cache it in memcache, etc. In my case, I wanted the simplest possible option, so I chose to write it out as a static html file.

The end. The rest of the app’s not ready yet…

© 2021