
More Google I/O take away

Some years ago, a well-known CEO stood on a stage and screamed that his company and the future were all about "Developers! Developers! Developers!" Since then, the sheer diversity of platforms has exploded, the web continues to advance in content and functionality, and the world supply of developers has grown with mixed results.

As a developer who takes pride in building advanced web applications, I have seen on more than one occasion the negative side of the focus on and proliferation of so-called "developers": trash code that is more often than not brittle, insecure, and clunky. To the average end user, this means that, despite all of the advances in processor, RAM, HDD/SSD, and network technologies, they continue to wait on the applications that they use on a daily basis.

Enter Google I/O. This year, perhaps more than ever, the recurring and overarching theme of the conference was performance. From Project Butter in Android 4.1 to a heavy emphasis on debugging and performance analysis in the WebKit Developer Tools, speed was the talk of the conference. And rightfully so.

Some of the tricks that I picked up at I/O for improving performance in my web apps are briefly described below. I would love to hear what you use to trick the most speed from your apps and/or your experience with one or more of these tools.


Closure Compiler

The compiler. Not necessarily the library. Closure, when advanced optimizations are enabled, is a powerful tool for automagically reducing the size of your JavaScript and optimizing those sections of your code that could run faster. I've been using Closure Compiler for more than a year now, and it's simply awesome.

WebKit Developer Tools

With outstanding debugging and in-browser CSS, JavaScript, and HTML editing, WebKit's Developer Tools were already pretty awesome. But Google and the WebKit team have continued to push in this area over the last year to make them even more impressive. When it comes to performance, the Timeline feature alone is worth its weight in gold. (Okay, so it doesn't "weigh" that much, technically. But it is still awesome!) Essentially, this feature helps you find exactly what, when, and where your application is running into a bottleneck. And it helps you make sure that your users will never see the "jank" that so often characterizes web applications.


Accessibility

Okay, so this doesn't really have to do with performance. But if someone can't use your application because they are visually or otherwise impaired, performance is a moot point anyway. Fortunately, the Chrome team has been working on a number of tools to help developers improve the accessibility of their websites and applications. Tools such as High Contrast, ChromeVox, and Accessibility Developer Tools go a long way toward making accessibility accessible for developers.

Server-side performance optimization

Too often, particularly with web applications, the real bottleneck is not in the browser. It's not even in the network. It's where the application relies on the server to do something. Make sure that your server-side stack is optimized. Simple things such as setting Expires headers, enabling gzip compression, etc., go a long, long way.

CSS optimization

Some CSS rules bring the browser to a crawl while others can achieve virtually the same effect and still scream. Particularly with CSS3 3D transforms (e.g., translate3d), you can offload a pile of work to the GPU and radically improve performance.
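
For instance, instead of animating an element's left/top (which forces layout work on the CPU), you can move it with a 3D transform. A tiny sketch (the helper name is mine; in the browser you'd assign the value to element.style.transform):

```javascript
// Build a translate3d() value; assigning it to element.style.transform
// lets the browser promote the element to its own GPU-composited layer.
function translate3d(x, y) {
  return 'translate3d(' + x + 'px, ' + y + 'px, 0)';
}

// In the browser:
// box.style.transform = translate3d(100, 0);
```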

Smart JS

The V8 JavaScript engine is pretty smart. And the other JS engines out there (e.g., Safari's and Mozilla's) are not far behind. But understanding just how smart they are - and how those smarts work - is essential. One very quick example is the case of prototyped objects. If objects use the same prototype, they will be given the same hidden class in the JS engine. This means that they will share a bunch of optimized code behind the scenes, thus speeding up everything they do. But the instant you add one custom property to one of those objects, you break it out of that shared, optimized code. You can practically hear the screeching sound of your application grinding to an abrupt halt. Do everything in your power to avoid giving an object that is otherwise identical to a thousand others on the page some custom property.
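
To make the hidden-class point concrete, here's a small sketch (the names are illustrative):

```javascript
// Two objects built from the same constructor share a hidden class,
// so property access on both hits the same optimized paths.
function Soldier(name, rank) {
  this.name = name;
  this.rank = rank;
}

var a = new Soldier('Jeremy', 'Generally Cool Guy');
var b = new Soldier('Buzz', 'Space Ranger');

// The moment you bolt an ad-hoc property onto just one instance,
// that instance gets a new hidden class and falls off the fast path:
b.jetpack = true; // avoid doing this to one object out of a thousand
```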

Jelly Bean

Android 4.1 (aka, Jelly Bean) includes a plethora of tweaks to the ICS UI and UX which basically make life faster, smoother, and better. And while this probably isn't a developer tip, I am going to throw it out there. You want Jelly Bean. Not tomorrow. Or next week. But now. And sadly - or maddeningly, I think, from Google's perspective - OEMs and carriers are going to drag their feet in deploying it, and when they finally do push it out the door, they'll figure out how to bog it down with some hack-job custom UI and a pile of bloatware. Demand a vanilla Jelly Bean device from your carrier sooner, rather than later. Tell them that you would even pay extra for the thing so they didn't cripple it with bloatware and custom (read that, junk) UIs.

So there you have it. Just a few of the more technical takeaways that I brought home from Google I/O. Of course, this is a very quick hit list. And in order to actually utilize these tips, you'll have to do more research on your own. I would suggest that you start by going to YouTube and searching for related sessions from I/O 2012.

To I/O and beyond...

In the movie Toy Story, Buzz Lightyear has a particular tagline: "To infinity and beyond!" It embodies the space ranger's sense of adventure and courage, and challenges the hearer's imagination to suspend reality and, for at least a moment, embrace the limitless possibilities of life and the future. In more ways than one, Google's annual developers conference, I/O 2012, was an exercise in exactly the same.

June 27-30, I was privileged to be one of the approximately 6,000 people to attend Google I/O 2012. This was my fourth I/O - I've been to all of them but the inaugural event in 2008 - but in many ways, this one was dramatically different from all the rest.

For starters, it was bigger. Of course, there were more people. And the conference was a day longer. But what I really mean is that Google pulled out all the stops to make this I/O a phenomenon. During the Day 1 Keynote, for example, they demonstrated Project Glass in spectacular fashion with a live parachute drop, stunt bikes, and rappelling. Then, at its conclusion, the Android team announced the single largest giveaway in I/O history (which is, for the record, saying something). Add to this a second giveaway during the Day 2 Keynote and more sessions than ever before (they did add a day, after all!), and you have the makings of something huge!

But I/O was also about a message. Going into the conference, I was hearing a lot of rumblings about how people were confused about Google's direction of late. We all knew that the company was committed to Android and the open web. But with rumors swirling of Facebook and numerous others poaching Googlers, the plethora of disparate projects that Google has been supporting for years, its rather questionable progress in the social arena (read that, Google+), the firestorm still raging about its commitment (or lack thereof) to privacy, and more, there were serious questions in the air. Throughout the conference, I truly believe that one of Google's primary purposes was to communicate that the company, its products, and (perhaps most importantly) its culture are alive and well.

So, what did I take away from I/O 2012? I think there were three things.

First, I learned a lot, particularly about performance. There was a pile of sessions dealing with JavaScript performance, developer tools, and basic (and not-so-basic) tips for tricking the most speed possible from a website in general or a particular web application.

Second, Google is frustrated with the state of Android. Namely, they are frustrated that they continue to pour time, energy, and resources into developing an excellent mobile platform while OEMs and carriers squander it all with second-rate hardware, sad skins and bloatware, and an utter lack of commitment to upgrades. I can come up with no other explanation as to why they would hand out a Galaxy Nexus phone - the only phone to run a vanilla Android build and get OTA updates directly from Google - and enter the tablet market themselves with the Nexus 7. In particular, I think the Nexus 7 is a shot across the bow of Amazon (because the Kindle Fire, the best-selling Android tablet by far, is still running Android 2.3) and Samsung (whose flagship Galaxy Tab 10.1 - i.e., the tablet Google handed out last year - is plagued with quality issues and still running Honeycomb rather than Ice Cream Sandwich). I think Google is recognizing that the fragmentation of the Android ecosystem will eventually - probably sooner rather than later - make the entire platform untenable, or at least undesirable. And they're hoping that giving developers and the world a taste of Jelly Bean goodness on good hardware will prompt their partners to get their heads in the game and their acts together.

And the third thing I took from I/O is that Google isn't quite sure how to find the balance between pleasing shareholders and maintaining the creativity and culture that has made it Google since day one. So on the one hand, we have Google doubling down on Google+ (which investors rightly believe is key to the search giant remaining viable and growing in the advertising space) and monetizing APIs and services such as the Custom Search and Translate APIs (which investors are understandably loath to simply give away to anyone and everyone). And on the other hand, we have a company which produces products as blatantly whimsical as the Nexus Q and as boldly next-generation as Project Glass (which, to be honest, I think should be called VISOR in honor of the device worn by Geordi La Forge of Star Trek: The Next Generation). They continue such endeavors as Google TV while simultaneously introducing the Nexus Q, which essentially reproduces a significant chunk of Google TV's features. To be honest, it seems as though they're on a seesaw, using a scattergun, blindfolded, trying to hit a moving target.

Given these take-aways, what am I looking for down the road?

Well, on the web side of things, I anticipate that Google will continue to use Chrome to push the state-of-the-browser and advance the open web. The company is fully invested in HTML5 and actively working to push it even farther. I also see that performance is going to become even more critical because it affects not only user experience but also the bottom line. I see that mobile is going to continue to grow, and with it, there will be an ever-widening range of form factors (read that, display sizes) to consider when building a website or app. And I see that Google will continue to push so that, even given the wide range of form factors, there will be an unprecedented continuity of capability and user experience across all of these devices.

On the Android side of things, while the pair of Nexus devices in the giveaway on Day 1 was not exactly the opening salvo in a war on OEMs and carriers, it was certainly another warning shot. Google is growing impatient - and it's about time! - with the hardware manufacturers and carriers who are perpetually gimping what has become a mature and powerful smartphone platform. These partners need to look at the iOS ecosystem and realize that fragmentation is a bad thing (the iPhone 3GS can still run the latest iOS software with only a few caveats!) and that their crummy customizations are hurting everyone in the realm. Build hardware that's not crippled. Don't shoot the OS in the foot on the way out the door. If you're going to build Android products, build Android products!

And on the corporate side of things, I hope we'll see Google figure out who they are and where they're going, and then be that company and go that way. Get over trying to impress the shareholders. If they're not impressed with your results to date, then they're probably insatiable. You're making money. Lots of money. Now, get back to making the world - particularly, the internet - a better place.

Dynamic, asynchronous websites made simple with JSONP

I remember the first time I discovered the potential of XMLHttpRequest. It was mind-boggling. I was in heaven with all the stuff I could do. I played like a kid in a candy store until, suddenly, cruelly, I ran headlong into two major problems with XMLHttpRequest:

  1. You couldn't get info from anything but the originating server
  2. XML stinks.

And then one day, I stumbled upon JSONP. I still remember the day. I had been working with the Google AJAX APIs, and I wanted to know how they got around the two inherent limitations in what I understood to be AJAX. So I dug into their obfuscated code and watched their network traffic for hints. And there it was: they didn't use XMLHttpRequest or XML.

Instead, they used a method called JSONP. (They just called it AJAX because that was the major buzz word at the time.)

JSONP is really a very simple, elegant, and flexible way to communicate dynamic data between client and server. And it's fast. Because it uses JSON rather than XML, the data transfers are smaller and the time it takes for the JavaScript engine to parse the data is dramatically shorter. So in comparison to traditional "AJAX," JSONP screams.

Why is it, then, that most people haven't heard of JSONP? And why is it that, whenever I suggest it to people, they stare at me with blank expressions as though I were speaking some alien language?

Well, the only reason I can come up with is that "AJAX" is the buzzword. It just sounds a whole lot more cool (and clean) than JSONP. And so, since most people have heard only of AJAX, it actually is as though I'm speaking in some alien language when I suggest JSONP.

So let's fix both of these problems in one shot. From here on out, we'll call JSONP the "Super Spiffy Way To Communicate Data Between Your Client And Your Server" (SSWTCDBYCAYS).

Okay, let's not do that.

Seriously, though. Let's see if we can't figure out this mysterious thing called JSONP quickly and easily, once and for all.

In any dynamic web application, there are two sides of the equation: client and server. The client side is what's going on in the user's web browser. The server side is what's going on in - you guessed it - the web server. For the sake of this discussion, we're going to start on the client side, with the JavaScript, since that's where the bulk of the hocus-pocus happens.

On the client side, everyone knows that an HTML page is composed of a series of elements formed into some semblance of a hierarchical structure. (It doesn't matter if you don't get that. Just pretend and keep reading.) Some of these elements can instruct the browser to fetch additional resources from the server. For instance, <img> tags tell the browser to ask the server for an image file; <link> tags can ask the server for a CSS stylesheet; and <script> tags can order the browser to fetch a JavaScript file. That last one right there is the key to JSONP.

You see, the secret to the black magic of JSONP is that you can use JavaScript to dynamically construct additional <script> elements and append them to your page. And the browser will obediently go and fetch those scripts from the server whenever you do.

So all we have to do is write a little JavaScript function that uses DOM methods to build and append a new script element to the page. Something like this:

function do_json(url){
    var el = document.createElement('script');
    el.src = url;
    el.type = 'text/javascript'; // gotta be PC here!
    document.body.appendChild(el); // the browser fetches the script as soon as it's appended
}

Now, any time you want to fetch information from the server, all you have to do is call do_json(<url_of_the_js_file_i_want>);

Of course, that doesn't help us a whole lot if the server returns a blank stare. So for the next step, we have to turn to the server, where we write a script or program that will get data from any source you can dream up (e.g., database, XML, thin air, subspace transmission from Starfleet Command) and return it in a JSON format.

JSON, by the way, stands for JavaScript Object Notation. It's essentially a string which represents a JavaScript object. Something like this:

{
    "name" : "Jeremy",
    "rank" : "Generally Cool Guy",
    "serialNo" : "1234567890"
}

That is a very simple JSON string representing an object with three properties: name, rank, and serialNo. So the bulk of this step in the JSONP process is getting your server to return data in that sort of format. Of course, exactly how you will do that will depend on your backend and language of choice. So I will let you Google how you might do that.

Once you figure out how to get your server to return JSON, we still have a problem. Simply returning JSON to the browser is like taking a phone message without writing down who it's for. Now, you could set the message on the corner of your desk in the hopes that the right person will happen by, notice it there, and grab it. But the chances of that happening are pretty slim. So our server has to be a little more specific. Namely, it has to encapsulate the JSON string in a JavaScript call. So instead of the string we had above, we end up with something like this:

my_callback({
    "name" : "Jeremy",
    "rank" : "Generally Cool Guy",
    "serialNo" : "1234567890"
});

Now, that should look familiar. It's plain JavaScript. You're calling the function my_callback and passing it the JavaScript object you created above. Simple enough.
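
On the server, producing that wrapped response is a one-liner in most languages. Here's a Node.js sketch (the function name is my own, purely illustrative):

```javascript
// Wrap a data object in a call to the client's callback function.
function wrapJsonp(callbackName, data) {
  return callbackName + '(' + JSON.stringify(data) + ');';
}

// wrapJsonp('my_callback', { name: 'Jeremy' })
// yields the string: my_callback({"name":"Jeremy"});
```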

The problem now is that my_callback isn't home on the client side just yet. So let's add it to the client-side JavaScript code that we had before.

function my_callback(response){
    // do something with the response!
}
All of a sudden, my_callback is eagerly awaiting its message, and as soon as it gets it, it can take off running.
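
One refinement worth knowing: most real-world JSONP endpoints let the client pick the callback name via a query parameter (commonly callback=...), rather than hard-coding it on the server. A sketch of building such a URL (names are mine):

```javascript
// Append a callback parameter so the server knows which function to wrap
// its JSON in. The resulting URL is what you'd feed to do_json().
function jsonpUrl(url, callbackName) {
  var sep = url.indexOf('?') === -1 ? '?' : '&';
  return url + sep + 'callback=' + encodeURIComponent(callbackName);
}

// jsonpUrl('https://example.com/data', 'my_callback')
// yields: https://example.com/data?callback=my_callback
```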

Now, this is just the tip of the iceberg, because the actual body of the server's response could be an array, an object, a number of arrays or objects, or any number of other things. And you could use it to rewrite your page, update particular elements, and much, much more. The sky is the limit! And because you're not using the same-origin-restricted XMLHttpRequest, you can request data from any server on the web, making it possible to create services on one server which can be utilized from any number of sites. And it's a whole lot faster, to boot!

Once you grasp the concept of JSONP, your mind will just start seeing possibilities for uses. Search engines. Maps applications. Updating content. The list is virtually endless.

Well, folks. That's all the time we have for today. We hope you've enjoyed the broadcast! Please tune in next time as Jeremy writes about taming your very own water buffalo!

Google I/O 2012 expectations

So, I happen to be one of the lucky 5,500-ish people who have a ticket to Google I/O 2012. That is, if the confirmation email I received is to be believed. This will mark the fourth time I've trekked to San Francisco and the Moscone Center for what has become one of the most anticipated technology events of the year. (In fact, you could probably make an argument that it is the most anticipated tech event, but I will leave you to decide how it ranks with CES and WWDC, etc.) But this year is different in a number of ways for me.

In the past, while I have always looked forward to the tech stuff, I/O has been an annual opportunity to meet friends from around the globe that I never see otherwise. This year, however, none of my friends are making the trek for I/O.

Also in the past, I have scheduled flights to and from SFO so that I would be there and back as quickly as possible (i.e., I arrive late afternoon/evening the day before and depart early morning the day after). However, this year, the airlines have left me with what is going to amount to a half day of nothing on either end of the trip. So I may actually have a chance to get out to see a couple of sights.

I suppose, though, that all of that is a moot point. And it's probably not why you're reading this post. With less than two weeks to go before the opening keynote of I/O 2012, you're looking for what I'm expecting to see and hear coming out of the Great Big G. So here are some thoughts that I've been contemplating.

So, that's about it for my expectations. I would love to hear what others are anticipating at I/O!

What I learned about the canvas element

So, I am apparently a sucker for challenges. A couple of weeks ago, my friends at the Des Moines Web Geeks issued a challenge to rebuild in JavaScript a simple dungeon game which one of the organizers had built with his son in Python. I was unable to be at the coding dojo itself, but I joined in via Google Hangouts. And for some reason, I was hooked on the challenge.

But it was more than just the notion of rebuilding this simple game. The idea behind the whole thing was to teach someone how to code, a prospect which I have long thought to be a distinct challenge for our times. And we were supposed to use OOP JS as much as possible, something which I really enjoy. And then there was the fact that, beyond the basic parameters of the game, the concept was left wide open, which immediately got my imagination going.

So I decided from the very start that this little game, for me, would have two primary objectives:

For the thing that I would learn, I decided I would explore canvas to build a dashboard of sorts to display at a glance the status of the current player, the monster in the room, and a map of the world. Namely, since I had never before utilized the HTML5 canvas element in any project, I wanted to learn some of the basics and then explore just what the thing could do. I was incredibly pleased with what I came up with, and I wanted to share just a couple of quick lessons that I learned in the process.

The first lesson has to do with the canvas element itself. When I first started using canvas, I was simply putting the bare element into my HTML and then setting its width and height with CSS. At first, things looked great. But as soon as I started drawing in the canvas, I realized that something was messed up. The images were distorted somehow. For example, a 5px vertical stroke was thicker than a 5px horizontal stroke. And then I noticed that filled rectangles that should have filled the entire canvas were all messed up. Eventually I realized that, somehow, there was a discrepancy between the internal dimensions of the canvas (i.e., how many pixels it thought it had) and the external dimensions of the canvas (i.e., how many pixels the page thought it had).

Sadly, it took me quite some time to find a tutorial or other documentation which acknowledged what I thought must have been an issue. But it wasn't an issue. Rather, it was a design feature of the canvas element.

<sarcasm>Imagine my surprise.</sarcasm>

So here's the deal. The canvas element's height and width attributes (i.e., the height="xxx" width="yyy" bits you put in the HTML) set the dimensions of the drawing space within the canvas element, while the CSS height and width rules set the dimensions of the element in relation to the rest of the page. The thing is, those two things don't necessarily correspond.

In fact, the canvas element, regardless of its size in the CSS, will have a drawing space 300 pixels wide and 150 pixels high. Unless, that is, you specify otherwise. The question is, how do you do that? Well, there are two options.

The first option is to specify it in the markup. Namely, by providing the height and width attributes in the opening tag, like this:

<canvas id="myCanvas" width="1000" height="1000"></canvas>

That's not so difficult. But what if you want the canvas drawing space to be exactly the same as the canvas element's actual dimensions on the page? And what if the actual dimensions are given in a percentage or may change? Try something like this in your JS onload callback (or onresize, etc.):

var el = document.getElementById('myCanvas');
// Note: changing a canvas's width or height also clears its drawing space,
// so redraw anything you need after doing this.
el.setAttribute('width', el.offsetWidth);
el.setAttribute('height', el.offsetHeight);

And there you have it. One of the things I learned about canvas elements while playing with this simple little game.

i love my wife

Adventures in browser detection

So, today, I came across an interesting issue. I am working on a total overhaul of my personal CMS, and part of that includes a custom WYSIWYG editor. The editing happens inside an iframe element, but in order to make it a seamless experience, I want the iframe to dynamically resize itself according to whatever dimensions its contents need. Makes sense, right? Well, then I came across this little beauty:

In order for it to work right, I have to use the dimensions of the iframe's body element in Chrome. But I have to use the dimensions of the iframe's html element in Firefox. Talk about irritating. So now I have to distinguish between these two great browsers. I thought to myself, "Oh, well, that shouldn't be too difficult," and tried using navigator.product. No joy.

Turns out, Chrome sets navigator.product to Gecko, just like Firefox. Even though Chrome is NOT Gecko.

So I thought maybe appName. Chrome sets its navigator.appName to... "Netscape".

appCodeName? Chrome sets it to - you guessed it - "Mozilla".

So I was compelled to check the userAgent string. But because Chrome and many other browsers report their UA strings as "Mozilla/5.0..." I knew matching "Mozilla" was out. I thought about looking for "Firefox," but there are plenty of other Gecko-based browsers. So I thought, maybe "Gecko."

But, oh, wait. Chrome reports, among other things, that it is "KHTML, like Gecko."

So I ended up having to check for "Gecko" and then make sure that it wasn't "like Gecko."

If you're interested, here's the code I used...

if (navigator.userAgent.match(/Gecko/i) && !navigator.userAgent.match(/like Gecko/i)) {
    // Must be dealing with Firefox or another true Gecko-based browser
}

Just because life in webdev land can't be easy.


In my last post, I discussed a recent project in which I was porting an application that relied on Google Maps API v2 to Google Maps API v3, and the problems I ran into trying to utilize the LocalSearch component of Google's AJAX Search API with the new version of the Maps API. In this post, I want to talk about the second major pitfall I ran into while working on this project: several key features of v2 have not (yet) been added to v3.

Now, in most cases, this is inconsequential, to say the least. The Maps API dev team established performance as a higher priority than making sure the API included three different styles of kitchen sinks, so they eliminated some of the not-as-utilized features of the API, and v3 screams compared to the rusty groan of v2's bloated code as a result! (Read that, v3 is AWESOME and WELL WORTH the jump!)

But I do miss at least one of the features that they omitted this time around: the GOverviewMapControl. In fact, for the project that I was working on, the client specifically requested the overview map that appears in the lower-right corner of the map and gives you at least some semblance of where you are in the world when using the larger map. So this wasn't just a trivial thing for me. I needed an overview map control.

So I went searching, and I found an issue on the GMaps issues list where people requested this exact feature. As of this writing, 83 people had starred the issue, meaning that they wanted it. And in fact, it was actually "acknowledged" by the dev team. Unfortunately, though, the issue was created in July 2009 and acknowledged a day later, and now, over a year later, there is still no overview map in GMaps v3.

Considering that the project was not at the top of my priority list, I didn't worry about it too much. I simply starred the issue and decided to work around it as best I could, hoping that, by the time I was ready to finish the app, the control would be included in the v3 code. Alas, however, I reached a point this week where I was wrapping things up and still needed the control.

So I wrote one.

For anyone still looking for an equivalent to the GOverviewMapControl in Google Maps API v3, I've created a new Google Code project for my solution, which you can check out at

As with the last post (check it out here), though, I will take a couple of seconds to make some notes.

First, I did take a couple of "liberties" to make my version a bit more fun. Namely, I animated the opening and closing, and I made it automatic. This is the default behavior. In the next couple of days, I think I will make the automatic opening and closing optional. Maybe I'll make the animation optional, too. I just haven't decided on that part yet.

Second, you may notice that, when you drag the polygon to the edge of the overview map, the overview map does not pan automatically. This is something that I have thus far been unable to do smoothly. If you have any thoughts on how I could accomplish this, I would love to hear them.

Third, I am aware of an issue where releasing the polygon after dragging it around does not trigger the event to move the map. I'm working on it. If you have any ideas on fixing it, let me know.

Fourth, there is no fourth note. I just inserted this paragraph for kicks. It kind of breaks up the list of notes and gives me a chance to use the word "sixth" in a blog post.

Fifth, because I have no special permission from the GMaps team, I have made no effort to eliminate the Google logo and terms notices on the overview map. These do eat up screen real estate, but they also make sure that the Google Legal (Beta) team doesn't come after me.

And sixth, you should be aware that I am using a real hack to put the control on the map. Namely, this control is not added to the map by pushing it into one of the 8 custom control trays. Rather, it is simply appended to the map parent div. So you have complete control over where it is placed, etc., in the CSS. In theory, you could put it front and center on your map, or you could use it to cover one of the essential components of the map (e.g., the Google branding and/or licensing notices). I assume no responsibility for any consequences you may incur by changing the positioning rules specified in the default.css file.

So there you have it. An overview map control for GMaps API v3. Now go and map something!


For some time, I've been working to port a large project that I built with the Google Maps API v2 to the new version 3. This work has been slowed by the learning curve, scheduling, and the fact that I decided from the start that, rather than simply port the existing code, I would re-engineer the application from the ground up in an effort to improve its performance and learn some new tricks (e.g., GMaps v3, Closure Compiler).

As I worked, though, I soon realized that I would eventually run into a couple of roadblocks to completing my project. The first of these was that the application incorporated the Google AJAX Search API to allow users to search the map for cities, neighborhoods, etc.

Now, to be clear here, the Search API will work with GMaps v3. The Maps dev team has cooked up a few examples to demonstrate this. But it will not integrate as tightly with v3 as it did with GMaps v2. Namely, you cannot hand a v3 map as an argument to a LocalSearch object without causing some interesting fireworks in the error console.

This is an unfortunate thing because the LocalSearch object (in my opinion) did a better job of finding relevant results when paired with a GMap than in any other mode (e.g., using a "near..." or GLatLng center). And I absolutely wanted that capacity in my application.

So I ran into problem 1: I needed a LocalSearch object which would integrate with my GMap v3 application.

But once I stumbled across that realization, I also knew something else. One of my favorite little toys from the AJAX Search API dev team is called the Local Search Control (LSC). GMaps aficionados will know it by its alias, the GoogleBar. In my humble opinion, a map that you can search for cities, businesses, etc., may be the most obvious and universally useful application of the GMaps APIs there is. But without a solution to problem 1, there could be no LSC.

Well, I guess you could make one if you really wanted. But it just wouldn't be the same. So I ran into problem 2: No Local Search Control (aka GoogleBar) in GMaps v3.

So, since (a) I needed a solution to problem 1 for my project at hand, (b) a solution for problem 2 wouldn't be all that much more work, and (c) I have a hard time resisting Ben Lisbakken when he's batting his eyes at me (okay, not that hard a time), I thought I would sit down and come up with something that would benefit the whole GMaps ecosystem.

The result is an open source project on Google Code, which you can check out at

Since this is my soapbox (aka blog) and I can say anything I want here, I will point out three things about this project.

First, it was not my intention to duplicate every aspect of the LSC. The original LSC was a brilliant piece of coding which included multitudes of options, etc. It was my intention merely to duplicate its essential functionality and what I thought were likely its most-used options.

Second, I wanted to make it more consistent with the GMaps v3 syntax model, which is (I believe) cleaner and more efficient than v2 ever was. Therefore, there is no need for 53 lines of code (okay, that's an exaggeration) to initialize and set up the options for this control. You can initialize it and specify all the options you'll ever need in one line of code.
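To illustrate the point, here is a sketch of what that single-statement, v3-style setup looks like. Note that GoogleBar here is a stand-in and the option names (resultsDivId, placeholder) are placeholders of my own; check the project's documentation for the real API.

```javascript
// Sketch of the v3-style "one options object" pattern. The option
// names below are illustrative placeholders, not the project's API.
function GoogleBar(map, opts) {
  opts = opts || {};
  this.map = map;
  this.resultsDivId = opts.resultsDivId || 'googlebar-results';
  this.placeholder = opts.placeholder || 'Search the map';
}

// Everything is initialized and configured in a single statement, in
// contrast to the v2-era option-by-option setup dance:
var bar = new GoogleBar(/* a google.maps.Map in real use */ {}, {
  resultsDivId: 'my-results',
  placeholder: 'Find a place'
});
```

The win is that defaults live inside the constructor, so callers only mention the options they care about.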

And third, I am not an ads fan. Furthermore, since I'm not on Ben Lisbakken's list of Facebook friends, I'm not allowed access to the API that Google used to implement AdSense support in the original LSC. In other words, my GoogleBar has no AdSense. Before you stone me or something equally extreme, please know that I am not particularly opposed to implementing this support in the future. It's just that I built this thing to allow webdevs like me to build searchable maps. Not revenue streams. So it won't be all that high on my priority list to work this out.

So, my version of the LSC has been tested successfully in current versions of FF, Chrome, Safari, Opera, and (much to my own chagrin) MSIE 7 and 8. My own aversion to MSIE means that it has had the least testing of all platforms. And I have no intention of testing it in MSIE 6. So if you fire that dinosaur up, you're on your own.

As always, thoughts, comments, issues, suggestions, and contributions are more than welcome. Otherwise, until next time!

cringer 2

So, it's been several months since my last post, and I thought it would be good to jot down a few notes about cringer, especially since a few significant developments have occurred in the interim.

In case you don't know what cringer is, it's an IRC bot designed to utilize Google's AJAX APIs (well, the RESTful side of them, anyway) and various other interfaces to provide a valuable resource for the #googleajaxapis channel on Freenode. In addition, the code can be easily adapted to provide similar resources for other channels. In fact, in recent weeks, it's been deployed in the #lilypond channel (also on Freenode) with some interesting tweaks under the name fringer.

So here we go with a few tidbits. First, after a botched upgrade to Snow Leopard, I ended up with an opportunity to rewrite cringer from scratch. I say opportunity because, even before the upgrade, I had been thinking about a number of significant improvements which would streamline the code and dramatically increase functionality, flexibility, and stability.

So second, cringer2 is now online and running. cringer2 is a complete rewrite and introduces a modular design. To add functionality, you have only to write a simple Perl module, drop it into the proper folder, and restart cringer. I have contemplated dynamic loading of modules, but I am leery of the security risk that this could entail (i.e., strange people loading arbitrary modules and causing trouble, etc.). Even with this limitation, adding functionality is dramatically simplified.

Third, among the modules for cringer2, I have included a number of functionality improvements. Thus far, the modules I have built and deployed are search (for use with the Google Search API), fetchfeed (for use with the Google Feeds API), smartalleck (for making obnoxious comments), translate (for use with the Google Language API), weather (utilizes the Yahoo! Weather RSS feeds to provide current conditions and forecast details for a given locale), and twittersearch (for use with the Twitter Search API). In addition to these, I intend to develop and deploy a yspell module (for use with the Yahoo! Spelling API), and I will contemplate a buzz module which will utilize the Buzz API to search Google Buzz posts.

Fourth, it is still my intention to develop and deploy a module which will eval simple Javascript code and return the results. However, I continue to be plagued by the security ramifications of such functionality. In addition, I am having a difficult time deciding which JS engine I want to use. OSX includes an interface to Safari's engine which is really very good, but I can't guarantee that the code will be cross-platform that way. Alternatively, I could install TraceMonkey or something like that, but that has proved to be something of a hassle.

And finally, I have posted both cringer1 and cringer2 code to the Google Code project. You can check it out at! Feel free to download the code and tinker with it, as long as you don't forget to share your improvements with everyone else!

Over the next several weeks, I intend to get the additional modules mentioned above up and running, but I also intend to allocate a box which will be dedicated to cringer and a couple other projects in the hopes of improving performance, up-time (as opposed to running on my laptop which is off at nights), and security for my personal system.

So that's all for now.

The progress of cringer

So, in my last post, I talked about cringer, the gajaxapis-irc-bot that I'm working on. In the month+ since I posted that, cringer has gone from a concept to a functional, if still experimental, bot which utilizes the Google AJAX Search and Language APIs to provide useful functionality in the #googleajaxapis channel on . It has also been infused with the ability to make snide remarks about things like MSIE and Flash/Flex. And it has a help facility which allows users to utilize its different resources (and find interesting tidbits about several regulars to the channel).

To be certain, cringer has developed quite nicely, thanks in large part to the ideas shared by friends and colleagues in the channel. But one thing has eluded me thus far. I have found myself unable to implement the ability to evaluate Javascript in the channel via either SpiderMonkey or (as one of the Googlers in the channel suggested) V8. Now, this is undoubtedly due in large part to the fact that I've never attempted to compile one of these JS engines independently before. So this week, as I have opportunity, installing at least one of them - probably SpiderMonkey - on my Mac will consume my idle time. If anyone has any insights, please don't hesitate to send them!

Building a better bot

So, it's no secret that I spend an unhealthy amount of time hanging around the Google AJAX APIs. Between the Google Group and IRC channel, I certainly find enough to do to keep myself out of trouble (or get myself in trouble, if you ask my wife). For the last several months, people in the channel have kicked around the idea of building an IRC bot to help out in the channel and have a little fun. Well, this past weekend, I sat down and started tinkering.

The result:

Now, I know there's no code there yet, but it's coming. cringer is a Perl-written IRC bot that will eventually run Javascript, search the web, listen to your troubles, and even make snide remarks about MSIE and Flash.

So far, I've learned that getting Perl to talk to IRC is extraordinarily simple with the right modules, giving it a very rudimentary level of artificial intelligence is a snap, and getting it to run with DBD::mysql means making sure you have the right version of MySQL installed on your computer. What I need to learn now is how to make it interact with TraceMonkey, Mozilla's brand-spanking-new JS engine. So it's back to work for me!

The marvels of a good host

I do not have my own data center. In fact, I don't know of too many people who do. The things are large and complicated, and they take an abundance of resources and manpower to run and maintain. Indeed, as much as I'd love to, I don't even have my own production server due to the prohibitive costs of sufficient bandwidth in my area. So I, like countless other webdevs, am compelled to rely on hosting companies that all too often offer the world but deliver nothing more than a box of rocks.

In the past nine months, I have struggled with two such hosts. Although it is not my intention to badmouth anyone (thus I won't go into details), I do feel it important to mention the companies as a cautionary note for others who might consider them today or in the future.

With the first, Christian Web Host (now a reseller for Jumpline), I had enjoyed an excellent relationship for several years. Their service was always reliable, their support was always prompt and relatively competent, and their prices and offerings were at least in the realm of competitive. Then, about ten months ago, all of that changed.

And the second, StartLogic, I had hoped would be an adequate replacement for CWH as my go-to host. On paper, they offered unbeatable pricing and service, but within just a couple of months of signing on the dotted line (right after the money-back guarantee expired, of course), a number of issues with performance and a glaring problem with the quality of support quickly spoiled that hope.

So the third time around, I was earnest about finding a good host. For about six months, I researched packages and pricing schedules and read reviews on an almost daily basis. To be certain, it was nearly overwhelming. Several times, I thought I had narrowed the list to one or two, only to then stumble across a number of reviews that brought into question one or more of the things that webdevs find important in a host. Finally, though, one company managed to find its way to the surface.

I was impressed when they responded to my sales inquiry within fifteen minutes. I was astonished when, the first time I contacted support, they answered - competently - within the hour. And I was overwhelmed last week when they helped a client in an incredible bind when their former host hosed their site. But I must take a moment to explain.

The client had built an incredible library of topographic imagery weighing in at over 155GB, which they tied into a Google Maps interface to showcase and sell. Their host had billed its service as "unlimited storage" and "unlimited bandwidth," but apparently had an undocumented limit of 25GB of storage. When the host discovered the massive photo library, it of course sent notice that the library would be removed for taking up too much space. The problem was that, since the host sent so many promotional emails and such, the notice went into the client's junk mail system. There was no follow-up of any kind until, one day, the client discovered that the whole thing was gone.

I should mention, since most of us don't upload that kind of data on a daily basis, that uploading 155GB to a new server would have taken my client more than a month, even assuming they could absolutely saturate their upload capacity.
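For the curious, the arithmetic behind that estimate is simple. Assuming a fully saturated residential uplink of around 0.4 Mbps (an era-typical figure and my own assumption, not a number from the client's actual connection):

```javascript
// Back-of-the-envelope: time to push 155 GB through a saturated
// 0.4 Mbps uplink. The uplink rate is an assumed, era-typical figure.
var gigabytes = 155;
var uplinkBitsPerSecond = 0.4e6;
var totalBits = gigabytes * 1e9 * 8;           // 1.24e12 bits
var seconds = totalBits / uplinkBitsPerSecond; // 3.1e6 seconds
var days = seconds / 86400;
console.log(days.toFixed(1) + ' days');        // roughly 36 days
```

Even doubling the assumed uplink speed still leaves you waiting the better part of three weeks.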

Enter HostGator.

Not only did HostGator set up a dedicated server for a reasonable price, but they then went above and beyond to converse directly with the data center so that, rather than having to upload the entire folder again, my client could mail a hard disk directly to the data center and have it connected to the server. It was a violation of ordinary security procedures, but HostGator support made it happen.

So the moral of the story is simple. HostGator has it all. Performance, reliability, support, and an unusual dedication to meeting the customer's needs.

Now, who knows. HostGator could tank in a few months or a few years, but for the time being, they've earned my seal of approval.

Top 10 reasons why I hate Internet Explorer (Warning: more ranting by me!)

As a web developer, there are a few mantras that I live by. The first, since it's not my primary job, is that web development needs to be fun. The second is that virtually anything is possible on the web, given enough time to figure it out. And the third is that any and all incarnations of Microsoft Internet Explorer are the scourge of the world.

Now, I understand that this third mantra seems a little harsh, so let me try to soften tone a little bit. I hate Internet Explorer. And after spending the last two days trying to make something that should have been simple work - again - in Internet Explorer, I thought it was finally time to tell the world why. Well, the top ten reasons why, anyway. So, here they are, in true Letterman style.

  1. With a rapidly dwindling share of the browser market, Microsoft's attitude that the internet is Internet Explorer (or, rather, that IE is the internet) is the epitome of hubris. No product is the be-all and end-all of its market, much less Internet Explorer.

  2. Standards? We don't need no stinking standards! We are the standard! Since Internet Explorer 6, Microsoft has included a "standards" mode in IE that was supposed to conform to W3C HTML standards. Strangely enough, though, neither IE6's nor IE7's rendition of these standards - yes, we'll talk about that in a moment - came even close to actually fulfilling the standards by which every other browser on the planet lives.

  3. 6 years. Yes, after feverishly developing the thing in the late 90's so Microsoft could crush Netscape and lay siege to the entire browser market, that's how long IE6 sat gathering cobwebs before Microsoft decided to dust off their browser, give it a facelift, and release IE7. In 6 years, we went from Windows ME to Windows XP, and very nearly Windows Vista. There were three versions (maybe more) of Microsoft Office released, and broadband went from a tiny sliver of connections to the majority. 6 years. In most industries, 6 years without a new product means you are out of business.

  4. Convoluted Javascript namespace. In the 90's, it was supposed to make life easier. But that was before the dawn of the broadband era and complex, web-based applications. Back then, it was quick and easy for developers to be able to reference HTML elements directly by their id attribute, but now it is increasingly common to need variables with the same name as some of your attributes. How many times have I built an application, only to find out that there was something else on the page that conflicted with what I just wrote! (For the record, I always preferred the Netscape DOM that required you to walk down level by level. The document.getElementById method is a great shortcut for this that still doesn't confuse the namespace).

  5. ActiveX. What in the world is that? Things that should have been in the browser in the first place, but Microsoft thought it needed to put its own name on? And while we're on the subject of ActiveX, what is up with Msxml2.XMLHTTP and Microsoft.XMLHTTP? Couldn't they make up their mind? I guess "breaking the internet" is bad unless it means you get to put your whole name on it?

  6. Crash. Back in the NS4 days, Internet Explorer seemed relatively stable. But relatively is still not stable! Indeed, Javascript should not be able to crash your browser!

  7. Windows. I understand that Microsoft argued in its antitrust suits that the web browser would become an integral part of the operating system, but they clearly didn't believe it since they never developed their web browser with their operating system. To restrict an application to one particular OS is ridiculous. When that OS is Windows - think Vista - it's downright insane.

  8. Horrifying scripting environment. There are at least three components here. First is the fact that some code that runs in one version explodes disastrously in another. Second is the utter lack of real developer tools. Yes, I know there are some third-party resources out there, but compare them to the likes of Firefox's error console (not even Firebug, which blows them all out of the water), Safari's Developer menu, Opera's developer plugin, and you'll find IE wanting every time. And then you add in the insanely unintuitive nature and structure of things. For instance, an acquaintance of mine (who by the way runs - a great web-based IRC client that provides translation on-the-fly using the Google AJAX Language API) noted today that the recommended method for obtaining the position of the cursor in a text box is to (1) insert some unique text, (2) grab the value of the input, (3) find the index of the unique text, and (4) remove the unique text. What? Add in a totally whacked event model, an irritating tendency to do things you didn't tell it to do (e.g., resizing an element that you didn't even touch), and an inexplicable penchant for not doing things you did tell it to do, and you end up with nothing less than a web developer's nightmare.

  9. 3 totally different rendering modes, at least. No, that's not a typo. Internet Explorer 6 has two rendering modes: quirks and standard. You have to force it into standards mode with a doctype declaration. Internet Explorer 7 has its own quirks mode (which we'll consider the same as IE6's), plus IE6-esque mode (where it rendered standards-based pages as IE6 would) and then its own standards mode. As if that isn't confusing enough, the last version of IE:Mac, which still persists in some dark, dark corners of the world, had another totally different rendering model. And now, the news is that IE8 is coming with two different "standards" modes - one to conform to IE7, and the other that will be closer to the real W3C standards (but still not there!). I guess, for Microsoft, "standards" is an oxymoron. Unless, of course, the standard is non-standard.

  10. It could probably go under horrifying scripting environment, but this is so criminal that it deserves its own spot - at the top of the list, no less. Cryptic and/or patently insufficient error messages. How many times have I been working on a page only to be stopped cold by a scripting error in IE. The only message I get from IE, though, boils down to, "You have an error on line xxx." The box does provide a file name, but it is always filled with that of the page itself, regardless of where the error actually fired. This is a huge problem when modern sites can often include 5, 10, 20, even 50 Javascript files, all of which may have a line xxx. It does, on occasion, include an error message but that can range from something as unenlightening as "Object expected" to something as absolutely maddening as, "Unable to complete due to error 80020003." And it's even worse when you realize - about the first time you try to debug something with all of this information - that even that line number you received was wrong. This isn't a once-in-a-while type thing like it is with other browsers and debugging environments; it's a more-often-than-not type of thing. In my world, this one problem alone contributes to more time (and hair) lost than just about any other hurdle I have to deal with.
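To put reason 5 in concrete terms: the era's standard cross-browser shim for getting an XMLHttpRequest had to try each of Microsoft's differently named ProgIDs in turn. This is a minimal sketch; passing the global object in as a parameter is just my way of making the fallback order explicit and testable.

```javascript
// Classic XHR factory: prefer the native object, then fall back through
// Microsoft's two differently named ActiveX ProgIDs.
function createXHR(global) {
  if (global.XMLHttpRequest) {
    return new global.XMLHttpRequest();
  }
  var progIds = ['Msxml2.XMLHTTP', 'Microsoft.XMLHTTP'];
  for (var i = 0; i < progIds.length; i++) {
    try {
      return new global.ActiveXObject(progIds[i]);
    } catch (e) { /* that ProgID isn't available; try the next one */ }
  }
  return null; // no XHR support at all
}
```

In real pages you would call createXHR(window); every other browser made the first branch suffice.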
So there you have it. The top ten reasons why I hate Microsoft Internet Exploder (er, Explorer). If you're a developer, I'm sure you've found your own reasons to hate it, too. In fact, even my non-developer friends are realizing that it's a terrible, terrible browser, including some that are otherwise staunchly pro-Microsoft.

This list is absolutely not exhaustive. If I had more time (and you had more patience), I would include things like filters, horrifying inadequacies in CSS rendering, and a number of others. But for now, this will have to do. And the adventure continues...

What I learned on today's adventure:
  1. Internet Explorer stinks
  2. Any computer infested with Internet Explorer should be hauled off to the dump
  3. Development is a nightmare in Internet Explorer
  4. Internet Explorer's market share can't go down fast enough

How to get all the results available from your Google AJAX Search API application

If you haven't heard of it yet, the Google AJAX Search API is a pretty nifty little doo-dad. Basically, it allows you to add Google search functionality without having to divert visitors from your site, hack the Google infrastructure, or even deviate from your own formatting rules. It works like this: you build a control of some sort that will process user input; form a request URL; and get information, including web, local, news, video, image, blog, book, and even patent search results via JSONP. Pretty slick, if you ask me, but one of the questions I often receive regarding the API goes something like this: "It's a cool concept, but how can I get more results?" You see, using it, you can currently only get up to 64 results (fewer for local and blog searches), and you can only retrieve those in blocks of 8.
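Under the hood, each request is just a URL against the API's REST endpoint, typically fetched by injecting a script tag for JSONP. Here's a minimal sketch of building such a URL; the endpoint and parameter names follow the API docs of the era (v is the protocol version, q the query, rsz the result block size, start the offset), though the service has long since been deprecated.

```javascript
// Build a web-search request URL for the AJAX Search API's REST
// interface. With rsz=large you get blocks of 8 results, and start
// pages through them (0, 8, 16, ...), up to the 64-result cap.
function buildSearchUrl(query, start) {
  return 'http://ajax.googleapis.com/ajax/services/search/web' +
         '?v=1.0' +
         '&q=' + encodeURIComponent(query) +
         '&rsz=large' +
         '&start=' + start;
}

console.log(buildSearchUrl('google maps api', 8));
```

For JSONP you would append a callback parameter naming a global function and inject the URL as a script tag rather than fetching it with XMLHttpRequest.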

So for today's adventure, I thought we could explore getting as many results as we possibly can from the Google AJAX Search API. But right off the bat, there is a disclaimer: I'm not going to tell you how to get around the limit of total results you can retrieve. The AJAX Search API is designed to provide rudimentary search functionality for the users of your website or other application, not for SEO, data mining, or even deep searching. And it does that intended task admirably. To try to bypass the total results limit is a violation of the service's Terms of Use and, I believe, generally unnecessary when using the API as intended.

Rather, what we're going to talk about is getting all the results that they will let us get in one fell swoop.

So, how do we do it? Well, first things first: we set up our searcher as normal. Because the default search controls render the results for you, we're going to have to use a RAW searcher, which hands the raw results straight to our code. If you don't know how to do this, you'll want to check out the documentation at

Once that's done, there are two methods of the searcher object that we're going to take advantage of: .setSearchCompleteCallback() and .gotoPage(). Details for both of these methods are provided in the class reference.

Essentially, the process, broken down into its component steps, is going to go like this.
  1. We set up the searcher's completion callback
  2. We execute a search query
  3. The query returns, executes the completion callback
  4. The completion callback checks to see if there are more results
  5. If there are more results to get, the callback calls gotoPage() to fetch the next block
  6. Otherwise, it finishes processing the accumulated results
So, are you ready for this? Here's the code, commented up so you can see what's going on.

// Initialize the searcher object, in this case a WebSearch.
var searcher = new google.search.WebSearch();

// Set the search complete callback. Notice that we're using the searcher itself as the context. This is going to allow us to use this in the callback to refer to the searcher.
searcher.setSearchCompleteCallback(searcher, function() {

// Set a handle to the cursor object that we're going to use repeatedly. (The searcher only has a cursor if the search returned results.)
var cursor = this.cursor;

// Create a new property on the searcher that we can stash results into so they don't disappear when we go to the next page.
if (!this.allResults || !cursor || cursor.currentPageIndex == 0) { this.allResults = []; }

// Add the new results to the other results.
this.allResults = this.allResults.concat(this.results);

// Check to see if the searcher actually has a cursor object and, if so, if we're on the last page of results. If not...
if (cursor && cursor.pages.length > cursor.currentPageIndex + 1) {

// Go to the next page.
this.gotoPage(cursor.currentPageIndex + 1);

// Else, if there is no cursor object or we're on the last page...
} else {

// Loop through the results and...
for (var i = 0; i < this.allResults.length; i++) {
var result = this.allResults[i];

// Plug them into the document where we want them. Each result carries a pre-rendered node in its html property.
document.getElementById('results').appendChild(result.html.cloneNode(true));
}
}
});

// Finally, kick off the search.
searcher.execute('your query here');
Check it out in action here.

And there you have it! All 64 results from a Google AJAX Search API WebSearch, in one shot.

What I learned on today's adventure:
  1. I can use the Google AJAX Search API to get up to 64 search results for use on my site or in my application.
  2. How to use searcher.setSearchCompleteCallback and searcher.gotoPage to get all available results in one fell swoop.

I hate Flex/Flash. Really, I do. (Warning: This is me ranting!)

So, for today's episode of Adventures in Web Development with me, I thought I would talk for a moment about just how much I hate Flex and Flash development, and why. Let's start off with how much. I hate Flex development about as much as water hates oil. (And believe me, after a little incident this week involving two toddlers and a whole value-sized jar of petroleum jelly, water and oil don't go together.) And I hate Flash development about as much as ice hates a blowtorch. Yup. I pretty much don't like it at all. You want to know why? Well, let me count the reasons:
  1. It's a pain in the rear to have to code and code and code, then export/compile, debug, code and code and code, export/compile, debug, etc. Coding is frustrating enough at times; the introduction of the compile process, along with the painful debugging tools available, makes this absolutely unbearable. Of course, part of the reason I had such a hard time here is because I wouldn't spring for the higher-end development tools, but that actually brings me to reason number two!
  2. Outrageous pricing. While the Flash player is free, and Adobe calls Flex an open source deal, the only official IDEs, Flex Builder and Flash Pro, literally cost hundreds of dollars. I'm cheap. So I guess I get what I pay (or don't pay) for, but I really don't think it should cost me US$250 for the "open source" platform, let alone US$700 for the "Pro" IDE. And by the way, the next one is definitely related to this one.
  3. All documentation assumes you have one or the other Adobe IDE. I love the Perl mantra that there is more than one way to do things; Adobe and everyone else who develops Flash applications apparently believe there are exactly two ways of doing it: Flash Pro and Flex Builder. It quickly gets frustrating when all the tutorials talk about using tools found only in those applications.
  4. The documentation is outright terrible. Generally speaking, the web is a great place to find resources for programming. That is not the case when it comes to Flash. Is there stuff out there? Absolutely. In fact, there are so many blog posts covering so many topics that it might seem strange for me to say that the documentation stinks. But the reality is that the presence of those blog posts actually speaks to the quality - or lack thereof - in the official docs. And frankly, when you leave documenting your stuff up to independent bloggers, the majority of whom are trying to make a living building Flash apps, you're bound to end up with a lot of junk. Case in point: for the last several weeks, I've been building an application that I thought was completely ready. It ran beautifully until it had to parse an image feed from Picasa. Then it choked. Why? Because you have to reference XML elements with their fully qualified names when dealing with multiple namespaces. More specifically, you have to declare a QName variable. Then you have to go looking for your nodes. You can't combine the two steps into one line like you can with Javascript or any other respectable language. But of course, no one ever tells you that!
  5. The whole plug-in thing. When I bought my first laptop, it was with the express purpose of not having to plug it in to anything to get the thing to work. I want my browser to work the same! Plug-ins take up space on my HDD and eat up my RAM. My browser should be able to do just about everything I need to do, and in fact most browsers do with a little finesse. But we'll talk about why I hate Internet Explorer in another post (or posts). For now, suffice it to say that I resent having to take the time and everything else needed to run a plug-in to get things done.
  6. It shouldn't take me thirty days to figure out how to do something - anything - in an application. When I started the whole learning process, I initially downloaded the Flash Pro trial version. For thirty days, I tinkered with it, but made very little headway. I know; it's probably because I'm a little bit busy and a little bit slow on the uptake, but I've talked to others who have had the same problem. So it can't be all me!
Okay, so there you have it. Six reasons why I disdain Flash, Flex, and everything that has to do with them. If you disagree, well, I guess that's your prerogative. But as the apostle Paul said in Philippians 3:15, "If on some point you think differently, that too God will make clear to you."

What I learned on today's adventure:

Check out my guest post on the Google AJAX APIs blog!

So, I've had this thing for awhile...

So, I've had this blog for awhile now, but I've never really used it. So I thought today that I would change that. I'll use this blog to talk a little bit about my adventures in web development and such things. Thoughts and comments on being human and/or a pastor will go on the other blog that I've had for some time and never actually utilized: So, there you have it. And here we go!