WherePost: update with new features & data!

I spent some time over the past couple of days pushing out a few minor updates to http://wherepost.ca. Here’s the rundown:

  • I imported, and now scrape, the KML file from Matthew Hoy’s mailbox-mapping project. This doubled the size of Where Post, and merges two efforts at mapping the same data.
  • I finally got around to adding in a “name” field, per David Eaves’ request of oh-so-long-ago. When you add a mailbox, if you enter a name or twitter handle (neither is linked anywhere), it should set a cookie that’ll remember who you are for next time.
  • I added in some social-sharing buttons. But because I actually dislike the way these look in general, and like the minimal interface WherePost presents, particularly on mobile, these currently only show on the “About” pages. I was thinking I should add these to the “add mailbox” results, but am holding off for now. I would appreciate it if you went ahead and used these though!
  • Caching: Most result-sets are now cached for an hour, although adding a new mailbox will reset that cache. This is to account for the odd times when it looks like a bot is scraping the site and causing actual load.
  • Post Offices: The Google Places API returns Yellow Pages data, which I don’t like, and in a way that was pretty broken: you’d always get duplicates for the English and French names of the same place, even when the data was identical, plus it would overlay a link to the Yellow Pages. So I’ve removed it, and added the ability to specify whether you’re adding a Post Office or a Mailbox when adding a new location.
  • Feeds! It felt wrong to be pulling in all this data, but not providing an easy way to get it all back out for you to remix however you want. So you can now get all the data WherePost has in 2 formats: JSON & KML. To do this, simply go to http://wherepost.ca/load. This page has a few query-string params you can pass in (an example request is sketched after this list):
    1) FORMAT: you can pass ?format=kml to get the feed in KML format; JSON is the default.
    2) RECORDS: you can pass ?records=X, where X is an integer, to limit the result set. By default, WherePost will return up to 10,000 records.
    3) SW & NE: to limit your results to a particular region, pass sw & ne coordinate bounds. So, for instance, ?NE=49.350405349378214,-122.72854022578122&SW=49.16644496927114,-123.2606904943359 will *roughly* return all the results in Vancouver.
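
For example, here’s roughly how you’d pull the Vancouver-area feed with jQuery (a quick sketch, not official documentation: I’m assuming you’re calling from somewhere cross-domain requests aren’t an issue, and I make no assumptions about the fields on each record):

$.getJSON('http://wherepost.ca/load', {
    records: 500,                                  // cap the result set
    ne: '49.350405349378214,-122.72854022578122',  // north-east corner
    sw: '49.16644496927114,-123.2606904943359'     // south-west corner
}, function (data) {
    // JSON is the default format; each entry is one mailbox or post office
    console.log(data);
});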

I think that’s all the updates. Let me know if you run into any difficulties!

An Adaptive Image Technique: Thinking out loud

I’ve been playing a lot with responsive layouts, and the inevitable bugbear of adaptive images for responsive designs. Ethan Marcotte’s Fluid Images is what I’ve been playing with most, particularly via Matt Wilcox’s Adaptive Images script, and dreaming of the <picture> solution proposed by Mat Marquis. But and so, I’ve been doodling and fooling around with some ways of doing this, and have now come up with something that I like. I’m worried it’s actually terrible, but I’ve played with it enough that I’d like some feedback now.

I’ve come up with a CSS3-only solution to adaptive images. For those who just want to see the example, go ahead. You can view source for all the details. The 3-col/2-col content is purely presentational.

To implement the technique, I start with an image tag, like so:

<img id="image1" src="/transparent.png" data-src="/fishing.jpg" alt="Man fishing on a log" />

This displays a 1×1px transparent image. The data-src attribute is there to show the “true” source. In the CSS, I make use of the background-image property to provide media-query-attentive backgrounds & sizing.

The default declaration is:

#image1 {
    background: url('/images/responsive/fishing-small.jpg') top center no-repeat;
    width: 100%;
    max-height: 67px;
}

This is the “small device” version of the image. Using media queries, I can also load in an HD/retina version for those devices:

@media only screen and (-webkit-device-pixel-ratio: 2) {
    #image1 {
        background-image: url('/images/responsive/fishing-small-retina.jpg');
    }
}

I can also provide a mid-size version, a mid-size HD/retina version and a desktop version (or any number of variations based on media queries).

@media only screen and (-webkit-device-pixel-ratio: 2) and (min-width: 600px) {
    #image1 {
        background-image: url('/images/responsive/fishing-mid-retina.jpg');
    }
}

To provide some fallback for IE users, I’ve included an IE-specific style:

<!--[if lt IE 9]>
<style>
    #image1 {
        background-image: url('/images/responsive/fishing.jpg');
        width: 100%;
    }
</style>
<![endif]-->

I like to start “mobile first” and use media-queries to “grow out” from there, but I could just as easily start with the largest version and work in – in which case the IE workaround wouldn’t be necessary.

Some of my thoughts on this technique

  • I like that this technique is really easy, really light-weight and doesn’t require any JavaScript or PHP.
  • The fact that I’m using a 1×1 transparent PNG fills me with the howling fantods as I remember spacer GIFs.
  • Reusing a single tiny image all over the place has negligible bandwidth effects, but to be fair, I am making an “unnecessary” request to get it each time.
  • The data-src attribute is there to help with this. With this technique, things like Pinterest and Google Images can no longer grab your images by default (whether that’s a good or bad thing, I leave to you). By leveraging an .htaccess rule, you could serve the data-src value as the src for the Pinterest or various bot user-agents.
  • This system could work pretty easily with automated CMS systems: using a regex to replace a src attribute with a data-src attribute and injecting the 1×1 & an id is trivial (see the sketch after this list), as is auto-generating some CSS to handle the variations of the image for each media-query break-point – but that’s definitely more work than doing nothing on the CMS side and doing all replacements in JS or PHP on the front-end.
  • I like that I can easily update/replace any one image in the set without updating HTML source anywhere.
  • This feels “too easy” to me. All the other solutions I’ve found use some sort of scripting, be it PHP or JavaScript. The fact that there’s nothing to post to GitHub here makes me feel like I’m doing something wrong.
  • Using background-image on images means that users can’t as easily download your image – right-click on an image and most browsers don’t give the option to “download background image” like they do on other elements.
  • I worry that this is doing something unexpected for accessibility – but mostly it should be OK, I think, as there’s an alt attribute, and it will still work fine with a longdesc attribute.
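
For what it’s worth, here’s a rough sketch of the CMS-side replacement mentioned above (hypothetical code, not something I’m shipping: a real CMS would want an HTML parser rather than a regex, and the id and file-naming scheme are just the ones from my example):

function adaptiveize(html) {
    var counter = 0;
    // Swap each img's src into data-src, point src at the shared
    // transparent 1x1, and inject a unique id for the CSS to target.
    return html.replace(/<img\b([^>]*?)\ssrc="([^"]+)"/g, function (match, attrs, src) {
        counter += 1;
        return '<img id="image' + counter + '"' + attrs +
               ' src="/transparent.png" data-src="' + src + '"';
    });
}

Auto-generating the matching #imageN rules for each break-point would be the other half of the job; I’ve omitted that here.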

I’m hoping for feedback on this from the world at large – as I said, I’m thinking out loud here. It seems like a workable solution, so your feedback, thoughts and critique would be very much appreciated before I go and do something silly like use this in a client’s project.

Clearing line-breaks in SQL Server text fields

I was recently doing a data export for a client that included a bunch of ostensibly plain-text data fields containing any number of tabs, carriage returns and line feeds, all of which were mucking up the CSV file. And oh, how I fought that data to make it look nice. It was many, many hours after it should have occurred to me that I finally thought to clean up the fields on export, NOT during import. Such is the problem with tunnel vision.

Anyway, for future reference, an easy way to clean up fields is to simply replace the offending characters, using something along the following lines:

SELECT replace([field],char(10),'[replacementchar]') FROM [table]

In my case, I wanted to replace both line feeds (char(10)) and carriage returns (char(13)), so I doubled that up:

SELECT replace(replace([field],char(10),' '),char(13),' ') FROM [table]

And it all worked beautifully. (Tabs, if you need to strip those too, are char(9), and chain on in exactly the same way.) I’m writing & storing it here in the hopes that if I run into this again, it’ll be googleable.

I’m in the forest, searching for the shore

[Photo: corkscrew tree – searching for the way through]

I’m percolating. Gestating. Mulling. Procrastinating. Whatever you want to call it, I’ve been in this mode for the better part of a week now. This happens regularly to me – something triggers my subconscious and it starts to take up more and more mental resources. When I get phases like this, I’m sort of hopeless: I can’t remember anything, I’m as distractable as – SQUIRREL! – and my production nosedives. I almost never seem to get a warning that this is about to happen; suddenly there I am, feeling like I’ve only got half brain-power. Typos go up. I’ll find myself staring off into space for who knows how long.

There’s a plus side to this. When I get like this, it’s because I’m figuring something out. It sounds strange to say I don’t know what it is I’m figuring out, but historically, whatever pops into my head on the flip-side is fully formed, ready for me to copy down. I used to write papers this way: wander the streets aimlessly for a day or two, come home, sit down & type for a couple of hours before school, and come home with an A paper shortly after. I did this prior to agreeing to have kids, too. When I wrote the first schema for the original Pencilcase CMS, back in 1999, I couldn’t work for a week. Then, in one sitting of about 14 hours, I wrote the first version of the CMS, with little to no edits.

So what have I been thinking about lately? What’s going on back there? Well, there’s a bunch of stuff going on, all viable candidates for taking over my brain:

  • Moving: We might move away. We might buy a new place in Vancouver. We definitely want to spend some time away – 3, 6, 9, 12 months, who knows. The plan for that needs to resolve itself.
  • Community: I’ve been thinking a lot about the cross-sections of digital and real-world communities. My experience as a terribly shy human vs. a fairly chatty avatar. How to correlate the two, how to bridge the various communities I participate in on- and off-line.
  • CMS: The current world of CMSes doesn’t really match the type of tools many of my clients need. Nor do the social CRMs. Nor does the issue-tracking software we and they all use. But they all form part of a solution to a real issue. And I feel like I’m on the hunt for a lightweight suite to handle lots of basic needs.
  • Mobile & responsive design: Having now built a couple of responsive sites, in addition to 2 distinct “mobile” sites in the last few months, there’s a path there that I haven’t quite found. This is closely related to the CMS problem: solving the issues of ongoing site maintenance & emerging break-points & client control of content and so on.


So I’m in the forest, I’m looking for the path. I keep catching glimpses of the shore out there, where the horizon is clear and present, but I’m not there yet & it’s frustrating.

WherePost: now even more useful!

Since I launched Where Post? on Friday morning, the response has been pretty gratifying. My thanks to David Eaves for his nice post about Where Post? this morning. I launched the app with a total of 28 mailboxes and 1 post office. Since then, 29 contributors have added in another 400-odd mailboxes and about 20 post offices. (NB: ‘contributors’ are identified currently via a combination of a cookie & IP Address – so it’s not exact, but close enough).

I’m pleased to announce a very useful addition: every post office in the Google Maps database, everywhere, pulled in from the Google Places API. I had noticed while adding post offices that there was often an envelope icon already on the map where I wanted to add a post office. After some digging this afternoon, I was able to use the Places API to get all the places that identify as post offices.

There are a few oddities to figure out:

  1. At least in Canada, each post office is often listed 2 or 3 times: the French name & English name appear to be 2 places, and sometimes the post office in English, the post office in French and the store containing the post office are all listed. Odd, and I haven’t yet figured out a way to filter this, but it’s still pretty nice.
  2. I have a rate limit of 100,000 queries a day. Given that each time you see the “loading mailboxes” message there’s a query to Google, there’s a distinct possibility I’ll reach that. It’s not a worry for now, but definitely a scaling/caching issue to think about in the future.
  3. Integrating with the “nearest” function. Currently, the “nearest” mailbox is simply pulled from an SQL query – which means that post offices, coming in from Google, are ignored. There’s likely a way to merge the two (one possible approach is sketched below), but nothing’s settled at the moment.
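
For instance, the two sources could be merged client-side and the closest point computed directly (a hedged sketch, not what WherePost currently does: I’m assuming both the SQL-backed mailboxes and the Places results are available in the browser as arrays of {lat, lng} objects):

// Haversine distance in km between two {lat, lng} points.
function distanceKm(a, b) {
    var R = 6371; // Earth's radius in km
    var dLat = (b.lat - a.lat) * Math.PI / 180;
    var dLng = (b.lng - a.lng) * Math.PI / 180;
    var h = Math.sin(dLat / 2) * Math.sin(dLat / 2) +
            Math.cos(a.lat * Math.PI / 180) * Math.cos(b.lat * Math.PI / 180) *
            Math.sin(dLng / 2) * Math.sin(dLng / 2);
    return 2 * R * Math.asin(Math.sqrt(h));
}

// Merge both sources and pick whatever is closest to the user.
function nearest(user, mailboxes, postOffices) {
    var best = null;
    mailboxes.concat(postOffices).forEach(function (place) {
        var d = distanceKm(user, place);
        if (!best || d < best.distance) {
            best = { place: place, distance: d };
        }
    });
    return best; // null if both lists are empty
}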

As always, if you have any suggestions, comments or anything else, please let me know!

Introducing: Where Post?

Where Post? is a small web-app I wrote over a couple of evenings this week to serve a very particular purpose: To help me, and anyone else, find their nearest mailbox.

The site should work on iPhones, Windows Phones & Androids. It’s meant to run as an app, so you can install it to your home screen for the greatest effect.

There are currently 2 ways of adding new mailboxes – as time permits, I’ll add more:

  1. In the app: to add a new mailbox, click on the “+” at bottom right, then click on the map where you know there’s a box. If you like, add some notes like “next to the garbage can” or “across the street from the pink house” to help people find it.
  2. Instagram: You can also take a picture of a mailbox on Instagram, tag it #wherepost and include a location. A mailbox, with the photo you took, will be added at that location. Your photo’s caption will become the notes for the mailbox. I think it’s a fun use of the Instagram API.

Of course, you can also simply find directions to the nearest mailbox to you. Just click on the magnifying glass, and Where Post? will provide you with walking directions to the nearest box (within 2km).

The app is very much a work-in-progress – to come is the ability to add in Post Offices, as well as pick-up/drop-off locations for the various courier companies, so that eventually, it’s a one-stop place to go to find where to send something from. Any and all feedback is much appreciated. In particular, if you know how to change the cursor icon in Google Maps v3, I’d love to know how.

So please have a look, play with it and send me any feedback you might have!

Strange Safari rendering bug, please help!

We do work for GSPRushfit, including managing their cart. Currently, I’m having trouble with a bizarre rendering bug in it.

When customers update their Shipping Country, a few things happen behind the scenes:

  1. If the country has more than one shipping option, a <select> appears next to the Shipping title. Otherwise, the single shipping method is displayed.
  2. The shipping/taxes & totals get updated.

On Chrome, Firefox, IE, Opera, Safari older than 5.0 and Safari for Windows, this works totally fine.

However, on Safari 5.0+ on OS X, the displayed text doesn’t update. What we’re doing is updating the “name” of the shipping method via jQuery. After some ajax calls, there’s a little function that simply updates the text with an .html() call. Here’s what it should do:

[Screenshot: shipping to Europe, as rendered on most browsers]

When I update my country to “South Africa” in Firefox, I get:

[Screenshot: “Shipping Internationally”, as rendered in Firefox]

But when I do this in Safari 5.0+ on a Mac, I get this:

[Screenshot: what Safari shows]

As you can see – somewhat borked. It doesn’t clear the old text and render the new text. If you highlight the text with the mouse, the updated text shows correctly, and if you view the source, everything has been written. It’s purely a rendering issue. In the case displayed above, this isn’t the end of the world. However, alongside the shipping “name” we also update more pertinent information such as the shipping amount, taxes & of course the order total. The source code is all updated, so if/when people actually click “place order”, all of that is correct. But understandably, we’re seeing a fairly high abandonment rate from Safari users.
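
One thing I plan to try is the repaint-forcing hack that often gets suggested for WebKit redraw bugs (a sketch only – the selector and function name are hypothetical, not our live code): touch a layout property after the .html() update so Safari is forced to repaint the element.

function updateShippingName(name) {
    var $shipping = $('#shipping-method'); // hypothetical selector
    $shipping.html(name);

    // Force WebKit to repaint: hide the element, read a layout
    // property (which flushes pending reflows), then show it again.
    $shipping.hide();
    $shipping[0].offsetHeight;
    $shipping.show();
}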

To make matters worse, this code works perfectly on both my local dev environment (Mac, Apache) AND our staging environment (Windows 2008 Server, IIS). It’s only on production that it’s failing. Things I’ve isolated as NOT being the cause:

  • SSL: Same bug appears whether over SSL or not.
  • jQuery version: I’ve tried updating jQuery and rolling it back: same.
  • CSS/JS served locally vs. from CDN: same.

We have a variety of rules for caching/not caching on the server to help speed up the site. I’ve turned on/off a variety of these with no success – but that remains the major difference between dev, staging & live.

Has anyone seen this? Could you, if you have 5 minutes, please try and recreate the issue and let me know if things work for you or not? To recreate:

  1. Using Safari, go to http://www.gsprushfit.com
  2. Click on the big red “Click here” at top right.
  3. Click on “Add to Cart”
  4. Set your country to, say, Canada in the Shipping section.
  5. Change your country to “Norway”
  6. Change your country to “South Africa”

With each country change, the info next to where it says “Shipping” should change. Above that, at right, the Shipping & Handling total should update too.

Please let me know in comments if this works for you.

A Milestone, of sorts

Earlier this week, the NPA launched their new website for the fall 2011 civic election campaign. With this, I’ve reached a personal milestone: over the past decade, I’ve built election campaign websites for each of the Vancouver civic parties – COPE in 2002, then Vision’s in 2005 and now the NPA this year. I like to think it shows a professional, non-partisan manner that is to be commended. You might say it’s pure capitalism at work (or, worse yet, that I’m a sell-out). Regardless of your opinion, given how entrenched political parties and their service providers seem to be, I’m quite proud that these various groups have all chosen me &/or my company over the years to provide them with professional, quality web-related services.

For this latest project, we were purely the technical team – I’ll have no hand in the ongoing messaging or marketing. Design & project management was provided by our frequent collaborators at Myron Advertising + Design.

At the provincial level, this year I’ve also completed the BC trifecta: I’ve built sites for each of the BC Liberals, BC NDP, and waaaay back in the 90s, the BC Green Party.

So I’m an experienced campaign website builder. If you need a website for your campaign, let me know.

User Control over the Granularity of Location Services

I use a lot of location services on my phone: when I tweet, more often than not, I include my location. I love geo-tagging my photos, so I generally include my location when using instagram, Hipstamatic, etc. & I regularly check in on Gowalla & Foursquare. So I’m not averse to sharing my location in general. I actually quite like it. That being said, I often wish I could be less specific about where I am. I don’t think it would be too hard to add a little slider, or some interface, to provide some scale.

By default, we send data for a point. But what if I could choose to send data at a variety of scales: point, neighborhood, city, region, province/state?

I suppose the particular use-case for this is to avoid sending the exact location of my house – I do it, somewhat inadvertently, but I could imagine never wanting to. But still, letting people know that I am currently in South Vancouver (neighborhood), or Vancouver (city), or the Lower Mainland (region), or BC (province/state), rather than my location to within 100 metres, should be a perfectly acceptable data point – and it gives me some control over the specificity of my data.
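
To make the idea concrete, coarsening a point can be as simple as rounding the coordinates (a sketch; the scale names and rounding factors are just my guesses at sensible defaults – one degree of latitude is roughly 111 km, so each decimal place is about a 10× step in precision):

// Decimal places to keep for each scale.
var SCALE_DECIMALS = {
    point: 5,        // ~1 m: effectively full precision
    neighborhood: 2, // ~1 km blocks
    city: 1,         // ~11 km blocks
    region: 0        // ~111 km blocks
};

function coarsen(lat, lng, scale) {
    var factor = Math.pow(10, SCALE_DECIMALS[scale]);
    return {
        lat: Math.round(lat * factor) / factor,
        lng: Math.round(lng * factor) / factor
    };
}

// e.g. coarsen(49.2327, -123.1207, 'city') -> { lat: 49.2, lng: -123.1 }

A province/state scale would probably want to snap to a published centroid rather than a rounded grid, but the principle is the same.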

In the above example, it is up to the app developer to provide scale/fudge-factor options. But we could abstract this further and make it a device-wide setting. My phone, via GPS, can always tell where I am. What if I could, device-wide, say “when in these areas, broadcast my location with much less specificity”? That way, when I’m actually at home, it could automatically send, say, “Vancouver” rather than my exact location. And by letting me choose where I want to reduce specificity, I still have the control – I set it up in my settings or preferences.

I suspect there’s a variety of implementation details I haven’t really thought through, but I do think this is an issue that, if not the device (/OS) makers, then app developers need to address. Let me participate in location services, but at the level of precision I’m comfortable with – not the level you’d ideally want. It’s a users-first approach to location, rather than data-first.

Optimizing site speed: a case study

One of our clients runs an eCommerce site called GSPRushfit, selling workout videos on behalf of Georges St-Pierre, the UFC fighter. The videos sell well, but the client is quite rightly always looking at ways to increase sales. A few weeks ago, we ran a study to see how fast the site loaded, and how that affected conversion rates (sales). I wrote a post about how we measure that a couple of weeks ago. The result was that we could see pages loaded reasonably well, but not fantastically. Across all pages, the average load speed was 3.2 seconds. What was eye-opening was that pages that loaded in 1.5 seconds or less converted at about twice the rate of pages loading in 1.5-5 seconds. There was a further dip between 5-10 seconds. So with this data in hand, I started to look for ways to increase page-load speed. I came up with a laundry list of things to do, most of them suggested by YSlow:

  1. Remove inline JS/CSS: We didn’t have a lot, but there was some unnecessary inline scripting. These were moved into the existing CSS & JS files. I think I added about 50 lines of code. Not a lot, but helpful. There’s still some inline JS & CSS being written dynamically by ColdFusion, but all the ‘static’ code was moved into the external files.
  2. Minify Files: This doesn’t do a lot, but does compress files slightly. I think I was able to reduce our JS file by 30% & our CSS by 15%. Not a lot, but still helpful. I use an app called Smaller, which I’m a big fan of. While YSlow suggests you combine files, I chose not to – the reduction in requests didn’t offset the problems for us in doing this.
  3. Reduce Image Size. The site is fairly graphically intensive – large background images & lots of alpha-transparency PNGs. When I started, the homepage was loading just under 1.2MB in images, either as CSS backgrounds or inline. Without (to my eye) any noticeable loss of quality I was able to re-cut those same images to about 700KB in size.
  4. Use a CDN: The site loads video, which we call from a CDN. But the static assets (CSS, JS, images) weren’t being pulled from it. This was originally because the CDN doesn’t support being called over SSL. But it only took a little scripting to have every image load from the CDN when not on SSL, and from the local server when over SSL. This, as you’d expect, greatly improved the response time – by about 0.2 seconds on average.
  5. Query Caching: This one is likely a no-brainer, but the effects were stunning. All the content is query-driven, generated by our CMS. But it doesn’t change terribly often. So I started caching all the queries (see the sketch after this list). This alone dropped our page-load time by nearly a full second on some pages. And to maintain the usefulness of a CMS, I wrote an update to clear specific queries from the cache when new content is published.
  6. GZip: Again, likely something I should have already been doing, but to be honest, I had no idea how to accomplish this on IIS. So I figured it out and configured the server to gzip static assets (JS, CSS & HTML files).
  7. Far-future expires headers: Because very few of our images change frequently, I set an expiry date of 1 year in the future. I likewise set a cache-control header with the same time frame. This should, in theory, reduce requests and let returning visitors just use their local cache more. I of course added a programmatic way to clear that cache as well, for when we change content or edit the JS or whatever.
  8. Clean up markup: While I was doing all the rest, I also cleaned up the markup somewhat – not a lot, but again, we were aiming to eliminate every extraneous byte.
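
Here’s the caching pattern from item 5, sketched in JavaScript for illustration (the site actually runs on ColdFusion, and the names here – including the runQuery callback – are hypothetical): cache each query result under a key, serve repeat requests from memory, and let the CMS clear just the affected keys when content is published.

var queryCache = {}; // key -> { data, expires }
var TEN_DAYS = 10 * 24 * 60 * 60 * 1000;

function cachedQuery(key, runQuery) {
    var hit = queryCache[key];
    if (hit && hit.expires > Date.now()) {
        return hit.data; // served without touching the DB
    }
    var data = runQuery(); // the actual trip to the database
    queryCache[key] = { data: data, expires: Date.now() + TEN_DAYS };
    return data;
}

// Called when the CMS publishes an edit: clear only the queries
// affected by the change, so everything else stays cached.
function invalidateQueries(keys) {
    keys.forEach(function (key) { delete queryCache[key]; });
}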

So, as you can see, we sort of threw the kitchen sink at it to see what stuck. In retrospect, I wish I had made these updates one at a time, to measure what sort of an impact (if any) each one had. There are only a couple where I can see clear before & after differences, which I mentioned above. So, for everyone out there, which of these were the most effective?

  1. Caching Dynamic Content: Even on a CMS-driven site, most content doesn’t change constantly. But if you can eliminate trips to the DB server to make a call, that’s time saved. Even if you cache a recordset for a few minutes or even just a few seconds, on a high-traffic site you can see some real impressive gains. We cache queries on this site for 10 days – but can selectively update specific queries if a user makes an edit in the CMS – sort of a best of both worlds right now. This somewhat depends on having a powerful server – but hosting hardware & memory are pretty cheap these days. There’s no reason not to make use of it.
  2. Crushing images: When building the site, I did my best to optimize file size as I exported out of Photoshop. But with a few hours in Fireworks, I was able to essentially cut the size of the images in half with no real visual impact. A hat-tip to Dave Shea for the suggestion of using Fireworks.
  3. Pushing content to a CDN: This is the head-smacking no-brainer that I don’t know why it wasn’t already part of our standard workflow on all sites. As I wrote above, we gained about 0.2 seconds by doing this – which doesn’t sound like a lot, but it’s noticeable in practice.

The nice thing about this exercise was that it shook up how I built sites, how our internal CMS runs and how I configure both our IIS and Apache servers to all run slightly more efficiently. I suspect that I could eke out a few more milliseconds by playing more with the server settings itself, but I’m satisfied for now with how this worked out.