A little bit about filesize units (KB, MB, etc)

I helped my 14-year-old son with his homework today and there was a question about how to convert from Kilobytes (KB) to Megabytes (MB). My instinct was to tell him to divide by 1,024 (the more technically accurate definition of a KB), but we both decided the answer they wanted was 1,000.

In my work creating websites and web applications we sometimes report on filesizes, usually in a human-readable format such as MB. For example, a document listing may include the filesize to give the user an idea of how long a download may take.

So this made me think about how we calculate human-readable versions of filesizes on websites. In the past we have tended to divide bytes by (1024 * 1024) to get to MB. Now I wasn’t so sure, so I had a bit of a read around.

Binary and decimal units

Historically computers have always used binary units, since that’s how computers work. At their simplest level everything is either a 1 or a 0.

Traditionally a kilobyte is 1024 bytes, a megabyte is 1024 kilobytes, a gigabyte 1024 megabytes, and so on. This is called base 2 (or binary) since these numbers are all powers of 2 (1024 = 2^10).

As computers became more mainstream people naturally assumed a kilobyte meant 1000 bytes and a megabyte 1000 kilobytes, since base 10 (or decimal) is what we’re used to as humans.

So we currently have two ways to describe a kilobyte: decimal (1,000 bytes) or binary (1,024 bytes). The same goes for the larger units: a megabyte is either 1,000,000 bytes (decimal) or 1,048,576 bytes (binary).

Messy real world definitions

There seems to be a lot of confusion in computing, with some developers using the “more accurate” binary unit to calculate file sizes and others using the decimal unit.

In the early days of the web most computers used binary units to report filesizes. This has changed over time.

It turns out hard drive manufacturers refer to storage sizes using the decimal format. So a 100 MB hard drive is actually 100 * 1000 KB (rather than 100 * 1024 KB). This results in a smaller storage space than if you used the binary unit to calculate storage size (e.g. 1 GB = 1,000,000,000 bytes in decimal or 1,073,741,824 bytes in binary, making the decimal gigabyte around 7% smaller). Good for sales, less good for the consumer.

There’s even a Wikipedia page on the confusion this has created. Interestingly this notes that the US legal system has decided “1 GB = 1,000,000,000 bytes (the decimal definition) rather than the binary definition.”

There are also standards. IEC 80000-13, published in 2008, defines a kibibyte (or KiB) as 1024 bytes and a kilobyte (KB) as 1000 bytes.

According to the Institute of Electrical and Electronics Engineers (IEEE) the decimal format should be used as standard unless noted on a case-by-case basis (see Historical Context on this NIST reference page). This follows SI, The International System of Units, which defines the prefix kilo as 1,000.

So technically you should write KiB if you mean 1024 bytes. But it turns out very few people do this, and everyone just sticks to kilobytes or KB whether they mean decimal or binary.

So today we’re still stuck with some people using KB = 1024 bytes and some people using KB = 1000 bytes. Yay!

However, clearly most people don’t care. And storage sizes are so large now most people don’t really notice the differences. Unless you’re a computer or web engineer who has to do calculations on this sort of thing.

What do modern operating systems use?

Well, here’s where it gets interesting.

In my early days of web development (which started around 1999) I used a Windows PC; these days I use a Mac. While hard drives advertised their size in decimal units, Windows itself reported filesizes in binary. So in practical terms a 1 GB hard drive actually had less space for file storage on it (around 953 MB of available space). I remember that annoying me!

In the early days of Macs and smartphones they also reported filesizes in binary units. So it made sense that most people used binary units to report filesizes on web apps.

From 2009 Macs switched to reporting file sizes in decimal (with Mac OS X Snow Leopard, presumably in response to the IEC standard). This didn’t happen until 2017 for iOS and Android.

Today Ubuntu Linux, Mac OS, iOS and Android use decimal for file storage sizes. Windows, as far as I’m aware, still uses binary units. However, to spice things up, Microsoft’s cloud office service, 365, uses decimal units when referring to cloud storage size!

So today if you have a file which is 500,000 bytes in size this would report as 488 KB (binary) on Windows and 500 KB (decimal) on Macs, Ubuntu Linux and modern smartphones.
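To put that in code, here’s a quick sketch in PHP of the two conversions (just the arithmetic above, nothing more):

$bytes = 500000;

// Binary: divide by 1024, as Windows does
echo round($bytes / 1024) . ' KB'; // 488 KB

// Decimal: divide by 1000, as Macs, Ubuntu Linux and modern smartphones do
echo round($bytes / 1000) . ' KB'; // 500 KB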

What works for users?

Which is right? To be honest, I don’t think that matters. What’s more important is which makes more sense for your users.

Most web development resources still tell you to use binary units to convert between file storage sizes (e.g. bytes to KB).

But as you can see, almost everyone else uses decimal units in the real world (except for Windows OS – but even Microsoft uses decimal for their cross-platform 365 service).

When building web applications it’s always best to do what works best for your users. So now, most of the time I think it makes more sense to report filesizes using decimal units rather than binary (so 1,000 bytes = 1 KB). Which is the opposite of what I thought before I started writing this post!
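As a minimal sketch of what that looks like in PHP (the helper function and unit list are my own illustration, not from any particular library):

// Format a byte count using decimal (SI) units, where 1 KB = 1,000 bytes
function human_filesize(int $bytes, int $decimals = 1): string
{
    $units = ['bytes', 'KB', 'MB', 'GB', 'TB'];
    $index = 0;
    $size = (float) $bytes;
    while ($size >= 1000 && $index < count($units) - 1) {
        $size /= 1000; // swap 1000 for 1024 here if you want binary units
        $index++;
    }
    return round($size, $decimals) . ' ' . $units[$index];
}

echo human_filesize(500000);     // 500 KB
echo human_filesize(1073741824); // 1.1 GB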

Just to make things fun, other measurements which use kilobytes actually do use binary units consistently, computer memory (or RAM) being the obvious example. As far as I know every system out there uses binary units for measuring memory!

If this is all too much, I’ll leave you with the excellent xkcd web comic, kilobyte edition:

Error monitoring tools and UK/European data storage for GDPR compliance

At Studio 24 we work with a lot of government and public sector clients, who are understandably keen to comply with GDPR and are therefore careful about where data is sent and stored.

There is a strong preference to use services that store all data within UK or the European Economic Area (EEA).

This is an issue for many SaaS products since most of them store data in the US or Canada. While there is the EU-US Privacy Shield agreement, this has become uncertain after Brexit.

Where possible, we aim to use EEA or UK hosted data for public sector digital services. Where that’s not possible we can use non-EU hosted data for services, but we need to justify this with our clients.

Two tools we currently use for error reporting and monitoring are Bugsnag and Usersnap. After my review I discovered Bugsnag is hosted in the US, though Usersnap is hosted in Europe. A summary of my research on data storage locations is below.

I’ve also added notes on whether you can strip identifying user data from external data storage. This can be helpful for data privacy.

Hosted in EEA

Usersnap

Data hosted on AWS in Europe (Germany or Ireland). GDPR docs are a bit sparse but you can request more details via email. It’s not really possible to strip data via Usersnap due to how it works (an on-demand screenshot tool rather than automated monitoring).

New Relic

It is possible to select EU data storage when setting up your account. New Relic publish information on security and privacy. HTTP parameters are not logged by default to avoid logging user data.

DataDog

You can use the EU site to ensure all data is stored within the EU. View GDPR docs.

Hosted in US only

Airbrake

Data hosted in USA. GDPR documentation is available on request.

Bugsnag

Data hosted on Google Cloud in USA. Bugsnag does have a detailed Data Processing Agreement and some examples of how to delete user data for data deletion requests, which is nice to see.

Loggly

As far as I can tell data is stored in USA.

LogRocket

Data is stored in USA. Lots of options to exclude sensitive data.

Raygun

Data is stored in USA. You can remove sensitive data.

Rollbar

Data is stored in USA. There are some docs on scrubbing data in JS.

Sentry

Data is stored in USA. See data privacy docs. Sentry has data scrubbing tools.
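For example, here’s a minimal sketch of scrubbing data with the Sentry PHP SDK’s before_send callback (the DSN is a placeholder and the scrubbing logic is illustrative, not Sentry’s recommended setup):

\Sentry\init([
    'dsn' => 'https://examplePublicKey@o0.ingest.sentry.io/0', // placeholder
    'send_default_pii' => false, // don't attach user IPs, cookies, etc.
    'before_send' => function (\Sentry\Event $event): ?\Sentry\Event {
        // Strip any user context before the event leaves our servers
        $event->setUser(null);
        return $event; // return null instead to drop the event entirely
    },
]);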

WP Engine and Atlas Headless CMS

WP Engine recently announced the launch of their Headless CMS hosting and frontend JavaScript stack, Atlas. This sounds really interesting so I took a look at their recent DE{CODE} 2021 conference videos on this topic to find out more.

As a little background, we’ve used WP Engine at Studio 24 for a number of years, for enterprise WordPress hosting for some of our larger clients. We’ve always found their WordPress hosting solution rock solid, and they have a bunch of very clever engineers developing solutions. When I attended a previous (in-person) WP Engine conference I was surprised they hadn’t developed their own software to complement WordPress. Looks like that has now changed.

Atlas and headless

The keynote talk was a good summary by founder Jason Cohen on the reasons for investing in Headless. I liked the stat that 64% of enterprises are using headless. Many of us have been using decoupled architectures for years – just without calling it headless!

The key selling points are performance (with 10x faster page speeds than traditional WordPress), better security, the ability to redesign without re-implementing your CMS, and developer freedom to work with any modern framework – as long as it’s in JavaScript (for Atlas).

Performance and security I thoroughly agree with. Stripping back the frontend to only what’s required is clearly a great way to improve security and reduce attack vectors.

The ability to redesign a site using headless without having to reimplement your work in a CMS is also a big selling point. However, this assumes a solid frontend that does not also need to significantly change. I think this is the idea behind WP Engine’s Atlas JS framework (which, credit to WP Engine, is open source).

Developer preference for “modern tools” was mentioned a lot as a big benefit. I’m not so convinced here. Trends in software tools come and go, developers often want to use the next shiny thing, but businesses need to get things done and maintain websites over the long-term. Decisions on your tech stack should not just be down to preferences of your current developers. I’m not a big fan of JavaScript “eating the web” and I think it’s perfectly valid for websites and web apps to be developed in traditional (popular, well tested and maintainable) server-side languages such as PHP, Python or Ruby. If this is you, then WP Engine’s software based heavily on the JS stack is not for you.

Forcing front-end developers to code directly in JS is becoming a big issue in web development (see articles by Chris Coyier and Brad Frost). I think a full JavaScript stack is great if your team can support it. If it can’t, then it can be an obstacle to business efficiency and can create barriers to entry for people coming into the industry.

Front-end development is a highly skilled profession, which includes a wide range of client-side tech and skills such as usability, accessibility, performance, strategic thinking and device compatibility testing. Front-end developers need to know JavaScript, but they don’t necessarily need to be JS programmers.

I find it ironic that one benefit of headless is decoupling, yet with the JavaScript/React approach HTML/CSS is tightly coupled with JavaScript, to the point that pure HTML/CSS front-end developers can struggle to work with it.

I thought WP Engine’s approach to solving the preview problem in WordPress was interesting. They use OAuth to get users to log in to WordPress before they can view preview content, both authenticating access to the preview site and authenticating the preview API requests to WordPress.

Benefits of exploring headless

DE{CODE} hosted a panel discussion on “Navigating the risks to reap the rewards of Headless WordPress” with perspectives from agency 10UP, the creator of WPGraphQL, and WP Engine.

Phil Crumm from 10UP noted the clients who are most interested in headless are those with a clear functional use case, those interested in security, or those who want more flexibility for the future. Content portability and flexibility are important. “While we don’t know what the future will look like, the decoupled architecture is probably going to be the best way to get there safely.”

Matt Landers from WP Engine talked about when headless is the right approach. His answer: scalability, developer resources, security and data integrations.

He said: “Do you have the developer resources to do this? It’s a lot more developer heavy. You can’t rely on community built themes and plugins like you can in a traditional WordPress world. You’re going to have to depend on a development team to build that out for you. [Maintenance costs] are coming down as we figure out how to integrate the front-end and back-end more seamlessly.”

Jason Bahl from WPGraphQL talked about the benefits of componentisation, which could reduce production and maintenance costs in the future: creating re-usable components that use GraphQL data fragments to output content that can be re-used across projects. I admit I’m not that familiar with how this works in React. We use Timber a lot at Studio 24, an excellent Twig templating system for WordPress from Upstatement. This is another way to support componentisation in WordPress today, though in practice it can be hard to actually do this across different client projects.
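For a flavour of the Timber approach, here’s a rough sketch (the template names are my own examples, using the Timber 1.x API):

// single.php – pass a post to a re-usable Twig component
$context = Timber::get_context();
$context['post'] = new Timber\Post();

// components/card.twig could be re-used wherever a post teaser is needed
Timber::render('components/card.twig', $context);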

On the future of headless in WordPress Jason noted we’re “super early in this game. The more tooling we see built, the more adoption we’ll see.” There seems to be a lot happening on this front across many different technologies, which is always great to see.

Gutenberg and headless

The final DE{CODE} talk I watched was “The Fast Track to Mastering Modern WordPress” with Rob Stinson of WP Engine and Carrie Dils. The talk focussed on Gutenberg and the future of full site editing that is coming to WordPress in 2021.

Full site editing allows users to edit and lay out an entire site via Gutenberg: not just the content area but also headers, footers and sidebars. Carrie went on to explain how full site editing themes work, moving from template files and template parts to “block template parts”, which interestingly seem to be straightforward HTML files (which sounds good). It’s not clear to me if all the dynamic content then lives in Gutenberg, which splits all HTML code up into React files.

More details on how this works are available at fullsiteediting.com, which notes HTML files may actually be saved as PHP files to support text translation and dynamic URLs. This feels like a pretty major architectural decision. I’d love to see a proper templating engine like Twig or Handlebars in WordPress! It would certainly help developer efficiency, and a large breaking change like Gutenberg is the time to do it (if ever).

Carrie went on to show how the editing experience works in WordPress. It’s very similar to editing content in Gutenberg, allowing users to edit areas of the site such as the footer.

As for the timeline of all this, a full site editing beta is available in WordPress 5.7 in March 2021, and it is expected to be in WordPress core by 5.8 in June 2021, which is pretty soon! Given the disruption caused by the original Gutenberg release I hope this is a smooth and opt-in transition.

So does the Gutenberg layout need to look the same as the frontend? It’s not clear to me what the WordPress recommendation is on this – but it certainly feels like that’s what Gutenberg wants users to experience.

The obvious issue for headless is that it’s decoupled: if you’re using WordPress as a headless CMS, laying out a site in Gutenberg cannot easily look the same as the frontend. Which, to be honest, I think is good.

A CMS should not have to show content exactly as it will display on the frontend, but should give a good enough representation to the user to allow them to manage content effectively.

Rob Stinson then talked about making custom blocks, an area of great interest to me since I see this as essential for digital agencies taking up Gutenberg. He introduced a plugin called Genesis Custom Blocks.

The UI for adding a new custom block appeared to use Gutenberg too. This allows you to set up editor fields (that appear in Gutenberg) and inspector fields (that appear in the inspector sidebar).

I wonder how the custom block definitions are stored: in the database or the filesystem? All projects we work on need to be stored in version control so we can deploy them across environments effectively.

When adding a custom block, a PHP and a CSS file are created which control how the block appears in Gutenberg (when previewing rather than editing). This certainly makes it easier than embedding HTML in React. This is the same HTML and CSS as used on the frontend.
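As an illustration, such a block template might look something like this (a hypothetical “testimonial” block: the file location and field names are my own assumptions, and block_field() is the plugin’s helper for outputting editor field values):

<?php
// blocks/block-testimonial.php – hypothetical example template
// block_field() echoes the value entered in the Gutenberg editor field
?>
<blockquote class="testimonial">
    <p><?php block_field('quote'); ?></p>
    <cite><?php block_field('author'); ?></cite>
</blockquote>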

In the demo Rob noted the “styles on the frontend don’t perfectly match up what’s in the editor, but this can be fixed up.” In our previous experience we found it’s very time consuming, and thus not commercially feasible, to try to match CSS on the CMS backend with the frontend website. This is true for custom CSS themes, which are the bread and butter of many digital agencies’ work. And I would have thought it’s even more true for headless CMS setups where the CSS isn’t even shared.

One great point about the Genesis Custom Blocks plugin is you do not need to use the Genesis framework to use it. You’re free to use your own HTML and CSS.

We are still concerned about the accessibility of the Gutenberg editing interface at Studio 24, and it contributed to our decision not to use WordPress for the new W3C website. I hope the Genesis Custom Blocks developers have considered this.

The aim of the Gutenberg project is to “make writing rich posts effortless.” That’s a mighty fine aim, but I hope WordPress can achieve this without breaking the web for others (be they accessibility users or front-end devs less comfortable writing HTML templates in JavaScript).

Final notes

Atlas sounds like a really interesting development from WP Engine and I welcome more tools in the area of Headless CMS. It’s sure to help more people get involved in this exciting area of digital technology.

It’s a shame the name Atlas is so close to Human Made’s Altis; WP Engine certainly would have been aware of this. However, Altis is software that extends WordPress more dramatically with DXP/enterprise features. Atlas, for now, is primarily a solution to help make better Headless CMS sites with WordPress.

Weeknotes: 31 Jan

I’m going to try to start keeping weeknotes. It’s a good way to reflect on the past week, get into a regular habit of writing, and is a nice way of working in the open.

My past week has been pretty busy. In general January has been a crazy start to the new decade with lots going on! I was in London on Monday and Thursday for various client meetings in and around Westminster. We’re currently working with the University of Cambridge on a site for their alumni magazine and a range of WordPress sites for Parliament.

Had an exciting potential new client call on Tuesday, more on that later!

Had a meeting for the Cambridge Film Trust. We’re going through a lot of planning for the 2020 festival at present.

I took a look at how to do multilingual properly in WordPress using a multisite approach. So far initial work suggests this is a far saner way of doing it in WordPress. Our previous efforts had included tools such as WPML, which we found can get into real difficulties once you start having more complex content.

We’re currently recruiting for a new web developer, so spent some time going through CVs and emails from candidates. It’s a pretty time consuming process, but essential to do properly to get the right people.

Ended the week with a board game night at work, where we played Dixit, a fantastic card game I’d not played before. One person plays the storyteller, chooses one of their cards (with beautiful imagery), picks a phrase, and everyone else has to choose which of their cards best fits. The idea is to guess which card is the storyteller’s. Will have to buy it for the family to play!

Local development with Valet

I’ve used MAMP Pro with my team for local development for many years. It’s a convenient tool, but it’s often slow (especially the CLI) and often hangs or crashes. I’ve been looking around for alternatives for quite some time, so after it was recommended by Zuzana I thought I’d take a look at Laravel’s local dev environment, Valet.

Installing Valet

Valet installs a lightweight set of tools to run websites on your Mac via Homebrew. This feels like a good approach to me. I’m not sure we really need the complexity of virtual machines; the code we write tends to work fine on a Mac. Having tools locally installed via Homebrew is fast and convenient.

I started by updating Homebrew to make sure my local packages are up to date:

brew update

Then I installed Valet globally via Composer and ran its installer:

composer global require laravel/valet
valet install

MySQL

Next I needed to install MySQL, since this will no longer be available via MAMP:

brew install mysql@5.7
brew services start mysql@5.7

This installed MySQL. I now want to secure the local root password (by default it is empty):

/usr/local/Cellar/mysql@5.7/5.7.25/bin/mysql_secure_installation

(please note the path to mysql_secure_installation may change depending on your version of MySQL).

Next, I want to connect to the database to install a copy of my own WordPress blog database for local testing. We use the excellent Sequel Pro for managing MySQL.

It’s easy enough to connect to localhost using the details:

  • MySQL Host: 127.0.0.1
  • Username: root
  • Password: (the secure password I set above)

This worked fine; I created a new database for my local test site and imported a recent SQL backup.

Serving a site

The default method of serving sites in Valet is valet park, which serves all sub-folders in ~/Sites as websites via URLs in the format folder-name.test.

For example, http://wordpress.test/ would serve a website with the document root of ~/Sites/wordpress.

This isn’t really how we work: most projects have a “web” sub-folder inside the project to allow for files outside of the document root. So for setting up local sites I’ll need to use the valet link command to set each site up manually.

To create a link from ~/Sites/simonrjones.net/web for the host simonrjones.test it’s simple enough:

cd ~/Sites/simonrjones.net/web
valet link simonrjones

I can verify the sites set up in Valet via:

valet links

The final step is to ensure WordPress knows about this new local test URL, otherwise WordPress has a habit of redirecting to what it thinks is the correct blog URL.

I use the multi-environment config on my personal blog so this is easily achieved by changing the ‘domain’ value for the ‘development’ environment in the file wp-config.env.php

 'development' => [
        'domain' => 'simonrjones.test',
 ],

I can test this now via the URL http://simonrjones.test/ – which worked first time!

Testing with a more complex setup

Next, I tried this with one of our client’s WordPress multi-site installs, which is a little more complex. The site is hosted at WP Engine so it uses the standard wp-config.php setup (not multi-environment config).

CD’ing into the project folder and running valet link is enough to make the site run from a local *.test URL. However, trying to access the site doesn’t work: WordPress multi-site redirects the request to what it thinks is the correct URL.

I have WP CLI installed, so I tried to use this to update old site URLs to the new *.test ones, but I couldn’t see a way to reliably do this for multi-sites. So I went back to straightforward SQL!

UPDATE wp_options SET option_value='http://clientdomain.test' WHERE option_name='siteurl' OR option_name='home';

UPDATE wp_site SET domain='clientdomain.test' WHERE id=1;

UPDATE wp_sitemeta SET meta_value='http://clientdomain.test/' WHERE meta_key='siteurl';

UPDATE wp_blogs SET domain='clientdomain.test' WHERE blog_id=1;
UPDATE wp_blogs SET domain='site1.clientdomain.test' WHERE blog_id=2;
UPDATE wp_blogs SET domain='site2.clientdomain.test' WHERE blog_id=3;
UPDATE wp_blogs SET domain='site3.clientdomain.test' WHERE blog_id=4;
UPDATE wp_blogs SET domain='site4.clientdomain.test' WHERE blog_id=5;

UPDATE wp_2_options SET option_value='http://site1.clientdomain.test' WHERE option_name='siteurl' OR option_name='home';
UPDATE wp_3_options SET option_value='http://site2.clientdomain.test' WHERE option_name='siteurl' OR option_name='home';
UPDATE wp_4_options SET option_value='http://site3.clientdomain.test' WHERE option_name='siteurl' OR option_name='home';
UPDATE wp_5_options SET option_value='http://site4.clientdomain.test' WHERE option_name='siteurl' OR option_name='home';

In addition I had to set the wp-config.php constant for the default site URL:

define('DOMAIN_CURRENT_SITE', 'clientdomain.test');

Finally, I had to setup additional URLs to serve the multi-site URLs in Valet:

valet link site1.clientdomain.test
valet link site2.clientdomain.test
valet link site3.clientdomain.test
valet link site4.clientdomain.test

The above appeared to serve the multi-site install from the different URLs. However, when I attempted to login to WordPress I was presented with a white screen of death.

I tried valet log and viewed the Nginx error log. Nothing. I refreshed the page a few times and WordPress came up. Navigating around most WordPress admin pages seemed to work, but I got the occasional slow page load or white page. On the front-end, pages in sub-folders often seemed to not work on the first request. This is a little disconcerting. This may be caused by the complex WordPress setup, one of the installed plugins, or simply the fact it’s using Nginx and we’ve built these sites to work on Apache. It’s something I’ll have to look into.

Next steps

Valet certainly seems to be a useful tool and one that is very quick to set up. Things I’d like to look at next:

  • Review the secure HTTPS option in Valet
  • Take a look at Valet+, though it looks a little out of date compared with Valet.
  • Can we customise the webserver used so it more closely matches our production environment (we use Apache instead of Nginx FastCGI)?
  • Is there a WP CLI plugin to help change multi-site URLs? If not, we should write one!

It would be nice if we could use a lightweight dev setup such as Valet. Though as with everything in tech, it’s clearly not completely plain sailing! MAMP Pro is certainly easier to set up and use, though it’s the speed (of web requests and the CLI) that I increasingly want improved, more than MAMP appears able to offer.

Creativity, Playdate and making things

It was with some delight I opened this month’s Edge magazine, which I found on my table this evening. This month they have an exclusive on Playdate, a nifty new handheld console built by software developers Panic.

From the article, Playdate looks awesome. It’s a cute yellow handheld with a high-quality LCD screen, simple controls and an intriguing crank, designed for fun gaming experiences written by indie developers. It looks like nothing I’ve played before. The concept seems pretty crazy, but that seems to be the point.

Edge #333 - Playdate handheld console

With a series of fun, creative, offbeat games released every Monday over wifi once you boot the Playdate up, the concept seems genuinely original, born out of a desire to just have fun and make stuff. I can’t wait to get my hands on one!

If you’re a fan of gaming the Edge article is well worth the price of the magazine. Reading through it, a few things stuck out for me.

I’ve been aware of Panic for many years. We use their software at Studio 24 (including their excellent file transfer tool) and I’ve always been struck by their attention to detail and quality of design. I eagerly played through Firewatch when it was released on the PlayStation 4. A fantastic game, beautifully designed, full of atmosphere and good storytelling. The sort of game I really enjoy.

In the Edge article, Cabel Sasser (co-founder) explains what he believes was the origin moment for the project. He talks about Panic being a 20-person company with revenue around the two million dollar mark. He woke up one morning with “a bit of an existential crisis”. He had a profitable, independent company without external investors, with a team that could turn their hand to a range of things – not just the same sort of work they’d been focussed on for so long.

I realised we don’t have to keep doing the exact same thing that we’re always doing – this ceaseless development-and-support cycle. We can do some weird things too, as long as we’re not betting the farm. If we have this chance, we should probably start doing some things that take us to new places. Maybe they’ll work, maybe they won’t. But if we’re not doing that, we’re just wasting our lives.

Cabel Sasser, Co-founder, Panic

This kinda resonates with me. I run a digital agency, not far off the same size and revenue as Panic. I started as a creator too, hacking together web pages and making software. Making things has always been part of my make-up. But when running an agency that often takes a back seat, and spare time pretty much disappears if you’re not careful. It’s fantastic to see other companies of a similar size spin out internal projects into something so impressive.

Another lovely quote is:

Running this company now, I feel almost like a different person. I feel like a huge part of that is finding again how important it is to me and for everyone here to just make things, and be proud and excited about it.

Cabel Sasser, Co-founder, Panic

It’s a bold and exciting move for Panic, but one that I’m sure will give them new opportunities and rewards. There seems to be a real movement for more interesting, playful gaming at present. I hope Playdate does really well.

I’ve seen this in other companies recently too. Only this week I read that the excellent WordPress agency Human Made released Altis, their own “next generation” CMS platform built on top of WordPress. From my brief review it looks like a really interesting set of content and development tools to make creating engaging websites way easier. It sounds like an interesting venture for Human Made.

It also reminds me of Brendan Dawes’s talk at New Adventures about creating things and hacking technology together into something new. (If you’re not aware of it, New Adventures is a superb conference on digital creativity, ethics, inclusion and other essential topics.)

Digital is so powerful, and teams that work in this industry end up with such a variety of skills that can be put to great use. Studio 24 is twenty years old this year and I hope to be able to make more time for creating things with my fantastic team. I’ve started already with a foray into building sites with Headless CMSs and some tools I hope we can spin out into a viable open source project in the near future (which I also hope will spark off some interesting talks I can give at user meetups and conferences).

I’m also trying to blog more these days. Blogging on your own site seems to be coming back into vogue; I think it’s simply a nice way to note down your thoughts to help inspire and motivate. As Field Notes neatly puts it: “I’m not writing it down to remember it later, I’m writing it down to remember it now.”