Blog

Yesterday, I wrote about the Requirements 101 presentation I gave to my team about what I believe makes for good solution requirements.

(I was not able to limit myself to the 15 minutes I devoted to this as an agenda item because I like the sound of my own voice way too much, but that's beside the point right now).

The important thing is that I generated some great discussion, which is exactly what I was hoping for. This was not intended to be a lecture, especially given that there are people in my group who are far better at this stuff than I am. The slide above prompted some great input.

I'd argued that the requirement "the monthly transaction report must be available on the next business day after the end of the calendar month" was a bad one, but I was intentionally tricking people. On the face of it there's nothing wrong with it. The trick was that only after soliciting feedback on the requirement and getting everybody's input did I let them know that the report in question takes 30 hours to generate and that, therefore, the requirement was not achievable. I said that the issue could have been avoided by having the right people (probably technical SMEs of some description) at the table during the requirements phase of work.

Some people pushed back and said that if this really was a requirement of the hypothetical project to which it's attached then work would simply have to be undertaken to reduce the time taken to generate the report. If, on the other hand, the project didn't have the time and/or budget to support this work then that would be a separate issue to a certain extent, and there would be courses of action other than removing the requirement that could be pursued – but that this doesn't make the requirement any less valid. People argued that it's hard (if not impossible) to know that you need some additional technical input at this stage of the process without the benefit of hindsight. "You don't know what you don't know," as one person succinctly summed up.

These are excellent points. In fact, often when I'm helping people document their initial requirements for a project I like to tell them (with my tongue firmly in my cheek) that anything is achievable; it merely comes down to how much time and money they have.

My point in including the example in my slide-deck is that I do believe there are opportunities to validate things like this before requirements are finalized and signed off by stakeholders. If we are able to take advantage of these opportunities to move conversations like this one forwards in the project timeline, we avoid back-and-forth between business and technical teams, costly rework, and nasty surprises further down the line.

I still feel that's all true and that my point is a valid one, but of course let's be realistic – how much effort do we really want to spend validating that each individual requirement is achievable considering every known and as-yet-unknown constraint (bearing in mind that we haven't even moved into the "execution" phase of work at this point in our story and the solution hasn't been designed)? Should we really wait to secure the availability of a highly sought-after technical resource to sit in meetings where they will have only minimal input to provide? Wouldn't it be more efficient to get the stamp of approval on our requirements as-is and move forwards, allowing the solution architects to identify issues like this later (and suggest where compromises or alternative approaches may be necessary or beneficial)?

I suspect the answer – as is so often the case with the questions I pose on this blog – is all about finding an appropriate balance, but I don't have any solid guidance here for you all.

What are your thoughts?

Blog

Requirements 101

Every two weeks my team (by which I mean my peers as defined by the org-chart, rather than the team from a particular project I may be working on) has a team meeting.

We talk about what we’re each working on and what we have coming up, we take some time to celebrate our accomplishments, discuss any issues or barriers… you know the kind of thing.

Anyway, we take turns to host and facilitate the meeting, and today it’s my turn. Part of the expectation in hosting is that I introduce an agenda item of my choosing that the cross-functional group (consisting of operations and projects people) might find interesting or beneficial.

I decided to talk about what makes for good requirements. In preparation for this I read a research paper that tells me that 68% of companies have created an environment where project success is “improbable” due to poor business analysis capability.

Requirements are important, then. Who knew?

It was tough to limit myself to a mere 15 minutes on this topic because of course I could talk for hours, but I managed somehow (or rather – since I'm actually writing this post in advance of the meeting – I'm sure I will).

Since sharing is caring, I’ve embedded my slide-deck below. It works just like PowerPoint (click anywhere inside the image to move forwards).

https://onedrive.live.com/fullscreen?cid=47a99becbf28ab8d&id=documents&resid=47A99BECBF28AB8D%21149&filename=Requirements%20101.pptx&authkey=!&wx=p&wv=s&wc=officeapps.live.com&wy=y&wdModeSwitchTime=1420671946681

Sadly you won’t get the benefit of listening to my insightful commentary, but if you want to click through to the file on OneDrive you will at least be able to view my typed speaking notes on each slide (using the link in the bottom-right of the screen) if you wish.

Blog

Whole Home Audio

Recently, I have become enamoured with the idea of whole home audio. And when I say enamoured, let's be clear what I'm actually talking about: I mean I've become obsessed with it.

Here's the thing though: I'm not obsessed with the functionality. It would be nice to be able to play some music and have it come out of all the speakers in the house (without running cables all over the place), but what I'm actually obsessed with is the price of systems that can do this.

The most popular system for whole home audio (or at least the one against which others are apparently measured) is Sonos. In my ideal setup I would want to be able to play music in the living room, kitchen, office and bedroom. To get the Sonos hardware necessary to build the system I'd want (including surround-sound capable hardware in the living room for when we watch TV) I'd have to spend about $3,000, and I'd have to throw out all our existing audio equipment for not being compatible.

It's simply not worth it, in my opinion.

There are alternatives to the Sonos system available, of course, but this kind of system seems to be primarily the domain of high-end audio manufacturers whose wares are priced beyond what I would consider sensible given my needs.

So, I've decided not to buy a whole home audio system at all. I've decided to try and build my own.

Enter the Raspberry Pi.

If you're not familiar, the Raspberry Pi is a credit-card sized computer that costs about $40. It has a 700MHz ARM processor and 512MB of RAM. It's not powerful, then, but it runs a Linux distribution and is designed to be a platform for electronics projects.

To start off with I'm going to buy three of them – one for sending audio and two to receive it. I've read about people who have managed to get a setup similar to the one I want working, and I've read about many, many more people who came across challenges they found insurmountable and gave up. The nice thing about the Pi is that I can think of a million uses for it, so if I end up falling into the latter category at least I won't feel like I wasted my money.

There's some Linux software available called PulseAudio, which will be the basis of my project. I've heard of PulseAudio before because it's the audio subsystem you find in many Linux distributions. What I didn't know until recently, though, is that it's capable of taking the audio that would normally be output to the computer's speakers and sending it instead to another computer (or several other computers) on the network to be output there.

From what I've read online, the people who have managed to get this working have done so on a wired network, and they've found that if they try to switch to a wireless setup their router is flooded with audio traffic and can't cope. This is a problem for me because wireless is one of my requirements, so my plan is to use a dedicated WiFi network for the audio: the sender will have two USB WiFi dongles attached, one to connect to the existing wireless network in my home for access to my music library and the internet, and a second to connect to an ad-hoc network used to communicate with the two receivers.
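
To make the plan a little more concrete, here's a rough sketch of the kind of PulseAudio configuration I have in mind. This is untested, and the IP addresses and sink names are placeholders: each receiver accepts audio from the sender over the dedicated network, and the sender mirrors its output to all the receivers through a combined sink.

# On each receiver (added to /etc/pulse/default.pa): accept audio from the sender
load-module module-native-protocol-tcp auth-ip-acl=192.168.10.1

# On the sender: tunnel to each receiver, then combine the tunnels into a
# single sink and make it the default output
load-module module-tunnel-sink server=192.168.10.2 sink_name=kitchen
load-module module-tunnel-sink server=192.168.10.3 sink_name=bedroom
load-module module-combine-sink sink_name=wholehome slaves=kitchen,bedroom
set-default-sink wholehome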

If I'm able to get this setup working then I have some plans for extending the system and building additional functionality. Watch this space!

Blog

It’s December 18th, and Christmas is a mere week away. I am rushing around like a crazy person attempting to get things done before the holiday, both in work and out of it.

For my work stuff it’s a bit of an arbitrary deadline. Far more important is the end of our fiscal year on March 31st, but nevertheless the end of the calendar year offers up a good opportunity to review and make sure everything I’m doing is in good shape for the final quarter.

The dawn of a new year feels like a good point at which to take stock in this way because it is traditionally a time for taking stock, reviewing and re-evaluating priorities: it’s new year’s resolution time.

Since this will likely be my last post of the year before I take a break to celebrate the season with my family, I thought I’d take a few moments to share a couple of mine.

“Never Put Off Til Tomorrow What You Can Do Today”

The quote above is attributed to a couple of different people (most often Thomas Jefferson) but as best as I can tell it has its origins in a Bulgarian proverb. Regardless, this is the first of my New Year’s resolutions.

This is easier said than done, and I feel as though I should qualify what it means to me: I am a stickler for planning. I’d suggest that most project managers are – it’s a big part of the job. Every morning I take 15 minutes to look at my calendar and my workload and I plan out my day. The plan only really exists in my head (if there’s lots happening I write it down, but that’s the exception rather than the rule). None of this is a problem, except that I always seem to get too personally invested in my plan. If something comes up… too bad!

Not all of my days are jam-packed with meetings – in fact I work pretty hard to keep my schedule as flexible as possible and include space to accommodate shifting priorities and last-minute asks. But when it comes down to a daily plan, I may have planned to use some of that space to go and get coffee or watch the news on TV, and here's the thing: the world could implode at the office, and I will still go for coffee if that's what I'd planned to do.

Things often come up that would take less than 15 minutes of my time – an email that requires a response, an ask for assistance. I often find myself rigidly sticking to my plan and deferring them to the following day (or week, or month) even though I could easily find time to get them done and off my plate immediately. I’m thinking tasks that small shouldn’t need to be planned for.

Planning is important, though, and so is not letting your workdays be dictated by the whirlwind of noise that's out there. Essentially what I'm saying, then, is that there's a balance to be struck here. I don't believe I've found it yet, but I plan to work on getting better at it over the next year.

Taking Steps

My second resolution is more of a personal one. Regular readers will know that I recently bought a smartwatch. One of its features is a step counter and activity tracker, and having this on my wrist has been enlightening to say the least.

I go into the office two or three days a week, and when I do I take thousands of steps as I move from one meeting to another, go and check in with people on the other side of the building, go for lunch with my team, whatever the case may be.

The other two or three days a week I work from home, and, I now know, basically sit myself at my desk as soon as I’m showered and dressed and then remain almost entirely stationary until the early evening (at which point I move to the sofa and remain stationary in front of the TV until bedtime).

My second resolution, then, is to be more active on those days. Take my laptop and go work from the coffee shop down the street for half an hour, spend my thinking time walking around the block instead of reclining in my chair, it doesn’t matter. Movement will be a part of my daily plan, and I’ll stick to it rigidly. Your email that requires 15 minutes of my time will just have to wait for another day.

Oh, wait…

Blog

Cloud Backup, Redux

Back in April I wrote a post called Overhauling my Digital Life, in which (amongst other things) I wrote about signing up for a cloud backup service.

At the time I picked ADrive as our storage provider for a couple of reasons – the price is extremely reasonable ($25 a year for 100GB), and they support rsync, which makes it extremely easy to write a backup script or two and have the server run them periodically.
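
For the curious, the scripts really are about as simple as that sounds. Here's a minimal sketch of the idea – the hostname, username and paths are placeholders rather than my actual setup – with rsync doing the mirroring and cron doing the scheduling:

#!/bin/sh
# photos.sh - mirror the photos directory to the remote rsync host
rsync -az --delete /srv/photos/ username@rsync.example.com:backup/photos/

# crontab entry: run the photo backup at 2am every Sunday
0 2 * * 0 /home/backup/photos.sh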

This week as I was taking a look at the logs from my backup script I noticed something alarming: I'd used up all of my 100GB quota and my backup jobs were failing as a result.

I thought for a while about what I should do about this. ADrive's next account level up offers 250GB of storage – 2.5x as much – but is also 2.5x the price at $62.50 a year. If you survey the cloud backup marketplace as I did eight months ago you'll find this to be an extremely reasonable price, but it doesn't feel like good value to me for a couple of reasons. For one, I would expect to see a reduction in the per-GB cost if I'm going to move up to a larger account, and on that basis there's no difference from what I pay now for my 100GB plan; but also, that's much more storage space than I actually need. Buying an extra 150GB of space to store the one or two extra gigabytes that don't fit in my 100GB plan just doesn't feel sensible.

When I looked at ADrive in the first place, one of the alternatives I considered was Amazon's AWS. If you're not familiar, Amazon sell services like storage and cloud computing power and they have some pretty big customers – they're the service that powers Netflix and Instagram, amongst others. The reason I didn't choose Amazon in the first place is that they really aren't a consumer-focused service and you need a much higher degree of tech-savvy to be able to use them. They're also a little more expensive than ADrive for the storage volume I need (3¢ per GB per month comes out to $36 a year for my 100GB backup), but their pricing model places no upper limit on the amount of storage you could use and you pay only for what you do use. Perfect.

They also offer an option called Glacier, which on the face of it seems perfect for what I want – it's a third of the regular price and it's designed explicitly for backup storage: if you need to restore files then you may have a couple of hours of waiting before they can be made available. That would be fine, except I do incremental backups – each week, month or quarter (depending on what's being backed up) I synchronize the backup with what's on my server, sending only files that are newly created or changed. In order to do that my backup tool needs access to what's already in the backup so that it knows what it needs to send. Glacier was a non-starter for this reason.

Regardless, I'd all but decided to give AWS a try, and I'd signed up for an account and created a storage "bucket." I was reading online about tools that offer rsync-like functionality but can upload to AWS storage. I'd found one that looked good, and had noticed that it supported a variety of storage providers in addition to AWS. One of the other providers supported was another cloud services company that you might have heard of: Google.

I use Google's consumer services pretty heavily (I have an Android phone and tablet, so it makes sense to), and in fact I'd used Google's App Engine service once before for a previous project, but I'd never really realized that App Engine is part of a wider Google Cloud Platform offering that includes a cloud storage service very similar to Amazon's S3, but costing just 2¢ per GB per month. That makes it cheaper than the service I was already getting from ADrive (by a whole dollar a year). I was sold, and I signed up.

The next step was to convert all my rsync-based backup scripts to send data to Google instead of ADrive, and this was extremely easy. Google offers a command-line utility called gsutil which can be used for a variety of functions, including incremental, rsync-style file copying. The whole thing (from signup to having the scripts done) took just a couple of hours (including the time it took me to find and read the documentation). The documentation was absolutely necessary here: in contrast to the intuitive ease of use I've come to expect from Google's consumer services, everything I did to set up my Cloud Platform storage seemed foreign and complicated. You really do need a decent amount of technical knowledge to understand it. That's not a comment against Google, necessarily: I would assume AWS is much the same. It feels complicated because it is complicated. Cloud Platform is a set of tools for developers to use however they see fit, not a single-task solution for consumers like I was used to.
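
To give a flavour of what the converted scripts look like, here's a sketch of the gsutil equivalent of one of my old rsync commands (the bucket name and local path are made up for illustration):

# Mirror the local photos directory into the Cloud Storage bucket, sending
# only new or changed files; -m runs the transfers in parallel
gsutil -m rsync -r /srv/photos gs://my-backup-bucket/photos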

Anyway, everything was set up, and I was happy… except for one thing. I still had to get the 100GB or so of data from my home server to Google's. My backup scripts were done and would take care of that for me when they were next run, except I knew it was going to take a very long time for them to start from a blank slate. When I originally set up my ADrive storage I'm pretty sure it took several weeks to run the initial backup, and I'd had to run the jobs only at night because sending that much data used up all our available bandwidth.

Really what I wanted was a method for importing data from ADrive to Google. If I could do that then I wouldn't have to send 100GB of data from our home server at all; I could just move things across and then my backup scripts would take care of any changes since the last successful ADrive backup. There's no such service, but wait! Google Cloud Platform is for developers to create their own services, so why not build what I needed?

When I'd signed up for Google Cloud Platform they'd given me $300 credit with a 60-day expiry, intended, I guess, to help me play around and get my app off the ground. I'd dismissed it – the only service I needed was cloud storage, and to chew through the $300 before it expired I'd have to store 7.5TB of data. But the credit allowed me to explore the other Cloud Platform offerings and more or less use whatever I wanted for free during those initial 60 days. In a couple of clicks I'd provisioned and started a Linux VM on Google's infrastructure and was at the command prompt. I wrote a two-line script to download my backup from ADrive (using rsync) to the VM, then re-upload it to Cloud Storage (using gsutil). Our home internet connection would probably max out at about 300KB/s upload – less with ADrive, where their infrastructure also seems to be something of a limiting factor. Downloading my data from ADrive to Google's VM is not super-speedy either at an average of about 1MB/s, but re-uploading it to my Cloud Storage bucket races along at a pretty staggering 12MB/s and, most importantly, all this happens without clogging up my home internet connection in any way.
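
For what it's worth, the "two-line script" really is just the two transfers chained together – something along these lines, with the ADrive hostname, username, paths and bucket name all placeholders rather than my actual details:

#!/bin/sh
# Pull the existing backup down from ADrive onto the VM's local disk...
rsync -az username@rsync.example.com:backup/ /tmp/adrive-backup/
# ...then push it up into the Cloud Storage bucket
gsutil -m rsync -r /tmp/adrive-backup gs://my-backup-bucket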

The VM is running and doing its thing as I write this. I expect it to finish in about 10 hours' time, at which point I'll run the backup scripts on my home server to upload anything that was missing from ADrive and we'll be done.

Blog

ŠŠµ Š¾ŃŃ‚Š°Š²ŃŠ¹ Š“Š½ŠµŃˆŠ½Š°Ń‚Š° рŠ°Š±Š¾Ń‚Š° Š·Š° утрŠµ

Bulgarian Proverb

Translation: “Don’t leave for tomorrow what you can do today.”

Blog

The Data Visualisation Catalogue

Last week I was writing a recommendation document relating to reporting within our business unit at work. I’d done some data gathering about the current-state situation and I had a wealth of information about a large number of reports that are produced and sent out.

I needed a visual way to summarize some of my findings.

Data Visualisation Catalogue to the rescue!

I loved this site immediately, and you just might too. You tell it what you’re trying to show, and it suggests appropriate tools.

The site is already very useful, and it’s still growing. If you want to interact with the author then twitter is your friend.

Blog

SPServices SharePoint Attachments in Internet Explorer 9

A little over eight months ago I wrote a very brief post about using SPServices to add attachments to a SharePoint list. Full credit here goes to Brendan Wilbore, who wrote the blog post that I linked to.

There was a problem, though – the solution relies on the FileReader JavaScript API, which requires Internet Explorer 10, and the default browser deployed within my organization is Internet Explorer 9. What we need is a FileReader alternative for older browsers. Thankfully, such a thing exists. Today I'm going to post some example code that uses the fileReader polyfill and works in older browsers.

What You Need

The code has several prerequisites. You'll need jQuery, jQuery UI, SPServices, SWFObject, and the JavaScript and Flash files that form the fileReader polyfill.

For the purposes of my demo I created a simple SharePoint list called "File Attachment Test." The list has a single field – Title – and attachments to the list are enabled. Your list is probably named differently, so you'll need to change the references in the code to reflect your list name.

The Code

<html>
<head>
   <meta charset="utf-8" />
   <title>File Attachment Test</title>
   <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>
   <script src="http://ajax.googleapis.com/ajax/libs/jqueryui/1.10.4/jquery-ui.min.js"></script>
   <script src="http://ajax.googleapis.com/ajax/libs/swfobject/2.2/swfobject.js"></script>
   <script src="js/jquery.FileReader.min.js"></script>
   <script src="js/jquery.SPServices-2013.01.min.js"></script>
   <script type="text/javascript">
      var selectedfile = false;

      $(document).ready(function() {
         // Swap the native file input for the Flash-based polyfill
         $('input#itemfile').fileReader({filereader: 'js/filereader.swf'});

         $('input#itemfile').change(function(e) {
            selectedfile = e.target.files[0];

            $('span#filename').html(selectedfile.name);
            $('span#fileinput').hide();
         });

         // Create the list item first, then attach the selected file to it
         $('input#createitem').click(function() {
            $().SPServices({
               operation: 'UpdateListItems',
               async: false,
               listName: 'File Attachment Test',
               batchCmd: 'New',
               webURL: '/demo',
               valuepairs: [
                  ['Title', $('input#itemtitle').val()]
               ],
               completefunc: function(xData, Status) {
                  if (Status == 'success' && $(xData.responseXML).find('ErrorCode').text() == '0x00000000') {
                     currentitem = $(xData.responseXML).SPFilterNode("z:row").attr("ows_ID");
                     alert('List item created with ID ' + currentitem);

                     if (selectedfile) {
                        filereader = new FileReader();
                        filereader.filename = selectedfile.name;

                        filereader.onload = function() {
                           // Strip the "data:<type>;base64," prefix; AddAttachment expects raw base64
                           data = filereader.result;
                           n = data.indexOf(';base64,') + 8;
                           data = data.substring(n);

                           $().SPServices({
                              operation: 'AddAttachment',
                              async: false,
                              listName: 'File Attachment Test',
                              listItemID: currentitem,
                              fileName: selectedfile.name,
                              attachment: data,
                              completefunc: function(xData, Status) {
                                 alert('File uploaded');
                              }
                           });
                        };

                        filereader.onabort = function() {
                           alert('Upload aborted');
                        };

                        filereader.onerror = function() {
                           alert('Upload error');
                        };

                        filereader.readAsDataURL(selectedfile);
                     }
                  } else alert('List item creation failed');
               }
            })
         });
      });
   </script>
</head>
<body>
   <p>Title:<br><input type="text" id="itemtitle"></p>
   <p>File:<br><span id="fileinput"><input type="file" id="itemfile"></span><span id="filename"></span></p>
   <p><input type="button" id="createitem" value="Go!"></p>
</body>
</html>

Notes

The fileReader polyfill takes the file input box and puts the Flash file on top of it, so that file selection and upload are handled by Flash instead of natively in the browser. I found that this fell apart if the file input box didn't remain in the same place on the page. In other words, I had problems if I tried to use jQuery's .show() and .hide() functions (or similar).

I solved this by putting the file selection form in a pop-up window. If the page you place your form on is static (i.e. nothing changes after the DOM is loaded) then you shouldnā€™t have this problem.
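
For illustration only (this isn't the exact code from my solution), one simple way to do the pop-up is to host the form on its own small page and open it with window.open, so that nothing on that page ever moves underneath the Flash overlay. The page name and element id below are made up:

// Open the attachment form in its own pop-up window
$('#addattachment').click(function() {
   window.open('attachform.aspx', 'attachform',
      'width=400,height=250,menubar=no,toolbar=no,resizable=yes');
});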

Enjoy!