Blog

Using JavaScript to Identify Whether a Server Exists

Recently, for reasons I'm sure I'll write about in the future, I needed to find a way to use JavaScript to test whether either of two web locations is accessible – my home intranet (which would mean the user is on my network), or the corporate intranet of the company I work for (which would mean the user is on my organization's network). The page doing this test is on the public web.

My solution for doing this test was simple. Since neither resource is accessible publicly, I put a small JavaScript file on each, then use AJAX and jQuery to try to fetch it. If that's successful, I know the user has access to whichever intranet site served the request, and my page can react accordingly.
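In rough terms, the original probe looked something like the sketch below. It's an illustration rather than my exact code: the URL and handler are hypothetical stand-ins, and it assumes jQuery is already loaded on the page.

// A sketch of the original probe: try to fetch a small JavaScript file from
// the intranet, and treat a successful load as proof that intranet is reachable.
// The URL below is a hypothetical placeholder.
function probeIntranet(probeUrl, onReachable) {
  $.getScript(probeUrl)
    .done(function () {
      onReachable(); // the file loaded, so this intranet served the request
    });
  // Failures are deliberately ignored: if the fetch doesn't succeed, nothing happens.
}

probeIntranet("http://intranet.home.example/probe.js", function () {
  // react accordingly, e.g. show home-network-only features
});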

If neither request is successful I don't have to do anything, and the user doesn't see any errors unless they choose to take a look in the browser console.

This all worked wonderfully until I enabled SSL on the page that needs to run these tests, at which point it immediately fell apart.

Both requests fail, because a page served over HTTPS is blocked from asynchronously fetching content over an insecure connection. That makes sense, but it really throws a spanner into the works for me: neither my home nor my corporate intranet site is available outside the confines of its safe network, so neither supports HTTPS.

My first attempt at getting around this was to simply change the URL prefix for each from http:// to https:// and see what happened. Neither site supports that protocol, but is the error that comes back different for a site which exists but can't respond versus a site which doesn't exist? It appears so!

Sadly, my joy at having solved the problem was extremely short-lived. The browser can tell the difference and reports as much in the console, but JavaScript doesn't have access to the error reported there. As far as my code was concerned, both scenarios were still identical, with an HTTP response code of 0 and a worryingly generic status description of "error."

We are getting closer to the solution I landed on, however. The next thing I tried was specifying the port in the URL. I used the https:// prefix to avoid the "mixed content" error, but appended :80 after the hostname to specify a port that the server was actually listening on.

This was what I was looking for. Neither server is capable of responding to an HTTPS request on port 80, but the server that doesn't exist immediately returns an error (with a status code of 0 and the generic "error" as the descriptive text), while the server that is accessible simply doesn't respond. Eventually the request times out, with a status code of 0 but a status description, crucially, of "timeout."

From that, I built my imperfect but somewhat workable solution. I fire a request off to each address, both of which are going to fail. One fails immediately, which indicates the server doesn't exist; the other times out (which I can check for in my JavaScript), indicating that the server exists and I can react accordingly.
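Roughly speaking, the check boils down to something like this – a sketch with hypothetical hostnames and handlers rather than my exact code, assuming jQuery:

// The request is guaranteed to fail; what matters is whether it fails
// immediately ("error") or times out ("timeout"). Hostnames are hypothetical.
function detectServer(host, onExists, onMissing) {
  $.ajax({
    // https:// avoids the mixed-content block; :80 targets a port the server
    // listens on but cannot answer an HTTPS request over.
    url: "https://" + host + ":80/probe.js",
    cache: false,
    timeout: 5000, // five seconds – see below for why not shorter
    error: function (jqXHR, textStatus) {
      if (textStatus === "timeout") {
        onExists(); // the server accepted the connection but never answered
      } else {
        onMissing(); // a near-instant "error" means nothing is listening there
      }
    }
  });
}

detectServer("intranet.home.example", function () {
  // the home intranet is reachable – react accordingly
}, function () {
  // nothing there – do nothing
});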

It's not a perfect solution. I set the timeout limit in my code to five seconds, which means a "successful" result can't possibly come back in less time than that. I'd like to reduce that time, but when I originally had it set at 2.5 seconds I was occasionally getting a false positive on my corporate network, caused by, y'know, an actual timeout from a request that took longer than that to return in an error state.

Nevertheless, if you have a use case like mine and you need to test whether a server exists from the client's perspective (i.e. the result of doing the check server-side is irrelevant), I know of no other way. As for me, I'm still on the lookout for a more elegant design. Next I'm going to try to figure out a reliable way to identify whether the user is connected to my home or corporate network based on their IP address. That way I can do a quick server-side check and return an immediate result.

It's good to have this to fall back on, though, and for now at least it appears to be working.

Blog

New Code Projects: Backblaze B2 Version Cleaner & VBA SharePoint List Library

It's been a while since I've posted code of any description, but I've been working on a couple of things recently that I'm going to make publicly available on my GitLab page (and my mirror repository at code.jnf.me).

Backblaze B2 Version Cleaner

I wrote last week about transitioning my cloud backup to Backblaze's B2 service, and I also mentioned a feature of it that's nice but also slightly problematic for me: it keeps an unlimited version history of all files.

That's good, because it gives me the ability to go back in time should I ever need to, but over time the size of this version history will add up – and I'm paying for that storage.

So, I've written a script that will remove old versions once a newer version of the same file has reached a certain (configurable) "safe age."

For my purposes I use 30 days, so a month after I've overwritten or deleted a file the old version is discarded. If I haven't seen fit to roll back the clock before then, my chance is gone.
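The rule at the heart of the script is simple. The sketch below is an illustration of that rule rather than the script itself – it leaves out the B2 API calls for listing and deleting file versions, and the property name is a placeholder – but it shows the decision being made:

// Illustrative only: decide which versions of a single file are safe to delete.
// "versions" is assumed to be every version of that file, newest first, each
// with an uploadTimestamp in milliseconds (a placeholder property name).
var SAFE_AGE_DAYS = 30; // the configurable "safe age"

function versionsToDelete(versions, now) {
  var safeAgeMs = SAFE_AGE_DAYS * 24 * 60 * 60 * 1000;
  var toDelete = [];
  // An older version becomes deletable once the version that superseded it
  // (the next-newest one) has been around for at least the safe age.
  for (var i = 1; i < versions.length; i++) {
    var supersededBy = versions[i - 1];
    if (now - supersededBy.uploadTimestamp >= safeAgeMs) {
      toDelete.push(versions[i]);
    }
  }
  return toDelete;
}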

Get the code here!

VBA SharePoint List Library

This one I created for work. Getting data from a SharePoint list into Excel is easy, but I needed to write Excel data to a list. I assumed there'd be a VBA function that did this for me, but as it turns out I was mistaken – so I wrote one!

At the time of writing this is at the "proof of concept" stage. It works, but it's too limited for primetime (it can only create new list items, not update existing ones, and each new item can only have a single field).

Out of necessity I'll be developing this one pretty quickly though, so check back regularly! Once it's more complete I'll be opening it up to community contributions.

I have no plans to add functions that read from SharePoint to this library, but once I have the basic framework down that wouldn't be too hard to add if you're so inclined. Just make sure you contribute back!

Get the code here!

Blog

Raspberry Pi Whole Home Audio – The Conclusion?

Welcome to what is possibly the concluding post in my Raspberry Pi Whole Home Audio Project series of posts… or possibly not.

At the start of this journey I had a plan to install mopidy on one of my Raspberry Pis and use pulse
audio
to stream the output to the others. Along the way I ran into some challenges
stemming from me buying the cheapest peripherals I could (and subsequently needing
to upgrade the WiFi adapters and power cables I first bought to better ones),
and my vision evolved as things progressed.

Instead of using mopidy, I switched to installing Kodi on each of the Pis, thanks to the OpenElec Linux distribution that's available for several types of hardware, the Pi included.

Kodi, as a full-blown media centre system, might seem like a bit of an odd choice for a headless device (i.e. something with no attached display), but it's the right choice for me for a few reasons.

  • I already have it installed on a couple of PCs
    in the house, attached to the TVs in the living room and the bedroom
  • I already have a remote
    control app
    for it on my phone
  • There are plugins for a bunch of stuff, such as this one for my favourite music streaming service. Well-written plugins integrate perfectly with the system and with the remote control app.
  • It has built-in support for acting as an AirPlay receiver

For me, these things combine to provide the best of both worlds. If I just want to play music from my library or from an internet streaming service on one set of speakers, I fire up the remote app and target the particular device I want the sound to come from.

If I want to play the same thing on several (or all) of the devices at the same time, I fire up TuneBlade on my laptop, and any sound that would usually come out of its speakers gets redirected to all the AirPlay receivers.

When it works, it's glorious. Having the same music playing in sync on all the speakers in the apartment is awesome.

The problem is that it doesn't always work. TuneBlade includes a setting that lets you choose how much of a buffer you want. If you set it too high the devices won't synchronize, because it takes a slightly different amount of time to fill the buffer on each of them. I have it set to zero, which works amazingly well most of the time but leaves me especially prone to blips in network connectivity and bandwidth. When these occur, things get out of sync (which sounds terrible, because each set of speakers is not all that far away from its neighbours), and it can't seem to recover automatically – I have to manually disconnect and reconnect the affected player to get it back in sync with its peers.

The bottom line, then, is that my setup is good, but not perfect. It's no Sonos.

The search for a perfect system will likely continue, but for the time being I'm pretty content. I spent less than $100, and I have a setup that would have cost me $5,000 from Sonos.

Blog

The Golden Ratio: Design’s Biggest Myth

The other day I watched a Criminal Minds episode where the BAU rescued some potential victims of a serial-killer mathematician by using the golden ratio and the related Fibonacci sequence (or rather, by identifying and understanding the killer's use of them).

It was an interesting episode. When I decided I wanted to read a little more about the golden ratio I found the article linked above, and that was an interesting read too.

I've used the golden ratio in design (indeed, if you're reading this by visiting my site on a large-screened device then the proportions of the left and right columns match the golden ratio).

Is it more aesthetically pleasing than different proportions would be? That's the problem with things like this that are said to affect us at a subconscious level: my conscious mind doesn't know.

The Golden Ratio: Design’s Biggest Myth

Blog

WebDAV Woes with Nginx, Sabre/Dav

I'm in the process of moving my hosting to a new server, because I wanted one that offers me more flexibility and the ability to grow the server and add resources to it during spikes in demand. I've chosen to go with Vultr (I recorded a screencast about six weeks ago showing how easy it is to set up a new server on their platform). I've also moved some non-essential hosting duties to another provider altogether, CloudAtCost.

Anyway, this is not really my point.

One of the things on the server I'm going to be decommissioning is a private WebDAV store. I don't use it for much, just moving the occasional file between computers and "publishing" my work Outlook calendar so that I can subsequently synchronize it back to my Google calendar and get notifications on my wrist. It's the WebDAV server that I've been setting up this week.

Most of the stuff that I'm moving to new servers is being moved as-is: this is not an exercise in updating things, it's about making sure I'm done with the old server by the time my lease on it expires. But there were some things about the WebDAV share that I really wanted to update, so I took the opportunity.

The main thing I wanted to achieve was to use my Windows domain username and password on the site. Most of my password-protected web tools are already set up that way, but the WebDAV share was lagging behind. This meant using "basic" authentication instead of the "digest" authentication I previously had set up, which posed another problem: Windows' built-in WebDAV client doesn't allow basic authentication over unencrypted connections (because that would mean the password is sent in the clear), so I had an SSL certificate issued. Then I found out that the Windows WebDAV client doesn't support Server Name Indication, which meant some additional reconfiguration. Since I was doing that anyway, I figured I may as well take the opportunity to update to the latest version of sabre/dav, the PHP-based WebDAV server I use (I find it much easier to set up than the built-in WebDAV functionality in web server software, which I've never been able to get working no matter which server software I'm using).

I set all this up this week, tested it out by adding it as a network location on my personal and work laptops and, once I was satisfied it was all working well, pointed the domain name at the new server and deleted the files from the old one.

Then I fired up Outlook, and hit the button to publish my
calendar.

It didn't work.

It ended up creating a file with the right name, but a size of zero bytes. A quick Google search revealed there could be many reasons for this, and since I'd made the rookie mistake of changing everything at once I really didn't know where to start – not to mention that by this time I'd deleted the original files, so I couldn't go backward. I tried everything, with no success. I spent a good chunk of my day on Tuesday troubleshooting.

All along I'd been convinced that the issue was with sabre/dav. After all, all the other server functionality was working, so what other explanation could there be for the one bit that sabre/dav was responsible for being non-functional?

After a few hours, though, I was pretty sure that I had it set up correctly, and I was convinced that I'd found a bug in either sabre/dav or nginx. I checked the nginx logs.

2015/06/23 16:24:41 [error] 18736#0: *33 client intended to
send too large body: 1945486 bytes, client: 75.159.xxx.xxx, server: xxxxxx.jnf.me, request: "PUT /Calendars/Williams_Jason_Calendar.ics HTTP/1.1", host: "xxxxxx.jnf.me"

D'oh.

All the files I'd tested the share with were very small, but my published calendar, with 30 days of history and 60 days of future events, was 1.85 MB. The server was configured to accept uploads with a maximum size of 1 MB.

I added a single line to my nginx server configuration:

client_max_body_size 100m;
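For reference, that directive can live at the http, server or location level of an nginx configuration. A minimal sketch of where it sits (the hostname and layout here are hypothetical, not my actual config):

server {
    listen 443 ssl;
    server_name dav.example.com;   # hypothetical hostname

    # Allow uploads up to 100 MB so larger WebDAV PUTs (like the published
    # calendar) aren't rejected with "client intended to send too large body".
    client_max_body_size 100m;

    location / {
        # ... hand requests to the sabre/dav PHP front controller ...
    }
}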

Done! It's so obvious when you know how.

Blog

Raspberry Pi Whole Home Audio Updates

It's been a long time since I've written about my Raspberry Pi Whole Home Audio Project.

Simply put, that's because I've hit a bit of a wall, and I'm especially busy with work right now, so I haven't been able to find the time to work my way around it.

The problem is that the USB WiFi adapters I bought (for about $5 each) don't perform well. They have signal-strength issues, and while they do work and maintain a network connection, the poor signal strength means the connection isn't fast enough to stream audio. There are plenty of other people out there having the same problem. You get what you pay for, I guess, and I need to buy replacement adapters.

I'm also considering a change in direction. My original plan was to install mopidy on one of the Pis and use pulse audio to stream the output to the others.

I'm considering instead installing TuneBlade on one of my Windows PCs. TuneBlade takes all the audio output from that computer and streams it using Apple's AirPlay protocol. I'd then install ShairPort on all the Pis to turn them into AirPlay receivers.

What do you guys think?

Blog

[Video: https://www.youtube.com/watch?v=rodL7zcINJo]

Just a couple of days ago I wrote a little bit about how cloud servers are such a commodity item now, easily created and destroyed.

Today I wanted a server to test out a new tool, but I didn't want to risk any impact to my existing production servers. So I created a new one on Vultr. The time from when I started to when I had a running server was just over a minute, and I recorded a screencast.

When I was done testing a couple of hours later I destroyed the server. The total cost to me for this exercise was about $0.02 – or it would have been, were it not for the fact that Vultr gave me a $5 account credit when I signed up.

It's hardly riveting viewing, but it's nevertheless amazing in its own way.

Blog

Server Commoditization

I've had a personal website of one description or another for a long time now. For much of that time, the site was hosted by renting space on someone else's large server – so-called "shared hosting."

The theoretical problem with this model was that the server's resources were shared between all its users, and if one user chewed through a whole lot of them then that left fewer available for everyone else. I'm not sure I ever actually experienced this (although I'm sure it really was an issue for web hosting companies to contend with), but the problem I did come across was that, to protect against this kind of thing, hosts often put very restrictive policies and configuration options in place. Related to this is the fact that server configuration options apply to everyone with space on that server, and they're not for individual users to control – a problem if you want to do anything that deviates even slightly from the common case.

The alternative to shared web hosting would have been to rent an entire server. This was – and still is – an expensive undertaking. It was also – and still is – far more power than I need in order to host my website. Sure, it's possible to build a lower-powered (cheaper) server, but the act and cost of putting it in a datacentre to open it up to the wider world mean that it's probably not a worthwhile exercise with low-cost hardware.

What seems to me like not very long ago, virtualization technology took off and created a market for virtual private servers (VPSs). This allowed server owners to divide their hardware up between users, but in contrast to shared hosting, each user gets something that's functionally indistinguishable from a real hardware computer. They can configure it however they wish, and it comes with a guaranteed chunk of resources: heavy usage of one of the virtual machines hosted on the server does not negatively impact the performance of any of the others.

This is the model under which my website is currently hosted. I've chosen a low-powered VPS because that's all I need, but recently, as my site has started to see more traffic, it has occasionally seen spikes that tax its limited memory and processing resources. I use CloudFlare to balance this out, mitigate threats, easily implement end-user caching policies and generally improve speeds (particularly for users who are geographically far away from the server), but once my server's resources are maxed out there's nothing I can do about it: my host has divided the server up into VPSs of a predefined size, and doesn't allow me to grow or shrink the server along with my needs.

The new paradigm is an evolution of this. Instead of dividing each bare-metal server up into predefined VPS chunks, each server is a pool of resources within which VPSs of various sizes are automatically provisioned according to customer requirements. Behind the scenes, the technology has grown to make this easier, especially when you scale the story up to more than one bare-metal server. A pool of physical servers can also pool resources: if a VPS hosted on one physical server needs to grow beyond the remaining available resources of its host, it can be invisibly moved to another host while it's still running and then have its resources expanded.

This new paradigm is the one I plan to move to. Led by the likes of Amazon and Google, and now followed in the marketplace by lower-cost providers like DigitalOcean and Vultr (likely to be my provider of choice), servers have really become commodity items that can be created and destroyed at will. You used to rent servers and hosting by the month or year; now it's by the minute or hour. It's common for hosting companies to provide an API that lets you automate the processes involved – if my server detects that it's seeing a lot of traffic and is running low on resources it could, with the right script implemented, autonomously decide to grow itself, or maybe spin up a sibling to carry half the load. When things settle down it can shrink itself back down or destroy any additional servers it created.
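I haven't written that script, but the shape of it is simple enough. Something like the sketch below, run on a schedule, would do it – and to be clear, this is purely hypothetical: the thresholds are made up and the API call is a placeholder rather than any real provider's interface. It assumes Node.js, just to have something concrete to read the load average with.

// Purely hypothetical sketch: nothing here is a real provider API.
var os = require("os");

// Placeholder for whatever the provider's API actually offers
// (resize this server, create a sibling, destroy it again, ...).
function callProviderApi(action) {
  console.log("Would call the hosting API to: " + action);
}

function checkAndScale() {
  // One-minute load average, normalised by the number of CPU cores.
  var load = os.loadavg()[0] / os.cpus().length;

  if (load > 0.8) {
    callProviderApi("grow this server (or spin up a sibling)");
  } else if (load < 0.2) {
    callProviderApi("shrink back down (or destroy the sibling)");
  }
}

// Check every five minutes.
setInterval(checkAndScale, 5 * 60 * 1000);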

What a wonderful world we live in!