
WebDAV Woes with Nginx, Sabre/Dav

I'm in the process of moving my hosting to a new server, because I wanted one that offers me more flexibility, and the ability to grow the server and add resources to it during spikes in demand. I've chosen to go with Vultr (I recorded a screencast about six weeks ago showing how easy it is to set up a new server on their platform). I've also moved some non-essential hosting duties to another provider altogether, CloudAtCost.

Anyway, this is not really my point.

One of the things on the server I'm going to be decommissioning is a private WebDAV store. I don't use it for much, just moving the occasional file between computers and "publishing" my work Outlook calendar so that I can subsequently synchronize it back to my Google calendar and get notifications on my wrist. It's the WebDAV server that I've been setting up this week.

Most of the stuff that I'm moving to new servers is being moved as-is: this is not an exercise in updating things, it's about making sure I'm done with the old server by the time my lease on it expires. But there were some things about the WebDAV share that I really wanted to update, so I took the opportunity.

The main thing I wanted to achieve was to use my Windows domain username and password on the site. Most of my password-protected web tools are already set up that way, but the WebDAV share was lagging behind. This posed another problem, since it means I have to use "basic" authentication instead of the "digest" authentication I previously had set up. Windows' built-in WebDAV client doesn't allow basic authentication over unencrypted connections (because that would mean sending the password in the clear), so I had an SSL certificate issued. Then I found out that the Windows WebDAV client doesn't support Server Name Indication (SNI), which meant some additional reconfiguration, and since I was doing that I figured I may as well take the opportunity to update to the latest version of sabre/dav, the PHP-based WebDAV server I use. (I find it much easier to set up than the built-in WebDAV functionality in web server software, which I've never been able to get working no matter which server software I'm using.)
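In case it helps anyone attempting the same thing, the nginx side ends up looking roughly like the sketch below. Treat it as a sketch, not my exact configuration: the hostname, certificate paths and PHP-FPM socket are placeholders, and it assumes sabre/dav's server.php front controller sits in the web root. The default_server flag is the workaround for the SNI problem.

# Minimal sketch of an nginx server block for a sabre/dav share over SSL.
# Hostname, paths and PHP-FPM socket below are assumptions, not my setup.
server {
    # default_server: clients that can't send SNI (like the Windows
    # WebDAV client) still get this site's certificate on this port.
    listen 443 ssl default_server;
    server_name dav.example.com;

    ssl_certificate     /etc/nginx/ssl/dav.example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/dav.example.com.key;

    root /var/www/sabredav;

    # Hand every request to sabre/dav's front controller, preserving
    # the original path so it can route WebDAV methods itself.
    location / {
        try_files $uri $uri/ /server.php$uri$is_args$args;
    }

    location ~ ^(.+\.php)(.*)$ {
        fastcgi_split_path_info ^(.+\.php)(.*)$;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
        include fastcgi_params;
    }
}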

I set all this up this week, tested it out by adding it as a network location on my personal and work laptops, and, once I was satisfied it was all working well, pointed the domain name at the new server and deleted the files from the old one.

Then I fired up Outlook, and hit the button to publish my
calendar.

It didnā€™t work.

It ended up creating a file with the right name, but a size of zero bytes. A quick Google search revealed there could be many reasons for this, and since I'd made the rookie mistake of changing everything at once I really didn't know where to start, not to mention that by this time I'd deleted the original files, so I couldn't go back. I tried everything, with no success. I spent a good chunk of my day on Tuesday troubleshooting.

All along I'd been convinced that the issue was with sabre/dav. After all, all the other server functionality was working, so what other explanation could there be for the one bit of it that sabre/dav was responsible for being non-functional?

After a few hours, though, I was pretty sure that I had it set up correctly, and I was convinced that I'd found a bug in either sabre/dav or nginx. I checked the nginx logs.

2015/06/23 16:24:41 [error] 18736#0: *33 client intended to send too large body: 1945486 bytes, client: 75.159.xxx.xxx, server: xxxxxx.jnf.me, request: "PUT /Calendars/Williams_Jason_Calendar.ics HTTP/1.1", host: "xxxxxx.jnf.me"

D'oh.

All the files I'd tested the share with were very small, but my published calendar, with 30 days of history and 60 days of future events, was 1.85 MB. The server was configured to accept uploads with a maximum size of 1 MB (nginx's default).

I added a single line to my nginx server configuration:

client_max_body_size 100m;
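That directive is valid in the http, server and location contexts; in a setup like the sketch above it would sit inside the server block, something like this:

server {
    # ... ssl and sabre/dav configuration as above ...

    # nginx defaults to a 1m limit on request bodies, which is what
    # produced the "client intended to send too large body" error.
    client_max_body_size 100m;
}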

Done! It's so obvious when you know how.


Screencast: https://www.youtube.com/watch?v=rodL7zcINJo

Just a couple of days ago I wrote a little bit about how cloud servers are such a commodity item now, easily created and destroyed.

Today I wanted a server to test out a new tool, but I didn't want to risk any impact to my existing production servers. So I created a new one on Vultr. From start to running server took just over a minute, and I recorded a screencast.

When I was done testing a couple of hours later I destroyed the server. Total cost to me for this exercise was about $0.02, or it would have been were it not for the fact that Vultr gave me a $5 account credit when I signed up.

It's hardly riveting viewing, but it's nevertheless amazing in its own way.


Server Commoditization

I've had a personal website of one description or another for a long time now. For much of that time, the site was hosted by renting space on someone else's large server – so-called "shared hosting."

The theoretical problem with this model was that the server's resources were shared between all its users, and if one user chewed through a whole lot of them then that left fewer available for everyone else. I'm not sure I ever actually experienced this (although I'm sure it really was an issue for web hosting companies to contend with), but the problem I did come across was that, to protect against this kind of thing, hosts often put policies and configuration options in place that were very restrictive. Related to this is the fact that server configuration options apply to everyone with space on that server, and they're not for individual users to control. That's a problem if you want to do anything that deviates even slightly from the common case.

The alternative to shared webhosting would have been to rent an entire server. This was – and still is – an expensive undertaking. It also was – and still is – far more power than I need in order to host my website. Sure, it's possible to build a lower-powered (cheaper) server, but the act and cost of putting it in a datacentre to open it up to the wider world mean that it's probably not a worthwhile exercise to do all that with low-cost hardware.

What seems to me like not very long ago, virtualization technology took off and created a market for virtual private servers (VPSs). This allowed server owners to divide their hardware up between users, but in contrast to shared hosting each user gets something that's functionally indistinguishable from a real hardware computer. They can configure it however they wish, and it comes with a guaranteed chunk of resources: heavy usage of one of the virtual machines hosted on the server does not negatively impact the performance of any of the others.

This is the model under which my website is currently hosted. I've chosen a low-powered VPS because that's all I need, but as my site has started to see more traffic it occasionally sees spikes that tax its limited memory and processing resources. I use CloudFlare as a service to balance this out, mitigate threats, easily implement end-user caching policies and generally improve speeds (particularly for those users that are geographically far away from the server), but once my server's resources are maxed there's nothing I can do about it: my host has divided the server up into VPSs of a predefined size, and doesn't allow me to grow or shrink the server along with my needs.

The new paradigm is an evolution of this. Instead of dividing each bare-metal server up into predefined VPS chunks, each server is a pool of resources within which VPSs of various sizes are automatically provisioned according to customer requirements. Behind the scenes, technology has grown to make this easier, especially when you scale the story up to more than one bare-metal server. A pool of physical servers can also pool resources. If a VPS hosted on one physical server needs to grow beyond the remaining available resources of its host, it can be invisibly moved to another host while it's still running and then have its resources expanded.

This new paradigm is the one I plan to move to. Led by the likes of Amazon and Google, and now followed in the marketplace by lower-cost providers like DigitalOcean and Vultr (likely to be my provider of choice), servers have really become commodity items that can be created and destroyed at will. You used to rent servers/hosting by the month or year; now it's by the minute or hour. It's common for hosting companies to provide an API that lets you automate the processes involved: if my server detects that it's seeing a lot of traffic and is running low on resources, it could, with the right script implemented, autonomously decide to grow itself, or maybe spin up a sibling to carry half the load. When things settle down it can shrink itself back down or destroy any additional servers it created.

What a wonderful world we live in!