
SPServices SharePoint Attachments in Internet Explorer 9

A little over eight months ago I wrote a very brief post about using SPServices to add attachments to a SharePoint list. Full credit here goes to Brendan Wilbore who wrote the blog post that I linked to.

There was a problem, though – the solution relies on the FileReader JavaScript feature, which requires Internet Explorer 10, and the default browser deployed within my organization is Internet Explorer 9. What we need is a FileReader alternative for older browsers. Thankfully, such a thing exists. Today I'm going to post some example code that uses the fileReader polyfill and works in older browsers.

What You Need

The code has several prerequisites. You'll need jQuery, jQuery UI, SPServices, SWFObject, and the JavaScript and Flash files that form the fileReader polyfill.

For the purposes of my demo I created a simple SharePoint list called "File Attachment Test." The list has a single field – Title – and attachments to the list are enabled. Your list is probably named differently, so you'll need to change the references in the code to reflect your list name.

The Code

<html>
<head>
   <meta charset="utf-8" />
   <title>File Attachment Test</title>
   <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>
   <script src="http://ajax.googleapis.com/ajax/libs/jqueryui/1.10.4/jquery-ui.min.js"></script>
   <script src="http://ajax.googleapis.com/ajax/libs/swfobject/2.2/swfobject.js"></script>
   <script src="js/jquery.FileReader.min.js"></script>
   <script src="js/jquery.SPServices-2013.01.min.js"></script>
   <script>
      var selectedfile = false;

      $(document).ready(function() {
         $('input#itemfile').fileReader({filereader: 'js/filereader.swf'});

         $('input#itemfile').change(function(e) {
            selectedfile = e.target.files[0];

            $('span#filename').html(selectedfile.name);
            $('span#fileinput').hide();
         });

         $('input#createitem').click(function() {
            $().SPServices({
               operation: 'UpdateListItems',
               async: false,
               listName: 'File Attachment Test',
               batchCmd: 'New',
               webURL: '/demo',
               valuepairs: [
                  ['Title', $('input#itemtitle').val()]
               ],
               completefunc: function(xData, Status) {
                  if (Status == 'success' && $(xData.responseXML).find('ErrorCode').text() == '0x00000000') {
                     currentitem = $(xData.responseXML).SPFilterNode("z:row").attr("ows_ID");
                     alert('List item created with ID ' + currentitem);

                     if (selectedfile) {
                        filereader = new FileReader();
                        filereader.filename = selectedfile.name;

                        filereader.onload = function() {
                           data = filereader.result;
                           n = data.indexOf(';base64,') + 8;
                           data = data.substring(n);

                           $().SPServices({
                              operation: 'AddAttachment',
                              async: false,
                              listName: 'File Attachment Test',
                              listItemID: currentitem,
                              fileName: selectedfile.name,
                              attachment: data,
                              completefunc: function(xData, Status) {
                                 alert('File uploaded');
                              }
                           });
                        };

                        filereader.onabort = function() {
                           alert('Upload aborted');
                        };

                        filereader.onerror = function() {
                           alert('Upload error');
                        };

                        filereader.readAsDataURL(selectedfile);
                     }
                  } else alert('List item creation failed');
               }
            });
         });
      });
   </script>
</head>
<body>
   <p>Title:<br><input type="text" id="itemtitle"></p>
   <p>File:<br><span id="fileinput"><input type="file" id="itemfile"></span><span id="filename"></span></p>
   <p><input type="button" id="createitem" value="Go!"></p>
</body>
</html>

Notes

The fileReader polyfill takes the file input box and puts the Flash file on top of it, so that file selection and upload are handled by Flash instead of natively in the browser. I found that this fell apart if the file input box didn't remain in the same place on the page. In other words, I had problems if I tried to use jQuery's .show() and .hide() functions (or similar).

I solved this by putting the file selection form in a pop-up window. If the page you place your form on is static (i.e. nothing changes after the DOM is loaded) then you shouldn't have this problem.
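
If you need something similar, the pop-up can be as simple as a window.open() call from the parent page. A minimal sketch (the attachform.html page and the input#attach button are hypothetical names, standing in for a page containing the form markup above):

   // Open the file selection form in its own small pop-up window, where
   // nothing moves after the DOM loads and the Flash overlay stays put.
   $('input#attach').click(function() {
      window.open('attachform.html', 'attach', 'width=400,height=200');
   });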

Enjoy!


CloudFlare Adds SSL To All Customers In Advance Of Google's Focus On Security

I’ve written recently about SSL and how you can enable it on your website without spending a lot of (or even any) money. I’m a big fan of Cloudflare and their free service offering, and this feature just makes it better still.



Publicly Trusted SSL on the Cheap

Last week I wrote about how to create a self-signed SSL certificate for your website. It turned out to be one of my most popular posts, and the process turned out to be remarkably easy: you run one single command, make a quick change to your webserver configuration, and you're done.

Our self-signed certificate worked great for encrypting the connection between our browser and the webserver, but as I mentioned, that's only half the SSL story. Our certificate wasn't trusted by our operating system, which means it couldn't be used by our browser to confirm the identity of the server we'd connected to, which in turn means that visitors to our website are greeted with a big, bold "your connection is not secure" error message.

Our browser knows whether or not it can trust a given SSL certificate through a hierarchical structure. I'm glossing over some details, but essentially our operating system comes with a list of trusted "root" certificates. The owners of these root certificates can produce certificates for their customers, much as I produced one for myself last week. The difference is that there's a mechanism for traceability here – the certificates they produce are trusted because our browser can trace things back to the root certificate that it already knows to be good.
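
If you're curious what that chain looks like in practice, OpenSSL (which we used last week) can show you the certificates a site presents, from the site's own certificate up towards the root. Substitute any https site you like:

   # Print the chain of certificates presented by an SSL-enabled site
   openssl s_client -connect www.google.com:443 -showcerts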


I'm not suggesting there's some kind of conspiracy at play here, but it seems to me the owners of these root certificates have a metaphorical license to print money. They can create something out of nothing with, in essence, a single command, and sell it for a value they determine. I might be OK with that if they hadn't determined that the value is so insanely high.

Luckily for us there are market forces at play in this whole story, and we don't have to pony up the $1,500 Symantec are asking to secure our website traffic. We're going to do it for free. Read on!

SSL Certificates and Their Value

Unfortunately, budget-minded certificate providers are few and far between, and the trend appears to be that they're either disappearing or eliminating their lowest-cost options in favour of "better," higher-priced ones. NameCheap is a good option if you're looking to minimize costs, with certificates starting at around $10 at the time of writing.

But here's the question: if you can get an SSL certificate from them for $10, why are Symantec charging $1,500? Is their option 150x better?

Hereā€™s my answer: No.

Symantec would likely argue that point, though, as you might imagine. They'd mention that they put their customers through a more stringent identification process in order to provide an increased level of confidence in their product. They know their customers, and they know they're only issuing certificates to trustworthy sites. They'd argue they provide a warranty with their certificates that gives their customers legal protection against losses caused by a security breach.

This is all well and good, of course, but does the typical internet user care? I'd propose that the average site visitor – at best – notices the green padlock icon in the address bar and proceeds with confidence upon seeing it. How much you, as a site owner, pay to get that padlock icon really makes no difference to the vast majority of your visitors.

That all being said, it of course depends on what your site does. If you're a bank, this is not an area you should be trying to save money in. Take the expensive certificate with the warranty and the legal protection. If you run an e-commerce site and your livelihood depends on your website then maybe don't spend $1,500, but don't accept the reputational risk of using a product with no warranty and limited support. If you're me, though? Whatever, just spend as little money as possible.

Getting a Free SSL Certificate

Enter our new best friends at StartSSL. They offer single-site SSL certificates for the extremely reasonable price of free. There are some caveats, as you might expect, but none of them are a show-stopper for my purposes. Nevertheless, the biggest thing you should consider is that although they'll issue the certificate for free, you will have to pay if you ever need to revoke it. If you ever suffer a security breach and suspect that your certificate file has fallen into the wrong hands (I'm talking about the equivalent of the server.pem file we created for ourselves last week), it should be revoked to prevent some nefarious person setting up a site that masquerades as yours. If there's ever another vulnerability similar to the heartbleed bug then the certificate should likewise be revoked.

In a nutshell, this is a risk tolerance question. By taking the free certificate you're betting that nothing bad will happen during the 12-month life of your certificate, or that if it does you'll be prepared to accept a whole host of new risks.

Since I was OK with the many drawbacks of using a self-signed certificate, I laugh in the face of risks like the ones mentioned above. If you're different then do your homework and make sure you're getting a product that's right for you, but if you're like me then tune in next week, when I'll walk through the steps of getting a free certificate issued and installing it on my server.



Single Sign-On (SSO) in PHP

There's a project underway at work called single sign-on and identity and access management. I'm not involved in it directly, although by its nature it touches on several things that I am working on at the moment. The goal, as the name implies, is to rid ourselves entirely of multiple sets of credentials: anything we use should have the same login ID and password, whether it's one of our hosted systems (which, to be fair, already behave this way for the most part) or a third-party system like the webapp that we use to deliver training.

Since I'm not directly working on it, this project is not really anything more than a blip on my radar, but it's interesting to me because I'm attempting to do a similar thing at home, albeit on an entirely different scale to the large enterprise-wide project that I hear about in my professional life.

After the recent upgrade to my home server that I've blogged about before, I now have several virtual servers included in our home network setup. One of these runs Windows Server 2008 R2, and I've made that one a domain controller that all the other computers (and servers) connect to. There are several benefits to this approach, but chief amongst them is a single set of credentials – I use the same username and password regardless of which of our home computers I'm logging on to, and when I change my password I change it once for it to be effective everywhere.

There are a few web services running on our home network which require signing in, such as a web interface for centralized torrent downloads, a file browser, and a simple content management system that pulls everything together into an intranet of sorts. Most of these are PHP-based, and I'm on a mission to add SSO capability to these too.

I've discovered two main methods of enabling SSO in PHP that I'll write about after the break, and my eventual plan is to tie the two methods together into a single cohesive sign-on module that I can reuse. Read on to find out what I'm up to!

LDAP (Lightweight Directory Access Protocol) Authentication

Wikipedia defines LDAP as "an open, vendor-neutral, industry standard application protocol for accessing and maintaining distributed directory information services over an Internet Protocol (IP) network."

That's a lot of fancy words for saying that LDAP provides an address book (think of the global address listing you see in Outlook, and you're thinking of an LDAP database). PHP has a set of LDAP extensions that can be used to query the address book and retrieve user information, but in the context of authentication, we don't even need to worry about any of that. An LDAP server can (depending on the implementation) be queried anonymously, or we can pass in some credentials with the query to get more detailed information back (again, depending on the implementation).

It's this last part that's important. Active Directory on a Windows domain controller is an LDAP server. In PHP, all we have to do is attempt to log on to the LDAP server. If we're successful, it's because the username and password that we input are valid on the domain. Even better, "valid on the domain" in this case means it's an active account, the account is not locked out, and all other account-level restrictions (such as a restricted set of logon hours) are considered.

All of this makes using LDAP to test the authenticity of a set of supplied credentials pretty trivial:

<?php
   $username = "testuser";
   $password = "pa55w0rd";

   $domain = "testdomain.local";
   $domaincontroller = "dc1.testdomain.local";

   $ldap = ldap_connect($domaincontroller);
   ldap_set_option($ldap, LDAP_OPT_PROTOCOL_VERSION, 3); // Active Directory expects LDAPv3

   if ($bind = ldap_bind($ldap, $username."@".$domain, $password)) {
      // user login successful
   } else {
      // user login failed
   }
?>

That's all there is to it!

Depending on what you had in mind when you read "SSO" in the title of this post, though, we may not have met your requirements here. If we meant that the user has a single set of credentials then fantastic – they do! But if our intention was to only require that a user enters their single set of credentials once (when they log on to Windows) then we've fallen short here. The code above requires the username and plaintext password, so we'd have to present some kind of web-based login form to the user to request that information and get all this to work.

Enter NT LAN Manager (NTLM) Authentication

If a website (or intranet site) is part of the intranet or trusted zones (found in the Internet Settings control panel applet) then that site is allowed to pass a header requesting NTLM authentication. When it does, Windows passes a header back containing some information about the currently logged-in user, without the user being prompted for their credentials in any way.

I obtained some simple example code from someone called loune at Siphon9.net and modified it so that it doesn't require Apache as the webserver. Here's the PHP:

<?php
   if (!isset($_SERVER['HTTP_AUTHORIZATION'])){
      header('HTTP/1.1 401 Unauthorized');
      header('WWW-Authenticate: NTLM');
      exit;
   }

   $auth = $_SERVER['HTTP_AUTHORIZATION'];

   if (substr($auth, 0, 5) == 'NTLM ') {
      $msg = base64_decode(substr($auth, 5));
      if (substr($msg, 0, 8) != "NTLMSSP\x00")
         die('error header not recognised');

      if ($msg[8] == "\x01") {
         $msg2 = "NTLMSSP\x00\x02\x00\x00\x00".
            "\x00\x00\x00\x00". // target name len/alloc
            "\x00\x00\x00\x00". // target name offset
            "\x01\x02\x81\x00". // flags
            "\x00\x00\x00\x00\x00\x00\x00\x00". // challenge
            "\x00\x00\x00\x00\x00\x00\x00\x00". // context
            "\x00\x00\x00\x00\x00\x00\x00\x00"; // target info len/alloc/offset

         header('HTTP/1.1 401 Unauthorized');
         header('WWW-Authenticate: NTLM '.trim(base64_encode($msg2)));
         exit;
      }

      else if ($msg[8] == "\x03") {

         function get_msg_str($msg, $start, $unicode = true) {
            $len = (ord($msg[$start+1]) * 256) + ord($msg[$start]);
            $off = (ord($msg[$start+5]) * 256) + ord($msg[$start+4]);
            if ($unicode)
               return str_replace("\0", '', substr($msg, $off, $len));
            else
               return substr($msg, $off, $len);
         }

         $ntlm_user = get_msg_str($msg, 36);
         $ntlm_domain = get_msg_str($msg, 28);
         $ntlm_workstation = get_msg_str($msg, 44);
      }
   }

   echo "You are $ntlm_user from $ntlm_domain/$ntlm_workstation";
?>

There's a big problem with this code, and the problem is that it's just decoding the user information from the HTTP header and assuming that all is good – there's no work done to confirm that the header is genuine, and there is a possibility that it could have been faked. We could do some tricks like confirming that the page request is coming from within our local network, but that doesn't really solve the problem – HTTP headers can be manually defined by an attacker that knows what they're doing, and what we're doing here is a bit like asking for a username and then just trusting that the user is who they say they are without doing any further authentication.

Combining the Two Approaches

Included in the NTLM authorization header that gets sent to the webserver during the passwordless authentication interaction described above is an MD4 hash of the user's password. A newer version of loune's code retrieves this and confirms its validity using Samba. Unfortunately that setup won't work for me – my intranet webserver is running a customized version of Samba that comes with the software I use to manage the Linux computers that are attached to my domain, and this trick just flat-out fails.

However, if I have a plaintext version of the user's password then I can use PHP to generate an MD4 hash of it for the purposes of comparison. So here's my plan:

Scenario A: The first time a user comes to my webapp, we'll get their credentials using NTLM, including the MD4 hash of their password. Since we won't know if this hash is valid, we'll present the user with a screen asking them to confirm their password (but not their username). When they input it, we'll confirm that their username and password combo is good using LDAP, and also generate an MD4 hash of the plaintext password that they entered to compare with what NTLM gave us. If nothing weird is going on, everything should match. At this point we'll store the MD4 password hash for future use.

Scenario B: When a user returns to our webapp, we'll get their credentials using NTLM as before, and compare the hash NTLM gave us to our stored hash from their previous visit. If they match, we're good, and there's no need to ask the user to enter their password.

Scenario C: If the NTLM hash and the stored hash don't match, then the most likely scenario is that the user has changed their Windows password since their previous visit to our webapp. In that case we'll throw out the stored hash and start again at Scenario A.
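
The hashing at the heart of Scenario A is only a line or two of PHP. A hedged sketch (the MD4 hash NTLM works with – the NT hash – is computed over the UTF-16LE encoding of the password, and $ntlm_hash here is a hypothetical variable holding the hash extracted from the NTLM exchange):

<?php
   // The NT hash is MD4 computed over the password's UTF-16LE encoding,
   // so convert the plaintext before hashing.
   function nt_hash($password) {
      return strtoupper(hash('md4', iconv('UTF-8', 'UTF-16LE', $password)));
   }

   // Scenario A: the user has re-entered their password, so hash it and
   // compare the result with the hash we pulled out of the NTLM exchange.
   if (nt_hash($_POST['password']) === $ntlm_hash) {
      // everything matches - store the hash for future visits
   }
?>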

If anyone knows of a better approach (is there a Centrify Kerberos tool that I could use to get an MD4 hash of the user's password for the purposes of my comparison, for example?) then please let me know! I'd love to be able to achieve true passwordless SSO, but so far I can't find a method for doing so unless I switch my webserver from Linux to Windows, and I don't want to do that.


Creating a Self-Signed SSL Certificate for Lighttpd

You've probably heard of SSL, or at least know it from either the https:// prefix that you see when browsing certain websites or the padlock icon in your browser's address bar that goes along with it.


You probably also know that this icon's presence is an absolute must when you're doing sensitive things on the internet, like online banking. Really, though, you should consider it a must on any site where you're entering information that you wouldn't want falling into the wrong hands – including your username and password for the site itself and anything you do on the site once you have logged in.

SSL does two important things. It encrypts the connection between your browser and the site's webserver, meaning that if somebody has the ability to listen in to your internet traffic (which is actually frighteningly easy, especially if you're using a public WiFi hotspot) then they won't actually see any of your important personal details. SSL also provides identity verification for websites, in order to thwart a more complex attack where your web traffic is somehow redirected to a fake version of the site. Today we're going to tackle only the first part – encrypting the connection between the browser and my webserver, which is running lighttpd.

Recently I've created a web interface that allows me access to my documents from anywhere on the web. To log in I have to enter my user ID and password for my home network, and once I'm logged in I may want to open a file that includes some sensitive information. This whole interaction is something that should be protected end to end by SSL, so that's precisely what I'm going to do.

Creating an SSL Certificate

Of the two things SSL can do for us (securing a connection and confirming the identity of the webserver we're connected to), the first part is actually much easier than you might think. The catch, as we'll discover, is that doing only that first part has some problems that make it unsuitable for a typical public website. More on that later, but in my scenario, where I have a website that's intended only for my use, this will be an acceptable solution, and that's what we're going to do.

On the webserver, navigate to a directory where you're going to store the SSL certificate file. This directory should not be web-accessible. We're going to use OpenSSL to create our SSL certificate. OpenSSL is unfortunately best known for introducing the heartbleed bug that caused a panic in the not too distant past, so before you proceed make sure the version you have is not affected. The step we're about to complete actually won't be impacted even if your server is vulnerable to heartbleed, but the day to day use of any certificate on a vulnerable server is not safe.

Ready? Good. Type the following command:

openssl req -new -x509 -keyout server.pem -out server.pem -days 365 -nodes

OpenSSL will ask a few questions, the answers to which will form a part of the certificate we're generating (and be visible to site visitors, if they choose to go looking for it). Everything is fairly self-explanatory with the possible exception of the Common Name field. Since we're going to be using this certificate for web-based SSL, the Common Name must be the hostname (the full domain name, including the www prefix if you use it) of your website.

Country Name (2 letter code) [AU]:CA
State or Province Name (full name) [Some-State]:Alberta
Locality Name (eg, city) []:Calgary
Organization Name (eg, company) [Internet Widgits Pty Ltd]:JnF.me
Organizational Unit Name (eg, section) []:Hosting and web services
Common Name (eg, YOUR name) []:www.ssl-example.jnf.me
Email Address []:[email protected]

You'll find that you now have a file called server.pem in your working folder, and that's it! This is your SSL certificate that will be used to secure the connection.
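
Incidentally, if you want to double-check what you've just created, OpenSSL can decode the certificate and print back the details you entered along with its validity dates:

   # Show the subject and validity dates of the certificate we just generated
   openssl x509 -in server.pem -noout -subject -dates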

Enabling SSL in Lighttpd

Now we need to configure our webserver to use SSL with the certificate weā€™ve just generated. As I noted, my webserver is lighttpd. If youā€™re using Apache, IIS, Nginx or something else then the steps you need to follow will be different.

For lighttpd, open up your lighttpd.conf file (typically found in /etc/lighttpd) and adjust your configuration to look similar to the following:

$SERVER["socket"] == ":80" {
   url.redirect = ("^/(.*)" => "https://www.ssl-example.jnf.me/$1")
}

$SERVER["socket"] == ":443" {
   ssl.engine = "enable"
   ssl.pemfile = "/path/to/server.pem"
   server.document-root = "/path/to/web-root"
}

The first section identifies any traffic that reaches port 80 on our webserver (http) and redirects the user to the https version of the site. The second section applies to traffic reaching port 443 (https), enables lighttpd's SSL engine, and provides the paths to the server.pem file that we generated and to the appropriate content.

Restart lighttpd for the changes to take effect:

sudo /etc/init.d/lighttpd restart

And that's it! Traffic to our site is now encrypted using SSL.
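
If you'd like proof beyond what the browser shows you, OpenSSL can perform a test handshake from any machine (substitute your own hostname):

   # Connect on port 443 and print the certificate the server presents
   openssl s_client -connect www.ssl-example.jnf.me:443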

Identity Verification

As I alluded to earlier in the article, though, there's a problem. When you navigate to the site in your browser you see (depending on your browser of choice) something similar to the following on your screen.

image

It's not particularly specific, but clearly Chrome has an issue with our setup.

The problem here is the one I alluded to earlier, and if you click the padlock icon in the address bar then Chrome will give you some additional details that show you what I mean.

image

Our connection is indeed secured by SSL as we'd hoped, but Chrome has been unable to verify the identity of the website we're connecting to. This is not a surprise – since we created the SSL certificate ourselves, our browser has no means of knowing whether the certificate should be trusted or not. This is why self-signed certificates are not suitable for public, production websites.

Since this site is going to be for my use only, I can live with it. The important thing is that my connection is encrypted, and if I hit the Advanced link then I have an option to ignore the warnings and proceed to the site. I don't want to do that every time if I can avoid it, though, and the solution is to add the site's SSL certificate to your computer's Trusted Root Certificate Authorities store. Chrome (on Windows) and Internet Explorer both use this same location when checking the validity of SSL certificates, so the easiest way to go about doing this is actually to open the site in Internet Explorer and then complete the following steps, which I took from a helpful user on Stack Overflow:

  1. Browse to the site whose certificate you want to trust.
  2. When told There is a problem with this website’s security certificate, choose Continue to this website (not recommended).
  3. Click on Certificate Error at the right of the address bar and select View certificates.
  4. Click on Install Certificate… then in the wizard, click Next.
  5. On the next page select Place all certificates in the following store.
  6. Click Browse, select Trusted Root Certification Authorities, and click OK.
  7. Back in the wizard, click Next, then Finish.
  8. If you get a security warning message box, click Yes.
  9. Dismiss the message box with OK.
  10. Restart your computer.
  11. When you return to the site in either Internet Explorer or Chrome, you should find that the certificate is now trusted.

All done!


Learn Version Control with Git

Recently I’ve been reading the eBook “Learn Version Control with Git” online, and I’d recommend it.


I’ve been using Git for a while, but certainly not to its full potential, and not even really for its intended purpose. The book is great because it:

“…doesn’t require a master’s degree in computer science to read it. It’s aimed at beginners of programming, at designers, at project managers… It tries not to require too much prior knowledge on the technical side. It tries to go slowly.”

You can read it online for free, or purchase it in PDF, ePub or Mobi format for your device.

Enjoy!


Making Google Analytics Work for Me (and You)

When I put my website together back whenever it was that I did that, I knew I wanted to get analytics from it: at the beginning the site was fairly simple (this blog, for example, was an entirely separate entity back then and it wasn’t integrated into the site in the way it is today), but from the start I wanted to know how many visitors I was getting, where they were in the world, how they were finding me, and a little about how they were interacting with my site.

I'd used Google Analytics on past projects, but this time around I felt a little uneasy about providing Google with an easy way to gather data on all my site visitors. Those guys have enough power without me contributing. I went with clicky.com for my analytics, and all was well.

In researching this post I found an article called Seven Reasons Why You Should NOT Use Google Analytics. My concerns about giving Google too much power rank number four in their list, but they ultimately reach the same conclusion I did – Google's product offering in this space is simply better than the alternatives out there, especially when you consider the price (free). With Clicky the basic service is free but limited – you need to fork over some cash if your site generates a lot of traffic, or you want to retain data for longer than 31 days, or add advanced features… the list goes on.

I switched back to Google's service a couple of weeks ago and I haven't looked back. While I was at it I not only added the relevant code to this site, I also added it to Flo's blog and the jnf.me landing page. Clicky limited me to tracking a single site but Google doesn't, so why not?


For a website like mine, adding the relevant JavaScript to the site and then forgetting about it is a reasonable approach, but I've discovered very quickly that if you're prepared to put in a little more effort then you can get much improved results. For me, this was highlighted by the extremely limited usefulness of the data I've been getting from JNF.me, but the way I'm solving that problem could apply anywhere. Read on!

The Problem

When I bought the domain jnf.me my primary concern was getting something short. My plan all along was to use sub-domains for the various bits of content that lived under it (www.jason.jnf.me, www.asiancwgrl.jnf.me, and so on). The J stands for Jason, the F for Flo, and the N for 'n, but that's not really relevant. Since it is the root of my domain, I knew I should put something there, so I created a quick, fairly simple, single-page site. The page is divided into two, with me on the left and Flo on the right, and if you click one of our faces then the whole thing slides over to reveal a little about us and some links to our online content.

In terms of analytics data, the very fact that this is a single-page site is what's causing issues. With a larger site like jason.jnf.me, even taking the most basic approach to installing Google Analytics tells me, for example, that the average visitor views three pages. I know which pages are the most popular, which blog topics generate the most interest, and so on.

With JNF.me I know that people visit the page and then their next action is to leave again – but of course it is, there is only that one page.

What are they doing while they're there? Are they leaving through one of the links on the page? I have no idea, but I can find out.

Manually Sending Pageviews

The first thing I opted to do was manually send a pageview to Google Analytics when somebody clicks one of our pictures to slide out the relevant content from the side of the page.

My rationale for this approach is that if this were a site with a more traditional design, clicking a link to view more content from the site would indeed cause another page to be loaded. The fact that my fancy design results in the content sliding in from the side instead really makes no difference.

The approach is extremely simple, and adding a single line of JavaScript to the code that makes the content slide in is all it took:

ga('send', 'pageview', {'page': '/' + p });

So how does this work? ga() is a function that Google Analytics creates when it's first loaded by the page, and in fact if you're using Google Analytics at all then you're already using this. Let's take a quick look at the code Google has you paste into your page in order to start feeding data to Analytics in the first place. It ends with these two lines:

ga('create', 'UA-XXXXXXXX-X', 'auto');
ga('send', 'pageview');

The first line initializes things and lets Google know (via the UA-XXXXXXXX-X bit) which Analytics account it's going to be getting data for. The second line sends a pageview to Analytics because, well, if the code is being executed then that means somebody is viewing the page.

By default Analytics makes the perfectly reasonable assumption that the page that executes this code is the one it should be recording a pageview for, but here's the thing: it doesn't have to be that way.

Back to my example, and you'll notice I've added a third argument to the ga() function call. Google's help page on the subject discusses the options in terms of possible parameters, but essentially what I'm doing is passing a JavaScript object that describes exactly what Analytics should track. The page field is the page address against which a pageview is registered, and the p variable is used elsewhere in my code that makes the sliding content work: it stands for person, and it contains either "jason" or "flo" as appropriate.

The important thing to note here is that these pages don't exist – there is nothing on my website at either /jason or /flo – but this doesn't matter. Analytics registers a pageview for one of these addresses anyway, and I know when I see it in my data that it means that somebody opened the sliding content.

Sending Events

In addition to sending pageviews to Analytics you can also send events, and this is the approach I took to help me understand how people are leaving the page.

When I first started learning about events I spent some time trying to understand the right way to use them. Google's Event Tracking help page provides an example, and you can find some good reading material about it on the web. The conclusion I've reached from my brief research is that there is no "right" way to use events – you just define them in whatever way works best for you, your site, and your desired outcome.

The important thing to know is that events have, as a minimum, an associated category and action. You can also optionally define a label and a value.

I can see that the value parameter would be extremely useful in some scenarios, such as tracking e-commerce sales (you could, for example, use Analytics to track which traffic sources result in the highest sales figures in this way), but I don't need that. I will be using the other three parameters, though.

When you view data regarding events in the Analytics interface, they're in something of a hierarchical structure. Categories are treated separately from one another, but you can view summary data at the category level, then drill down to segment that data by action, then drill down further to segment by label.

For the events fired when a site visitor clicks an external link on my page I arbitrarily decided that the category would be 'extlink,' the action would be the person the link relates to (either jason or flo), and the label would be indicative of the link destination itself (blog, twitter, etc).

To implement this, the first thing I did was add a class and a custom data attribute to the links on the page:

<a href="http://twitter.com/JayWll" class="outbound" data-track="jason/twitter">Twitter</a>

The class of outbound defines this as an outbound link, as opposed to one of the links that helps visitors navigate around the page, slide content in and out, etc, and the data-track attribute defines what will become the event's action and label.

Next, the JavaScript. This time around it's slightly more in-depth than the single line of code we used to send a pageview. That's not necessarily a function of events as compared to pageviews, but it's due to the nature of what I'm tracking here: when a user clicks a link that takes them away from the current page, they (by default) leave immediately. In order to track outbound links, I actually need to hold them up and make sure the event is registered with Analytics before I let them go anywhere. Happily, Google has thought of that and the ga() function accepts a hitCallback property. This is a function that's fired only once the event has been properly recorded.

Here's my code:

$('a.outbound').click(function(e) {
   e.preventDefault();
   trURL = $(this).attr('data-track');
   nvURL = $(this).attr('href');

   ga('send', 'event', {
      'eventCategory': 'extlink',
      'eventAction': trURL.split('/')[0],
      'eventLabel': trURL.split('/')[1],
      'nonInteraction': 1,
      'hitCallback': function() {
         location.href = nvURL;
      }
   });
});

The first thing I do is prevent the link's default behaviour with the line

e.preventDefault();

Next, I capture the link's data-track and href attributes – we'll need both of those later.

Finally, we're back to the ga() function to send data to Analytics. We send an event, and define its parameters within the JavaScript object: the category is 'extlink,' the action and label are obtained by splitting the link's data-track attribute, we define this as a non-interaction event (LMGTFY), and, once this data has been successfully sent, the hitCallback function is executed, which takes us to the page specified by the link's href attribute.

Easy, when you know how.

Taking it Further

The possibilities here are endless, and how you use them really depends on your site and the data you'd like to get from it. My plan is to take some of what I've learned for jnf.me and extend it to this site, particularly in regards to event tracking.

In addition to tracking outbound links, I have two other ideas for how I might use this:

  1. Page length and scroll tracking
    Some of my posts – this one is potentially a prime example – are pretty long. I do tend to ramble on a bit at times. If a post is more than, say, two screen heights in length then I could track how many people scroll beyond the halfway point and how many people scroll to the end, to help me understand if my audience is OK with long posts or if I should split in-depth content into some kind of mini-series (see the sketch after this list).
  2. Form tracking
    There's a contact me page on this site, and each post in this blog has a comment form at the bottom. With events I could gain a much better understanding of how these are working and how visitors interact with these forms. For example, do people begin filling out the contact me form but then abandon it at some point before submitting? Do people begin to write comments on my posts but then refrain from posting when they find out I require them to at least provide their email address?
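
To make idea 1 concrete, here's a hedged sketch of the scroll tracking, reusing the non-interaction event pattern from above. The 'scroll' category and the milestone names are invented for illustration:

$(window).scroll(function() {
   // Position of the bottom of the viewport within the document
   var bottom = $(window).scrollTop() + $(window).height();

   if (!window.sentHalfway && bottom >= $(document).height() / 2) {
      window.sentHalfway = true; // report each milestone only once
      ga('send', 'event', {
         'eventCategory': 'scroll',
         'eventAction': 'halfway',
         'nonInteraction': 1
      });
   }

   if (!window.sentEnd && bottom >= $(document).height()) {
      window.sentEnd = true;
      ga('send', 'event', {
         'eventCategory': 'scroll',
         'eventAction': 'end',
         'nonInteraction': 1
      });
   }
});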

Hopefully you have ideas for how you can use these techniques to provide better insight into visitor behaviour on your site too. Come back here and leave a comment to let me know how it goes! I do require your email address for that, but I promise not to spam you or pass it on to any third party.


Integrating Microsoft Office Functionality into Your SharePoint Apps

Have you ever come across this snippet of JavaScript before?

try {
   xhr = new XMLHttpRequest();
} catch(e) {
   xhr = new ActiveXObject("Microsoft.XMLHTTP");
}

If you haven't then not to worry, it's not really the point of this post anyway – but for the uninitiated, this is a snippet of JavaScript that prepares an object that will be used to make an ajax request. Ajax is the mechanism by which a webpage can load more data from the server even after the page itself has finished loading, and you find it all over the place.

The reason I bring it up is because of the try… catch construct involved. Essentially it tries to execute the code between the first set of curly braces, and if that fails then it executes the code between the second set instead. In the huge majority of cases the first line of code executes successfully, so why is the second line necessary?

Microsoft. And more specifically, Internet Explorer 6.

For anybody who makes things that live on the web, supporting Internet Explorer 6 is an absolute chore. It does things differently to other (read "standards compliant") browsers, and you end up having to create everything twice and put a bunch of hacks in place to make your site work with it. The block of code above is a prime example: every other browser has the XMLHttpRequest object built into their JavaScript implementation, but for Microsoft you have to use an ActiveX object instead.

More recent versions of Internet Explorer are much better in this regard, and these days frameworks like jQuery take care of any little annoyances like this that remain, so it's not as big a deal as it was a decade ago. But that's not the point of this post either.

The point is that ActiveX remains a part of Internet Explorer to this day, and despite the fact that many web programmers know it primarily as a result of the XMLHttp annoyance described above, it does have its uses. So let's exploit it.

The Downside

If you've ever created web content then you'll know the importance of standards compliance. Writing compliant markup and code helps to ensure that your site works in whatever browser software your end users happen to be using. That's a good thing.

ActiveX exists only within Internet Explorer. If visitors to your SharePoint webapp are using another browser such as Chrome or Firefox then none of the following example code is going to work for them. The best we'll be able to do is detect that it hasn't worked and have our app react in an appropriate way, maybe with a warning message or something similar.
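
Detection can reuse the same try… catch pattern this post opened with. A minimal sketch:

var exApp = false;

try {
   exApp = new ActiveXObject('Excel.Application');
} catch(e) {
   // Not Internet Explorer, or ActiveX has been disabled
   alert('Sorry, this feature requires Internet Explorer.');
}

if (exApp) {
   // safe to continue with the ActiveX-based code below
}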

If we were creating something publicly accessible for consumption by a wide variety of internet users then this would be a deal breaker, but in the context of an app built atop SharePoint, where all the users are within the four (physical or otherwise) walls of your organization? It might be fine. If your organization is like the one I work for then everybody is using Internet Explorer anyway, because that's the browser installed on your computer when IT deliver it, and getting them to install an alternative is like pulling teeth.

OK, So What is This ActiveX Thing?

Wikipedia sums it up pretty nicely, including condensing my previous three paragraphs down into a single sentence.

Many Microsoft Windows applications — including many of those from Microsoft itself, such as Internet Explorer, Microsoft Office, Microsoft Visual Studio, and Windows Media Player — use ActiveX controls to build their feature-set and also encapsulate their own functionality as ActiveX controls which can then be embedded into other applications. Internet Explorer also allows the embedding of ActiveX controls in web pages.

However, ActiveX will not work on all platforms, so using ActiveX controls to implement essential functionality of a web page restricts its usefulness.

In other words, one of the things we can use ActiveX for is to take the functionality of a Microsoft application and embed it into a web page. I'm going to put together a fairly simple example, and I'm going to use Excel. Let's dive in!

The HTML

I'm going to build a simple table in HTML. The data in the table could come from anywhere, such as a subset of some SharePoint-based dataset pulled together using some of the techniques we've looked at previously, or a script executed server-side if you have the ability to create such a thing. For the sake of simplicity, though, I'm just going to define it in a static manner in my markup – and I'm not going to worry about making it look good.

<table>
   <thead>
      <tr><th>Fruit</th><th>Qty</th></tr>
   </thead>
   <tbody>
      <tr><td>Bananas</td><td>5</td></tr>
      <tr><td>Apples</td><td>7</td></tr>
      <tr><td>Oranges</td><td>2</td></tr>
      <tr><td>Pears</td><td>3</td></tr>
   </tbody>
</table>


The Template

Next, I'm going to define an Excel template that we'll use to place our data into. This could be as detailed or as simple as necessary, so for the purposes of example I've gone with simple again. All I've done in mine is put headings at the top of columns A and B that match the headings in our HTML.


The JavaScript

OK, here's where the clever bit starts. The first thing we're going to do is create an ActiveX object pointing to Excel, and assign it to a variable so we can reference it again further along in the code.

var exApp = new ActiveXObject('Excel.Application');

Next we're going to open a new document in Excel, based upon our template. Doing this also returns an object that we'll need again, so we're going to assign this one to a variable too.

var exDoc = exApp.Workbooks.Add('http://192.168.1.102/web/template.xltx');

It's important to note here that we have to pass in an absolute reference to the template file – a relative reference is not sufficient, because Excel has no concept of the location of our webpage. In my test environment the SharePoint server is at 192.168.1.102, but this will undoubtedly be different for you.

At this point Excel is open and has our template loaded, so the next thing to do is iterate over the table in our HTML and plug the data into Excel. In general, this is done with the following line of code:

exDoc.ActiveSheet.Cells(1, 1).Value = 'This is row 1, column 1!';

More specifically, what we're going to do is use our old friend jQuery to iterate over the table cells in our HTML page and put them into the right place in Excel with the help of a couple of simple counter variables: one for the row we're targeting, and one for the column. Don't forget to include a reference to the jQuery library in the document <head> section.

r = 2;
$('table tbody tr').each(function() {
   c = 1;
   $(this).children().each(function() {
      exDoc.ActiveSheet.Cells(r, c).Value = $(this).html();
      c++;
   });
   r++;
});

While we've been preparing all this, the Excel window has been invisible to the user. The final step is to make it and its newly imported data appear.

exApp.Visible = true;

Done!

Putting it All Together

The full code of our HTML page is as follows:

<!DOCTYPE html>
<html>
<head>
   <title>Export to Excel Example</title>
   <script src="/web/js/jquery-1.11.0.min.js"></script>
   <script>
      $(document).ready(function() {
         $('input#export').click(function() {
            var exApp = new ActiveXObject('Excel.Application');
            var exDoc = exApp.Workbooks.Add('http://192.168.1.102/web/template.xltx');

            r = 2;
            $('table tbody tr').each(function() {
               c = 1;
               $(this).children().each(function() {
                  exDoc.ActiveSheet.Cells(r, c).Value = $(this).html();
                  c++;
               });
               r++;
            });

            exApp.Visible = true;
         });
      });
   </script>
</head>
<body>
   <table>
      <thead>
         <tr><th>Fruit</th><th>Qty</th></tr>
      </thead>
      <tbody>
         <tr><td>Bananas</td><td>5</td></tr>
         <tr><td>Apples</td><td>7</td></tr>
         <tr><td>Oranges</td><td>2</td></tr>
         <tr><td>Pears</td><td>3</td></tr>
      </tbody>
   </table>

   <input type="button" id="export" value="Export to Excel">
</body>
</html>

Taking it Further

What I've put together here is a pretty simple example, but hopefully you can see the value in some of the possibilities this opens up. Getting complex data from a webapp into Excel is actually fairly straightforward.

With a more detailed template to export data into, you could prepare webapp data for analysis in Excel, create charts, and so on.
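
As one example of taking it further, the exDoc object we already have exposes the rest of the Excel object model, so a chart is only a few more lines. A hedged sketch rather than tested code (51 is Excel's constant for a clustered column chart):

var exChart = exDoc.Charts.Add();          // add a chart sheet to the workbook
exChart.ChartType = 51;                    // xlColumnClustered
exChart.SetSourceData(exDoc.Worksheets(1).Range('A1:B5')); // the exported data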

Enjoy!


SharePoint Development: Lesson 4

Welcome back to my ongoing series on SharePoint development!

I promised last week that I'd follow up lesson three within a week, so here we are! In that previous lesson everything we did was in relation to setting the stage for what we're going to tackle today, so I've got no doubt that you've been waiting with barely contained anticipation to get back to writing some code.

Because all the steps from last week were fairly standard SharePoint stuff, if you're pretty familiar with the platform you may well have skimmed through it. That's fine, but for everything we do today to be successful we're going to need to make sure that:

  • We've created and populated a list of shipping prices, with different shipping profiles (in our example, these different profiles are based on destination).
  • Uploaded the SPServices library minified JavaScript to our "web" document library.
  • Located and noted the GUID of the shipping prices list, so that we can programmatically reference it in our code.

With those steps done everything is in place, so let's not waste any more time!

The HTML

The HTML portion of our code hasn't changed very much since we wrote it way back in lesson one, and it doesn't change much today either. That being said, we do have an additional option that forms part of our calculation this time around, so we'll add a form element for that.

<p>Destination province:<br>
<select id="destination">
   <option>Please wait, loading data...</option>
</select></p>

<p>Item weight:<br>
<input type="text" id="itemweight"> lbs</p>

<p><input type="button" id="calculate" value="Calculate"></p>

<p>Shipping cost: $<span id="shippingcost">0</span></p>

As you can see, we've added a drop-down menu that allows for the selection of the destination province. In the HTML it has a single entry in the list that says "Please wait, loading data…" As you might anticipate, we're going to remove this option later on and replace it with the actual choices, but it's probably good practice to have something there, because the SPServices library is going to read from our list with an AJAX HTTP request – in other words the page will load first (with our "please wait" message), and the data to populate the list will be loaded afterwards. Hopefully it will all happen so fast that nobody ever really sees the "loading" message, but you never know.

The JavaScript

OK, now we're really going to start getting into some changes in functionality! First things first, though. We need to include the external libraries we're going to be using. jQuery has been there from the start of our journey, but SPServices is new for today.

<script src="/web/js/jquery-1.11.0.min.js"></script>
<script src="/web/js/jquery.SPServices-2014.01.min.js"></script>

The other thing I'm going to do at the beginning of my main script block is declare a global variable in which to store list data. There are probably better approaches than this, but for the sake of keeping my code relatively simple this is the one I'm taking.

var shippingData = false;

Loading Data from our SharePoint List

Next is where the SPServices magic happens. Immediately inside our jQuery document ready function we're going to call SPServices and grab the data from our list, putting the result of that request inside our shippingData variable. SPServices has many available options that get passed as an object to its main function, and more details can be found in the documentation on their website. Like I said, though, I'm keeping things simple:

$().SPServices({
   operation: 'GetListItems',
   listName: '{4883AC18-E2A5-4EAF-8446-23B15B43861A}',
   completefunc: function(xData, Status) {
      if (Status == 'success' && ($(xData.responseXML).find('ErrorCode').text() == '0x00000000' || $(xData.responseXML).find('ErrorCode').text() == '')) {
         shippingData = $(xData.responseXML).SPFilterNode('z:row');
         populateDropdown();
      } else alert('Something went wrong!');
   }
});

Not bad, eh? Ten lines of code and we've read all the data from our SharePoint list. Let's look at what we've done in a bit more detail.

The code we've written can be summarized as:

$().SPServices(obj);

All we're doing is calling the SPServices plugin and passing in a JavaScript object that contains all the options it needs to understand what we want it to do. There are many options we could pass to it, and you'll find more detailed documentation on the SPServices homepage. I've kept things as simple as possible and passed in the bare minimum.

operation: 'GetListItems'

In our example we're dealing with items in a list. GetListItems is the operation we need to read data from a SharePoint list into our webapp. There are many other types of operation related to lists that SPServices could do for us – creating entirely new lists, adding or removing list fields, deleting lists, etc. Essentially almost anything you could do manually on SharePoint could be done programmatically with SPServices. If we wanted to write data back to a list then UpdateListItems would be the operation we'd use.

listName: '{4883AC18-E2A5-4EAF-8446-23B15B43861A}'

The listName parameter can take one of several formats. A simple string containing the name of the list will work, but to avoid any kind of confusion between similarly named lists my preference is to pass in the GUID of the list. Remember that the GUID of your list will be different to mine. SPServices can also take a webURL parameter that tells it which site in your SharePoint collection the list can be found on, but since we're using a GUID that's unique across all sites in the collection we don't need that.

completefunc: function(xData, Status)

This is where the real magic happens. The completefunc parameter represents a callback function that SPServices executes once data has been loaded, and it takes two parameters: xData and Status.

Our completefunc does some basic error handling, and then calls another function to do the dirty work.

if (Status == 'success' && ($(xData.responseXML).find('ErrorCode').text() == '0x00000000' || $(xData.responseXML).find('ErrorCode').text() == ''))...

For the Status parameter that's passed in to completefunc, we're looking for it to contain a value of 'success'. We have to be a little careful about exactly what this means, though: to SPServices, success means that it's passed a query to SharePoint and received a response. It doesn't mean that our query was well formed, or that we received any useful data back. Basically you'll always get a successful status unless SharePoint is down – in which case our webapp probably wouldn't be available to users anyway.

To check that our query has truly been executed successfully by SharePoint, we look in the responseXML for an error code. Depending on the version of SharePoint we're running, that will either be 0x00000000 or blank.

shippingData = $(xData.responseXML).SPFilterNode('z:row');

There are a few ways we could approach what happens next. My goal was to keep my code as simple as possible, so I'm doing some things that may not be best practice. This is one of them: we take the data SPServices has returned and put it into a global variable.

The data SharePoint has passed back to us is in XML format and contains a wealth of information about our query, metadata about the response, and so on. We don't really need any of this stuff – we just want the data itself – so SPServices has a function called SPFilterNode that helps us filter the returned data down to what we actually care about. We're filtering here by z:row. Each z:row returned represents one entry from our SharePoint list.
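
To give a sense of what that leaves us with, each z:row is a single XML element whose attributes correspond to our list columns, something like this (the values are made up for illustration):

<z:row ows_ID="1" ows_Title="Alberta" ows_BasePrice="10.00"
   ows_BaseWeight="5" ows_AdditionalPrice="2.50" ows_AdditionalWeight="1" />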

populateDropdown();

Now that we have our data in a globally accessible variable, I'm outsourcing the processing of it to another function: populateDropdown. The last step for our completefunc is to call this function.

Populating the Dropdown List

OK! So now we have the data we need loaded from our SharePoint list and resident in memory (in our shippingData global variable) so that we can manipulate it and our users can interact with it. The first step of this process is to populate the relevant options into the select box we created in our HTML. I'm doing that (surprise surprise) with the populateDropdown function.

function populateDropdown() {
   $('select#destination option').remove();

   for (i = 0; i < shippingData.length; i++) {
      $('select#destination').append('<option value="' + i + '">' + $(shippingData[i]).attr('ows_Title') + '</option>');
   }
}

If you recall, the dropdown list initially contains a single option with the text "Please wait, loading data…" At this point in the story our data is loaded, so let's get rid of that option first:

$('select#destination option').remove();

Done! Good. The next step is to loop through each of the items we've loaded into our shippingData variable, and add them as an option in the dropdown. We do this with a standard for loop:

for (var i = 0; i < shippingData.length; i++) {

If you're not familiar, this construct sets a variable i to zero, then loops through the following block of code multiple times, as long as i is less than shippingData.length, which is the number of items in our shippingData variable. On each iteration i is incremented by one (i++).

On each loop we add an item to our dropdown, using jQuery to append the relevant HTML: an option element whose value is the index i and whose text is the item's Title. Based on the data that exists within our SharePoint list, we end up with one option per province in our dropdown.

Each $(shippingData[n]) has an attribute for each of the columns in our list. These attributes all have the prefix ows_, and, as mentioned in lesson 3, they are all referenced by the original name of that column (even if it's been renamed since it was created). That's why we're using the attribute ows_Title to get at the data that's in our Province column: the column was called Title when the list was created, and we renamed it.
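As an aside, the same loop could be written with jQuery's each() helper – purely a stylistic alternative to the plain for loop above:

// Equivalent to the for loop: one option per z:row, valued by its index.
$.each(shippingData, function(i, row) {
   $('select#destination').append('<option value="' + i + '">' + $(row).attr('ows_Title') + '</option>');
});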

Performing the Calculation

With any luck everything in our story up to this point will have happened in a split second, but regardless we're now ready for user input. The user selects the destination province, inputs the weight of the item being shipped, and hits the calculate button.

The calculation itself is really no different from the one we built in lesson 1, the difference being that our variables are defined by the list data rather than hardcoded in.

We still identify that the Calculate button has been pressed through the use of jQuery's $('input#calculate').click(function() {});, but our first step is now to set some variables based on whatever is selected in the destination province dropdown at the time.

bp = parseFloat($(shippingData[$('select#destination').val()]).attr('ows_BasePrice'));
bw = parseInt($(shippingData[$('select#destination').val()]).attr('ows_BaseWeight'));
ap = parseFloat($(shippingData[$('select#destination').val()]).attr('ows_AdditionalPrice'));
aw = parseInt($(shippingData[$('select#destination').val()]).attr('ows_AdditionalWeight'));

We set each of these variables by reading the relevant attribute (representing the column in our SharePoint list) from $(shippingData[n]), where n is the value of the destination dropdown, $('select#destination').val(). After that, it's business as usual:

var shippingcost = bp;

if ($('input#itemweight').val() > bw) {
   shippingcost += (Math.ceil(($('input#itemweight').val() - bw) / aw) * ap);
}

$('span#shippingcost').html(shippingcost);
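As a quick sanity check using the rates from the fixed model in lesson two (a BasePrice of 19.99, a BaseWeight of 20, an AdditionalPrice of 3 and an AdditionalWeight of 5), a 32 lb item works out to 19.99 + ceil((32 - 20) / 5) * 3 = 19.99 + 3 * 3 = $28.99.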

Putting it All Together

And we're done! The completed code block – including all the JavaScript and HTML – that we copy and paste into our content editor web part is as follows:

<script type="text/javascript" src="/web/js/jquery-1.11.0.min.js"></script>
<script type="text/javascript" src="/web/js/jquery.SPServices-2014.01.min.js"></script>

<script type="text/javascript">

   var shippingData = false;

   $(document).ready(function() {
      $().SPServices({
         operation: 'GetListItems',
         listName: '{4883AC18-E2A5-4EAF-8446-23B15B43861A}',
         completefunc: function(xData, Status) {
            if (Status == 'success' && ($(xData.responseXML).find('ErrorCode').text() == '0x00000000' || $(xData.responseXML).find('ErrorCode').text() == '')) {
               shippingData = $(xData.responseXML).SPFilterNode('z:row');
               populateDropdown();
            } else alert('Something went wrong!');
         }
      });

      $('input#calculate').click(function() {
         bp = parseFloat($(shippingData[$('select#destination').val()]).attr('ows_BasePrice'));
         bw = parseInt($(shippingData[$('select#destination').val()]).attr('ows_BaseWeight'));
         ap = parseFloat($(shippingData[$('select#destination').val()]).attr('ows_AdditionalPrice'));
         aw = parseInt($(shippingData[$('select#destination').val()]).attr('ows_AdditionalWeight'));

         var shippingcost = bp;

         if ($('input#itemweight').val() > bw) {
            shippingcost += (Math.ceil(($('input#itemweight').val() - bw) / aw) * ap);
         }

         $('span#shippingcost').html(shippingcost);
      });
   });

   function populateDropdown() {
      $('select#destination option').remove();

      for (var i = 0; i < shippingData.length; i++) {
         $('select#destination').append('<option value="' + i + '">' + $(shippingData[i]).attr('ows_Title') + '</option>');
      }
   }


</script>

<p>Destination province:<br />
<select id="destination">
   <option>Please wait, loading data...</option>
</select></p>

<p>Item weight: <input type="text" id="itemweight" /> lbs<br />
<input type="button" id="calculate" value="Calculate" /></p>

<p>Shipping cost: $<span id="shippingcost">0</span></p>

Taking it Further

The next steps with our shipping calculator app would be to add some additional error-checking and handling, and maybe amend the code to avoid using unnecessary global variables. I've kept things as simple as possible here for the sake of example.
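As one example of that error-checking, the click handler could validate the weight field before doing any math – a sketch of a possible addition, not part of the finished code above:

// Inside the calculate click handler, before the cost calculation:
var weight = parseFloat($('input#itemweight').val());
if (isNaN(weight) || weight <= 0) {
   alert('Please enter a valid item weight in pounds.');
   return;
}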

After that? As I mentioned earlier, there's a lot of cool stuff we can do with SPServices. Where you go from here is really up to you, but hopefully you can see some possibilities. Even with the basic building blocks of reading from and writing to lists, it's possible to build some really cool stuff on top of SharePoint, possibly even taking the approach of using SharePoint as a database for a webapp that has its own look and feel.

Enjoy!

Blog

SharePoint Development: Lesson 3

Welcome back to my continuing series on SharePoint development!

If you've been following along with lesson one and lesson two then this post has probably been a long time coming, but we're now at the point where we've built a pretty useful tool for calculating shipping costs, and we've integrated it into our existing SharePoint site to make it easily accessible to everyone in our team who might benefit from it.

Here's the thing, though. If your organization is anything like mine, then SharePoint is a tool that they've made available to everybody. You're using it for some cool stuff, but writing code probably isn't your day job – you're the techy guy or gal in your group who's found an opportunity to make everybody's life a little easier with technology. The beautiful part is that you can do it all without needing to engage your company's IT team (who are busy with large-scale projects that contribute to your company's bottom line, against which your idea for a shipping calculator would need to be prioritized).

There's nothing wrong with any of that. Something like this really shouldn't be a thing that your company's IT team gets involved with, in exactly the same way that helping you craft an especially complex Excel formula shouldn't be a job for them either. If you've ever shared an especially complex Excel formula throughout your group, though, then you may already have identified the downside to this approach: things change.

Our tool is built on a fixed model for calculating shipping costs: $19.99 for the first 20 lbs and $3 for every 5 lbs (or part thereof) over and above that. That rule is embedded within your code now, you're the only one around with the necessary technical knowledge to update it when shipping costs change, and you have a day job to worry about too. If updating SharePoint tools is not how you like to spend your weekends, then we need a different approach.

What we need, then, is a solution where the average SharePoint user can make changes to key pieces of data, and our tool needs to be smart enough to read that data so that it can be used in calculations. And, while we're at it, let's expand the tool so that it can handle a few different shipping profiles (which could represent different couriers or, in our example, destinations).

Enter the SPServices library. SPServices is a jQuery plugin that's used to expose SharePoint data to our jQuery apps, including (amongst other functions) reading from and writing to SharePoint lists in SharePoint 2007, 2010 and 2013.

This is a significant step up from where we were at the end of lesson two (which is probably why I've been procrastinating over writing it for so many weeks), so I've split it into parts. Today we're going to set the stage and prepare our data, and this time next week (I promise!) we're going to get our hands dirty with some code.

Nevertheless, both this post and its successor are probably going to be longer than those that have gone before, so be forewarned: go grab yourself a cup of coffee, and let's dive in!

Creating the List

First things first, we need a list to hold our data. This list is where our less-technical colleagues will come when changes need to be made, so we'll keep it fairly straightforward.

Much like we did to create our document library in lesson one, go to Site Actions > View All Site Content and hit the Create button. This time we're going to choose Custom List as the type of entity we're going to create. We need to give the list a name, so let's call it "Shipping Prices." For the time being we'll leave the description blank, and we'll hide our list from the Quick Launch bar. We can always change these options later.

[Screenshot: creating the "Shipping Prices" custom list]

When you hit the Create button the list will be created with a single column ("Title"). We need to add a few more columns, so choose List Settings from the toolbar or the Actions menu (depending on your version of SharePoint). The first thing we're going to do is rename the "Title" column to "Province" by clicking it in the list, then we're going to add four more columns by clicking the Create Column link and adding them one by one. Here's where we want to end up:

[Screenshot: the finished column set – Province, plus the BasePrice, BaseWeight, AdditionalPrice and AdditionalWeight columns]

An Important Note Regarding Column Names

With this, the basic framework for our data is in place. You may notice that none of our column names have spaces in them. That's because SharePoint does strange things with spaces in the backend. As you'll see later, we can programmatically read from a column called "BasePrice" by referring to it exactly that way, whereas a column called "Base Price" would need to be referred to in our code as "Base_x0020_Price."
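Here's a quick sketch of what that looks like from the code's perspective, assuming a z:row (held in a variable called row here) that we've already retrieved:

// Column created as "BasePrice" – no escaping needed:
var basePrice = $(row).attr('ows_BasePrice');

// Column created as "Base Price" – the space becomes _x0020_ internally:
var basePrice2 = $(row).attr('ows_Base_x0020_Price');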

That being said, we can leverage a bit of a trick here if we want to improve readability for people who will interact with this list directly (our less-technical colleagues, remember). Behind the scenes (and in our code) SharePoint will always know the column by its original name, even if it's subsequently been renamed. If we go back and add spaces now, the internal name of the "BasePrice" column will remain "BasePrice," even if its display name is changed to "Base Price."

This is helpful in this scenario, but can easily be a bit of a gotcha – you need to remember the original name of all your columns, because that's how your code will reference them. Remember the "Title" column we renamed to "Province"? It's still "Title" behind the scenes.

Populating the Data

There are many ways to get data into a SharePoint list, and I'm not going to go into a great amount of detail here. You can add each item row by row with the built-in list forms, you can use SharePoint's datasheet view to edit many rows at once, or you can use a third-party tool. Our example is probably a little too basic to warrant breaking out a special third-party tool, but as you do more complex stuff in the future I'm a fan of SharePoint List Item Editor. It does exactly what its name suggests: it gives you a spreadsheet-like interface for editing items in SharePoint lists, and makes it easy to copy and paste many rows at once.

Regardless of how we do it, here's the data I'm going to put into my list for the purposes of this example.

[Screenshot: sample shipping data entered into the Shipping Prices list]

With this data in place you're probably starting to get a sense of where things are going with this example. The BasePrice is the cost of shipping the first BaseWeight pounds, and the AdditionalPrice is the cost of each AdditionalWeight pounds (or part thereof) after that.
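For example, a row with a BasePrice of 19.99, a BaseWeight of 20, an AdditionalPrice of 3 and an AdditionalWeight of 5 reproduces exactly the fixed model we hardcoded in lesson two.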

In many ways it's no different from what we'd created by the end of lesson two, but with one critical difference – none of these variables are going to live in our code anymore. They're all factored out into the list, where they're easily editable when things need adjusting in the future.


Finding the GUID of the List

Everything we've done so far is fairly standard SharePointy stuff, but you'll notice we've had one eye on the end goal of programmatically interfacing with this data throughout. Finding the GUID of the list we've created is an important step in this process.

A GUID is a globally unique identifier, and every SharePoint list (or calendar, or document library, etc.) in our SharePoint site collection has one. There are several ways we can connect our front-end custom interface to our back-end data, but the GUID is probably the most reliable because, as the name implies, it's globally unique. It's also not affected if our list gets renamed later.

There are several ways to find it, but my favourite is to use a simple tool I found for the purpose. Open the app and plug in the URL of your SharePoint site. Hit the Display List Titles and IDs button, and grab the relevant ID (including the opening and closing braces).

[Screenshot: the GUID tool displaying list titles and IDs]

For me, {4883AC18-E2A5-4EAF-8446-23B15B43861A} is what I need. For you it will be different. You may notice that the tool can also find the GUID of a list view. We don't need this because we're going to use the default view. I'm not going to get into it right now, but if you're unfamiliar, SharePoint views define things like the sort order and filter that's applied to a list, and each list can have multiple views defined. If you want to programmatically access data that's filtered out of your default view then the simplest method is probably to create a new, unfiltered, public view and reference it by its GUID in your code. The documentation for SPServices will tell you more.
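For reference, here's a minimal sketch of what referencing a view looks like using SPServices' viewName parameter; the view GUID below is a hypothetical placeholder:

$().SPServices({
   operation: 'GetListItems',
   listName: '{4883AC18-E2A5-4EAF-8446-23B15B43861A}',
   viewName: '{YOUR-VIEW-GUID-HERE}',   // placeholder – substitute your view's GUID
   completefunc: function(xData, Status) { /* process the z:rows as usual */ }
});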

…and Talking of SPServices

Now is probably a good time to download the library and place the minified JavaScript file in the "web" document library we created in lesson one, in the "js" subfolder alongside the jQuery library that's already there. At the time of writing this file is called jquery.SPServices-2014.01.min.js. With that, our stage is set and we're ready to rewrite the code in our content editor web part to interface with it, but that was a lot to take in, so we'll get to that next week.