Wednesday, July 6, 2016

Open Source Access to Math for NVDA: The Beginning

At CSUN this year, I attended the open source math accessibility sprint co-hosted by the Shuttleworth Foundation and Benetech, where major players in the field gathered to discuss and hack on various aspects of open source math accessibility. My team, which also included Kathi Fletcher, Volker Sorge and Derek Riemer, tackled reading of mainstream math with open source tools.
Last year, NVDA introduced support for reading and interactive navigation of math content in web browsers and in Microsoft Word and PowerPoint. To facilitate this, NVDA uses MathPlayer 4 from Design Science. While MathPlayer is a great, free solution that is already helping many users, it is closed source, proprietary software, which severely limits its future potential. Thus, there is a great need for a fully open source alternative.
Some time ago, Volker Sorge implemented support for math in ChromeVox and later forked this into a separate project called Speech Rule Engine (SRE). There were two major pieces to our task:
  1. SRE is a JavaScript library and NVDA is written in Python, so we needed to create a "bridge" between NVDA and SRE. We did this by having NVDA run Node.js and writing code in Python and JavaScript which communicated via stdin and stdout; a minimal sketch of the Node.js side follows this list.
  2. One of the things that sets MathPlayer above other math accessibility solutions is its use of the more natural ClearSpeak speech style. In contrast, MathSpeak, the speech style used by SRE and others, was designed primarily for dictation and is not well suited to efficient understanding of math, at least without a great deal of training. So, we needed to implement ClearSpeak in SRE. Because this is a massive task that would take months to complete (and this was a one-day hackathon!), we chose to implement just a few ClearSpeak rules: enough to read the quadratic equation.
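To give a feel for the shape of this bridge, here's a minimal sketch of the Node.js side: it reads one line of MathML per request from stdin and writes the generated speech back on stdout. (This is an illustration assuming SRE's toSpeech API, not the code we wrote on the day.)
const sre = require('speech-rule-engine');
const readline = require('readline');
// Each request from NVDA is a single line of MathML on stdin; the reply is
// a single line of speech text on stdout.
const rl = readline.createInterface({ input: process.stdin });
rl.on('line', (mathMl) => {
  process.stdout.write(sre.toSpeech(mathMl) + '\n');
});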
Our goal for the end of the day was to present NVDA and SRE reading and interactively navigating the quadratic equation in Microsoft Word using ClearSpeak, including one pause in speech specified by ClearSpeak. (ClearSpeak uses pauses to make reading easier and to naturally communicate information about the math expression.) I'm pleased to say we were successful! Obviously, this was very much a "proof of concept" implementation and there is a great deal of further work to be done, both in NVDA and SRE. Thanks to my team for their excellent work and to Benetech and the Shuttleworth Foundation for hosting the event and inviting me!
As a result of this work, I was subsequently nominated by Kathi Fletcher for a Shuttleworth Foundation Flash Grant. In short, this is a small grant I can put towards a project of my choice, with the only condition being to "live openly" and share it with the world. And I figured polishing NVDA's integration with SRE was a fitting project for this grant. So, in the coming months, I plan to release an NVDA add-on package which allows users to easily install and use this solution. Thanks to Kathi for nominating me and to the Shuttleworth Foundation for supporting this! Watch this space for more details.

Friday, December 18, 2015

Woe-ARIA: aria-describedby: To Report or Not to Report?

Introduction

In my last post, I waxed lyrical about the surprising complexity of the seemingly simple aria-label/ledby. Thanks to those who took the time to read it and provide their valuable thoughts. In particular, Steve Faulkner commented that he’d started working on “doc/test files suggesting what screen readers should announce from accname/description info“. Talk about responsive! Thanks Steve! His inclusion of description opened up another can of worms for me, so I thought I’d continue the trend and let the worms spill out right here. Thankfully, this particular can is somewhat smaller than the first one!

What are you on about this time?

Steve’s new document suggests that for an a tag with an href, screen readers should:
Announce accname + accdescription (if present and different from acc name), ignore element content.
I don’t agree with the “ignore element content” bit in all cases; see the “Why not just use the accessible name?” section of my label post for why. However, the bit of interest here is the suggestion that accDescription should be reported.

Well, of course it should! The spec says!

The spec allows elements to be described, so many argue that it logically follows that a supporting screen reader should always read the description. I strongly disagree.
While the label is primary information for many elements (including links), I believe the description is “secondary” information. The ARIA spec says that:
a label should be concise, where a description is intended to provide more verbose information.
“More verbose information” is the key here. It is reasonable to assume that users will not always be interested in this level of verbosity. If the information was important enough to be read always, why not just stick it in the label?

What on earth do you mean by secondary information?

I think of descriptions rather like tooltips. A tooltip isn’t always on screen, but rather, appears only when, say, the user moves their mouse over the associated element. The information is useful, but the user doesn’t always need to see it. They only need to see it if the element is of particular interest.
The HTML title attribute is most often presented as a tooltip and… wait for it… is usually presented as the accessible description (unless there’s no name).

But most screen reader users don’t use a mouse!

Quite so. But moving the mouse to an element can be generalised: some gesture that indicates the user is specifically interested in/wishes to interact with this element. When a user is just reading, they’re not doing this.

Why is this such a big deal?

Imagine you’re reading an article about the changing landscape of device connectors in portable computers over the years:
<p>There have been many different types of connections for peripheral devices in portable computers over the years: <a href="pcmcia" title="Personal Computer Memory Card International Association">PCMCIA</a>, <a href="usb" title="Universal Serial Bus">USB</a> and <a href="sata" title="Serial ATA">E-SATA</a>, just to name a few.</p>
(I use the title attribute here because it’s easier than aria-describedby, but the same could be done with aria-describedby.)
Imagine you’re reading this as a flat document, either line by line or all at once. Let’s check that out with all descriptions reported:
There have been many different types of connections for peripheral devices in portable computers over the years: link, PCMCIA, Personal Computer Memory Card International Association, link, USB, Universal Serial Bus, and link, E-SATA, Serial ATA, just to name a few.
Wow. That’s insanely verbose and not overly useful unless I’m particularly interested in the linked article. And that’s just one small sentence! If sighted users don’t have to see this all the time, why should I as a screen reader user?
Here’s another example based loosely on an issue item in the NVDA GitHub issue list:
<a href="issue/5612">Support for HumanWare Brailliant B using USB HID</a>
<a href="label/Braille" title="View all Braille issues">Braille</a>
<a href="label/enhancement" title="View all enhancement issues">enhancement</a><br>
#5612
opened <span title="16 Dec. 2015, 9:49 am AEST">2 days ago</span>
by <a href="user/jcsteh" title="View all issues opened by jcsteh">jcsteh</a>
Let’s read that entire item with descriptions:
link, Support for HumanWare Brailliant B using USB HID, link, Braille, View all Braille issues, link, enhancement, View all enhancement issues, #5612 opened 2 days ago, 16 Dec. 2015, 9:49 am AEST, by jcsteh, View all issues opened by jcsteh
In what universe is that efficient?

Slight digression: complete misunderstanding of description

As an aside, GitHub’s real implementation of this is actually far worse because they incorrectly use the aria-label attribute where I’ve used the title attribute, so you lose the real labels altogether. You get something like this:
link, Support for HumanWare Brailliant B using USB HID, link, View all Braille issues
which doesn’t even make sense. David MacDonald outlined this exact issue in his comment on my label post:
The most common mistake I’m correcting for aria-label/ledby is when it over rides the text in the element, or associated label and when that text or associated html label is important. For instance, a bit of help text on an input. They should use describedby but they don’t understand the difference between accName and accDescription.
Still, the spec is fairly clear on this point, so I guess this one is just up to evangelism.

So are you saying description should never be read? What’s the point of it, then?

Not at all. I’m saying it shouldn’t “always” be read.

When, then?

When there is “some gesture that indicates the user is specifically interested in/wishes to interact with this element”. For a screen reader, simply moving line by line through a document doesn’t satisfy this. Sure, the user is interacting with the device, but that’s because screen readers inherently require interaction; they aren’t entirely passive like sight. For me (and, surprise surprise, for NVDA), this “gesture” means something like tabbing to the link, moving to it using single letter navigation, using a command to query information about the current element, etc.
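To make this concrete, here's a toy sketch of how a screen reader might gate the description on the triggering gesture; the element model and gesture names are invented for illustration.
// Report the description only for gestures that indicate specific interest
// in the element, never during passive line-by-line reading.
const INTERESTED_GESTURES = new Set(['focus', 'quickNav', 'queryElement']);
function textToReport(element, gesture) {
  let text = element.name;
  if (element.description && INTERESTED_GESTURES.has(gesture)) {
    text += ', ' + element.description;
  }
  return text;
}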

But VoiceOver reads it!

With VoiceOver, you usually move to each element individually. You don’t (at least not as often) move line by line (like you do with NVDA), where there can be several elements reported at once. With the individual element model, it makes sense to read the description because you’re dealing with a single element at a time and the user may well be interested in that specific element. And if the user really doesn’t care about it, they can always just move on to the next element early.

So now you’re saying we can’t have interoperability. Dude, make up your mind already!

Recall this from my last post:
If we want interoperability, we need solid rules. I’m not necessarily suggesting that this be compulsory or prescriptive; different AT products have different interaction models and we also need to allow for preferences and innovation.
This is one of those “different interaction models” examples.
Rich Schwerdtfeger commented on my last post:
The problem we have with AT vendors is that many have lobbied very hard for us to NOT dictate what they should do.
Examples like these are one reason AT vendors push back on this.

So, uh, what are we supposed to do?

I’m optimistic that there’s a middle ground: guidelines which allow for reasonable interoperability without restricting AT’s ability to innovate and best suit their users’ needs. As in software development, a bit of well-considered abstraction goes a long way to ensuring future longevity.
In this case, perhaps the guidelines could use the “secondary content” terminology I used above or something similar. They might say that for an a tag with an href, the name should be presented as the primary content if overridden using aria-label/ledby and the description should be treated as secondary content. This leaves it up to the AT vendor to decide exactly when this secondary content is presented based on the interaction model, while still providing some idea of how to best ensure interoperability.

Thursday, December 17, 2015

Woe-ARIA: The Surprisingly but Ridiculously Complicated World of aria-label/ledby

Introduction

WAI-ARIA is one of the best things ever to happen to web accessibility. It paved the way to free us from a world where JavaScript and any widget that didn’t have an HTML tag equated to inaccessibility. Aside from it being deployed by authors, I’ve even managed to majorly improve the accessibility of various websites using Greasemonkey scripts. I love ARIA.
But sometimes, I hate ARIA. Yes, you heard me. I said it. Sometimes, it drives me truly insane.
Let’s take aria-label and aria-labelledby. They’re awesome. Authors can just use them to make screen readers speak the right thing. Simple, right?
Not at all. I wish it were that simple, but it is so, so much more complicated than that. I’ve had a ridiculous number of discussions/arguments about aria-label/aria-labelledby over the years. Frankly, when I hear about aria-label/ledby, it just makes me cringe and groan and, depending on the day, consider quitting my job. (Okay, perhaps that last part is a bit melodramatic.)
The most frustrating part is that people frequently argue that assistive technology products aren’t following the spec when their particular use case doesn’t work as expected. Others bemoan the lack of interoperability between AT products and often blame the AT vendors. But actually, the ARIA spec and guidelines don’t say (not even in terms of recommendations) anything about what ATs should do. They talk only about what browsers should expose, and herein begins a great deal of misunderstanding, argument and confusion. And when we do try to fix one seemingly obvious use case, we often break another seemingly obvious use case.
In this epic ramble, I’ll attempt to explain just how complicated this superficially trivial issue is, primarily so I can avoid having this argument over and over and over again. While this is specifically related to aria-label/aria-labelledby, it’s worth noting there are similar cans of worms lurking in many other aspects of ARIA. Also, I specifically discuss screen readers with a focus on NVDA in particular, but some of this should still be relevant to other AT.

Why not just use the accessible name?

Essentially, aria-label/ledby alters what a browser exposes as the “name” of an element via accessibility APIs. Furthermore, ARIA specifies when the name should be calculated from the “text” of descendant elements. So before we even get into aria-label/ledby, let’s address the question: why don’t screen readers just use the name wherever it is present?
The major problem with this is that the “name” is just text. It doesn’t provide any semantic or formatting information.
Take this example:
<a href="foo"><em>bar</em> bas</a>
A browser will expose “bar bas” as the name of the link exactly as you might expect. But that “bar bas” is just text. What about the fact that “bar” was emphasised? If we just take the name, that information is lost. In this example:
<a href="foo"><img src="bar.png" alt="bar"> bas</a>
the name is again “bar bas”. But if we just take the name, the fact that “bar” is a graphic is lost.
These are overly simple, contrived examples, but imagine how this begins to matter once you have more complex content.
In short, content is more than just the name.

Just use it when aria-label/ledby is present.

Okay. So we can’t always use the name. But if aria-label/ledby is present, then we can use the name, right?
Wrong. To disprove this, all we have to do is take a landmark:
<div role="navigation" aria-label="Main">Lots of navigation links here</div>
Now, our screen reader comes along looking for content and sees there’s a name, which it happily uses as the content for the entire element. Oops. All of our navigation links just disappeared. All we have left is “Main”. (Of course, no screen reader actually does or has ever done this as far as I'm aware.)

That’s just silly. You obviously don’t do it for landmarks!

Well, sure, but this raises the question: when do we use it and when don’t we? “Common sense” isn’t sufficient for people, let alone computers. We need clear, unambiguous rules. There is no document which provides any such guidance for AT, so each product has to try to come up with its own rules. And thus, the cracks in the mythical utopia of interoperability begin to emerge.
That really sucks. But enough doom and gloom. Let’s try to come up with some rules here.

Render aria-label/ledby before the real content?

Yup, this would fix the landmark case. It is bad for a case like this, though:
<button aria-label="Close">X</button>
That “X” is meaningless semantically, so the author thoughtfully used aria-label. If we use both the name and content, we’ll get “Close X”. Yuck!

Landmarks are just special. You can still use aria-label/ledby as content for everything else.

Not so much. Consider this tweet-like example:
<li tabindex="-1" aria-labelledby="user message time">
  <a id="user" href="alice">@Alice</a>
  <a id="time" href="6min">6 minutes ago</a>
  <span id="message">Wow. This blog is horrible: <a href="http://blog.jantrid.net/">http://blog.jantrid.net/</a></span>
  <a href="conv">View conversation</a>
  <button>Reply</button>
</li>
Twitter.com uses this technique, though the code is obviously nothing like this. The “li” element is the tweet. It’s focusable and you can move between tweets by pressing j and k. The aria-labelledby means you get a nice, efficient summary experience when navigating between tweets; e.g. the time gets read last, the View conversation and Reply controls are excluded, etc. But if we used the name as content, we’d lose the formatting, links in the message, and the View conversation and Reply controls. If we render the name before the content, we end up with serious duplication.

Fine. But surely we can at least use it as content for links and buttons?

Believe it or not, I actually have good news this time: yes, you can. But why links and buttons? And what else falls into this category? We need a proper rule here, remember.
There are certain elements such as links, buttons, graphics, headings, tabs and menu items where the content is always what makes sense as the label. While it isn’t clear that it was intended for this purpose, the ARIA spec includes a characteristic of “Name From: contents” which neatly categorises these controls.
Thus, we reach our first solid rule: if the ARIA characteristic “Name From: contents” applies, aria-label/ledby should completely override the content.

What about check boxes and radio buttons?

Check boxes and radio buttons don’t quite fit this rule. The problem is that the label is often (but not always) presented separately from the check box element itself, as is the case with the standard HTML input tag:
<input id="Cheese" type="checkbox"><label for="cheese">Cheese</label>
The equivalent using ARIA would be:
<div role="checkbox" aria-labelledby="cheeseLabel">&nbsp;</div><div id="cheeseLabel">Cheese</div>
In most cases, a screen reader will see both the check box and label elements separately. If we say the name should always be rendered for check boxes, we’ll end up with double cheese: the first instance will be the name of the check box, with the second being the label element itself. Duplication is evil, primarily because it causes excessive verbosity.
Okay, so we choose one of them. But which one?

Ignore the label element, obviously. Duh.

Perhaps. In fact, WebKit and derivatives choose to strip out the label element altogether as far as accessibility is concerned in some cases. But what about the formatting and other semantic info?
Let’s try this example in Google Chrome, which has its roots in WebKit:
<input type="checkbox" id="agree"><label for="agree">I agree to the <a href="terms">terms and conditions</a></label>
The label element gets stripped out, leaving a check box and a link. If I read this in NVDA browse mode, I get:
check box not checked, I agree to the terms and conditions, link, Terms and conditions
Ug. That’s horrible. In contrast, this is what we get in Firefox (where the label isn’t stripped):
check box not checked, I agree to the, link, Terms and conditions
Ignoring the label element means we also lose its original position relative to other content. Particularly in tables, this can be really important, since the position of the label in the table might very much help you to understand the structure of the form or aid in navigation of the table.

Fine. So use the label element and ignore the name of the check box.

Great. You just broke this example:
<div role="checkbox" aria-label="Muahahaha">&nbsp;</div>

Make up your mind!

I know, right? The problem is that both of these suck.
The solution I eventually implemented in NVDA is that for check boxes and radio buttons, if the label is invisible, we do render the name as the content for the check box. Finally, another solid rule.

Sweet! And this applies to other form controls too, yeah?

Alas, no. The trouble with other form controls like text boxes, list boxes, combo boxes, sliders, etc. is that their label could never be considered their “content”. Their content is the actual stuff entered into the control; e.g. the text typed into a text box.
If the label is visible, it’s easy: we render the label element and ignore the name of the control. If it isn’t visible, currently, NVDA browse mode doesn’t present it at all.
To solve this, we need to present the label separately. For a flat document representation such as NVDA browse mode, this is tricky, since the label isn’t the “content” of anything. I think the best solution for NVDA here is to present the name of the control as meta information, but only if the label isn’t visible. I haven’t yet implemented this.

Rocking. Can the label override the content for divs, spans and table cells?

No, because if it did, again, we’d lose formatting and semantic info. These elements in particular can contain just about any amount of anything. Do we really want to risk losing that much formatting/info? See the Twitter example above for just a taste of what we might lose.
Another problem with this is the title attribute. Remember I mentioned that aria-label/ledby just alters what the browser exposes as the “name”? The problem is that other things can be exposed as the name, too. If there is no other name, the title attribute will be used if present. I’d say it’s quite likely that the title attribute has been used on quite a lot of divs and spans in the wild, perhaps even table cells. If we replaced the content in this case, that would be… rather unfortunate.
Some have argued that for table cells, we should at least append the aria-label/ledby. Aside from the nasty duplication that might result, this raises a new category of use cases: those where the label should be appended to the content, not override it. With a new category come the same questions: what are the rules for this category? And would this make sense for all use cases? It certainly seems sketchy to me, and sketchy just isn’t okay here. Again, we need solid, unambiguous rules.
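Pulling the rules so far together, the decision logic an AT ends up with looks something like the sketch below. The element model here is invented for illustration; real implementations are messier.
// Should aria-label/ledby (an explicit accessible name) replace the content?
function labelOverridesContent(element) {
  // Rule 1: where ARIA's "Name From: contents" applies (links, buttons,
  // headings, tabs, menu items, etc.), an explicit name completely
  // overrides the content.
  if (element.nameFromContents && element.hasExplicitName) {
    return true;
  }
  // Rule 2: for check boxes and radio buttons, render the name as the
  // content only if the label element isn't visible on screen.
  if (element.role === 'checkbox' || element.role === 'radio') {
    return element.hasExplicitName && !element.labelIsVisible;
  }
  // Everything else (landmarks, text boxes, divs, spans, table cells...):
  // the name never replaces the content.
  return false;
}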

Stop! Stop! I just can’t take it any more!

Yeah, I hear you. Welcome to my pain! But seriously, I hope this has given some insight into why this stuff is so complicated. It seems so simple when you consider a few use cases, but that simplicity starts to fall apart once you dig a little deeper. Trying to produce “common sense” behaviour for the multitude of use cases becomes extremely difficult, if not downright impossible.
If we want interoperability, we need solid rules. I’m not necessarily suggesting that this be compulsory or prescriptive; different AT products have different interaction models and we also need to allow for preferences and innovation. Right now, though, there’s absolutely nothing.

Friday, May 2, 2014

Deploying a Flask Web App as a Dynamic uWSGI App with Multiple Threads

I recently had to deploy a Flask web app for Hush Little Baby Early Childhood Music Classes (shameless plug) with uWSGI. (Sidenote: Flask + SQLAlchemy + WTForms = awesome.) I ran into an extremely exasperating issue which I thought I'd document here in case anyone else runs into it.

Despite the fact that uWSGI recommends that you run a separate instance for each app, I prefer the dynamic app approach. While I certainly understand why separate instances are recommended, I think per-app instances waste resources, especially when they have a lot of common dependencies, including Python itself. I also set uWSGI to use multiple threads. Unfortunately, with Flask, this is a recipe for disaster.

As soon as Flask is imported by a dynamic app in this configuration, uWSGI instantly hangs and stops responding altogether. The only option is to kill -9. After hours of late night testing, debugging, muttering, cursing, finally going to bed and then more of the same the next day, I finally thought to try disabling threads in uWSGI. And it… worked.

Still, I needed a little bit of concurrency, didn't want to use multiple processes and didn't want to abandon the dynamic app approach. It occurred to me that if it worked fine with per-app instances (I didn't actually test this, but surely someone would have reported such a problem) and a single thread, then it should work if Flask were imported before the threading stuff happened. This led me to discover the shared-pyimport option. Sure enough, if I specify flask as a shared import (though a non-shared import might work just as well), it works even with threads > 1. Hooray!
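
For anyone hitting the same wall, the relevant bits of configuration look something like this sketch. The socket path and thread count are placeholders, and you should check the option names against the uWSGI docs for your version:
[uwsgi]
master = true
socket = /run/uwsgi.sock
# Multiple threads were the trigger: a dynamic app importing Flask hung...
threads = 4
# ...unless Flask was imported up front, before threading was set up:
shared-pyimport = flask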

I still don't know if this is a bug in Flask, a Flask dependency or uWSGI or whether it's just a configuration that can never work for reasons I don't understand. I don't really have time to debug it, so I'm just happy I found a solution.

Wednesday, December 19, 2012

Josh's First Meaningful "Mum"?

Josh is fairly clingy with Jen at the moment, especially at night. One evening last week, Jen, Josh and I were all lying in bed, with me cuddling Josh. We were wondering whether Josh would be happy with that this night. Soon after, Josh, who has been babbling mamama for a while now, said with total clarity, "mmmuummm." Surely that was just amusing but unintentional? He doesn't know what Mum means yet. A few seconds later, "mmmmmuuuuummmmm." Right. Even if it was unintentional, how could we resist that? Jen took him and he settled without further protest.

Tuesday, October 9, 2012

Our Lounge Room Entertainment Setup

Introduction

For a while now, we've had a Samsung LCD TV, Samsung Blu-ray player and Palsonic PVR in our lounge room, as well as an old 2005 notebook running Windows 7 connected to the TV for watching video files, media on the web, etc. I've recently made some major enhancements to this setup. I think they're pretty cool, cost effective and don't require lots of different devices, so I thought I'd document them here.

Decent Audio

TV speakers really suck. For a while now, we've wanted to be able to listen to audio, particularly music, in decent quality. So, after my usual several months of research and deliberation, I bought a set of Audioengine A5+ powered bookshelf speakers. They cost around AU$400 and we're very much loving them. They're quite small and the amp is built into the left speaker, which suits well given the limited space on the TV cabinet. They have dual inputs, enabling both the notebook and TV to be connected simultaneously.

Music

I've used foobar2000 as my audio player for years and saw no reason to diverge from that here. Our music library is now on the notebook and added to foobar2000. In addition, I'm gradually building playlists for various occasions/moods.

Remote Control

Having to interact with the notebook to control music sucks, so I installed the TouchRemote plugin for foobar2000. This enables us to control everything, including browsing and searching the entire library, from our iPhones and iPad using the Remote iOS app. (I could have used iTunes for this, but I despise iTunes. :))

Radio

We don't own a digital radio. However, we mostly listen to ABC radio stations, which all have internet streams. I added all of these internet streams to a separate "Radio Stations" playlist in foobar2000. This shows up in Remote, so listening to radio can be controlled from there too.

AirPlay

Although our music library is on the notebook, there are times when we might have audio on one of our iOS devices which we want to hear on the lounge room speakers. Of course, we could connect the device to the speakers, but that's inconvenient and sooo 20th century. Apple AirPlay allows media from iOS devices to be streamed wirelessly to a compatible receiver. I installed Shairport4w on the notebook, which enables it to be used as an AirPlay audio receiver.

This has already been useful in a way I didn't initially consider. Michael and Nicole were over for dinner and Michael wanted to play us an album he had on his iPhone. He was able to simply stream it using AirPlay without even getting up from the couch and his glass of red wine. Facilitating laziness is awesome. :)

Video Files

For video files, we use Media Player Classic - Home Cinema. We don't watch too many of these, so a proper library, etc. isn't important. However, we can't currently control it remotely, which is a minor annoyance. There are several ways we could do this such as the RemoteX Premium iOS app or a web server plugin, but requiring yet another app or web browser is ugly. I wish there were a way to control this using the iOS Remote app. :(

AirPrint

This isn't entertainment, but it hardly warranted a separate post. We own a Canon MP560 printer/scanner, which we're very happy with. It has built-in Wi-Fi, which is nice because it means the printer can live in a separate room and we can print from anywhere in the house. Unfortunately, it doesn't support Apple AirPrint, which means Jen, who primarily uses her iPad, can't print to it. To solve this, I set up the printer on the notebook, shared the printer and installed AirPrint for Windows. It works very nicely.

Tuesday, November 22, 2011

Frustrations with NVDA-support

Because NVDA is free software, we do not have the resources to provide free, direct technical support to users. Therefore, the NVDA-support email list was set up as "a place where users of the NVDA screen reader are able to ask questions about how to use NVDA".

A common complaint about mailing lists like this is that they produce a lot of messages. Some users cannot (or do not wish to) handle this high email traffic and therefore end up unsubscribing from the list fairly quickly, thus limiting its usefulness. Instead, users contact us directly or are driven away from the project. When directed to the tracker and email lists, one user who contacted me directly complained about having to "sign up to a thousand lists".

To combat this, we decided to selectively moderate the list, as full moderation is too time consuming. Users who broke (or bordered on breaking) list rules or otherwise had the potential to generate a lot of unnecessary traffic were moderated. Any post from those users that was irrelevant, unnecessary or might start such a thread was rejected.

Unfortunately, several users have been unhappy with or even outright offended by this. Today in particular, I rejected a post from a user (previously moderated for an off-topic post) which, while intended to be helpful, provided an incorrect (or at least very indirect) answer which I believed would cause more questions than it answered. No accusation was made, but this user took this very personally and made it clear that he would no longer support the project in any way.

Another common gripe is that users are often told to read the documentation when they ask questions. If it seems that a user hasn't even tried to read the documentation before asking a question, I do not think this is unwarranted. If they've at least tried and don't understand, this is a different matter entirely. If they don't wish to make the effort to at least try to understand the documentation, they should not expect free support.

It seems I can't win. I tried to do what I thought best for the NVDA community in limiting the traffic on the list so more users would be encouraged to use it. As a result, I'm accused of being unfair, draconian and ungrateful. Therefore, I've disabled all moderation on the list and I am withdrawing from the list myself for a while. I am done with support for now.

Tuesday, November 15, 2011

Inexcusable Inaccessibility: APC Goalball Coverage

Australia are currently hosting the IBSA Africa Oceania Goalball Regional Championships, where both men's and women's teams are playing to qualify for the London 2012 Paralympic Games. Go Australia! :) They've provided a live internet stream with commentary, which is fantastic. Unfortunately, the Flash video player they're using is completely inaccessible to screen reader users. (Technically, it uses windowless Flash, which is not accessible.) Worse, the page doesn't even play the video automatically when it opens, which means you have no choice but to use the video player controls. Allow me to emphasise the absolute, inexcusable absurdity of this situation: they are broadcasting a sport for the blind, but the broadcast is inaccessible to blind people.

Digging through their code, it's not too hard to work around this. I was able to come up with a link which enables auto-play, so at least it begins playing automatically, avoiding the need to use the video player controls. However, the average user would not have been able to do this themselves.

Ideally, everything should be accessible to all users. Sometimes, for whatever reasons (valid or not), this isn't possible. When it isn't, at least consider your target audience. If, for example, a large number of them are probably going to be blind, it might just make sense to implement and test accessibility for screen reader users. The APC are using an external service to provide the stream. Regardless, they should have tested and resolved the problem somehow or, at the very least, openly provided a workaround such as the one I gave above.

It's worth noting that Adobe clearly document that windowless (transparent or opaque) Flash is inaccessible.

Tuesday, August 9, 2011

Making PennyTel Accessible with Greasemonkey

PennyTel is an incredibly cheap VoIP provider serving Australia (among other countries) which I have been using for several years. For the most part, I am fairly happy with them, especially the price. Unfortunately, their customer portal has many accessibility problems, and despite a polite request from me quite some time ago, nothing has been done to rectify this.

The biggest issue is that there are many buttons on the site which are presented using clickable graphics, but they have been marked with @alt="", indicating that the graphics are for visual presentation/layout only and suggesting to screen readers that they shouldn't be presented to the user. Obviously, this is very wrong, since these graphics are buttons which the user might wish to activate. It's bad enough that no text alternative is provided, but specifying empty text is extremely incorrect. With the current version of NVDA, this issue makes the portal practically unusable.

It recently occurred to me that it might be possible to hack around this with Greasemonkey. In short, Greasemonkey is a Firefox add-on which allows you to "customize the way a web page displays or behaves, by using small bits of JavaScript".

This turned out to be a great success. I now have a Greasemonkey script that not only gives friendly labels to many graphic buttons, but also injects ARIA to transform these graphics into buttons. In addition, there are parts of the portal which use graphics to indicate which option has been selected and the script turns these into radio buttons using ARIA. There is a navigation bar where the items are only clickable text, which the script changes into links for quicker navigation using ARIA. Finally, @alt="" is removed from all other clickable graphics which the script doesn't yet know about, which at least allows screen readers to present the graphic using their own algorithms to determine a label. Once the script is installed, this all happens transparently without any special action.
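
As a sketch of the technique (not the actual script), the core transformations look something like this; the selectors and labels are invented, since the real ones depend on PennyTel's markup.
// Turn known clickable graphics with alt="" into labelled ARIA buttons.
const LABELS = { 'btnSave.gif': 'Save', 'btnCancel.gif': 'Cancel' };
for (const img of document.querySelectorAll('img[alt=""][onclick]')) {
  const file = img.src.split('/').pop();
  if (file in LABELS) {
    img.setAttribute('role', 'button');
    img.setAttribute('aria-label', LABELS[file]);
  } else {
    // For graphics the script doesn't know about, drop the empty alt so
    // screen readers can fall back to their own labelling heuristics.
    img.removeAttribute('alt');
  }
}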

This took me a few hours, though this was mostly because I had never used Greasemonkey and only had a very basic knowledge of JavaScript before. Aside from the fact that PennyTel is now quite usable for me, this is also an exciting demonstration of how accessibility improvements for a site can be "scripted" within the browser like this, independent of any particular screen reader.

If you happen to have a PennyTel account and would find this useful yourself, you can grab the script. I'm sure there are more things I can improve, but this is sufficient to make the site quite usable.

Monday, July 25, 2011

Morning Limerick

There once was a guy named James Teh
Who disliked the start of the day.
He hated awaking,
Was bored by fast-breaking
And wished it would all go away.

Thursday, March 3, 2011

Responsibility for Windows Application Accessibility

When an assistive technology (AT) user discovers an application that is inaccessible in some way, they will generally hold one of two parties responsible: the AT developer or the application developer.

In the Apple world, the application developer is generally responsible for ensuring accessibility. Users don't tend to complain to Apple when an application is inaccessible; they complain to the application developer. More often than not, this is correct. An accessibility framework has been provided to facilitate application accessibility and Apple's assistive technologies utilise this framework, so it's up to the application to fulfil its part of the bargain.

In contrast, in the Windows world, the AT developer is generally held responsible. In the past, before there were standard, fully functional accessibility frameworks, I guess this was fair to some extent because application developers had no way of making their applications accessible out-of-the-box. As a result, AT developers worked around these problems themselves through application specific scripting and hacks. However, Windows has had standard rich accessibility frameworks such as IAccessible2 and UI Automation for several years now. Therefore, this is no longer an acceptable justification. Despite this, the general expectation still seems to be that AT developers are primarily responsible. For example, we constantly receive bug reports stating that a certain application does not work with NVDA.

Some might argue another reason for this situation is that application developers have previously been unable to test the accessibility of their applications because of the high cost of commercial ATs. With the availability of free ATs such as NVDA for several years now, this too is no longer an acceptable excuse.

So why is this still the case in the Windows world? If it's simply a ghost from the past, we need to move on. Maybe it's due to a desire for competitive advantage among AT vendors, but the mission of improving accessibility and serving users as well as possible should be more important. If it results from poor or incomplete support for standard accessibility frameworks, ATs need to resolve this. Inadequate or missing support for accessibility in GUI toolkits is probably part of the problem. We need to work to fix this. Perhaps it's because of a lack of documentation and common knowledge. In that case, the accessibility/AT industry needs to work to rectify this. Maybe there just needs to be more advocacy about application accessibility. Are there other reasons? I'd appreciate your thoughts.

Whatever the reasons, I believe it's important that this changes. Proprietary solutions implemented for individual ATs are often suboptimal. Even if this wasn't the case, implementing such solutions in multiple ATs seems redundant and wasteful. Finally, the more applications that are accessible using standard mechanisms, the more users will benefit.

Tuesday, February 22, 2011

My Quest for a New Mobile Phone

Disclaimer: This is primarily based on my own personal experience. Also, this all happened last year, so some of the finer details are a bit vague now.

Early last year, my trusty 6 year old Nokia 6600 was finally starting to die and I decided it was past time to move on. (Amusingly, that phone even survived being accidentally dunked in a glass of wine.) My aim was to satisfy all of my portable technology needs with one device, including phone, email, web browser, audio player (standard 3.5 mm audio socket essential), synchronisable calendar, synchronisable contact manager, note taker, ebook reader and portable file storage. And so the quest began.

Making the (First) Choice


My ideal mobile platform was Android. Aside from satisfying all of my needs, it is an open, modern platform. Unfortunately, although I seriously entertained the idea, a great deal of research and playing with the Android emulator led me to realise that Android's accessibility was at an unacceptably poor state for me.

I considered the iPhone. I've written about the iPhone's out-of-the-box accessibility to blind people before and have played with an iPhone several times since. Although I was pretty impressed, especially by the fact that VoiceOver comes out-of-the-box, there were two major problems with the iPhone for me. First, I dislike the closed nature of the iPhone environment, commonly known as the "walled garden" or "do things Apple's way or not at all". Aside from the principle (you all know I'm a big advocate for openness), I would be unable to play ogg vorbis (my audio format of choice) in iPod, I would have to transfer music and files with iTunes (which I detest), I couldn't use it as a portable file storage device, and I would be limited to apps that Apple permitted (unless I wanted to jailbreak). Second, I wanted a device with a physical keyboard. In the end, I decided against the iPhone.

I briefly considered another Symbian Series 60 phone. However, based on past experience (both mine and others'), I didn't think I would be able to play audio and use the screen reader simultaneously, which immediately disqualified it for me, although I've since been informed that is no longer true on some newer phones. I also feel it is a dying platform. There are probably some other reasons I discounted it, but I can't remember them now. If nothing else, I wasn't entirely happy with it and wanted a change.

Finally, I settled on Windows Mobile, specifically the Sony Ericsson Xperia X1, with Mobile Speak. I guess Windows Mobile is a dying platform too, but at the time, I felt it was perhaps less so and provided more functionality for me. While the operating system itself isn't open, you can develop and install apps as you please without being restricted to a single app store. There are several ogg vorbis players for Windows Mobile. I also had the option of buying Mobile Geo for navigation if I wanted to later. I was warned by someone who played with an older phone that Mobile Speak on Windows Mobile was fairly unresponsive, but unable to test this myself, I hoped that it might be better with a less resource intensive voice such as Fonix and/or a newer phone or that I'd get used to it.

Frustration, Pain and Misery


A bit less than $800 later, I got my new phone and Mobile Speak in June. I expected and accepted that it would take me some time to get used to it. I loved finally being able to access the internet from my phone and play audio through proper stereo headphones. However, despite this, the next few months were just downright painful and frequently induced rage bordering on violence. I often had the urge to throw my new phone across the room several times a day.

Primarily, this was due to Mobile Speak. I found it to be hideously unresponsive, unstable, unreliable, inconsistent and otherwise buggy as all hell.
  • The unresponsiveness proved to be unacceptable for me, often taking around half a second to respond to input and events, even using Fonix. Aside from the general inefficiency this caused, this made reading text incredibly tedious, and despite the physical keyboard, typing was painful due to the slow response to backspace and cursor keys.
  • Mobile Speak crashed or froze far too often and there was no way to resurrect it without restarting the phone.
  • In Internet Explorer, working with form controls was extremely inconsistent and unreliable, especially multi-line editable text fields. Quick navigation (moving by heading, etc.) was very slow and failed to work altogether in many cases. On my phone, Google services (including Google Search, even the mobile version, of all things!) refused to render at all.
  • I encountered problems when reading email as well. Sometimes, Mobile Speak wouldn't render emails. Other times, it wouldn't let me navigate to the headers of the email, which is essential if you want to download the rest of a message that hasn't been fully downloaded.
  • Reading text in Word Mobile was even slower than everywhere else, which made reading ebooks infeasible.
  • Braille display scrolling was either broken or unintuitive. On my 40-cell display, Mobile Speak only seemed to scroll half the display and I couldn't find a way to change this.
  • Definitely quite a few other bugs I can't remember all of the details about...
It's worth noting that I'm not saying that this is all entirely Mobile Speak's fault. I suspect Windows Mobile and other applications may play a part in this dodginess.

I had two other major gripes with Mobile Speak.
  • Despite years of experience with screen readers, I found the Mobile Speak commands, especially the touch interface, to be tedious and difficult to learn. The touch interface is inherently slow to use because you need to wait after certain taps to avoid them being construed as double or triple taps.
  • Mobile Speak's phone number licensing model was a major annoyance for me when I went overseas for a few days. Mobile Speak allows you to use it with a SIM card with a different number for 12 hours, but you have to reinsert the original SIM card after 12 hours if you want Mobile Speak to continue functioning as a licensed copy. Also, I seem to recall that this also applied if the phone was in airplane mode.


There were other things that irritated me about my phone and its applications.
  • I found Windows Mobile in general to be very sluggish. Even something as fundamental to a phone as dialling on the phone keypad or adjusting the volume was incredibly laggy, sometimes taking several seconds to respond to key presses.
  • Far too many apps, including Google Maps and both of the free ogg vorbis players I tried, had significant accessibility problems.
  • Windows Media doesn't have support for bookmarking, which made reading audio books infeasible. There was a paid app that provided this and other functionality I wanted, but I wasn't willing to pay for it in case I discovered it too had major accessibility problems.
  • Windows Mobile doesn't have enough levels on its volume control.
  • If the phone is switched to silent, all audio is silenced, including Mobile Speak.
  • Internet Explorer doesn't support tabbed browsing!
  • Windows Mobile only supports one Exchange Active Sync account, which meant I couldn't maintain separate personal and work calendars.
  • More...


The Snapping Point


In the end, after less than 6 months, I just couldn't take it any more. I tried to learn to live with it for at least a couple of years, as I'd already spent so much money on it, but it was truly unbearable. It's worth noting that a close friend of mine had a very similar experience with Windows Mobile and also gave up in equivalent disgust around the same time. It particularly angers me that I paid $315 for a piece of software as buggy as Mobile Speak. I started playing with Jen's iPhone a bit more, and finally, I gave in and got my own.

The iPhone: Peace at Last


For reasons I mentioned above, I felt like I was going to the dark side when I made the decision to switch to the iPhone. Among other things, it's a bit hypocritical of me, given my belief in and advocacy for openness. Nevertheless, I have not looked back once since I got it. It has truly changed my life.

The in-built accessibility of the iPhone is amazing. I strongly believe that accessibility should not incur an extra cost for the user, and Apple have delivered exactly that. VoiceOver is very responsive. Usage is fairly intuitive. All of the in-built apps and the majority of third party apps are accessible. Once you get used to it and start to remember where things are on the screen, navigating with the touch screen becomes incredibly efficient. The support for braille displays is excellent; I can see this being very useful next time I need to give a presentation. The triple-click home feature means that I can even toggle VoiceOver on Jen's phone when needed, which has been really useful for us when she is driving and needs me to read directions.

I still hate iTunes, but thankfully, I rarely have to use it. I manage music and other audio on my phone using the iPod manager component for my audio player, foobar2000, which is excellent and even transcodes files in unsupported formats on the fly. The iPod app is great, supporting gapless playback for music and automatic bookmarks for audio books and podcasts.

Other highlights:
  • Very nice email and web browsing experience.
  • Push notifications for mail, Twitter, Facebook and instant messaging.
  • Multiple calendars.
  • Skype on my phone, which is far nicer than being tied to my computer when on a Skype call.
  • Voice control, which works well most of the time.
  • Smooth reading of ebooks using iBooks.


As a result of all this, I find I spend far less time in front of my computer outside of work hours. Also, when I'm away from home, even on holiday for a week, I often just take my phone. Previously, I had to take my notebook almost everywhere.

Like all things, the iPhone isn't perfect. I still dislike the walled garden, and I have to live with transcoded audio and can't use my iPhone as a USB storage device because of it. I am definitely slower at typing on the iPhone than I was on the numeric keypad on my old Nokia 6600, although perhaps surprisingly, I'm probably faster on the iPhone than I was on the Xperia X1. There are definitely bugs, some of which I encounter on a daily basis and are very annoying.

Even so, I love the iPhone. I'm willing to make some sacrifices, and I can live with bugs that are at least consistent and easy enough to work around. On the rare occasions that VoiceOver crashes, I can easily restart it with a triple click of the Home button. I wish I'd gone for the iPhone in the first place and not wasted so much money on the Windows Mobile solution, but ah well, live and learn.

Tuesday, February 15, 2011

Brilliant Email

The following email sent to NV Access administration yesterday is an amazing mastery of politeness, eloquence, intellect and linguistic ability. I've reproduced it verbatim below, except for the obfuscation of some words for reasons that will become clear as you read. Enjoy!
From: Dave. I lost my cookie at the disco. <computerguy125@****>
To: admin@****
Date: Sun, 13 Feb 2011 23:40:16 -0500
Subject:

hEY MOTHER F****R YOU SCREWED UP MY LAPTOP.  fIX YOUR SCREEN READER.  bLIND ASS MOTHER F****R.  f***ING BLINKY.  cHANGE YOUR SHORTCUT KEY SO IT DOESN'T CONFLICT WITH SYSTEM ACCESS TOO.  yOU DON'T KNOW THAT THEN LOOK IT UP.  mOTEHR F****R.

-- 
Email services provided by the System Access Mobile Network.  Visit www.serotek.com to learn more about accessibility anywhere.

Thursday, October 28, 2010

The Poor State of Android Accessibility

The Android mobile platform really excites me. It is open (which cannot be said of the iPhone) and is incredibly successful in many respects. I would almost certainly choose an Android phone... except for the poor state of Android accessibility.

Note: I will primarily discuss access for blind users here, since that is what I am most familiar with. However, some of this applies to other disabilities as well.

In the Beginning


In the beginning, there was no accessibility whatsoever in Android. It would have made sense to design it from the start with accessibility in mind, which would have made it much easier, but as is sadly so often the case, this wasn't done. Nevertheless, many other platforms have managed to recover from this oversight, some with great success.

Eyes-Free Project


Then came the Eyes-Free Project, which created a suite of self-voicing applications to enable blind users to use many functions of the phone. Requiring blind users to use these special applications limits the functionality they can access and completely isolates them from the experience of other users. This is just a small step away from a device designed only for blind users. I guess this is better than nothing, but in the long-term, this is unacceptable.

Integrated Accessibility API and Services


With the release of Android 1.6 came an accessibility API integrated into the core of Android, as well as a screen reader (TalkBack) and other accessibility services. A developer outside Google also began working on a screen reader called Spiel. This meant that blind users could now access standard Android applications just like everyone else.

Unfortunately, the Android accessibility API is severely limited. All it can do is send events when something notable happens in the user interface. An accessibility service such as a screen reader can query these events for specific information (such as the text of an object which has been activated), but no other interaction or queries are possible. This means it isn't possible to retrieve information about other objects on the screen unless they are activated, which makes screen review impossible among other things. Even the now very dated Microsoft Active Accessibility (the core accessibility API used in Windows), with its many limitations and flaws, allows you to explore, query and interact with objects.

Inability to Globally Intercept Input


In addition, it is not possible for an accessibility service to globally intercept presses on the keyboard or touch screen. Not only does this mean that an accessibility service cannot provide keyboard/touch screen commands for screen review, silencing speech, changing settings, etc., but it also makes touch screen accessibility for blind users impossible. A blind user needs to be able to explore the touch screen without unintentionally activating controls, which can't be done unless the screen reader can provide special handling of the touch screen.

Inaccessible Web Rendering Engine


The web rendering engine used in Android is inaccessible. In fact, it's probably impossible to make it accessible at present due to Android's severely limited accessibility framework, as a user needs to be able to explore all objects on a web page. This means that the in-built web browser, email client and most other applications that display web content are inaccessible. This is totally unacceptable for a modern smart phone.

IDEAL Apps4Android's Accessible Email Client and Web Browser


IDEAL Apps4Android released both an accessible email client and web browser. The accessibility enhancements to the K9 email client (on which their application is based) have since been incorporated into K9 itself, which is fantastic. However, access to the web still requires a separate "accessible" web browser. While other developers can also integrate this web accessibility support into their applications, it is essentially a set of self-voicing scripts which need to be embedded in the application. This is rather inelegant and is very much "bolt-on accessibility" instead of accessibility being integrated into the web rendering engine itself. This isn't to criticise IDEAL: they did the best they could given the limitations of the Android accessibility API and should be commended. Nevertheless, it is an unsatisfactory situation.

More "Accessible" Apps


There are quite a few other applications aside from those mentioned above that have been designed specifically as "accessible" applications, again isolating disabled users from the normal applications used by everyone else. Again, this isolating redundancy is largely due to Android's severely limited accessibility framework.

Solution


Unfortunately, even though Android is open source, solving this problem is rather difficult for people outside the core Android development team because it will require changes to the core of Android. The current accessibility framework needs to be significantly enhanced or perhaps even redesigned, and core applications need to take advantage of this improved framework.

Conclusion


While significant headway has been made concerning accessibility in Android 1.6 and beyond, the situation is far from satisfactory. Android is usable by blind users now, but it is certainly not optimal or straightforward. In addition, the implementation is poorly designed and inelegant. This situation is only going to get messier until this problem is solved.

I find it extremely frustrating that Android accessibility is in such a poor state. It seems that Google learnt nothing from the accessibility lessons of the past. This mess could have been avoided if the accessibility framework had been carefully designed, rather than the half-done job we have now. Good, thorough design is one of the reasons that iPhone accessibility is so brilliant and "just works".

Friday, September 3, 2010

Why Can't Microsoft Build a Screen Reader into Windows?

As Windows screen reader users will know, there is no screen reader included in Windows. Instead, users requiring a screen reader must obtain and install a third party product. Yes, there is Microsoft Narrator, but even Microsoft know that this is hardly worthy of the name "screen reader". :)

A few years ago, Apple revolutionised the accessibility industry by building a fully fledged screen reader, VoiceOver, right into Mac OS X. Ever since, many have asked why Microsoft can't do the same for Windows. Many are angry with Microsoft for this continued lack of built-in accessibility, some using it as support for the "why Apple is better than Microsoft" argument.

Here's some food for thought. I'm not sure Microsoft could do this even if they wanted to; their hands are probably tied in a legal sense. If they did, they could very likely be sued by assistive technology vendors for anti-competitive conduct, just as they have been sued several times concerning their bundling of Internet Explorer with Windows. Once again, Apple don't have to be concerned with this because there wasn't an existing screen reader on Mac OS X and they don't have the dominant position in the market.

I have no evidence for this argument. Perhaps I'm wrong, but history suggests that it is highly likely that I'm not.

Even as one of the lead developers of NVDA, I'm first and foremost a blind user who wants the best possible access, both for myself and other blind users. As such, I would very much welcome a screen reader built into Windows. Competition is good. A built-in screen reader doesn't mean that other screen readers can't exist. If the built-in solution were good enough, there would be no need for NVDA to exist. If it weren't, NVDA would push accessibility forward through innovation and competition.

Sunday, January 31, 2010

Using Touch Sensitive Buttons on Modern Notebooks without Sight

Many (perhaps the majority of) notebook/laptop computers now have a strip of touch sensitive keys above the normal keyboard. These keys are not tactile in any way and, depending on the computer, provide such things as multimedia controls (volume, mute, play, stop, etc.), toggles for wireless radios and other special functions. I've always been concerned about how I would access such controls without any sight. Unfortunately, in my search for a new notebook, all of the notebooks which interested me included them, so I decided to just live with it. I bought an Acer Aspire 3935 which, among other things, has touch sensitive keys to toggle bluetooth and wifi. I began to ponder ways to place some sort of tactile marker on or near these keys so I could find them. However, it recently occurred to me that there's already a perfectly good tactile locator for these keys: the function keys on the normal keyboard, which lie immediately beneath the touch sensitive strip. For example, on my computer, the key for toggling bluetooth is just above the f11 key, so all I have to do is locate the f11 key and move my finger directly up to hit the bluetooth toggle key. This is blindingly obvious in hindsight, but well... hindsight is a wonderful thing. :) Of course, sighted assistance may be required initially to find the keys if trial and error is insufficient.

Monday, January 25, 2010

Wedding Reflections

On Saturday, 12 December 2009, Jen and I got married. :) The wedding was perfect; I could not have asked for anything more. First, my groomsmen were fantastic and I had a great time getting ready with them. The ceremony was absolutely beautiful; there were a lot of happy tears in the chapel. Although I knew all of the music and had spent time arranging and rehearsing it, this was the time for me to just open my mind and heart, to exist entirely in the moment, to truly listen to and feel its meaning. Every word of the ceremony - the celebrant's sections, the music, the reading and the vows - was sincere and deeply significant to us. The reception was very enjoyable and memorable, especially the speeches, all of which were terrific and touching.

Throughout the wedding, I was quite proud and happy to be the centre of attention alongside Jen. :) I was truly humbled and awed by the love and respect for us that everyone - our family and friends - showed. And of course, the whole point of all of this was that Jen and I were publicly declaring our love for each other and intent to be together for the rest of our lives.

The wedding also helped me on a personal level in ways I had not expected. I am a self-critical perfectionist by nature, sometimes to an almost self-destructive extent. I waste so much time regretting, wishing I could do things better and worrying about both the past and future that I almost miss out on the present. Nothing I do is ever good enough for me. However, the wedding was a transcendental experience and helped me to see beyond this. I have made many mistakes, I've downright failed sometimes, but I realised that my path, with all of its ups and downs, had led me to this moment and I wouldn't change it for the world. If I'd found such happiness and love and earnt the respect of so many, especially Jen :), I must have done something right overall. It has left me with a lasting, wondrous sense of clarity, relief, peace, contentment and confidence. I still have a great deal to live and learn, but I am who and where I want to be. Now I just have to try to take this state of mind into the new year and beyond. :)

Thursday, January 14, 2010

Honeymoon, Part 9 (unfinished)

Note: This post has been floating around on my computer for years, but I never did finish it. I've decided to just post it in its incomplete form; something is better than nothing. :)

Jamie: The last destination of our honeymoon was Kuala Lumpur, Malaysia. Aside from breaking up our long flight back to Australia, we wanted to catch up with some of my relatives: my Uncle Jerry, Auntie Janet and Dad's cousin S.Y. Upon arriving in KL, we were picked up by my uncle and aunt, who (very generously) drove us to our hotel, also giving us a brief tour of KL on the way. KL is a city that never seems to sleep. It is incredibly well lit at night. Most shops don't close until 10pm, some even later. The city is busy with people on the streets and in shops even late on a weekday evening.

The 5-star Traders Hotel where we stayed was absolutely fantastic and incredibly well priced. Located in the heart of the new city centre, it was within walking distance of everything we wanted to visit.

The first thing we did after emerging from our room on Tuesday morning was to find some brunch. Both Dad and Uncle Jerry recommended that we visit the food court in the Pavilion, which is a huge shopping centre. And wow, what a food court it was, sprawling across most of one level of the complex. Food in Malaysia is so damned cheap and so delicious. Among other things, we had Malaysian satay, which is just incredible and miles above the satay one gets in Australia.

Later in the day, we visited KL's aquarium, located in the KL Convention Centre. Much of the display was obviously visual and informational, but there were also three touch pools. I was able to touch a little shark, a stingray, a horse-shoe crab and a sea cucumber, all of which were fascinating. The most bizarre was definitely the last, which was just... squishy. You can literally squish it in your hand; it's a little creepy.

On Wednesday, we met my aunt and uncle again, along with S.Y. I'd never met S.Y. before, despite having heard a lot about her over the years, so it was nice to finally meet her. After spending a very nice hour or so chatting at the hotel, during which we had to call hotel staff to rescue Auntie Janet from the bathroom due to a broken lock :), we went to lunch at a restaurant specialising in Penang food and ate a hell of a lot of it. We were introduced to two delicious side dishes and desserts which I'd never had or seen in Australia, all of which I will miss.

On a random impulse, Uncle Jerry, Jen and I decided to visit the music science section of KL's science centre. ...

As I write this, I'm on the plane back to Australia. The honeymoon is over. I'm a little sad, but also glad to be coming home and looking forward to seeing everyone again. It has been a fantastic and memorable trip. We really have had the time of our lives.

Honeymoon, Part 8 (unfinished)

Note: This post has been floating around on my computer for years, but I never did finish it. I've decided to just post it in its incomplete form; something is better than nothing. :)

Jamie: Our journey from Florence to Cork in Ireland was extremely long, boring and tedious. We left our hotel in Florence at around 6am and didn't arrive at our hotel in Cork until nearly 10pm. Our train from Florence to Rome was running late, so we were worried we'd miss our plane to Dublin, but thankfully, we made it in time. Due to icy roads, we were told it was going to be extremely difficult to get a cab to the hotel, but fortunately, we were okay there, too. These are yet another couple of proverbial travel-related bullets we've dodged this trip. We also found out that the weather in Cork was uncharacteristically cold. Soon after, we started hearing endless talk about the "big freeze" on the news. :)

Our hotel in Cork, The Ambassador, was a very nice hotel indeed, probably the best of the trip so far. This was quite fortuitous, as we spent a considerable amount of time there, partly because we wanted to lie around and relax a bit and also because of the icy roads and extreme cold at night. It resides on a hill, which afforded Jen a spectacular view of the city from our room.

We both very much enjoyed the atmosphere of Cork. Everyone we met was extremely warm, friendly and helpful. Of course, there was an abundance of Irish pubs. On our first day, we stopped at a pub for a cup of hot port and one of hot whisky. Yum! (I love warm alcoholic beverages.)

On our last day there, we took a bus to Blarney, a very small town about 15 minutes' drive from Cork. Its prominent feature is Blarney Castle. ...

Wednesday, January 13, 2010

Honeymoon, Part 7


Jamie: We had a pretty quiet new year's day, having slept rather badly due to the insane, spectacular, long-lasting fireworks and other celebrations the previous night.

The view out of our window just before midnight. Everyone was moving toward Piazza del Popolo to watch the fireworks - but we had a great view from the hotel.

Before leaving Rome, we visited the Spanish Steps and the Trevi Fountain. We didn't walk all the way up the Spanish Steps due to major crowding, but we did walk about half way up. I was fascinated to see such a long, wide set of steps. There are no turns; it just goes straight up with landings in between. It was difficult for me to get a true sense of the Trevi Fountain, as much of it was out of my reach, but its width, its length and the volume of water cascading therein were pretty spectacular.


The beautiful Trevi Fountain.

I've been researching the various places before or soon after we visited them. Associating the history with the real thing is quite fascinating.

We arrived in Florence on Saturday afternoon. Over a half bottle of wine in the hotel bar, we saw a brochure for a tour company called FunInTuscany which does wine tours and Tuscan cooking classes, among other things. We'd wanted to do something like this while in Europe, but hadn't had any luck booking anything before we left. Tours were either hideously over-priced or unavailable at this time of year, so we'd pretty much given up on being able to do it. We called FunInTuscany and were delighted to discover that we were able to book a combined wine tour and cooking class for the next day.

It turned out to be probably the best day of our honeymoon. After a rocky start (we couldn't find the meeting point and were worried we'd miss the tour), we found the tour van, were introduced to our guide and began our journey. Aside from us, there were only two others on the tour.


In the van on the way to San Gimignano. We were excited!

The first leg of the journey was about a 40-minute drive. Our first stop was San Gimignano, a small, walled medieval town.


San Gimignano from a distance.

Its 13th-century medieval architecture is incredibly well preserved. I was able to get a good sense of this; it just "feels" very old, with its worn, solid stone walls, cobbled streets and frequent narrow alleyways. We spent about 45 minutes there, during which we became more familiar with our guide, who was extremely warm and friendly.

After leaving the town, we drove to the country villa where we were to have our cooking class. This is where the real fun began. We were introduced to our very friendly chef/instructor (and his mother, who also helped out) and a few minutes later, our lesson began.


Jamie, the chef.


The ingredients.

First, we were taught how to make pici pasta from scratch. This was interesting, very much hands-on and, once you have the basic idea, much simpler than I had expected. I knew that fresh pasta was basically dough, but somehow, I hadn't imagined that making and manipulating it would be just like any other dough. Also, I'd never had or heard of pici pasta before.

Rolling the pasta.


Our home-made pasta.


After this, we made two pasta sauces, one tomato-based and one cheese-based.


Making the sauces.

Our master-chef teacher, Fuglio.

His mama.

We learnt that Tuscans use a hell of a lot of extra virgin olive oil in their cooking. :) We used cayenne pepper in both of the sauces, which is something we'd never used ourselves before and discovered that we quite like. Subsequently, we made a salad which, among other things, included crumbled three-day-old bread soaked in water!


We then made two kinds of bruschetta, as well as preparing three kinds of cheese with various accompaniments. One of the cheeses was covered in honey, sultanas, pine nuts and freshly ground nutmeg. Yum! Finally, we observed the preparation of chicken which would later be cooked in a sauce primarily consisting of orange juice.

The gorgeous table setting.

After quite a bit of socialising and a glass of red wine while we waited for our guide and his friend to return, we proceeded to eat. The food was delicious. In particular, the salad was divine; I've never had anything like it before. I also very much enjoyed the cheeses, particularly the one with honey, sultanas, etc.; I do like cheese, but especially like it with nice accompaniments. Each course was accompanied by a different wine. It was a long, lingering, social lunch - the best kind! Overall, I was thoroughly impressed by the fruits of our labour, though of course we had our instructors observing and making corrections as we worked. Whether we can replicate it by ourselves remains to be seen. :)


Garlic and oil bruschetta. Mmm, garlic and oil...

Tomato bruschetta and three kinds of cheese.

Our wet bread salad.

Jen: So that we didn't have to remember all the recipes, our lovely chef made us a cookbook to take home.

Following lunch, our guide spontaneously took us up onto a big hill on the property to have a glass of wine. The view was spectacular, and it was such a perfect, clear day. (Jamie: There's nothing quite like fresh, crisp air in the middle of the peaceful, quiet countryside.)

We then went to a local winery for a little tour and some tasting. The white wine there was spectacular, so we bought a bottle to drink the next day - pretty much the only white we've had over here. (Jamie: It, along with most of the other wines, only cost 5 Euro. 5 Euro! So cheap! I wish we could have brought some home with us.) We returned to our hotel in high spirits, and received some exciting news the next day - Jamie's sister Ro was in labour! The beautiful Siena Rose Scott was born on 5th January at about 2.15am AEST, weighing in at 7 lbs 7 oz.