This is the week that people who have signed up to try out Google Glass (and parted with a lot of money for the privilege) get their hands on the devices and Google begins to learn what the world might make of them. While Netskills isn’t going to be getting one any time soon I’ve been curious to find out as much as I can about what it might mean for education.
The Glass tech specs make interesting reading.
If you’re struggling to imagine what the Glass experience is actually like then it’s worth watching the video below of a Google presentation to developers at SXSW this year. It’s quite lengthy but there are some nice live demos of the basic functions of Glass, along with some discussion of the sorts of experiences best suited to the device.
To help you jump to the right bits…
11:00 – Live demo
16:00 – What the “Mirror” API looks like
26:30 – Guidelines for developing apps
31:19 – Examples of early apps; New York Times (31:50), Gmail (35:50), Evernote/Skitch (39:35), Path (44:40)
I’m particularly intrigued by the timeline-based “card” interface and how they use a combination of gesture and voice to control the device. The guidelines developers are being encouraged to follow to keep the Glass experience accessible but unobtrusive are also worth a look.
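To make the card interface a little more concrete: the “Mirror” API demonstrated in the video is REST-based, with apps pushing JSON “timeline items” to Glass rather than running on the device itself. Here’s a minimal sketch of what such a payload looks like; the field names follow Google’s published examples, but the card text and menu choices are my own illustration, and a real app would POST this to the mirror/v1/timeline endpoint with an OAuth token.

```python
import json

# A minimal "timeline card" in the shape the Mirror API expects.
# Cards are JSON objects POSTed to the mirror/v1/timeline endpoint;
# treat the details here as illustrative rather than definitive.
card = {
    "text": "Seminar moved to Room B12",
    "notification": {"level": "DEFAULT"},  # nudge the wearer on arrival
    "menuItems": [
        {"action": "READ_ALOUD"},  # let the wearer hear the card read out
        {"action": "DELETE"},
    ],
}

print(json.dumps(card, indent=2))
```

What’s striking is how little there is to it: no layout code, no app lifecycle, just a small piece of content dropped into the wearer’s timeline, which is very much in keeping with the “unobtrusive” guidelines discussed in the video.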
There has been some debate this week about whether the developers who were listening at SXSW will actually be able to monetise any Glass apps, as Google is thankfully banning ads but also forbidding charging for apps.
What might it mean for education?
Let’s not fall into the trap of predicting that Glass will transform education just yet.
Essentially, Glass is just a small computer screen with an HD camera attached, all hooked up to the web. Its unique affordances come from the fact that it’s wearable, puts information in the line of sight, and the camera gives a first-person view of the world. And then there’s the small fact that the user doesn’t have to hold a device in front of their face to record images or video, which introduces added benefits and complexities.
It’s still too early to look critically at how people are actually using Glass. At the moment all we’re getting is some rather long, rambling videos of people looking at screens or, puzzlingly, unboxing videos shot with the device that someone just unboxed.
But it’s easy to see that a wearable camera will be great for capturing learning in action, especially with practical activities. The fact that a user can watch a video through Glass also means a demo of an activity produced by a teacher (software tutorial, labwork, fieldwork etc.) could be accessed in situ, without having to fumble with handheld or desktop devices.
Personal lecture capture could be easier, although you can imagine storage issues arising from several hundred students all recording the same one-hour lecture and saving it to their institutional My Documents folders time and time again. You can foresee some institutions taking action on this.
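To put that storage worry in rough numbers, here’s a back-of-envelope calculation. The bitrate and class size are my own assumptions (a typical 720p rate and a largish lecture cohort), not Glass specifications:

```python
# Back-of-envelope storage estimate for personal lecture capture.
# Assumptions: 720p video at roughly 4 Mbit/s, a one-hour lecture,
# and 300 students each saving their own copy.
BITRATE_MBPS = 4            # megabits per second
LECTURE_SECONDS = 60 * 60   # one hour
STUDENTS = 300

# megabits -> megabytes (/8) -> gigabytes (/1000, decimal GB)
per_student_gb = BITRATE_MBPS * LECTURE_SECONDS / 8 / 1000
total_gb = per_student_gb * STUDENTS

print(f"~{per_student_gb:.1f} GB per student, ~{total_gb:.0f} GB per lecture")
```

Even with these modest assumptions that’s around 1.8 GB per student and over half a terabyte per lecture of essentially identical footage, which makes the case for institutions steering students towards a single shared recording.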
Augmented reality apps might finally find a decent home on devices like Glass. I’ve always disliked the UX of having to hold my phone or tablet up to see virtual points of interest overlaid onto a video feed in apps like Layar or Aurasma, especially as a rendering of spatial information (why not just use maps?). I’d love to see Glass combining visual search tools and geolocation so that objects can be identified and extra layers of information like text, links, media offered to the wearer. That may be some way off, though.
And what if you’re a lecturer, or speaking at a conference? Personally, I’d love to have brief speaker notes displayed in my line of sight.
And campus orientation for new students?
But that’s by no means an end to the possibilities. Lots of things are going to emerge over the next 12 months.
Some “red flags”
First and foremost, Glass is expensive, coming in at around the £1k mark at the moment. That may not last but it’s hard to see large numbers of students turning up at college or uni in September sporting Glass.
The developer guidance in the SXSW video is a good pointer to how institutions thinking about developing apps for Glass might think about designing for the interface. It’s not just a case of designing web pages that are very small. There are issues around visual presentation but also about when information is presented, the volume and type of notifications and so on.
There are obvious safeguarding, privacy and IPR issues connected with having cameras that can be worn and activated discreetly. We’re already seeing people pre-emptively saying that Glass will not be welcome in certain social and semi-private spaces, and it’s foreseeable that educational institutions will also take a view on this. If you think mobiles in school classrooms were a controversial issue…
There will be an impact on assessment methods if learners have access to devices that can discreetly deliver information to them visually or through the bone-conduction audio system.
One would hope that we don’t get a re-run of the early days of concern over mobile devices for learning and that we’ve learnt a bit from going through that process.
In conclusion
This is all very speculative of course but I’m just finding it interesting taking notice of the emergence of a new form of technology as I really wasn’t paying attention properly when Apple released the iPhone.
Along with the other forms of mobile computing, the possible arrival of other wearable tech (iWatch?) and the ever-around-the-corner internet of things, Glass marks another step to a much more diverse tech landscape.
One last thing
I had to giggle at the Indie’s subtly dismissive description of Glass as “voice-activated web goggles”. It made me think that Google had missed a trick by not calling them Google Woggles. 😉
And here’s Tom Scott’s take on a world with Glass…
Photo Credit: Max Braun via Compfight cc