In search of the right technology - part 1
Those many moons ago when the Quakr concept came to light in the local, there was always an unuttered question: "How do we do it?" This is the first in a series exploring the current state of the technology that might just be able to build the envisaged Quakr application.
I'm going to start by assuming that you've already grasped the beauty of what we're all about, so I can skip the sales pitch. Instead I'll tickle your imagination by describing our αlphα attempts at the Quakr application.
So you've already got a digital camera. You've already got an account at Flickr. You've already taken photos of your local, geo-tagged them and stuck them into Flickr, Google Earth or Microsoft Virtual Earth. That's all well and good, but how do you go about enhancing them into a Quakr'able set? Well, it's all a matter of tags. You need to add tags to each photo carrying the extra metadata the Quakr Viewr needs in order to know where and how to place it.
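To make that concrete, here's a rough sketch of what reading such tags might look like. The tag names below (`quakr:lat`, `quakr:long`, `quakr:alt`, `quakr:direction`, `quakr:tilt`) are illustrative placeholders of my own, not the official Quakr vocabulary - the point is just that each photo carries position and orientation as machine-readable tags.

```python
# Hypothetical sketch: pull Quakr-style metadata out of a photo's tag list.
# The tag names (quakr:lat etc.) are placeholders, not an official scheme.

def parse_quakr_tags(tags):
    """Extract position/orientation metadata from tags like 'quakr:lat=51.501'."""
    wanted = {"lat", "long", "alt", "direction", "tilt"}
    meta = {}
    for tag in tags:
        if not tag.startswith("quakr:") or "=" not in tag:
            continue  # an ordinary tag, not one of ours
        key, _, value = tag[len("quakr:"):].partition("=")
        if key in wanted:
            meta[key] = float(value)
    return meta

photo_tags = ["holiday", "quakr:lat=51.501", "quakr:long=-0.142",
              "quakr:alt=10", "quakr:direction=270", "quakr:tilt=0"]
print(parse_quakr_tags(photo_tags))
# {'lat': 51.501, 'long': -0.142, 'alt': 10.0, 'direction': 270.0, 'tilt': 0.0}
```

Ordinary tags pass straight through untouched, so the Quakr metadata can live happily alongside whatever tags the photo already has.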
So we'll imagine that you've done all that and it's time to look at your images. You want an intuitive interface like Google Earth, with a map on the virtual floor and a set of relevant images floating in space. We want that too. Initial investigations jumped on technology from 1997 - the Virtual Reality Modeling Language (VRML, whose files carry the .wrl extension). I've played in that space before (PS: it doesn't work because it was built when VRML was at release 0.9... sorry!), and attempted to convince myself that it was still possible. VRML was probably one of the coolest things I ever played with in 1995. Unfortunately, the spec has not moved with the times, and here we are ten years later and it's only now being picked up again and discussed as a viable technology. A few plugins from that era still work today, and after some heartache, email correspondence with tech support and a decision or two, we had a working alpha release of our app - Quakr Viewr.
The plugin handles all the drawing and re-drawing of the objects in space, and provides a reasonable interface for moving around and pre-defining camera positions. "All" we had to do was build an input file of *pictures in space* and do some funny things with proximity sensors so that the right images go away when you are looking at the back of them. But can we interact with the thing in real time, adding and removing pictures as they become relevant or irrelevant? Can we communicate in any way from the plugin's view of the world back to the page the plugin sits on? Hmmm. After a fair amount of investigation, the answers to these two proved most elusive, and we were forced to think again. But that's another story. For now, please have a browse of the αlphα and feel free to comment as you like.
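For a flavour of what that "input file of pictures in space" involves, here's a minimal sketch (in Python, emitting a VRML97 fragment) of one photo placed as a textured rectangle. The function name and exact node layout are my own guesses, not the actual Viewr generator; and `solid TRUE` is just the cheap version of the back-face trick - VRML browsers may cull the back faces of solid geometry - whereas the Viewr's proximity-sensor approach is rather more involved.

```python
import math

def picture_in_space(url, x, y, z, heading_deg):
    """Emit a hypothetical VRML97 fragment placing one photo as a textured
    rectangle at (x, y, z), rotated about the vertical axis by heading_deg.
    A sketch only - the real Quakr input file no doubt differs."""
    return f"""Transform {{
  translation {x} {y} {z}
  rotation 0 1 0 {math.radians(heading_deg):.4f}
  children Shape {{
    appearance Appearance {{
      texture ImageTexture {{ url "{url}" }}
    }}
    geometry IndexedFaceSet {{
      coord Coordinate {{ point [ -1 -0.75 0, 1 -0.75 0, 1 0.75 0, -1 0.75 0 ] }}
      coordIndex [ 0 1 2 3 -1 ]
      solid TRUE   # back faces may be culled, so the image vanishes from behind
    }}
  }}
}}"""

print(picture_in_space("photo1.jpg", 0, 2, -5, 90))
```

Generate one such Transform per tagged photo, wrap them in a scene with a map texture on the floor, and you have something the plugin can fly around in.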