6 Feb 2007

newphotos - the old way

Following on from our field trip with the 7D-Tiltometer, I took the liberty of downsizing each image and dummying up an XML file for them. With some very minor changes to the previously released αlphα, I now give you...
Quakr's New Photos rendered as ever in the Quakr Viewr1.

The updated XML file is here if you're interested. We are currently working on making the Quakr Viewr do its thing interactively, and this dummy version will give us a reference point for comparing the live data against some hand-coded stuff. Initial feedback on that release has been gathered, and the majority of the items concern the default keyboard arrow usage. Some people want the unshifted left and right keys to turn the camera (as they do in this release). Others want them to move the camera, like the shifted left and right keys do. Nobody liked the first release, which tilted the camera as the control left and right keys currently do. Is it subjective, or does anyone feel the same? Does this divergent feedback deserve a *make your own keys* function? Watch this space for answers!
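A *make your own keys* function could be as simple as an overridable binding table. Here's a minimal sketch in Python; all the names (the action strings, the key-chord spellings, `resolve`) are hypothetical illustrations, not anything from the Viewr itself:

```python
# Default bindings matching the current αlphα behaviour described above:
# unshifted arrows turn, shifted arrows move, control arrows tilt.
DEFAULT_BINDINGS = {
    "left": "turn_left",
    "right": "turn_right",
    "shift+left": "move_left",
    "shift+right": "move_right",
    "ctrl+left": "tilt_left",
    "ctrl+right": "tilt_right",
}

def resolve(key, user_bindings=None):
    """Look up the camera action for a key chord.

    User-supplied bindings override the defaults, so someone who
    prefers unshifted arrows to move the camera just remaps "left"
    and "right" without touching the rest.
    """
    table = dict(DEFAULT_BINDINGS)
    if user_bindings:
        table.update(user_bindings)  # user overrides win
    return table.get(key)
```

So `resolve("left")` gives `"turn_left"` by default, while a user who wants arrows to move instead would pass `{"left": "move_left", "right": "move_right"}`.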

1 - the viewr is now in αlphα 2 release...

2 comments:

Anonymous said...

I don't think you are going to get reasonable accuracy with your "tiltometer". Why not start with lat/lon and refine it using keypoint detection across image sets?

The lat/lon stuff is the only thing missing from (http://phototour.cs.washington.edu/) that one would need to make a world 3d model.

--t

Unknown said...

Hey t.

Thanks for the comment. We've seen the Photo Tour product before, but thanks for the link. On your first point about accuracy - we basically agree - we are talking about a separate application which will allow a user to refine the accuracy originally encoded by a "tiltometer". We're calling it "Taggr" for now, but there'll be more on that in the coming weeks and months.

Meanwhile, it's worth stating how what we're doing differs from what both Microsoft (in this case) and Google (with the Google Earth product) are attempting. Our product is lightweight and designed to work with any of the existing photo and map repositories/services. We're NOT into hosting photos or map information, and have no intention of doing anything proprietary. We're hoping we can piggy-back on what users of Flickr (and similar) are already doing. In one line... "To make your photos appear in our Viewr, simply add a tag or two to your already uploaded images". We don't need you to register with us, give us half your internet bandwidth, sign up for limited space on a disc of ours, download an application which requires a Pentium 7 processor, upload photos so we can edge detect and process them into a different pile, etc.
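To make the "add a tag or two" idea concrete, here's a sketch of how a viewer might pull position data out of a photo's existing tags. The `quakr:` namespace and the field names are hypothetical, invented purely for illustration; whatever tag format we settle on will be announced separately:

```python
def parse_position_tags(tags):
    """Extract position/orientation values from a photo's tag list.

    Assumes a hypothetical machine-tag style convention like
    "quakr:lat=51.5" -- keys we don't recognise are simply ignored,
    so ordinary tags ("holiday", "london") pass through harmlessly.
    """
    values = {}
    for tag in tags:
        if tag.startswith("quakr:") and "=" in tag:
            key, _, raw = tag[len("quakr:"):].partition("=")
            try:
                values[key] = float(raw)  # numeric fields (lat, lon, tilt...)
            except ValueError:
                values[key] = raw         # leave non-numeric values as text
    return values
```

The point of the design is that the photo stays wherever it already lives; the only thing the user adds is metadata.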

Put simply, we aren't interested in applying heavy-duty centralised processing to abstract some 3d-ness from the photographs - instead our 3d world is built by simply rendering each photograph in the position in which it was taken. We envisage that cameras (in the not too distant future) will start to store sufficient metadata to allow this kind of rendering without any need for manual tagging. We think the photos speak for themselves, so "all" we have to do is display them in the right place, and the user will make a model in their heads. Actually attempting that model making is, we think, a step too far, though it may be of interest in some currently unplanned future.
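"Rendering the photograph in the position it was taken" boils down to placing a textured quad in front of the recorded camera pose. Here's a minimal geometric sketch of that idea; the coordinate conventions (heading 0 = +y, clockwise, z up) and all the parameter names are assumptions for illustration, not the Viewr's actual internals:

```python
import math

def photo_quad(x, y, z, heading_deg, width=4.0, height=3.0, distance=5.0):
    """Return the four corners of a photo quad placed `distance` units
    in front of a camera at (x, y, z) facing along `heading_deg`.

    Heading is measured clockwise from the +y axis; z is up.
    Corners come back in top-left, top-right, bottom-right,
    bottom-left order.
    """
    h = math.radians(heading_deg)
    fx, fy = math.sin(h), math.cos(h)     # forward vector in the ground plane
    rx, ry = math.cos(h), -math.sin(h)    # right vector (perpendicular)
    cx, cy, cz = x + fx * distance, y + fy * distance, z  # quad centre
    hw, hh = width / 2.0, height / 2.0
    return [
        (cx - rx * hw, cy - ry * hw, cz + hh),  # top-left
        (cx + rx * hw, cy + ry * hw, cz + hh),  # top-right
        (cx + rx * hw, cy + ry * hw, cz - hh),  # bottom-right
        (cx - rx * hw, cy - ry * hw, cz - hh),  # bottom-left
    ]
```

No feature detection, no reconstruction: the photo simply hangs in space where the camera pointed, and the viewer's head does the rest.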

I hope this helps clarify our mission, but feel free to question us further...