Ronnie05's Blog

Augmented reality glasses by Atheer: Promises and Possibilities

Posted in Applications and User Interfaces, Value added services and applications by Manas Ganguly on December 6, 2013

Yet another augmented reality video by Google Glass competitor Atheer, demonstrating the possibilities and promise of augmented reality as a service.

Death of the Smartphone

Posted in Applications and User Interfaces, Device Platforms by Manas Ganguly on December 3, 2012

Smartphones are ubiquitous devices globally, and it feels funny to write an obituary for such a mega-hype. But as with most technologies, there is a lifecycle – and the alternative will come through sooner rather than later. Disruption is inevitable.

The end of smartphones!

Computing as a consumer activity is evolving, and interfaces are getting as close to human social behaviour, thought processing and sensory evaluation as they can. The devices of tomorrow will be a coherent mix of immersive internet media, inclusive “user-centered” computing, portability to the point of ubiquity, sensory platforms and always-on, real-time operation. All of this builds up to a future where devices are controlled by the mind and become an extension of the human self.

Coming back to the replacement of smartphones: recent developments in augmented reality, speech recognition and gesture-based controls, taken together, could spell the next platform in computing – a wearable one at that – the prototype of which is Google Glass. Microsoft also has a product in the pipeline, currently code-named Photosynth. As gadgets, these concepts are very interesting and far-reaching harbingers of the next wave in technology: wearable technology. These devices will overlay data with predictive analytics, social media, digital illustrations and other commercial applications such as payments. Predictive analytics, speech recognition, gesture control, in-memory analytics, social analytics and augmented reality are already at various stages of maturity in their technology life cycles. The trick is getting them together, and my bet is that Microsoft, Google, Apple, IBM and a couple of other technology companies are already working to get the concepts in place.

Google Glass: How it works!

Microsoft Photosynth

Google Glass is in the developmental stage now and has been demoed with LAYAR (layered augmented reality apps). Microsoft’s Photosynth is still in the concept development phase. Understandably, both concepts and their PoCs are in their infancy right now, with a lot of rough edges, but these technologies will be truly disruptive once they start hitting critical-mass numbers.

The Google Glass demo!

The disruption will be in the sense that there will be no additional devices such as smartphones and tablets: it will be a wearable unit controlled by human interfaces such as speech and motion. To that extent it clearly disrupts the smartphone kind of interface, signaling the end of the smartphone era.

The obituary for Smartphones is done and dusted. The key challenge for wearable computing will be the productization of these concepts. Watch this space for more.

Real Time, wearable – here and now computing courtesy Google Glasses

Posted in Applications and User Interfaces, Internet and Search by Manas Ganguly on July 2, 2012

Real-time, wearable computing with here-and-now data elements: Google Glass is more than just a smart pair of glasses with an integrated heads-up display and a battery hidden inside the frame. Like Apple’s Siri, it is technology with enormous potential. The idea is to deliver augmented reality, with information that is directly relevant to your surroundings appearing in front of you whenever you need it. Google’s business is about making money from advertising, and Glass marks the transition from screen to reality, with the information layer literally juxtaposed on the world. Many might worry that Google Glass is its attempt to monetize eyeballs quite literally, by blasting ads whenever the user looks at something. While the initial videos and demos revolve around the photo capabilities of the Glasses, the ability to monetize LAYAR through the Glasses is Google’s next billion-dollar gambit.

Even though wearable computing is not a new idea, Google’s enormous bank account and can-do attitude mean that Project Glass could well be the first product to do significant numbers.

Uncannily similar in their potential for ad delivery, Glass is not to be confused with Google Goggles, an app that searches the web based on photos and scans. Google Glass is hardware doing much the same, but on a more integrated scale. Google’s Project Glass glasses will use a transparent LCD or AMOLED display to put information in front of the user. It uses a camera and GPS for location sensing, and head movements for actions such as scrolling and clicking through information, something that is apparently quite easy to master. Google Glass will also use voice input and output. Glass will run Android, will include a small screen in front of the user’s eye, and will have motion sensors, GPS and either 3G or 4G data connections. Glass is designed to be a stand-alone device rather than an Android phone peripheral, though it should connect to a smartphone via Wi-Fi or Bluetooth 4.0; it communicates directly with the cloud. There is also a front-facing camera and a flash, although it’s not a multi-megapixel monster, and the most recent prototype’s screen isn’t transparent.
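
The overlay described above is, at its core, a geometry problem: take the wearer’s GPS fix and compass heading, take a geo-tagged point of interest, and work out whether (and where) it falls inside the display’s field of view. The sketch below illustrates that calculation in Java under stated assumptions; the coordinates, the 50-degree field of view and the 640-pixel screen width are illustrative guesses, not Glass specifications, and this is not Glass’s actual software.

```java
// Minimal, hypothetical sketch of the geometry behind a GPS-plus-compass AR overlay:
// given the wearer's position and heading, work out where a geo-tagged point of
// interest should sit on the display. All values are illustrative assumptions.
public class AROverlaySketch {

    static final double EARTH_RADIUS_M = 6_371_000.0;

    // Great-circle distance between two lat/lon points (haversine formula).
    static double distanceMeters(double lat1, double lon1, double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 2 * EARTH_RADIUS_M * Math.asin(Math.sqrt(a));
    }

    // Compass bearing (degrees clockwise from true north) from the user to the POI.
    static double bearingDegrees(double lat1, double lon1, double lat2, double lon2) {
        double dLon = Math.toRadians(lon2 - lon1);
        double y = Math.sin(dLon) * Math.cos(Math.toRadians(lat2));
        double x = Math.cos(Math.toRadians(lat1)) * Math.sin(Math.toRadians(lat2))
                 - Math.sin(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2)) * Math.cos(dLon);
        return (Math.toDegrees(Math.atan2(y, x)) + 360) % 360;
    }

    public static void main(String[] args) {
        // Hypothetical scenario: wearer near Westminster Bridge, looking roughly west,
        // with Big Ben as the geo-tagged POI (coordinates are approximate).
        double userLat = 51.5007, userLon = -0.1195, headingDeg = 280.0;
        double poiLat = 51.5007, poiLon = -0.1246;

        double fieldOfViewDeg = 50.0;   // assumed horizontal field of view
        int screenWidthPx = 640;        // assumed display width

        double bearing = bearingDegrees(userLat, userLon, poiLat, poiLon);
        // Signed angle between where the wearer is looking and the POI, in (-180, 180].
        double offset = ((bearing - headingDeg + 540) % 360) - 180;

        if (Math.abs(offset) <= fieldOfViewDeg / 2) {
            // Map the angular offset linearly onto the display.
            int x = (int) Math.round((offset / fieldOfViewDeg + 0.5) * screenWidthPx);
            System.out.printf("POI is %.0f m away, draw its label at x = %d px%n",
                    distanceMeters(userLat, userLon, poiLat, poiLon), x);
        } else {
            System.out.printf("POI is outside the field of view (%.0f degrees off)%n", offset);
        }
    }
}
```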

Currently at the prototype stage, Google Glass, which is expected to be launched commercially in 2014, faces key challenges: making a screen that works both in darkness and in bright sunlight is tough, and mobile display technology doesn’t yet offer dynamic focusing, which reads the eye to deliver perfectly clear visuals. Current wearable displays have to be two feet away from your face, heads-up displays can be distracting, and there may be safety issues too. There are privacy implications as well: never mind your web history, Google Glass might record everything the user is seeing and doing. And there is a usability issue: the Glasses will possibly not be useful in the rain (yet!).

The Glasses are expected to cost around the price of current smartphones. Rumours indicate that Glass may even end up in contact lenses, and Google has a contact lens with embedded electronics in the works.

Profiling Augmented Reality (AR) (Part IV): Endlines

Posted in The Technology Ecosystem by Manas Ganguly on September 10, 2009

TNT Post’s viral video with Layar on YouTube

Japanese net culture expert Toshinao Sasaki thinks that the boundaries between real and virtual, public and private will disintegrate further as augmented reality spreads. “While the internet sphere and the reality sphere had previously been completely separate as ‘virtual and non-virtual’, mobile phones have rapidly shrunk the distance between them … Eventually, it seems possible that mobile phones might play the role of a kind of supplementary brain.”

So perhaps augmented reality will make cyborgs of all of us, although we’ll have our prosthetic brains in our pockets rather than welded to our skulls, Schwarzenegger-style. Until the batteries run out, at least. And for the sake of our sanity, being able to tune out is going to be every bit as important as tuning in. As Rob Hale wisely points out, “the most important thing about augmented reality will be the ability to turn it off.”

Profiling Augmented Reality (AR) (Part III): Gaming, the biggest beneficiary

Posted in The Technology Ecosystem by Manas Ganguly on September 10, 2009

Other applications of augmented reality lead us right back into Terminator territory: “There was a project,” says Dr David England, computer science lecturer at Liverpool John Moores University, referring to the popular computer game, “where people wearing visors and backpacks re-enacted a virtual Pacman outside, so they ran down the road shooting the enemy they could see in their visors.”

“Human Pacman” took place at a university in Singapore, but similar, though less exciting, experiments with augmented or mixed reality gaming have taken place around the world. “The main problem is visor quality,” Dr England explains. “When mass-produced visors improve, you’ll see a lot more of this kind of game.”

It’s called pervasive gaming, and a group set up to explore it, IPerG (Integrated Project on Pervasive Gaming), describes it as “a radically new game form that extends gaming experiences out into the physical world”. Which sounds dangerous. However, Dr England says the risks are similar to those of driving while using a mobile phone: “Of course there will have to be basic common sense.” He adds that AR can be put to much more co-operative and educative uses: letting people leave virtual treasure trails, or plant imaginary gardens together using the Facebook app Farmville, for example. “You get more co-operative games in mixed reality; people in different places can come together and share tasks.” He also mentions a mixed reality table that sketches science diagrams and can be used as a teaching tool. There’s also a belt that can help its wearer avoid collisions in the dark by vibrating when it senses an approaching object.

Rob Hale – games designer and author of the Games Design Blog – thinks augmented reality gaming will be different from its virtual equivalent: “Getting hit by a car in reality hurts a lot more than it does in a videogame,” he says. But he sees possibilities as well. In sport for example: “AR could allow us to bring the best bits of video games to real-life sports such as feedback on your performance and positions of your team mates … If you were going for a run, you could bring up a display of your previous best times, heart rate and the locations of other joggers nearby.”

Profiling Augmented Reality (AR) (Part II): How it works

Posted in The Technology Ecosystem by Manas Ganguly on September 10, 2009

Augmented reality, the class of technologies that overlay data on top of a user’s view of the real world, is a very hot field right now. Mobile AR apps like Layar and Wikitude are getting the most attention, but there are other ways augmented reality can be implemented beyond the mobile phone.

The following video is a good LAYAR tutorial.

Virtual reality used to be technology’s Holy Grail. It created new worlds where one could jack cars, date babes and win wars. In contrast, augmented reality – in which computer graphics are layered onto a real-world image – was the boring sub-technology confined to sports footage replays and technical engineering: think of it showing the path of a tennis ball after a player has hit it, or demonstrating to an engineer how to piece a complex machine together by modelling it in 3D. But with augmented reality about to be opened up to the mobile phone-owning masses, it has become an exciting field for development. Developers are racing to find useful and interesting ways that computers can enhance our interaction with the real world. That could be by superimposing reviews on restaurants, directions on streets and Facebook profiles on people. It could be trivial, it could be fascinating. Perhaps the most useful application hasn’t been figured out yet.

How it works

The technology that has brought augmented reality to mobiles is called LAYAR. As the name suggests, it layers computer information on top of “reality” as seen through the phone’s camera. Layar uses the phone’s Global Positioning System (GPS) to work out where you are and what you are looking at. Different types of information appear on different layers (a minimal code sketch of this idea follows the list below):

1. Sightseeing layer, Wikitude, which contains tourist information about what’s around you. The posts are linked to specific buildings and points of interest (the Tate Liverpool, the Belfast docks, Big Ben) and pop up, like virtual blue plaques, as you pass by them.

2. Estate agent layer, which flags up properties for sale or rent: as the camera scans down a street, available properties are highlighted.

3. Another layer, called Trulia, will also tell you how much a house is going for, with pictures of the inside and a phone number to ring to arrange a viewing.

4. Layers are not limited to things “seen” in the viewfinder: properties two streets away will show up too. There’s a layer which tells you where the nearest council facilities are and plots them on an image of the street.

5. Applications like Brightkite bring social networks into augmented reality. Partnered with Layar, it offers an app which lets you see where your friends are on a real-time map. Like the magic map in Harry Potter, they appear as moving dots.

6. Even more excitingly, posts – pictures, messages, videos – can be created and anchored or “geo-tagged” to places. They stick in that virtual space like graffiti or post-it notes, and you can view them later. You can also view your friends’ or other people’s posts.
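
To make the idea of layers a little more concrete, here is a minimal sketch, assuming a toy data model rather than Layar’s real one: geo-tagged posts are grouped into named layers and filtered by distance from the user’s GPS position. The class names, coordinates and 1 km radius are all illustrative assumptions.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical illustration of the "layers" concept: each layer is a named collection
// of geo-tagged posts, and the phone shows the posts on the selected layer that lie
// within some radius of the user. This is not Layar's actual data model or API.
public class LayerSketch {

    record Post(String layer, String title, double lat, double lon) {}

    // Equirectangular approximation; good enough over a few kilometres.
    static double distanceMeters(double lat1, double lon1, double lat2, double lon2) {
        double x = Math.toRadians(lon2 - lon1) * Math.cos(Math.toRadians((lat1 + lat2) / 2));
        double y = Math.toRadians(lat2 - lat1);
        return Math.sqrt(x * x + y * y) * 6_371_000.0;
    }

    // Return the posts on the chosen layer within radiusMeters of the user.
    static List<Post> visiblePosts(List<Post> all, String layer,
                                   double userLat, double userLon, double radiusMeters) {
        List<Post> result = new ArrayList<>();
        for (Post p : all) {
            if (p.layer().equals(layer)
                    && distanceMeters(userLat, userLon, p.lat(), p.lon()) <= radiusMeters) {
                result.add(p);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        // Illustrative posts on two layers (coordinates are approximate).
        List<Post> posts = List.of(
                new Post("sightseeing", "Tate Liverpool", 53.4024, -2.9916),
                new Post("sightseeing", "Big Ben", 51.5007, -0.1246),
                new Post("estate-agents", "2-bed flat for rent", 53.4049, -2.9879));

        // A user standing near the Albert Dock in Liverpool, browsing the sightseeing layer.
        for (Post p : visiblePosts(posts, "sightseeing", 53.4010, -2.9930, 1_000)) {
            System.out.println("Nearby: " + p.title());
        }
        // Big Ben is far outside the 1 km radius; the flat is on a different layer.
    }
}
```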

Check the video, which shows LAYAR at work on an Android-powered phone. It is a fascinating demo covering the LAYAR utilities and applications, along with thoughts about the business models for LAYAR. (Note: LAYAR is available on Android phones, making use of their Maps and Compass utilities.)

As with every new technology, application developers have started building layers upon layers of virtual content (imagine Twitter, Facebook, hotel info, tourist info, promotional windows and so on). Some posts will be more interesting than others, but, as with the rest of the internet, we’ll get used to searching and filtering for what interests us.

Profiling Augmented Reality (AR) (Part I): Shape of things to come

Posted in The Technology Ecosystem by Manas Ganguly on September 10, 2009

Augmented reality has been a part of future-tech folklore for a while now. However, it is catching more attention than ever, and this video released by Nokia is a crude portrayal of how AR would work in the future. Crude? Because of the very limited technology that has been showcased.

So then, what is augmented reality?

Imagine if everything you pointed your phone at – from people to pets, shops to mountains – had its own ‘bubble’ of information. It sounds like science fiction, but augmented reality is already here.

If Arnold Schwarzenegger’s Terminator had walked through that landscape, this is the sort of information his computer-enhanced vision might have provided. But “Terminator vision” is no longer just in the realm of science fiction films. This is augmented reality, and it is due soon on smartphones everywhere.

Complete Terminator vision would require bionic contact lenses … but from September 2009 onwards, anyone with an iPhone will be able to peer at the world through the phone’s camera and see, layered on their phone screen, extra information about the physical things in front of them. Phones running Google Android can already do this. An augmented reality app for the Paris Metro has already been released on the iPhone App Store, and the first AR apps of this kind have also made it to the US. Mobilizy has already taken the plunge into AR by launching Wikitude Drive on Android, an augmented reality GPS navigation app for smartphones. Here’s the video:
