After showing some early work on experimental 3D sensors, Dr Izadi concentrated on the use of the Microsoft Kinect device. He pointed out that this hardware is not a true 3D sensor: it needs software enhancement to extract 3D data from the camera. He gave a fascinating live demonstration of the process (at one stage I looked up and found that I and my laptop had been scanned).
The important point of the demonstration is that sophisticated software can make a relatively low-cost sensor do far more. It will be interesting to see what non-game applications are developed.
New research uses two digital cameras to provide depth data, which would have advantages over the current Kinect device: it could work outdoors and in a small hand-held device (such as a mobile phone, tablet, or a pair of glasses).
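The principle behind two-camera depth sensing is stereo triangulation: a point that appears shifted (the "disparity") between the left and right images must be closer to the cameras, and the depth follows from the camera geometry. A minimal sketch of that relationship, using illustrative numbers I have assumed (not figures from the talk):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth in metres from stereo disparity: Z = f * B / d,
    where f is the focal length in pixels, B the distance between
    the two cameras in metres, and d the pixel disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Example with assumed values: a 700-pixel focal length and a 10 cm
# baseline give 2 m depth for a 35-pixel disparity.
print(depth_from_disparity(700, 0.1, 35))  # → 2.0
```

Note the trade-off this formula implies: a narrow baseline, as forced by a phone or glasses form factor, shrinks the disparity and so reduces depth precision at range.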
At the end of the talk Dr Izadi mentioned collaboration with researchers in other fields, but the examples given were all very scientific and technical (such as the design of 3D objects and enhancing surgery). It struck me that input from performers and artists could be fruitful. As an example, Dr Izadi is a very entertaining presenter, with good use of gestures and movement, but he then has to become very static in order to operate the presentation on his laptop. A gesture interface for Microsoft PowerPoint, using the Kinect (or a web camera), would seem an interesting area for research, as well as a potentially popular commercial product. In its simplest form, a standard set of gestures, which the presenter would have to learn, could be used. A more sophisticated approach would have the software learn the presenter's natural gestures.
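To make the "standard set of gestures" idea concrete, the simplest building block would be classifying a horizontal hand swipe from tracked hand positions. The sketch below is purely hypothetical (the tracking source, whether Kinect skeleton data or a webcam hand tracker, is assumed to supply normalised x-coordinates over recent frames), and the function name and thresholds are my own invention for illustration:

```python
def classify_swipe(xs, min_travel=0.3):
    """Classify a sequence of normalised hand x-positions
    (0.0 = left edge of frame, 1.0 = right edge) as a slide gesture.
    Returns 'next', 'previous', or None."""
    if len(xs) < 2:
        return None
    travel = xs[-1] - xs[0]
    # Require mostly monotonic motion so hand jitter isn't read as a swipe.
    steps = [b - a for a, b in zip(xs, xs[1:])]
    if travel > min_travel and all(s >= 0 for s in steps):
        return "next"       # left-to-right swipe advances the slide
    if travel < -min_travel and all(s <= 0 for s in steps):
        return "previous"   # right-to-left swipe goes back
    return None
```

In a real system the returned label would be turned into a keypress sent to the presentation software; the "learn the presenter's natural gestures" variant would replace this hand-written rule with a classifier trained on recordings of the presenter.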
As well as the familiar Research Labs in the USA, UK, India and China, Microsoft has also established Advanced Technology Labs (ATLs) in Cairo, Europe and Israel. The ATLs focus more on joint work with local organizations and on product development, whereas the research labs have carried out long-term research on their own. Some years ago I visited Microsoft Research Cambridge and found it a little insular.
Experiencing computing in magical ways.
Dr Shahram Izadi (Computer Mediated Living group at Microsoft Research Cambridge, UK)
COMPUTER SCIENCE SEMINAR
Information and Human-Centred Computing
TIME: 16:30 - 17:30
LOCATION: RSISE Seminar Room, ground floor, building 115, cnr. North and Daley Roads, ANU
Dr Shahram Izadi will present some of his recent work - see his Bio below for a richer description.
Dr Shahram Izadi is a research scientist within the Computer Mediated Living group at Microsoft Research Cambridge. He co-leads the Interactive 3D Technologies (i3D) sub-group, and holds a visiting professorship in the Virtual Environments and Computer Graphics group at University College London (UCL). He describes his work as: "Mashing together exotic sensing and display hardware with signal processing, vision and graphics algorithms to create new interactive systems, which enable users to experience computing in magical ways". Some of his most notable projects and publications to date include: KinectFusion; Mouse 2.0; SecondLight; and ThinSight. Shahram has been at Microsoft Research since 2005 and prior to that spent time at Xerox PARC. He received a TR35 award in 2009 and was nominated as one of the Microsoft Next in 2012. He lives in Cambridge, UK, with his wife and daughter.