Canon’s Mixed-Reality Technology


Canon begins selling a next-generation form of virtual reality technology known as mixed reality (MR) this month. The company suggests its version of MR is an enhanced, more grown-up version of the augmented reality provided by some smartphone apps and things like Google’s Project Glass. In contrast to augmented reality, which typically adds text or simple graphics to what the user sees, Canon’s MR adds computer-generated virtual objects to the real world in real time, at full scale, and in three dimensions.

In a further contrast to consumer-oriented augmented-reality schemes, the technology is initially targeted at engineering groups involved in designing and building new products. Canon claims that not only will it cut down on prototyping, but it will also speed up concurrent engineering by giving those on the manufacturing side an earlier look at what is coming down the new-product pipeline.
The key technologies making this possible are packed into a video see-through head-mounted display (HMD). The HMD uses two charge-coupled-device (CCD) video cameras, one for each eye, to capture video from the real world, which is sent via cable (attached to the HMD) to a computer for integration with the computer-generated graphics or computer-aided-design data to be overlaid. The synthesized video is then sent back to twin SXGA-resolution displays in the HMD, which reflect the images through an optical system in the helmet and then into the eyes. The optics include a free-form three-sided prism for each eye, which refracts and reflects the image several times, enlarging the video and removing aberrations in the process, so “the images shown on the displays appear full size,” says Takashi Aso, deputy senior general manager of Canon’s Image Communications Products Operations.
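To make the pipeline concrete, here is a minimal, single-eye sketch of the video see-through loop in Python with OpenCV. It is not Canon's implementation: the camera index, the `estimate_pose` and `render_overlay` functions, and the simple mask-based compositing are placeholders standing in for the HMD's CCD cameras, its tracking, and the CAD renderer; the real system runs this twice, once per eye, at SXGA resolution.

```python
import cv2
import numpy as np

def estimate_pose(frame):
    """Hypothetical pose estimator (marker- or sensor-based in Canon's system)."""
    return np.eye(4)

def render_overlay(frame_shape, pose):
    """Hypothetical stand-in for the CAD/graphics renderer.

    In the real system this would rasterize the virtual object from the
    viewer's estimated pose; here it just returns an empty layer and mask.
    """
    overlay = np.zeros(frame_shape, dtype=np.uint8)
    mask = np.zeros(frame_shape[:2], dtype=np.uint8)
    return overlay, mask

cap = cv2.VideoCapture(0)  # one eye's camera; the HMD carries two such streams
while True:
    ok, frame = cap.read()                              # 1. capture the real world
    if not ok:
        break
    pose = estimate_pose(frame)                         # 2. work out where the viewer is
    overlay, mask = render_overlay(frame.shape, pose)   # 3. render the virtual object
    composite = frame.copy()                            # 4. composite: virtual pixels
    composite[mask > 0] = overlay[mask > 0]             #    replace camera pixels
    cv2.imshow("per-eye display", composite)            # 5. send to the eyepiece
    if cv2.waitKey(1) == 27:                            # Esc to quit
        break
cap.release()
```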
When I was trying out the technology, Aso had me wear the HMD while I stood in front of a box the size of an office copy machine. Around the machine were placed three types of registration markers—white square boards with patterns of black hexagons in three sizes corresponding to their location on the box. Each marker had its own identification number, which enabled the computer to calculate my position vis-à-vis the box.
The technology works something like a bar-code setup. “But with a bar code, you need to scan the code straight on,” says Aso. “Because you will be moving around the object, the patterns have to be recognizable from any number of angles. We’ve found that hexagonal-shaped patterns are the best for reducing errors and for giving consistently good results.”
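Canon's hexagon-coded markers and its recognition algorithm are proprietary, but the underlying geometric step is standard: once a marker of known size has been detected in the camera image, the viewer's position and orientation relative to it can be recovered from the marker's corner points. The sketch below illustrates that step with a generic square marker and OpenCV's solvePnP; the camera intrinsics and the 0.20 m marker size are made-up values, and marker detection itself is not shown.

```python
import cv2
import numpy as np

# Assumed camera intrinsics from a prior calibration (values are illustrative).
camera_matrix = np.array([[800.0,   0.0, 640.0],
                          [  0.0, 800.0, 512.0],
                          [  0.0,   0.0,   1.0]])
dist_coeffs = np.zeros(5)  # assume negligible lens distortion for this sketch

# Known 3D corner positions of one square marker, 0.20 m on a side,
# expressed in the marker's own coordinate frame (metres).
marker_corners_3d = np.array([[0.0, 0.0, 0.0],
                              [0.2, 0.0, 0.0],
                              [0.2, 0.2, 0.0],
                              [0.0, 0.2, 0.0]])

def pose_from_marker(detected_corners_2d):
    """Recover the camera's pose relative to a detected marker.

    detected_corners_2d: 4x2 float array of pixel coordinates of the marker
    corners, in the same order as marker_corners_3d.
    """
    ok, rvec, tvec = cv2.solvePnP(marker_corners_3d,
                                  detected_corners_2d,
                                  camera_matrix, dist_coeffs)
    if not ok:
        return None
    rotation, _ = cv2.Rodrigues(rvec)      # 3x3 rotation matrix
    camera_position = -rotation.T @ tvec   # camera centre in the marker's frame
    return rotation, tvec, camera_position
```

Because each marker carries its own identification number and a known location on the box, a single recognized marker is enough to place the viewer relative to the whole object; seeing several at once simply makes the estimate more robust.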
A second computer used the marker location data captured by the CCD cameras to overlay a realistic computer-generated image of a copy machine onto the box I was “viewing.” I was able to press a virtual button on the “machine’s” control panel, which triggered a preprogrammed demonstration that included the opening of a door panel and the rolling out of the toner mechanism. I also walked around the machine to observe it from all sides, noticing little delay as my view of it changed. I was particularly impressed when I was able to grasp the edges of the machine (the box) and turn it in real time.
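The article does not say how the button press was detected, but one plausible reading is a projection-and-proximity test: if a tracked point (a fingertip or a handheld prop, which is an assumption of this sketch) comes within a few centimetres of the virtual button's known 3D position, the preprogrammed animation starts. A rough illustration, with illustrative names and thresholds:

```python
import cv2
import numpy as np

def button_pressed(button_point_3d, fingertip_3d, rvec, tvec,
                   camera_matrix, dist_coeffs, press_radius_m=0.03):
    """Very rough hit test for a virtual control-panel button.

    button_point_3d : button centre in the machine's coordinate frame
    fingertip_3d    : tracked point in the same frame (how it is tracked is
                      not described in the article; this is an assumption)
    rvec, tvec      : current camera pose from the tracking step
    """
    # 3D distance test: is the tracked point within a few centimetres?
    close_enough = np.linalg.norm(np.asarray(fingertip_3d) -
                                  np.asarray(button_point_3d)) < press_radius_m

    # Project the button into the current camera image so the renderer
    # can highlight it at the right pixel while the press is in progress.
    projected, _ = cv2.projectPoints(np.float32([button_point_3d]),
                                     rvec, tvec, camera_matrix, dist_coeffs)
    button_pixel = tuple(projected.reshape(2).astype(int))

    return close_enough, button_pixel
```

A confirmed press would then trigger the canned sequence (the door opening, the toner mechanism rolling out), replayed frame by frame from whatever viewpoint the wearer happens to occupy.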
The second demonstration replaced the registration markers with 10 optical motion-capture sensors placed strategically around a space to locate my position. These sensors are programmed to locate a group of five rods mounted on the HMD. To track several people viewing an object simultaneously, each HMD has a different pattern of rods, so that the PC can identify each individual.
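One simple way to perform that identification, not necessarily Canon's, is to compare the pairwise distances among the five tracked rod tips: the set of inter-rod distances is unchanged by how the wearer moves, so it acts as a signature for each helmet. A minimal sketch, with illustrative tolerances:

```python
import numpy as np
from itertools import combinations

def distance_signature(points_3d):
    """Sorted list of the 10 pairwise distances among 5 tracked rod tips."""
    pts = np.asarray(points_3d, dtype=float)
    return np.sort([np.linalg.norm(a - b) for a, b in combinations(pts, 2)])

def identify_hmd(detected_points, templates, tolerance_m=0.005):
    """Match a detected 5-point constellation to a known HMD.

    templates: dict mapping an HMD id to its 5 rod-tip positions as measured
    in a one-off calibration (ids and tolerance here are illustrative).
    """
    sig = distance_signature(detected_points)
    best_id, best_err = None, np.inf
    for hmd_id, template_points in templates.items():
        err = np.max(np.abs(sig - distance_signature(template_points)))
        if err < best_err:
            best_id, best_err = hmd_id, err
    return best_id if best_err < tolerance_m else None
```

Because the signature is invariant to rotation and translation, the system can keep telling the helmets apart as several viewers walk around the same virtual object.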
As I donned the HMD again, a full-size car appeared before me. Because the car was much longer than the virtual copy machine, it became apparent that my field of view was restricted to just under half of what I would see without the HMD, requiring me to turn my head more than usual to take in the entire vehicle. I was guided around the car and then asked to sit on a physical chair placed inside the simulation, from which I could view the dashboard and the car's interior.
You might think a virtual tour of a sports car would be more exciting than one of a copy machine, but in Canon’s demonstration, the opposite was true. While the simulation was useful for viewing the color, style, and interior design of the car, the lack of movement and things to grasp, such as a steering wheel, made the overall experience less interesting than the copy-machine demonstration.
One drawback is that the HMD is cumbersome, as the attached location rods make it fairly weighty. It didn’t feel entirely secure when I moved my head quickly, and movement is restricted because of the attached cable. Aso estimates that simulating an object takes about 250 megabytes of video data per second, which falls within the bandwidth of wireless communications. Wireless technology itself, however, is not yet at a level that would support transmitting four channels of video data (two channels each of sending and receiving) to such a compact HMD with no delay.
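A back-of-the-envelope calculation helps explain why the cable is still needed. The assumptions below (SXGA frames, 24-bit color, a 60 Hz frame rate, no compression) are mine, not the article's, which only quotes Aso's figure, but they give a sense of the raw load the four channels would place on a wireless link.

```python
# Rough bandwidth estimate for the four video channels.
# Assumptions (not from the article): SXGA frames, 24-bit color, 60 Hz,
# no compression.
width, height = 1280, 1024        # SXGA
bytes_per_pixel = 3               # 24-bit color
frames_per_second = 60

per_channel = width * height * bytes_per_pixel * frames_per_second
four_channels = 4 * per_channel   # two eyes, each sending and receiving

print(f"one channel  : {per_channel / 1e6:.0f} MB/s")     # ~236 MB/s
print(f"four channels: {four_channels / 1e6:.0f} MB/s")   # ~944 MB/s, roughly 7.5 Gb/s
```

Under those assumptions the combined load approaches a gigabyte per second, which underscores why moving all four channels wirelessly to a compact HMD with no perceptible delay remains out of reach.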
Canon will market the MR technology as a complete system, first in Japan and then overseas, possibly as early as the end of the year in the United States. Pricing will depend on the configuration a customer chooses. A system that uses just the registration markers (as in the copy-machine demonstration) will be less expensive, and so better suited to a museum or a smaller company, than one that uses the high-quality optical motion-capture sensors (as in the car demonstration). Aso estimates the price of a basic system, employing a single HMD, at around US $125 000, and that of the advanced system, with motion-capture sensors, at around $500 000.
