
Polycom Introduces EagleEye Camera

One of the primary reasons telepresence suites work so well is that users sit facing the video screen, with only one or a few people in front of each screen. Traditional video conferencing rooms put a whole table of people in front of the screen, so users on the far end often can't see participants well. Many rooms put the video system at the end of a long conference room table, making it even harder to tell who is who and who is talking.

Desktop video conferencing, like telepresence, puts just one person in front of the screen, often just head and shoulders, which lets viewers see the most important visual communication cues: the eyes and face. My contention is that desktop video conferencing will often provide better visual information than a room-based system, because it devotes far more pixels to each face.
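To put rough numbers on that claim, here is a back-of-the-envelope sketch. The frame size and face-coverage percentages are my own illustrative assumptions, not measured figures:

```python
# Rough pixels-per-face comparison (illustrative numbers, not measurements).
FRAME_W, FRAME_H = 1920, 1080          # a 1080p video frame
total_pixels = FRAME_W * FRAME_H

# Desktop shot: one head-and-shoulders view; assume the face fills
# roughly 10% of the frame.
desktop_face_pixels = total_pixels * 0.10

# Room shot: ten people around a table; assume each face covers
# roughly 0.5% of the frame.
room_face_pixels = total_pixels * 0.005

print(f"Desktop: ~{desktop_face_pixels:,.0f} pixels per face")
print(f"Room:    ~{room_face_pixels:,.0f} pixels per face")
print(f"Ratio:   ~{desktop_face_pixels / room_face_pixels:.0f}x")
```

Even with generous assumptions for the room shot, the desktop view wins by an order of magnitude or more.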

Polycom has just introduced the EagleEye Camera, which goes a long way toward resolving this conundrum of the video conferencing room. I had a chance to see the EagleEye a few months ago and was very impressed.

The EagleEye uses a combination of audio triangulation and facial recognition to find the active speaker in the room and zoom in for a head-and-shoulders shot of that person. Camera systems that find the speaker through audio cues have been around for many years, but their performance is often disappointing: audio triangulation is not very accurate, noises in the room (a cough) can distract the camera, and the camera's own movement (zoom out, turn, zoom in) is distracting to watch.

To overcome these issues, Polycom has incorporated facial recognition and added a second camera. The EagleEye system uses audio triangulation to approximate where a speaker is sitting and moves the camera to focus on that individual. Once the speaker's face is in view, the camera uses facial recognition to position the face correctly in the frame.
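Polycom hasn't published the internals, but the two-stage approach described above might look something like this sketch. The mic-array and PTZ camera calls (`estimate_bearing`, `move_to`, `grab`, `center_and_zoom`) are hypothetical stand-ins; the face detection uses OpenCV's stock Haar cascade:

```python
import cv2

def estimate_bearing(mic_samples):
    """Approximate (pan, tilt) toward the loudest talker.

    Placeholder: a real implementation would triangulate from
    inter-microphone time delays (TDOA); details are not public.
    """
    return 0.0, 0.0

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def frame_speaker(camera, mic_samples):
    # Stage 1: audio triangulation gives a coarse aim point.
    pan, tilt = estimate_bearing(mic_samples)
    camera.move_to(pan, tilt)                    # hypothetical PTZ call

    # Stage 2: face detection refines the shot once a face is in view.
    gray = cv2.cvtColor(camera.grab(), cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, 1.1, 5)
    if len(faces):
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # largest face
        camera.center_and_zoom(x + w / 2, y + h / 2, h)     # hypothetical
```

The key idea is that audio only has to get the camera close; vision, which is far more precise, finishes the framing.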

The second camera provides a wide shot of the whole conference room. When a speaker stops speaking, the EagleEye switches to the room shot. When someone starts speaking, EagleEye zooms and frames the moving camera off-air, then cuts from the room camera to the speaker camera. This means we are no longer subjected to the disconcerting sight of a camera zooming and hunting on screen.
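The switching behavior reads like a small "frame off-air, then cut" state machine. Here is a minimal sketch of that logic; the class, event names, and camera API are my assumptions, not Polycom's:

```python
class EagleEyeDirector:
    """Toy model of the cut-don't-zoom behavior described above."""

    def __init__(self, room_cam, speaker_cam, output):
        self.room_cam = room_cam
        self.speaker_cam = speaker_cam
        self.output = output            # the feed sent to the far end
        self.output.show(room_cam)      # start on the wide room shot

    def on_speech_started(self, location):
        # Aim and frame the speaker camera while it is still off-air,
        # so the far end never sees the pan/zoom motion...
        self.speaker_cam.aim_and_frame(location)   # hypothetical call
        # ...then cut cleanly to the finished shot.
        self.output.show(self.speaker_cam)

    def on_speech_stopped(self):
        # Fall back to the wide room shot between speakers.
        self.output.show(self.room_cam)
```

Because one camera is always off-air, all the viewer ever sees is a clean cut between two settled shots, the same grammar a television director uses.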

When two people in the room are having a back-and-forth conversation, EagleEye is smart enough to widen the frame to capture both speakers rather than zooming back and forth between them. Remote viewers can then see both speakers and their interaction.
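Framing a two-way exchange is essentially a bounding-box union: instead of cutting between two head shots, compute one shot that contains both faces plus some margin. A runnable sketch, with normalized 0-1 coordinates and a margin value I chose for illustration:

```python
def frame_for_speakers(boxes, margin=0.1):
    """Given (x1, y1, x2, y2) face boxes for the active speakers,
    return one shot that covers them all with some headroom."""
    x1 = min(b[0] for b in boxes)
    y1 = min(b[1] for b in boxes)
    x2 = max(b[2] for b in boxes)
    y2 = max(b[3] for b in boxes)
    # Pad the union box, then clamp it to the frame.
    return (max(0.0, x1 - margin), max(0.0, y1 - margin),
            min(1.0, x2 + margin), min(1.0, y2 + margin))

# Two people talking back and forth: one wider shot instead of ping-ponging.
left_speaker  = (0.10, 0.30, 0.30, 0.60)
right_speaker = (0.65, 0.25, 0.85, 0.55)
print(frame_for_speakers([left_speaker, right_speaker]))
```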

The goal of any video deployment should be to make remote participants feel as much a part of the meeting as possible. Too often in the past, video has connected the rooms but not the people; when one room hits the mute button to have a local conversation, you know you have lost. I think the EagleEye Camera will make a huge difference in connecting people to people by overcoming the limitations of the video conferencing room environment.