No Jitter is part of the Informa Tech Division of Informa PLC


Adding Collaboration to Telepresence

Telepresence replicates face-to-face meetings. For example, high-definition video displays allow participants to read non-verbal communication cues such as body language and facial expressions. But the all-important collaboration aspect has, until now, been somewhat clunky: files need to be exchanged in advance and an add-on service employed in order to walk through presentations.

Solutions have come down in price and size and now they can be deployed at the desktop: see "Affordable Ubiquitous Telepresence". Apart from the financial savings, this is an important development since it enables participation from informal environments where one is more relaxed and productive. But seamless collaboration is still missing.

Regular videoconferencing solutions, including Telepresence, handle desktop sharing as a second video stream. A telecollaboration solution developed by Magor Communications, a privately held Canadian company, makes desktop sharing an integral part of the Telepresence experience. Collaboration is used just like a regular application; it doesn't change the way you work.

Communication with the participants starts with a UC-style mouse click, and collaboration (the second C in UCC) is enabled when somebody opens and shares a file. No client software is needed for simultaneous, secure sharing of multiple desktops. The solution allows authorized participants to see, control and edit shared files and documents in real time.

The Desktop Experience
All that is required is one or more screens that can display video at 1080p. This resolution is needed to pick up on those non-verbal communication cues and other background elements. The single screen, single camera solution can display up to six video and collaboration windows.

When two screens are used, as shown in Figure 1, different video and collaboration windows can be created (up to 12). This solution can have one or two cameras.

Figure 1. This shot of the two-screen solution shows how video and collaboration windows can be deployed to match individual requirements

The large black boxes house the server(s).

The three-screen, two- or three-camera solution is designed for geographically dispersed teams and those that need to collaborate with off-site parties. In this case, up to 18 video and collaboration windows can be employed.

This solution has also been designed to reflect the dynamics of face-to-face conversations. Because each endpoint is connected via a peer-to-peer relationship, each participant has individual control over his or her visual experience.

Instead of an MCU (multipoint conferencing unit) controlling what each participant sees and how they see it, each participant can move and resize their own video and collaboration windows in real time and pan and zoom images to suit their individual needs and preferences. You can, for example, highlight the person speaking, the document that is being discussed, or the reactions of other participants.
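The peer-to-peer layout model described above, where every endpoint keeps its own arrangement of windows rather than receiving an MCU-dictated composite, can be sketched roughly as follows. This is an illustrative model only; the class and method names are hypothetical, not Magor's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class Window:
    source: str  # which participant's video, or a shared document
    x: int
    y: int
    w: int
    h: int

@dataclass
class Layout:
    """Each endpoint owns its layout; no central MCU dictates a shared view."""
    windows: dict = field(default_factory=dict)

    def place(self, source: str, x: int, y: int, w: int, h: int) -> None:
        self.windows[source] = Window(source, x, y, w, h)

    def enlarge(self, source: str, factor: int = 2) -> None:
        # Resizing happens locally, so it never affects other participants.
        win = self.windows[source]
        win.w, win.h = win.w * factor, win.h * factor

# Alice and Bob view the same source but arrange it independently.
alice, bob = Layout(), Layout()
alice.place("carol-video", 0, 0, 640, 360)
bob.place("carol-video", 100, 100, 320, 180)
alice.enlarge("carol-video")  # highlights the speaker on Alice's screen only
```

The point of the sketch is the data ownership: layout state lives at each endpoint, which is why one participant can highlight the current speaker while another watches the shared document.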

Alternatively, the speaker can maintain eye contact with the person they are addressing and as different participants engage, eye contact will move to the new speaker.

The audio side is handled by a multi-directional HD system. Audio-only participants are accommodated via the integrated audio bridge.

Leveraging SVC
The end user experience of regular videoconferencing solutions is constrained by the architecture and the available bandwidth. MCUs employ transcoding, which introduces latency as high as 400 ms, and video traffic is bursty, so transmitting it normally requires enough bandwidth headroom to absorb large bursts of data. This can significantly impact the performance of other applications running on the network. In addition, MCUs restrict the constant eye contact and natural flow found in face-to-face conversations.

These constraints have been removed by leveraging the benefits of parallel processing and employing advanced scalable video coding (SVC; for more information on how SVC works, see the No Jitter posts by John Bartlett of NetForecast).

This combination removes the need for an MCU: instead of transcoding the signals centrally, the solution employs independent encoding control at 1080p resolution. Each piece of video is handled separately, which is where parallel processing comes in. Every endpoint can dynamically adapt its encoding to match its own resources, which allows notebook PCs and mobile phones to participate, albeit at a lower resolution.
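The core SVC idea, each endpoint taking only the subset of scalable layers it can decode and receive, can be sketched as below. The layer ladder and the endpoint figures are illustrative assumptions, not Magor's actual parameters:

```python
from dataclasses import dataclass

@dataclass
class SvcLayer:
    width: int
    height: int
    kbps: int

# Hypothetical scalable ladder: each layer refines the ones below it.
LAYERS = [
    SvcLayer(320, 180, 300),     # base layer
    SvcLayer(640, 360, 900),     # + first enhancement layer
    SvcLayer(1280, 720, 2500),   # + second enhancement layer
    SvcLayer(1920, 1080, 6000),  # full 1080p
]

def pick_layers(max_height: int, bandwidth_kbps: int) -> list:
    """Return the prefix of layers this endpoint can decode and receive.
    No transcoding is needed: a weaker endpoint simply stops lower
    in the stack, which is why an MCU becomes unnecessary."""
    chosen = []
    for layer in LAYERS:
        if layer.height > max_height or layer.kbps > bandwidth_kbps:
            break
        chosen.append(layer)
    return chosen

# A notebook on a constrained link drops the top enhancement layers;
# a telepresence room takes the full 1080p stack.
notebook = pick_layers(max_height=720, bandwidth_kbps=2000)
room = pick_layers(max_height=1080, bandwidth_kbps=10000)
```

Because the layers are hierarchical, the sender encodes once and each receiver subscribes to as much of the stack as it can handle, which is the property the article relies on for mixed notebook/room sessions.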

The bursty traffic issue is addressed by smoothing the flow of traffic over the network. The video signal is partitioned into a set of regions or segments based on the characteristics of the content, and a separate codec is used for each segment based on the motion and signal detail.
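One way to picture this smoothing is a fixed per-frame bit budget divided across content regions in proportion to their motion and detail, so the total sent per frame stays flat instead of bursting. The function and activity scores below are a hypothetical sketch of that idea, not Magor's actual algorithm:

```python
def smooth_bitrate(segments: dict, budget_kbps: int) -> dict:
    """Split a fixed bit budget across regions in proportion to their
    motion/detail score. High-motion regions get more bits, but the
    per-frame total never exceeds the budget, keeping network load flat."""
    total = sum(segments.values())
    return {name: round(budget_kbps * score / total)
            for name, score in segments.items()}

# A frame where the speaker moves but the rest of the scene is static:
frame = {"speaker": 8.0, "background": 1.0, "shared_desktop": 1.0}
allocation = smooth_bitrate(frame, budget_kbps=4000)
```

The fixed total is the point: other applications on the network see a steady stream rather than bursts tied to on-screen motion.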

Remote Collaboration Access
The reach of a TeleCollaboration session can be extended to endpoints that don't have video capability. Each Magor endpoint has an integral audio bridge and an integral Web server, which allows road warriors to dial into the audio bridge for voice connectivity.

To connect to the session's collaboration material they point their browser to the integral Web server. Security is provided by VPNs.

SIP Interoperability
The solution interoperates with legacy video conferencing endpoints and MCUs via a standard SIP-based protocol, while retaining all the video, audio and collaboration capabilities of the TeleCollaboration session.

Bob Emmerson is a freelance writer who lives in The Netherlands. Email: [email protected]. Web: www.electric-words.org