Greetings from day two of the "On-line Learning Futures Festival", being held online worldwide. The first session today was moderated from USQ in Queensland, Australia, but presented from the University of the South Pacific in Fiji on Shingle teaching. The challenges of teaching in a second or third language, with campuses spread over a large part of the Earth, are daunting. In 2005 I spent a week in Apia, Samoa, teaching web design for Pacific museum staff and saw some of the challenges first hand.
Yesterday I spent much of the day "at" the futures conference. This caused much amusement among my colleagues, who could see me sitting there with a headset on, staring intently at my screen and occasionally furiously typing comments. They threatened to hang a sign on my back saying "Rocket Scientist".
The slides and the presenter's audio worked very well with the "Blackboard Collaborate" software. The live video of the presenter did not work so well. USQ had only one fixed camera, with no zoom function. As a result there was just a wide shot, showing the presenter and some USQ signs. It would have been better to zoom in to a close-up of the presenter after the initial introduction.
The text chat function worked well as a back-channel to handle routine questions. There were major problems with audio questions from participants. The moderator had to keep reminding participants to turn on their microphones before speaking and to turn them, and their video, off afterwards. I could not understand what the problem was until I tried it myself this morning. Blackboard Collaborate has a button for talk and one for video. However, it is very difficult to tell when these are off or on. This is a serious flaw in the design of the software's interface which caused considerable difficulty.
It is remarkable that the web-based video products are little better than the video conferencing software I (tried) to use 15 years ago: at one government agency we had a very expensive video conference system which was mostly used as a jukebox for playing CDs at Friday afternoon drinks. ;-)
Recently I have been thinking about the way what educators call synchronous online education can be combined with what they call asynchronous. I concluded that these terms are technically incorrect and misleading. The terms used in computer science, "real-time" and "store and forward", would be clearer. It may be that educators have asked their software developers for a synchronous tool, and the result is the problems experienced with Blackboard Collaborate (and similar products). This is partly a technical problem, in that the Internet service can't keep up with the real-time requirements, but partly a conceptual one, in that the face-to-face events being emulated are not purely synchronous.
Most of the time a live event is a combination of near real-time and store and forward. That is, participants are all listening to one person speak, but are each doing other things on their own. I suggest that if the real-time requirements for the software were relaxed, the result would be something which coped with network problems better and also better matched what actually happens at an event.
I suggest that an online collaboration product could be designed which only supports half-duplex communication: that is, only one person could talk and transmit at a time. Speakers would have to take turns. Also, a delay of at least a second could be inserted between speakers, and a time limit would be set for how long someone could talk without interruption (perhaps 30 seconds, being the speaking equivalent of an SMS message). This would make for a very stilted conversation between two people, but would not impede a presentation where one person is talking much of the time and then takes questions. The system might also automatically queue the speakers.
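To make the idea concrete, the turn-taking rules above (one speaker at a time, a queue of waiting speakers, a one-second gap between turns, and a 30-second cap per turn) could be sketched as a small "floor control" state machine. This is only an illustration of the scheme; the class and method names are my own invention, not part of any real product:

```python
from collections import deque

class FloorControl:
    """Half-duplex floor control: one speaker holds the floor at a
    time, with a time limit per turn and an enforced gap between
    speakers. All names here are hypothetical, for illustration."""

    TURN_LIMIT = 30.0   # seconds a speaker may hold the floor
    GAP = 1.0           # enforced silence between speakers

    def __init__(self):
        self.queue = deque()       # participants waiting to speak
        self.speaker = None        # current floor holder, if any
        self.turn_started = 0.0    # time the current turn began
        self.free_at = 0.0         # earliest time the next turn may start

    def request_floor(self, participant, now):
        """Queue a request; grant it at once if the floor is free.
        Returns True if this participant now holds the floor."""
        self.queue.append(participant)
        self._maybe_grant(now)
        return self.speaker == participant

    def release_floor(self, participant, now):
        """Speaker finishes (or is cut off); start the mandatory gap."""
        if self.speaker == participant:
            self.speaker = None
            self.free_at = now + self.GAP

    def tick(self, now):
        """Call periodically: end over-long turns, then grant the next."""
        if self.speaker and now - self.turn_started >= self.TURN_LIMIT:
            self.release_floor(self.speaker, now)
        self._maybe_grant(now)

    def _maybe_grant(self, now):
        if self.speaker is None and self.queue and now >= self.free_at:
            self.speaker = self.queue.popleft()
            self.turn_started = now
```

A two-person conversation under these rules would indeed be stilted, as noted above, but a presenter taking queued questions would barely notice the constraints.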
Another function would be the ability to pause the session, rewind, and jump to any point in the past, or the future. Current systems seem to do recording like an old-fashioned VCR: you have to wait until the session is over before you can replay the recording. If each audio/video item from a person is limited to less than a minute and there is a pause between each, this would make it possible to compress, index and store each item and make it available for instant replay.
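The indexing this implies is simple: if every utterance is a short, self-contained segment, the session becomes a sorted list of segments, and "jump to any point" is just a binary search on start times. Here is a minimal sketch of that idea; the class and field names are hypothetical, not from any existing system:

```python
import bisect

class SegmentStore:
    """Store each short utterance as a separate indexed segment, so
    any moment of the session can be replayed while the session is
    still running. Illustrative sketch only; names are hypothetical."""

    def __init__(self):
        self.starts = []     # segment start times (sorted, since
        self.segments = []   # sessions record in time order)

    def add(self, start, speaker, payload):
        """Append a finished segment as soon as the speaker stops."""
        self.starts.append(start)
        self.segments.append((start, speaker, payload))

    def seek(self, t):
        """Return the (start, speaker, payload) segment covering
        session time t, or None if t is before the session began."""
        i = bisect.bisect_right(self.starts, t) - 1
        return self.segments[i] if i >= 0 else None
```

Because segments are added as soon as each speaker finishes, a participant could pause and replay an earlier answer while the live session continues, rather than waiting for a VCR-style recording at the end.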