On Sat, Jul 11, 2015 at 5:57 PM, <robocon@ozemail.com.au> wrote:
Joseph Paul wrote:
Yeah, that idea got me thinking about what would actually be required of a brain wrangling multiple drones.

Attention and consciousness are relatively narrow bandwidth, as the psychologists have shown.
So we're looking at real-time strategy game style unit management. "Go to co-ordinates (X, Y)" rather than "Thrusters 60%, roll x for y seconds, pitch n degrees..."
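A minimal sketch of that abstraction gap, assuming a hypothetical squad API (all names here are invented for illustration, not from any real drone stack): the operator issues one high-level "go to (X, Y)" order, and each drone's onboard autopilot works out the thruster-level details.

```python
# Hypothetical sketch: RTS-style unit management. The operator's narrow
# attention channel issues waypoint orders; the drone's autopilot (here
# just a stub) expands them into low-level actuator commands.

from dataclasses import dataclass, field

@dataclass
class Drone:
    x: float = 0.0
    y: float = 0.0
    log: list = field(default_factory=list)

    def goto(self, tx: float, ty: float) -> None:
        """High-level order: 'go to co-ordinates (X, Y)'."""
        # The autopilot, not the operator, handles "thrusters 60%,
        # roll x for y seconds, pitch n degrees..."
        self.log.append(f"autopilot: thrust toward ({tx}, {ty})")
        self.x, self.y = tx, ty

# One operator command fans out to a whole squad:
squad = [Drone() for _ in range(3)]
for d in squad:
    d.goto(10.0, 5.0)
```

The point is the bandwidth saving: one short symbolic order per unit, versus a continuous stream of low-level control inputs per drone.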

Our hypothetical drone wranglers are going to need a lot of training, and the interface will need many layers of safety built in, to prevent crippling hallucinations or lethal biofeedback.

Was this supposed to be piped directly to the brain, or played over the eyes with lasers?
Use as many channels as possible.

Some potential Inputs/"Displays":
- vision
- hearing
- touch: vibration, pressure, pain, itch, temperature
- proprioception (joint position sense, body orientation) 
- acceleration - semicircular canals
- taste
- smell

Some potential "control channels":
- small muscle movements - extra-ocular, blink
- (micro)vocalisations
- touch mapping e.g. tongue to teeth/gums, skin as touch pad (scratch an itch, fire a missile?)
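One way to picture multiplexing several of those low-bandwidth control channels onto a single command stream (channel names and bindings below are entirely invented for illustration):

```python
# Hypothetical sketch: each (channel, gesture) event from the wrangler
# maps to one drone command. Blinks, microvocalisations, and tongue
# taps become a small symbolic command vocabulary.

COMMAND_BINDINGS = {
    ("blink", "double"): "select_next_unit",
    ("vocal", "sub_go"): "execute_waypoint",
    ("tongue", "upper_left_molar"): "cycle_camera",
    ("skin", "scratch_forearm"): "fire_missile",  # scratch an itch, fire a missile?
}

def dispatch(channel: str, gesture: str) -> str:
    """Map a sensed control event to a drone command; unknown events are ignored."""
    return COMMAND_BINDINGS.get((channel, gesture), "no_op")

print(dispatch("blink", "double"))  # select_next_unit
print(dispatch("vocal", "hum"))     # no_op
```

The "no_op" default is doing the safety-layer work mentioned above: anything not deliberately bound does nothing, so an involuntary twitch can't fire a missile.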

An article I'd been working on for Freelance Traveller... but I got creeped out by it and never really finished.
Parts of it are relevant, I think.

https://docs.google.com/document/d/1i809pu4G-iUvGOMdAg7zGsvE6pqDvweH8Fzn-ojgf4A/edit?usp=sharing

Title is "Battle Dress User Interface at Higher Tech Levels"