Roadmap

(Monitor) AI on the next gen

While waiting for the Wing Rack/Core, and seeing that everyone is now adding AI whether there's a reason to or not, here's a bit of fantasy:

On a "next-gen" mixing console you could tell an AI, via a dedicated talkback channel, how to mix your (IEM) monitor, e.g.: "Bus 1; vocals a bit louder."
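Just to make the idea a bit more concrete, here is a minimal sketch of the plumbing behind such a command, assuming the speech has already been transcribed to text and the console can be remote-controlled via OSC over UDP. The channel mapping, the "a bit"/"much" step sizes, and the OSC address pattern are all made up for illustration and are not the real Wing address space; the sketch uses the python-osc package.

```python
# Sketch: turn a transcribed monitor request into a console command.
# Assumes the console accepts OSC over UDP; all addresses/mappings are placeholders.

import re
from pythonosc.udp_client import SimpleUDPClient  # pip install python-osc

# Hypothetical mapping from spoken source names to console channel numbers.
CHANNEL_MAP = {"vocals": 1, "guitar": 2, "keys": 3}

# Relative gain steps for fuzzy amounts like "a bit" or "much".
STEP_DB = {"a bit": 2.0, "much": 6.0}

def parse_request(text: str):
    """Parse e.g. 'Bus 1; vocals a bit louder' into (bus, channel, delta_db)."""
    m = re.match(
        r"bus\s+(\d+)\s*[;,]\s*(\w+)\s+(a bit|much)?\s*(louder|quieter)",
        text.strip(),
        re.IGNORECASE,
    )
    if not m:
        raise ValueError(f"Could not understand: {text!r}")
    bus = int(m.group(1))
    channel = CHANNEL_MAP[m.group(2).lower()]
    step = STEP_DB.get((m.group(3) or "a bit").lower(), 2.0)
    delta = step if m.group(4).lower() == "louder" else -step
    return bus, channel, delta

def apply_request(client: SimpleUDPClient, text: str):
    bus, channel, delta = parse_request(text)
    # Placeholder OSC address: "adjust this channel's send level to this bus, relative, in dB".
    client.send_message(f"/ch/{channel:02d}/send/{bus:02d}/level/rel", delta)

if __name__ == "__main__":
    client = SimpleUDPClient("192.168.1.10", 10023)  # console IP/port are examples only
    apply_request(client, "Bus 1; Vocals a bit louder")
```

The hard (and interesting) part would of course be the speech recognition and the musical judgement, not this mapping layer.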

For semi-professional bands like ours this would be a helpful feature. We have a professional sound engineer at FOH, but we mix our monitors ourselves on a dedicated monitor console. Instead of fiddling with all the mobile apps, you could briefly tell the AI what to do, or let it optimize the mix (EQs, etc.) and widen the stereo image.

Would that be a good thing or not?