I'm going to describe a way of doing subtitles that may be extremely good for learning the target language. I'm probably not the first to think of it, but I can't find reports of previous attempts. Maybe the reader can point me to a paper. Or maybe it's never been tried, in which case maybe the reader can tell me whether it's a dumb idea. And if it isn't dumb, maybe the reader can reap the glory of vindicating it by being the first to implement it.
Fine Mapping Subtitles are subtitles where words (or parts of words) in the subtitles animate in some way (for example, moving, glowing, or becoming underlined) right as the words in the voiceover that share their meaning are spoken.
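To make the idea concrete, here's a minimal sketch of what the underlying data might look like. Everything here (the type names, the fields, the overall shape) is my own invention, not an existing subtitle format; it just shows that each cue would need per-word audio timings plus pointers into the subtitle text, and that a single spoken word may map onto zero, one, or several subtitle spans.

```typescript
// Hypothetical data model for a fine-mapping cue. None of these names come
// from a real subtitle standard; they're illustration only.

/** A span of the on-screen subtitle text, by character offsets. */
interface SubtitleSpan {
  start: number; // inclusive character offset into the cue text
  end: number;   // exclusive character offset
}

/** One point of synchronisation: a spoken interval mapped to subtitle spans. */
interface WordMapping {
  audioStart: number;    // seconds into the video where the spoken word begins
  audioEnd: number;      // seconds where it ends
  spans: SubtitleSpan[]; // subtitle text to animate; empty if nothing maps across
}

/** A whole subtitle cue with its fine mapping. */
interface FineMappedCue {
  cueStart: number; // seconds, when the cue appears
  cueEnd: number;   // seconds, when it disappears
  text: string;     // the translated subtitle line
  mappings: WordMapping[];
}

// Example: the German line "Ich habe den Hund gesehen" subtitled as
// "I saw the dog". The split verb "habe ... gesehen" maps, in two separate
// spoken intervals, onto the single English word "saw": a complicated,
// partial mapping of exactly the kind the viewer would otherwise miss.
const cue: FineMappedCue = {
  cueStart: 12.0,
  cueEnd: 14.5,
  text: "I saw the dog",
  mappings: [
    { audioStart: 12.0, audioEnd: 12.3, spans: [{ start: 0, end: 1 }] },   // "Ich" -> "I"
    { audioStart: 12.3, audioEnd: 12.6, spans: [{ start: 2, end: 5 }] },   // "habe" -> "saw"
    { audioStart: 12.6, audioEnd: 12.9, spans: [{ start: 6, end: 9 }] },   // "den" -> "the"
    { audioStart: 12.9, audioEnd: 13.4, spans: [{ start: 10, end: 13 }] }, // "Hund" -> "dog"
    { audioStart: 13.4, audioEnd: 14.1, spans: [{ start: 2, end: 5 }] },   // "gesehen" -> "saw" again
  ],
};
```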
I feel like fine mapping might make it extremely easy for a viewer to learn a lot of the voiceover language just by watching the subtitles and listening. One of the initial obstacles to learning is that you don't always realise immediately which word in the subtitle corresponds to which word in the voiceover: sometimes the mapping between languages is quite complicated, and sometimes it's only partial. Given fine mapping, you'd know when a word in the translation maps across, and you'd know which sounds it maps to. You'd be shown quite clearly what each spoken word means as the translation animates along with it.
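The display side, at least, seems straightforward. Here's a browser-side sketch of the animation, assuming the hypothetical FineMappedCue shape from above and a hypothetical .active CSS class that does the glowing or underlining:

```typescript
// A minimal playback sketch: wrap each character of the cue text in its own
// <span>, then on every frame highlight exactly the spans whose spoken word
// is active at the video's current time.

function renderCue(cue: FineMappedCue, container: HTMLElement): HTMLSpanElement[] {
  container.textContent = "";
  return [...cue.text].map((ch) => {
    const span = document.createElement("span");
    span.textContent = ch;
    container.appendChild(span);
    return span;
  });
}

function startFineMapping(
  video: HTMLVideoElement,
  cue: FineMappedCue,
  container: HTMLElement,
): void {
  const chars = renderCue(cue, container);
  const tick = () => {
    const t = video.currentTime;
    chars.forEach((c) => c.classList.remove("active"));
    for (const m of cue.mappings) {
      if (t >= m.audioStart && t < m.audioEnd) {
        for (const s of m.spans) {
          for (let i = s.start; i < s.end; i++) chars[i].classList.add("active");
        }
      }
    }
    if (t < cue.cueEnd) requestAnimationFrame(tick);
  };
  requestAnimationFrame(tick);
}
```

Character-level spans rather than word-level ones are a deliberate choice in this sketch: they let a mapping highlight parts of words, which the definition above explicitly allows for.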
If neurons for meaning and sound are consistently induced to fire together, they may, as it is said, wire together. This may make learning so easy that it happens accidentally, as a conditioned response. I think this is worth experimenting with. Stranger things have turned out to work.
Admittedly, making fine mapping subtitles would be kind of challenging. You'd need to mark something like nine times as many points of synchronisation (presumably roughly one per word instead of one per line), and your timing would need to be spot on. You'd need to be willing to put in a lot more effort for something only a small fraction of viewers would materially benefit from. But if it were ever demonstrated that these subtitles have benefits, they might come to be the kind in highest demand.