From 1967 to 1970, the artist Brian O'Doherty composed a series of structural plays.
The function of these plays was to translate language into space. Some factors were lost in translation: body language, facial expressions, nuance. Other factors were created in translation, or exaggerated: movement, emphasis, mirroring.
Here's the script for Domicile:
And here's what a performance looked like:
I'm not sure whether these plays would be any fun to watch, but I was immediately drawn to the formalism of the grids, the aesthetic of the drawn paths, the tension between movement and dialog, the translation of language to space.
I started sketching out how we might make something interesting for today's world.
Projection mapping and computer vision make it possible to further tighten the translation between language and space. For example, words could be projected onto tiles. And the actors' positions could be tracked and converted into language.
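To make the second half concrete, here's a minimal sketch of that position-to-language mapping. Everything here is hypothetical: the 3×3 grid, the words on the tiles, and the `path_to_line` helper are all my own placeholders, standing in for whatever a real tracking system would feed in.

```python
# Hypothetical stage: a 3x3 grid of tiles, each carrying a projected word.
GRID = [
    ["I", "you", "we"],
    ["want", "see", "leave"],
    ["here", "now", "home"],
]

def path_to_line(path):
    """Read an actor's tracked path — a sequence of (row, col)
    tile positions — back out as a line of text."""
    return " ".join(GRID[row][col] for row, col in path)

# An actor steps on three tiles in sequence:
print(path_to_line([(0, 0), (1, 1), (0, 1)]))  # → I see you
```

In a real installation the `(row, col)` pairs would come from the computer-vision tracker rather than a hardcoded list, but the core translation is just this lookup: the floor is the vocabulary, and walking is the sentence.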
Machine learning enables the production of plays that are not predetermined. Composition and performance collapse into one action as actors literally "walk their lines."
Accessibility follows naturally: neither playwrights nor professional actors are required, and participants can drift freely between stage and seats.
The result is what we could call metastructural theater. This is both a playful nod to O'Doherty's work and an accurate description: metastructural theater would allow the production of structural plays like — and unlike — O'Doherty's.
Some initial sketches: