
A Robotic Arm Powered Rag & Bone's Latest Remarkable Fashion Performance

SpecialGuest and Espadaysantacruz shift perspectives

At this fashion show, a huge robotic arm had a hand in the glitz and glamour.

Some 43 models, two dancers, a pair of drummers and a Universal Robots UR10e robotic arm hit the runway at New York Fashion Week in September to flaunt Rag & Bone’s Spring/Summer 2020 collection.

The arm was outfitted with cameras that fed real-time imagery to large video monitors for the 600 or so attendees gathered at the New York Mercantile Exchange to savor.

Staged by creative agency SpecialGuest and experiential/AV shop Espadaysantacruz Studio (with production company 1stAveMachine), the event lasted 15 minutes, fusing fashion and technology into a hybrid slice of performance art. 

You can check out highlights and behind-the-scenes footage in this clip:

The sight of that bot joining in the spectacle—its metallic surfaces keeping the beat with those flesh-and-blood dancers—was impressive in and of itself, a reflection of the outsized role technology plays in most aspects of human endeavor these days. 

But supplying a video feed was its main mission, and the arm filtered the action through two different lenses. One camera captured footage of the models and dancers as they did their thing, while an A.I.-driven Azure Kinect produced three-dimensional scans. As the music swelled, stylized representations of the performers flashed across the screens. At times, their bodies seemed to blend into a single form, then suddenly break apart, adding a futuristic, ethereal aura to the show (and, perhaps, making a statement about the transformative power of the creative spirit). 
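For the technically curious, here is a minimal sketch of how a dual feed like that can be grabbed in Python. It assumes the open-source pyk4a wrapper for the Azure Kinect; the article doesn’t say what software the studio actually used, and the resolution and depth-mode settings below are purely illustrative.

import numpy as np
import pyk4a
from pyk4a import Config, PyK4A

# Illustrative only: grab one color frame and one 3D point cloud from an
# Azure Kinect, the two "lenses" described above.
k4a = PyK4A(
    Config(
        color_resolution=pyk4a.ColorResolution.RES_1080P,
        depth_mode=pyk4a.DepthMode.NFOV_UNBINNED,
        synchronized_images_only=True,
    )
)
k4a.start()

capture = k4a.get_capture()

# Conventional video view: a color image from the RGB camera.
color_frame = capture.color

# "Digital" view: every depth pixel expressed as an (x, y, z) point in
# millimeters, registered to the color camera so the two views line up.
points = capture.transformed_depth_point_cloud.reshape(-1, 3)
points = points[np.any(points != 0, axis=1)]  # drop pixels with no depth reading

print(f"color frame: {color_frame.shape}, point cloud: {points.shape[0]} points")

k4a.stop()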

This marks SpecialGuest’s third outré outing for Rag & Bone, following a film project with cinematographer Darius Khondji and this VIP dinner hosted by artificial intelligence. (As for robotic arms, Bombay Sapphire deployed such automatons for this art installation in Los Angeles.) 

Below, SpecialGuest co-founder Aaron Duffy and Espadaysantacruz co-founder/CEO Miguel Espada field Muse’s questions about the Fashion Week foray: 

Muse: Where’d this idea come from?

Aaron Duffy: Each time Rag & Bone founder Marcus Wainwright has reached out, we have sat down for a long brainstorm. Marcus has a sharp vision for Rag & Bone, and he knew he wanted a large circular space with a chorus and two drummers—from the Thom Yorke superband Atoms for Peace. But he also wanted to develop a new technological way to visualize the collection, live, during the show. Months earlier, Miguel Espada had shown me some tests he was doing with robotic arms with his studio, Espadaysantacruz, so I decided to pitch Marcus a mashup of fashion show models, dancers and robots. Marcus liked it. We gave the robot the responsibility of capturing the show, but in its own way. We wanted the robot to have a traditional video view of the show, but also a more digital view—captured in a point-cloud array. All of these ideas came together very quickly, and all the while I was checking in with Miguel wondering, “Can we pull this off? Is it too risky to do it live?” 

Why does fusing tech and fashion make sense for the brand?

Aaron Duffy: Not to overgeneralize, but new fashion gives us a glimpse of where a culture is going and how it is looking or feeling. It’s a new perspective on how people will be presenting themselves. So fusing new technologies with these projects has not been the starting point, but it has been the method for creating the right kind of perspective shift for the viewer. I’ve learned on these projects that fashion can split the difference between art and commerce in such a way that we accept abstraction, but also consider its tangible relevance to our everyday lives. We’re really lucky to be able to work on projects like this, because A.I., robotics and machine vision are new parts of our lives that we need to grapple with. Doing it through a fashion show is kind of perfect.

Can you talk about the tech setup at the show?

Miguel Espada: The robot had two different cameras: a normal camera and an Azure Kinect that captures reality in three dimensions—that is, it creates three-dimensional points in space, and all those points together form a scan of reality. During the show, we used both ways of looking, reflecting in a metaphorical way how the robot analyzes and compresses reality. In the narrative of the show, the robot’s vision is at first purely analytical, based on data. For that we almost exclusively used the point cloud. As the show went on, we mixed real images with the point cloud.
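As a rough illustration of that mix of “ways of looking,” the sketch below projects a point cloud into a flat image and cross-fades it with the camera frame, with a single mix value playing the role Espada describes: mostly point cloud early in the show, more real imagery later. Everything here (the function names, the camera intrinsics, the synthetic data) is illustrative rather than the studio’s actual pipeline; it assumes only NumPy and OpenCV.

import numpy as np
import cv2

def render_point_cloud(points_mm, intrinsics, image_size):
    """Project 3D points (millimeters, camera frame) into a grayscale 'digital' view."""
    fx, fy, cx, cy = intrinsics
    h, w = image_size
    img = np.zeros((h, w), dtype=np.uint8)
    z = points_mm[:, 2]
    valid = z > 0
    u = (points_mm[valid, 0] * fx / z[valid] + cx).astype(int)
    v = (points_mm[valid, 1] * fy / z[valid] + cy).astype(int)
    in_frame = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    # Brightness falls off with distance, a crude stylization of the scan.
    shade = np.clip(255.0 - z[valid][in_frame] / 20.0, 0, 255).astype(np.uint8)
    img[v[in_frame], u[in_frame]] = shade
    return cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)

def mix_views(camera_frame, cloud_frame, mix):
    """mix=0.0 is pure point cloud, mix=1.0 is pure video."""
    return cv2.addWeighted(camera_frame, mix, cloud_frame, 1.0 - mix, 0)

# Synthetic stand-ins for the two Kinect streams.
h, w = 720, 1280
camera_frame = np.full((h, w, 3), 80, dtype=np.uint8)
points = np.random.uniform([-1000, -1000, 1500], [1000, 1000, 4000], (5000, 3))
cloud_frame = render_point_cloud(points, (600.0, 600.0, w / 2, h / 2), (h, w))
screen_output = mix_views(camera_frame, cloud_frame, mix=0.3)  # early-show look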

What kind of prep went on beforehand?

Miguel Espada: We were in a warehouse for three weeks trying to push the possibilities to the limit. Working with technology, especially when everything is programmed—as is our approach—is very, very slow and sometimes frustrating. You have an idea of how to generate a new pattern of movement, then you spend hours or days coding it without seeing any results. Sometimes the result is great, and other times it is not, so you have to start over.

What were your biggest challenges?

Miguel Espada: Perhaps the most challenging part was the work with the dancers. We wanted the interaction with them to be interesting both in terms of image and movement. For this, we developed a system in which the robot could learn the movements directly from the dancers in an organic way. Otherwise, it would have been impossible to create movements with the necessary rhythm and cadence.
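The article doesn’t spell out how that learning worked. One plausible shape for it, on a Universal Robots arm, is to record joint angles while the arm is guided through a movement (or driven from tracked motion), then replay them at the captured cadence. Below is a minimal record-and-replay sketch, assuming the open-source ur_rtde library; the IP address, rates and servo parameters are placeholders, not the studio’s setup.

import time
import rtde_control
import rtde_receive

ROBOT_IP = "192.168.0.10"   # placeholder address
RATE_HZ = 50                # sampling / playback rate
DURATION_S = 10             # length of one recorded phrase

rtde_c = rtde_control.RTDEControlInterface(ROBOT_IP)
rtde_r = rtde_receive.RTDEReceiveInterface(ROBOT_IP)

# Record: put the arm in freedrive so its motion can be shaped by hand,
# and sample the six joint angles at a fixed rate.
trajectory = []
rtde_c.teachMode()
for _ in range(RATE_HZ * DURATION_S):
    trajectory.append(rtde_r.getActualQ())
    time.sleep(1.0 / RATE_HZ)
rtde_c.endTeachMode()

# Replay: stream the recorded joint targets back at the same cadence,
# which preserves the rhythm of the original movement.
dt = 1.0 / RATE_HZ
for q in trajectory:
    rtde_c.servoJ(q, 0.5, 0.5, dt, 0.1, 300)
    time.sleep(dt)
rtde_c.servoStop()
rtde_c.stopScript()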

Did a human drive the arm, or were all its moves pre-programmed?

Miguel Espada: In the show, we used a mix of prerecorded commands and manual control. Our initial idea was to have everything prerecorded … and let the show run on its own. That would have been the most logical and safe thing to do, and what every person with common sense would have done for such a risky show. As we were doing experiments and rehearsing, we realized it was important to modify the behavior during the show … to give it much more expressiveness. So we developed a mixed system based on cues and marks, through which we could tell the robot the next sequence of movements. Real-time control also allowed us to improvise—adjusting positions, speed and rotation at all times. In the final show, this was very important, because we could always go to the most interesting part [for the video feeds to capture]: models, percussionists and dancers. At the end of the show, Marcus came out to say hello—he usually doesn’t like doing it—so the robot could follow him and get a magnificent shot with the camera rotating upside-down.
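To make the “cues and marks” idea concrete, here is a toy cue sequencer of the kind Espada describes, where prerecorded movement sequences are triggered by name and an operator can scale the playback speed live. The class, the cue names and the speed limits are all illustrative guesses rather than the studio’s system, and the robot command is stubbed out with a print so the sketch runs on its own.

import time
from typing import Callable, Dict, List, Sequence

JointTarget = Sequence[float]  # six joint angles for a UR-style arm

class CueSequencer:
    """Plays back named, prerecorded movement sequences on cue, while letting
    an operator scale speed live, a mix of automation and manual control."""

    def __init__(self, send_waypoint: Callable[[JointTarget, float], None],
                 base_dt: float = 0.02):
        self.send_waypoint = send_waypoint   # e.g. a servo call to the arm
        self.base_dt = base_dt               # nominal time between waypoints
        self.sequences: Dict[str, List[JointTarget]] = {}
        self.speed = 1.0                     # live override: >1 faster, <1 slower

    def add_sequence(self, cue: str, waypoints: List[JointTarget]) -> None:
        self.sequences[cue] = waypoints

    def set_speed(self, speed: float) -> None:
        """Operator override during the show (clamped to a safe range)."""
        self.speed = max(0.25, min(speed, 2.0))

    def trigger(self, cue: str) -> None:
        """Run one prerecorded sequence; speed changes take effect immediately."""
        for q in self.sequences[cue]:
            dt = self.base_dt / self.speed
            self.send_waypoint(q, dt)
            time.sleep(dt)

# Stand-in for the real robot command; in production this would stream to the arm.
def print_waypoint(q: JointTarget, dt: float) -> None:
    print(f"waypoint {q} over {dt:.3f}s")

seq = CueSequencer(print_waypoint)
seq.add_sequence("follow_models", [[0, -1.2, 1.0, 0, 0.5, 0]] * 3)
seq.set_speed(1.5)         # operator speeds things up as the drummers build
seq.trigger("follow_models")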

Any glitches you had to fix on the fly?

Miguel Espada: The final show was quite intense. That day, it was raining a lot in New York, so some models were delayed. Until the very last moment, we did not know how many models were going to appear. This was very important, because the system was based on a more or less regular cadence of models. Thankfully, we could make adjustments in real time.

Another story, which is now fun but at the time was not, is the decapitation of the robot. [During rehearsal], just as we were going to start the last phase of content creation, the robot moved in such a way that it cut off its own head—and it flew off. These robots withstand extreme working conditions, but the unit we had was defective. Fortunately we got a more modern robot and, thanks also to the support of the Universal Robots engineers, we managed to update all the software for the new unit in record time and prevent any more problems like that.
