I am a 24-year-old new media artist based in London. For the past two years I have been creating interactive and generative artworks that leverage emergent technology such as EEG, transparent displays, and the potential of new graphics hardware. Since winning the LG Luminous Prize for my ALPHA[BETA] brain-computer interface project, I have been developing BCI systems with the aim of increasing the interface bandwidth between humans and digital systems. Since the age of 16 I have been creating interactive visuals, stage sets and immersive experiences for music events and sold-out club nights across London. In 2018 I organised and executed a “Tate Late” event with my record label SouthSpace that drew over 600 visitors; we were the first crew to show AI-generated video in Tate Britain, presenting a GAN-generated piece that merged every painting in the gallery. I went on to design a touring stage set for DJ Amy Becker, perform live visuals for the headline slot at “The Cyberdome”, the AV stage at Boomtown festival, and create a brand video for the launch of Nike Shox trainers, performing a live visual set alongside DJ crew PXSSY PALACE at the shoe’s launch event. In 2019/20 I co-curated an electronic album called “Mirror Sound” in collaboration with SouthSpace director Stan Sorrell. We recorded a sample bank of “the sound of London” and took it to Tokyo, where we gave it to Japanese producers; we then recorded “the sound of Tokyo” and gave that to London-based producers, creating a musical exchange between the two cities. We went on to press a vinyl record with the album’s title track, which sold out at London record store Phonica.
Since the age of 16 I have also worked for an art and props fabrication company called MDM Props, where I have been a key team member in the fabrication and installation of artworks for Richard Wilson, Yinka Ilori, Monster Chetwynd, Marc Quinn, Fendi stores in Paris, Milan and Rome, Hermès, Louboutin and many others, including serving as lead installer for two large-scale pieces in Anish Kapoor’s exhibition at the 2022 Venice Biennale. My practice draws upon all of these experiences and skills to produce an eclectic mix of outcomes that span genres and mediums. You can find examples of my work at @finbar_marcel on Instagram; website incoming.
ALPHA[BETA] is an exploration of brain-computer interfacing and an experiment in post-language communication. Driven by my interest in increasing the bandwidth of communication between humans and digital interfaces, as well as enabling those who are not neurotypical to engage with the digital world, I developed an audio-visual environment that reacts to real-time EEG data produced by participants. The system allows participants to train multiple commands, namely spatial commands such as “up, down, left, right, push, pull”, so that they can control digital objects and the audio output of the system. The project is called ALPHA[BETA] because the system responds to the alpha and beta frequency ranges of brain signals, and because the alphabet is an enabler of conventional communication. This piece explored the potential for interfacing the brain with digital systems, so naturally I then wanted to explore interfacing the brain with physical systems, to create an instance where mind could move matter. I decided to build what I call the “FerroDisplay”: a device that allows a user to control and manipulate a tank of magnetic fluid (ferrofluid) through thought alone. By controlling an array of electromagnets mapped to EEG inputs, one can move the fluid within the tank and create shapes, patterns and animations. The effect of controlling matter directly with the brain is a surreal feeling of fluency, a sense that one can manifest thoughts into physical space immediately. It is a very different experience from controlling something on screen, as it feels like an extension of the limbic system rather than of our communication and language systems.
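The kind of mapping the FerroDisplay relies on, EEG band power driving an electromagnet array, can be sketched in a few lines. This is a minimal illustrative sketch only, not the actual implementation: the 256 Hz sample rate, the exact alpha (8–12 Hz) and beta (13–30 Hz) band edges, the four-magnet array, and the beta/alpha “focus ratio” mapping are all assumptions made for the example.

```python
import numpy as np

FS = 256  # assumed EEG sample rate in Hz


def band_power(signal, fs, lo, hi):
    """Mean spectral power of `signal` within the [lo, hi) Hz band via FFT."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= lo) & (freqs < hi)
    return power[mask].mean()


def magnet_duties(eeg_window, fs=FS, n_magnets=4):
    """Map alpha and beta band power of one EEG window to PWM duty
    cycles (0.0-1.0) for a hypothetical array of electromagnets."""
    alpha = band_power(eeg_window, fs, 8, 12)
    beta = band_power(eeg_window, fs, 13, 30)
    # Focus ratio: ~0 when alpha dominates (relaxed), ~1 when beta
    # dominates (focused). Purely an illustrative control signal.
    ratio = beta / (alpha + beta + 1e-9)
    # Sweep activation across the array as focus rises; each magnet's
    # duty cycle is clamped to [0, 1].
    return [max(0.0, min(1.0, ratio * n_magnets - i)) for i in range(n_magnets)]
```

A real system would smooth the band powers over time and calibrate per user, but the core loop is the same: window the EEG stream, estimate alpha/beta power, and translate the result into drive levels for the magnets.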