
Vol. 8 | Bilingual brain implants and AI-powered neurotech

How close are we *really* to AI reading your mind?

Bilingual brain implants and the future of neurotechnology

We thought we’d kick off your Wednesday morning with something a little light 😉

While bilingual brain implants are not the shortcut to speaking French we asked for (don’t delete your Duolingo app just yet…), this incredible technology is bringing hope to those with limited brain or body functioning (such as losing the ability to speak due to brain injury).

And, it’s just one example of the amazing advances being made in the field of neurotechnology. You may have come across neurotech in the headlines as Elon Musk’s company Neuralink recently performed a successful brain implant on a human patient for the first time (Source: The Conversation). But brain implants (known as invasive brain-computer interfaces, or BCIs) have been around in some form since the 1970s. However, in the past two decades, advances in other fields of science and tech (such as AI and neurosurgery) have contributed to major breakthroughs.

Proponents believe that neurotechnologies like these have the potential to treat disease, enhance cognition, and further blur the lines between human and machine 🦾 And now, with the power of artificial intelligence, these possibilities are seemingly limitless. However, with uncharted territory comes the discovery of new problems, like where does your brain data go? And what are the implications for privacy, equality, and identity?

So in this edition, we’ll start with a breakdown of what brain-computer interfaces are and how AI is supercharging them. Then we’ll share the incredible story of the bilingual brain implant that brought speech back to a man who lost the ability to speak after a stroke when he was 20 years old.

Stay tuned next week for part two of this neurotech and AI series, where we’ll cover the ethical dilemmas that are arising as neurotech powers ahead.

First, to make sense of it all, what is a brain-computer interface?

Brain-computer interfaces (or BCIs) are direct communication pathways from the brain to an external device. These can be non-invasive (think MRIs, EEGs, or other external devices that sit outside the head), partially invasive (under the skin but outside the skull), or invasive (implants that sit inside the skull and usually on/in the brain). They decode neural activity and translate it into commands that control computers, prosthetics, or other devices, or into insights about brain function.
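
For the technically curious, here’s a toy sketch (in Python) of what “decoding neural activity into commands” can look like at its very simplest. Everything in it is illustrative: the signal is simulated, the threshold is made up, and real BCIs rely on far more sophisticated filtering and calibrated decoders.

```python
# A toy sketch of the BCI idea: turn a raw brain signal into a command.
# Purely illustrative -- real systems use richer signals and decoders.
import numpy as np

SAMPLE_RATE_HZ = 250          # a typical EEG sampling rate
ALPHA_BAND = (8.0, 12.0)      # the alpha rhythm, often tied to relaxation

def band_power(signal: np.ndarray, band: tuple[float, float]) -> float:
    """Average spectral power of `signal` inside `band` (Hz)."""
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / SAMPLE_RATE_HZ)
    power = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(power[mask].mean())

def decode_command(signal: np.ndarray, threshold: float = 1e4) -> str:
    """Map one second of EEG to a (made-up) device command."""
    return "SELECT" if band_power(signal, ALPHA_BAND) > threshold else "IDLE"

# One second of fake EEG: a strong 10 Hz alpha wave plus noise.
t = np.arange(SAMPLE_RATE_HZ) / SAMPLE_RATE_HZ
eeg = 100 * np.sin(2 * np.pi * 10 * t) + np.random.randn(SAMPLE_RATE_HZ)
print(decode_command(eeg))    # -> "SELECT"
```

The point isn’t the specific maths - just that somewhere inside every BCI, a measured signal gets reduced to features and mapped to an action.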

With each level of invasiveness comes the potential for a better-quality connection with the source of the brain signals, which in theory enables better data capture. However, as the name would suggest, an invasive BCI involves complex surgery to implant electrodes directly within the brain, and usually requires breaching the blood-brain barrier.

Because of this, partially invasive BCIs have often been used in research settings to study brain function and develop new treatments for neurological disorders. They also show promise for people with paralysis or other disabilities, allowing them to control prosthetic limbs or communicate through thought.

Promisingly, one Australian company, Synchron, has even been able to bypass open-skull surgery by using the brain’s blood vessels to deliver electrodes to the brain. Synchron’s ability to implant devices without such high-risk surgery holds huge promise for making this technology more accessible.

Brain-computer interfaces (BCIs) have long held the promise of revolutionising how we interact with technology and understand our own brains. And while the concept of directly linking our minds to machines has been around for decades (both in science and pop-culture), recent advancements in artificial intelligence are accelerating progress in this field, opening up new possibilities for research, therapy, and communication.

OK, so that’s BCIs… but where does AI come into all of this?

One of the primary ways AI is enhancing BCI research is through its ability to analyse and interpret the vast amounts of neural data generated by these interfaces. Traditionally, researchers have relied on manual methods to decipher the complex patterns of brain activity, which is time-consuming and prone to error. AI algorithms, particularly machine learning models, can rapidly analyse these signals, identifying subtle patterns and correlations that may be missed by human observation. This not only speeds up research but also leads to more accurate and reliable results.
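
To make that concrete, here’s a hedged little sketch of the idea: feed (synthetic!) neural features to an off-the-shelf machine-learning model and let it find the pattern. Real decoding pipelines involve heavy preprocessing and far richer models, all of which this example skips.

```python
# Hedged sketch: a machine-learning model finding patterns in
# (synthetic) neural features. Not a real decoding pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend each row is band-power features from 16 electrodes during one
# trial, labelled with what the participant attempted (0 or 1).
n_trials, n_electrodes = 400, 16
X = rng.normal(size=(n_trials, n_electrodes))
y = rng.integers(0, 2, size=n_trials)
X[y == 1, :4] += 1.0          # electrodes 0-3 carry a subtle signal

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"decoding accuracy: {clf.score(X_test, y_test):.2f}")
```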

AI can also be used to customise BCI experiences for individual users. By learning the unique neural signatures of different individuals, AI algorithms can tailor the interface to respond more effectively to their thoughts and intentions. This personalised approach has the potential to significantly improve the usability and effectiveness of BCIs for a wide range of applications, from controlling prosthetic limbs to communicating with patients who have locked-in syndrome (paralysis that affects the ability to communicate).
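
One illustrative way that personalisation could work (our sketch, not a published method) is to start from a decoder trained on data from many people, then adapt it using a short calibration session from a new user:

```python
# Our sketch, not a published method: per-user calibration of a decoder.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)

# "Population" decoder: trained on pooled data from many prior users.
X_pop = rng.normal(size=(1000, 16))
y_pop = rng.integers(0, 2, size=1000)
X_pop[y_pop == 1, :4] += 1.0      # typical users: signal on electrodes 0-3

decoder = SGDClassifier(loss="log_loss", random_state=0)
decoder.partial_fit(X_pop, y_pop, classes=[0, 1])

# A new user whose informative electrodes sit elsewhere (4-7).
X_user = rng.normal(size=(60, 16))
y_user = rng.integers(0, 2, size=60)
X_user[y_user == 1, 4:8] += 1.5

# Scored on the same trials for brevity; a real study would hold out data.
print(f"before calibration: {decoder.score(X_user, y_user):.2f}")

# A few passes over the short calibration session nudge the weights
# toward this user's own neural signature.
for _ in range(20):
    decoder.partial_fit(X_user, y_user)

print(f"after calibration:  {decoder.score(X_user, y_user):.2f}")
```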

AI is also doing some heavy lifting in the development of more sophisticated and intelligent BCI systems. For example, researchers are exploring the use of AI to create "closed-loop" BCIs that can adapt and respond to changes in brain activity in real time (Source: Frontiers in Human Neuroscience). These systems could potentially be used to regulate mood, treat neurological disorders, or even enhance cognitive function.
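
As a cartoon of the closed-loop idea - measure, decode, adjust the stimulation, repeat - here’s a minimal sketch. The hardware-facing functions (read_window, decode_state, set_stimulation) are hypothetical stand-ins, and the control rule is the simplest proportional correction imaginable.

```python
# A cartoon of a closed-loop BCI: measure, decode, adjust, repeat.
# All three interface functions below are hypothetical stand-ins.
import time
import numpy as np

rng = np.random.default_rng(2)

def read_window() -> np.ndarray:
    """Pretend to grab the latest second of neural data."""
    return rng.normal(size=250)

def decode_state(window: np.ndarray) -> float:
    """Pretend decoder: higher score means 'symptom more active'."""
    return float(np.abs(window).mean())

def set_stimulation(amplitude_ma: float) -> None:
    print(f"stimulation set to {amplitude_ma:.2f} mA")

TARGET = 0.8                  # desired decoded state
GAIN = 0.5                    # how aggressively we correct
amplitude = 1.0

for _ in range(5):            # a real device would loop indefinitely
    error = decode_state(read_window()) - TARGET
    amplitude = max(0.0, amplitude + GAIN * error)  # proportional control
    set_stimulation(amplitude)
    time.sleep(0.01)          # real loops run on strict timing budgets
```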

AI-powered BCIs are also paving the way for new forms of human-computer interaction. Imagine being able to control your devices or communicate with others simply by thinking! This could revolutionise communication for people with disabilities and open up new possibilities for creative expression and collaboration.

According to a groundbreaking paper published in Nature Biomedical Engineering last week, researchers have successfully used AI systems to decode brain impulses in real time (translating them into English or Spanish) in a man who was left unable to speak after surviving a stroke when he was 20 years old.

The man - nicknamed Pancho - was left with paralysis in most of his body and an inability to verbalise beyond moans or grunts. Pancho, now in his 40s, is a native Spanish speaker who learned English following his stroke. He partnered with researchers at the University of California, San Francisco to study how the stroke had impacted his brain (Source: New England Journal of Medicine), which led to him having a brain-computer interface (BCI) implanted. This BCI was an array of electrodes connected to Pancho’s brain that recorded his neural activity.

His bilingual brainwaves became the focus of this study, which aimed to fill a gap in the science, as previous research had mainly focused on monolingual speech decoding. Around two-thirds of the global population are bilingual, with many being multilingual. For those who have lost their verbal capacity, reinstating speech in only one language may not capture the full extent of the conversations and relationships that extend across languages. The researchers noted that many bilingual speakers don’t finish in one language before starting another; often there will be multiple language changes within a single stream of thought.

After extensively mapping the neural activity monitored by the BCI, an AI model was trained to recognise the patterns of neural activity produced as Pancho thought about particular phrases. Pancho would try to say these phrases aloud, while the AI tried to match each word with the neural data it had been trained on. The system then works much like a large language model (such as ChatGPT), predicting the likelihood of the next word by combining the neural patterns with the probability of that word coming next in the sentence.
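
If you like seeing ideas in code, here’s a heavily simplified sketch of that word-prediction step: weigh how well each candidate word matches the neural data against how likely that word is to come next in the sentence. Every probability below is invented for illustration.

```python
# Heavily simplified sketch of neural-evidence + language-model decoding.
# All probabilities are invented for illustration.
import numpy as np

candidates = ["family", "house", "outside"]

# P(neural signal | word): from a (hypothetical) pattern classifier.
neural_likelihood = np.array([0.50, 0.30, 0.20])

# P(word | "My ..."): from a (hypothetical) language model.
lm_prior = np.array([0.60, 0.35, 0.05])

# Bayes-style combination, renormalised over the candidates.
posterior = neural_likelihood * lm_prior
posterior /= posterior.sum()

best = candidates[int(np.argmax(posterior))]
print(dict(zip(candidates, posterior.round(2))), "->", best)
```

In the real system, this kind of combination happens over large vocabularies and whole sentences, which is where the large-language-model machinery earns its keep.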

Finally, Pancho was able to share his first sentence - “My family is outside.” That first sentence came out in English, but the AI system has been trained to distinguish between English and Spanish with an astounding 88% accuracy, paving the way for future bilingual BCI technologies.

The findings of this study are the first of their kind, and are making waves in the world of neuroscience, AI and biomedical engineering.

Thank you for joining us for another week of MelonMag.

It goes without saying that the age of AI is well and truly upon us, and is likely to be one of the most dramatic technological advancements many of us will see in our lifetimes.

In the field of neurotechnology and BCIs, AI has the potential to change the game with its ability to analyse large amounts of complex data, personalise the BCI experience for users, and power more intelligent systems. It’s amazing to witness how AI is accelerating progress and opening up new possibilities for research, therapy, and communication. This tech is genuinely changing lives for the better.

But, as we will cover in our next edition, there are some caveats to the promise of AI - especially when being integrated with BCIs. As these technologies become more powerful, widespread and accessible, the ethical considerations will become magnified.

We love hearing your thoughts, so be sure to let us know what you want to hear more of by clicking the survey below. The survey takes less than 2 minutes to fill out, and helps us bring you the best brain science we can!

💙 The MM Team

REFERENCES

  • Belkacem, A. N., Jamil, N., Khalid, S., & Alnajjar, F. (2023). On closed-loop brain stimulation systems for improving the quality of life of patients with neurological disorders. Frontiers in Human Neuroscience, 17. https://doi.org/10.3389/fnhum.2023.1085173

  • Higgins, N. (n.d.). Neuralink has put its first chip in a human brain. What could possibly go wrong? The Conversation. https://theconversation.com/neuralink-has-put-its-first-chip-in-a-human-brain-what-could-possibly-go-wrong-222497

  • Moses, D. A., Metzger, S. L., Liu, J. R., Anumanchipalli, G. K., Makin, J. G., Sun, P. F., Chartier, J., Dougherty, M. E., Liu, P. M., Abrams, G. M., Tu-Chan, A., Ganguly, K., & Chang, E. F. (2021). Neuroprosthesis for Decoding Speech in a Paralyzed Person with Anarthria. New England Journal of Medicine, 385(3), 217–227. https://doi.org/10.1056/nejmoa2027540

  • Silva, A. B., Liu, J. R., Metzger, S. L., Bhaya-Grossman, I., Dougherty, M. E., Seaton, M. P., Littlejohn, K. T., Tu-Chan, A., Ganguly, K., Moses, D. A., & Chang, E. F. (2024). A bilingual speech neuroprosthesis driven by cortical articulatory representations shared between languages. Nature Biomedical Engineering. https://doi.org/10.1038/s41551-024-01207-5