MindPortal aims to revolutionize communication by using AI models to convert brain signals into text, potentially paving the way for telepathy and seamless interaction with ChatGPT. Imagine thought-to-text communication! However, as our AI Editor Ryan Morrison discovered, this AI mind-reading technology still has a long journey ahead. Find out why the experiment didn’t quite hit the mark and what challenges lie ahead for this ambitious startup.

Transcript
00:00Have you ever thought tech is listening to your thoughts?
00:02Well, that's exactly what I'm here at the startup MindPortal to have done.
00:07They've created a new technology that can read your mind and send it to ChatGPT
00:12so you can have a conversation without having to type or speak.
00:15Wish me luck.
00:19Hi.
00:20Hi Ryan, good to see you.
00:21Hi, good to see you.
00:22So tell me a little bit about MindPortal.
00:24What is it?
00:25How did the idea come about?
00:26MindPortal is a human-AI interaction company
00:30and it was founded after a year of me taking psychedelic substances.
00:36Okay.
00:37During those abstract thinking sessions, a lot of visions came out.
00:40So during those sessions, what I set as kind of the thinking goal of those sessions
00:45is the future of humanity, how to make an impact, etc.
00:49But really the premise of MindPortal is we want to explore the nature of human-AI interaction.
00:54How can you make a future that's symbiotic, which biologically is win-win?
00:59We're going to look at a demo today where we can chat with ChatGPT using our brain.
01:03Yes, so this is the world's first demonstration.
01:06Like never before has this been done.
01:08Can you bridge the gap between the human and their thoughts,
01:11which is the most intimate form of communication, what you think, what you feel,
01:15and an AI agent that can understand and respond to those thoughts?
01:19All right, should we try it?
01:21Yes, let's do it.
01:22Tell me what we're looking at now.
01:24So you've got something called an fNIRS system,
01:27which is able to record brain data, optical brain data.
01:30Okay.
01:31Based on the blood flow happening in Ed's brain.
01:33Yeah.
01:34When Ed imagines language, that obviously activates different parts of the brain.
01:39Yeah.
01:39And it's that activation that's being picked up in real time.
01:42It's also the first demonstration of communicating with ChatGPT.
01:47Using just your mind.
01:49Exactly.
01:49You'll think a thought, the sentence is classified,
01:51and that classified sentence then becomes an input into ChatGPT to respond to you.
01:57Okay.
01:57We're going to have a look at it.
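(Editor's note: for readers curious how a demo like this could be wired together, here is a minimal sketch of the pipeline as described on camera: a classifier picks which of a small, fixed set of pre-trained sentences the wearer imagined, and the winning sentence is forwarded to ChatGPT as an ordinary text prompt. MindPortal has not published its implementation, so every name, the feature shapes, and the nearest-centroid classifier below are assumptions for illustration only.)

```python
# Hypothetical sketch of the demo pipeline: fNIRS features -> sentence
# classifier -> ChatGPT prompt. Names and the classifier choice are
# assumed, not taken from MindPortal.
import numpy as np

# The demo works over a small, fixed set of pre-trained sentences.
CANDIDATE_SENTENCES = [
    "If I were on Venus, I would be in a world of extremes...",
    "I was at a restaurant...",            # the "restaurant sentence" seen in the demo
    "I had a chat with mum on the phone.",
]

def classify_thought(features: np.ndarray, centroids: np.ndarray) -> str:
    """Pick the candidate sentence whose learned template is nearest.

    features:  1-D vector of preprocessed optical (blood-flow) signals.
    centroids: one template row per candidate sentence, learned offline.
    """
    distances = np.linalg.norm(centroids - features, axis=1)
    return CANDIDATE_SENTENCES[int(np.argmin(distances))]

def respond(features: np.ndarray, centroids: np.ndarray, llm) -> str:
    # The decoded sentence becomes a plain text prompt; the LLM never
    # sees brain data, only the classified sentence.
    prompt = classify_thought(features, centroids)
    return llm.complete(prompt)  # hypothetical LLM client
```

The key design point visible in the demo is that this is closed-set classification over known sentences, not open-ended decoding of arbitrary thoughts.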
01:59What's the goal?
02:00What do you want to do with this?
02:01Where do you see it going?
02:03This system, if you were to spend time building it towards commercialization...
02:07Yeah.
02:07You could scale it in a multitude of different ways.
02:10Okay.
02:10So number one is you could scale the number of sentences,
02:13and then of course you scale the accuracy.
02:16What are we doing first?
02:17So if you'd like to pick a sentence.
02:19All right.
02:19Well, let's go for Venus.
02:21I'm a space buff.
02:22Yep.
02:22I'm going to imagine this.
02:23Well, I'll read it out loud then,
02:25because if you read it out loud,
02:26then it raises the question of, is it just taking it from your voice?
02:29So you're going to think in your mind,
02:32if I were on Venus, I would be in a world of extremes.
02:35The pressure would feel like being a kilometer underwater,
02:38crushing you from all sides.
02:39The air is a corrosive nightmare capable of dissolving metal,
02:42and forget about rain, it's sulfuric.
02:45So you're thinking that.
02:46Yep.
02:47That's a long sentence to imagine.
02:49So you can't just imagine the visual of being on Venus.
02:52You've got to imagine the actual words in that sentence.
02:55Currently, yeah, we're trying to extract the semantics from that.
02:58All right.
02:58Well, let's go.
02:59Let's see how that happens.
03:01You're going to think those words,
03:04and hopefully ChatGPT will respond.
03:10So you sent that off as the prompt to your decoder.
03:14So now the decoder basically is taking the brain data
03:16and trying to identify which of the sentences he was thinking.
03:19Okay.
03:20And then over there is outputting the sentence.
03:22So you got it wrong.
03:26So this is the restaurant sentence.
03:28Okay.
03:29So it was sentence number two, I think.
03:30And this is the brain data that, as he was thinking that sentence,
03:34went into the system.
03:36Yeah, we can go to another one,
03:37and we can show you basically how this progresses as he's imagining.
03:40What's the one that works more often than not?
03:43If we're sticking to probability.
03:46Let's try it.
03:46Let's try again.
03:47Let's see how.
03:48Let's see if it works.
03:54Hearing that.
03:54And right.
03:55Okay.
03:55So this time he's had a chat with mum on the phone,
03:58but it is showing that you can have this conversation.
04:02It's just a case of scale.
04:04Correct.
04:05So with enough data, with a larger model,
04:08we hypothesize, as we've seen in AI,
04:11or with any breakthroughs in their infancy,
04:13the accuracy would improve.
04:15And then you'd start to increasingly have the conversation you want
04:18without the incorrect inputs.
04:20In essence, what's happening is that each time
04:21there's an incorrect input, there are also correct ones.
04:24And the correct ones are happening enough times for us to know this works.
04:27Okay.
04:28Scaling the data and scaling the model means, in essence,
04:30getting it to work more and more times with reduced error.
04:33Should we try it one more time?
04:37At least we've seen them all now.
04:39No, it should be stressed.
04:40This is very early stage.
04:41We're looking at a sort of research preview of a technology
04:44that, with enough scale, could potentially improve exponentially.
04:50Where do we go?
04:51Am I going to be able to go into a supermarket
04:55and look at a product and say in my head to my AI,
05:01can I eat this?
05:02And have the AI pick up my thoughts and respond.
05:04Can we get to that point?
05:06Yeah, I think we can.
05:07And the reason being is we've seen this again and again in AI.
05:12As you increase the amount of data,
05:14as you increase the model size, you get better performance.
05:18So the time constraint there, honestly, is how many people...
05:21And the financial constraint is around
05:25how many people you can collect brain data from.
05:27Because unlike going online or using written sources,
05:30which are easy sources of data to acquire,
05:33this is a bit trickier in the current paradigm.
05:35I've done some back-of-the-napkin calculations just for fun.
05:38And it's not as expensive as you might think.
05:41And it doesn't take as long as you might think.
05:43I think for under $50 million,
05:46which is in the venture capital world...
05:48Yeah, that's pocket change.
05:50Yeah, exactly.
05:51You could, in six months' time,
05:53have 100 or 200 different headgear caps operating,
05:57people coming in in batches,
05:58and thousands of people going through.
06:01Now, of course, my cursory kind of calculations
06:04assumed a threshold you'd need to reach
06:06because we don't know how much data will confer the best result.
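(Editor's note: the founder's figures are loose, but the shape of the calculation is easy to reproduce. The sketch below uses only the numbers mentioned on camera: under $50 million, six months, and 100 to 200 headgear caps; the sessions-per-day and session-length values are our own placeholder assumptions.)

```python
# Back-of-the-napkin data-collection math. Only the $50M budget,
# six-month window, and 100-200 caps come from the conversation;
# the session throughput figures are placeholder assumptions.
caps = 150                    # midpoint of the 100-200 headgear caps mentioned
sessions_per_cap_per_day = 6  # assumption: 1-hour sessions with turnaround time
days = 6 * 30                 # the six-month window mentioned
budget = 50_000_000           # the "under $50 million" ceiling mentioned

participant_hours = caps * sessions_per_cap_per_day * days
print(f"{participant_hours:,} participant-hours collected")       # 162,000
print(f"${budget / participant_hours:,.0f}/hour budget ceiling")  # ~$309
```

Whether any given number of participant-hours clears the accuracy threshold is exactly the unknown the founder flags next.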
06:10So let's assume you reach that threshold,
06:11you get the funding.
06:13And in a year's time, you've done all the data,
06:15you've crunched all the data, your model's working.
06:17I can go out and buy a baseball cap
06:18and talk to my AI without having to speak out loud.
06:22Can I talk to someone else wearing the same baseball cap?
06:24And are we going to have full telepathy
06:26with the AI as a translator?
06:27So the answer to that is if you scale the data
06:30and if you scale the model
06:31and if you integrate it into a cap wearable,
06:33then yes, theoretically, it should work.
06:36It should work.
06:36There's no reason why that shouldn't work.
06:38So we could have telepathy potentially over the next five years.
06:40You could have telepathy.
06:41And that's what we were setting out to prove.
06:43So for example, I could wear headgear,
06:46think of a sentence such as, how are you today?
06:50That could be then sent through an AI model
06:53that takes the text and translates it into a voice
06:56and puts it into your ear
06:58through an AirPod.
06:59And you can hear me thinking.
07:00They can respond with their thoughts.
07:01And you can respond back to my AirPod.
07:02So in theory, we're having a telepathic conversation.
07:05Neither of us are speaking,
07:06but we're using pre-trained sentences
07:08to have a back and forth dialogue,
07:10which we're both hearing.
07:11And now you've got AI models
07:13that can take a bit of your own voice
07:15and it can sound like you.
07:17So it would sound like Ryan when I'm hearing it.
07:18It would sound like me when I'm talking to you.
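(Editor's note: the relay described here is a chain of existing pieces: a decoder, a voice-cloning text-to-speech step, and an earbud. Here is a minimal sketch; every function is a hypothetical stand-in rather than a real MindPortal or TTS API.)

```python
# Hypothetical sketch of the "telepathy" relay described above: decode
# one of the pre-trained sentences from the sender's headgear, synthesize
# it in the sender's cloned voice, and play it into the receiver's earbud.
# All functions are stand-ins; no real API is being quoted.
from dataclasses import dataclass

@dataclass
class Wearer:
    name: str
    voice_profile: bytes  # "a bit of your own voice", per the conversation

def decode_sentence(brain_data: bytes) -> str:
    """Stand-in for the fNIRS decoder; returns one pre-trained sentence."""
    raise NotImplementedError

def synthesize(text: str, voice_profile: bytes) -> bytes:
    """Stand-in for a voice-cloning text-to-speech engine."""
    raise NotImplementedError

def relay(sender: Wearer, earbud, brain_data: bytes) -> None:
    sentence = decode_sentence(brain_data)              # e.g. "How are you today?"
    audio = synthesize(sentence, sender.voice_profile)  # sounds like the sender
    earbud.play(audio)                                  # heard, never spoken aloud
```

Note that nothing in this chain is mind-to-mind: the bottleneck is still the closed-set decoder at the first step.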
07:20That raises an interesting point,
07:21because that would potentially give the voiceless a voice:
07:24you could use a text-to-speech engine based on that,
07:30and their thoughts could go directly to the voice engine
07:32rather than having to type it out.
07:34Exactly.
07:34Well, that was a lot of fun.
07:35It didn't work as expected,
07:37but it's an early research preview.
07:39This isn't a product
07:40they're going to be putting on the market tomorrow.
07:42However, it did give us a really interesting insight
07:45into what we might be using
07:46and how we might be interacting with AI and each other
07:49in the next few years.
07:51And I really hope it works
07:53because I do not want to be standing in the supermarket
07:57talking to myself
07:58when I'm just having a conversation with my AI.
08:01Fingers crossed.
08:02If you want to find out more about what's going on
08:04in the world of AI,
08:05find me on Tom's Guide
08:07or follow our socials at Tom's Guide.
