What is Artificial Intelligence?
#artificialintelligence #bakhabarsavera #arynews
Category: 🗞 News

Transcript
00:00 We have with us today,
00:03 Mohammad Umair, Ph.D.,
00:05 Artificial Intelligence Expert.
00:07 Thank you so much.
00:09 Please tell our viewers in simple words,
00:14 what Artificial Intelligence is, in the context of election rigging and all that propaganda.
00:20 Right.
00:21 First, let me give a general idea of what Artificial Intelligence is.
00:25 Because, like natural intelligence, we humans can talk, see, extract information, and present it.
00:33 If you start doing this work with computer software,
00:36 then it will be called Artificial Intelligence.
00:39 Like, a computer can talk,
00:41 a computer can give you information through chat,
00:44 or if I make a video at home,
00:46 it can mimic that.
00:47 Yes, if I make a video,
00:48 and a computer software does the same work,
00:51 making a video of you that speaks in your voice,
00:55 then from human intelligence, you move to Artificial Intelligence.
00:59 So, in this era of development, the technological era,
01:03 you see, misinformation used to spread as rumors
01:07 that you told someone,
01:08 because, obviously, those were the communication channels.
01:10 False news spread like wildfire.
01:12 Yes, like wildfire.
01:13 Absolutely.
01:14 It was human to human communication.
01:16 After that, when phones came,
01:18 different propaganda tools started appearing,
01:21 like TV was used,
01:23 and it started happening through that.
01:24 Then social media came,
01:25 through which a lot of information
01:27 was spread in a lot of new ways.
01:29 So, now, this current era is called the era of deepfakes,
01:33 in which what is happening is that
01:34 Artificial Intelligence has reached the level
01:36 that earlier, I would make a video and lie,
01:38 or speak the truth,
01:39 or present whatever point of view I had.
01:41 But now, by making a video of me,
01:44 and putting their words in my mouth,
01:47 they can push their own ideas.
01:48 We saw a demonstration of this
01:49 during the Ukrainian war,
01:50 at the initial stage:
01:51 a deepfake video of their president
01:53 was circulated,
01:55 in which he is accepting defeat.
01:57 In the same way,
01:58 you must remember,
01:59 in Pakistan,
02:00 the EU DisinfoLab case,
02:01 which exposed
02:02 an entire Indian disinformation network.
02:04 Now, this system,
02:05 the one which is going to have an impact on elections,
02:08 why is that possible?
02:09 And obviously, when something like this comes,
02:11 initially, people don't understand it.
02:13 But later,
02:14 the tools to counter it also emerge.
02:16 So, are governments, or anyone at the international level,
02:18 thinking about how to counter this?
02:21 Are we well equipped?
02:22 Yes.
02:23 The biggest problem is that
02:24 we have to understand that
02:25 AI technology has become so deeply embedded.
02:28 For example,
02:29 consider the way we make predictions.
02:31 For example,
02:32 I predict that a certain thing will happen.
02:34 Now, my prediction is based on some information.
02:37 Exactly.
02:38 Now, if Facebook has the data of all the people in Pakistan,
02:42 then it can generally predict better what is happening.
02:45 It can do it faster through AI.
02:47 And Cambridge Analytica,
02:48 you may have heard of it,
02:50 it was an organization in London
02:52 which was shut down
02:53 over its role in the Brexit and Trump elections.
02:56 What it did was,
02:59 it scraped the data
02:59 and said that these people are in favour of Hillary
03:01 and these people are in favour of Trump.
03:04 And these people in the middle,
03:05 they are still not sure about it.
03:07 So, they showed those undecided people targeted ads
03:09 to push them toward Trump.
03:12 These surveys matter a lot.
03:13 They matter a lot.
03:14 So, if I continuously get ads
03:16 that Hillary Clinton is bad, bad, bad,
03:18 even though I may not be so much against her,
03:20 but if I get such an ad on my Facebook every two minutes,
03:23 it will start to affect me.
03:24 It is the same as we are told with advertising:
03:25 choosing a brand is in your control,
03:29 but whether you are influenced by it,
03:30 that is not in your control.
03:31 So, the same goes for social media.
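To make the targeting mechanism described above concrete, here is a minimal, hypothetical Python sketch of audience segmentation; the users, leaning scores, and threshold are all invented for illustration and are not Cambridge Analytica's actual system:

```python
# Hypothetical sketch of audience segmentation for ad targeting.
# The data, scores, and threshold are invented for illustration only.

def segment(users, threshold=0.2):
    """Split users into pro-A, pro-B, and undecided buckets
    based on a precomputed leaning score in [-1, 1]."""
    pro_a, pro_b, undecided = [], [], []
    for name, leaning in users:
        if leaning > threshold:
            pro_a.append(name)
        elif leaning < -threshold:
            pro_b.append(name)
        else:
            undecided.append(name)   # the persuadable middle
    return pro_a, pro_b, undecided

# Toy data: (user, leaning score inferred from scraped activity).
users = [("u1", 0.8), ("u2", -0.6), ("u3", 0.05), ("u4", -0.1)]
pro_a, pro_b, undecided = segment(users)

# As described in the interview, the targeted ads go to the
# undecided middle, not to the already-committed voters.
for user in undecided:
    print(f"queue persuasion ad for {user}")
```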
03:33 How can we counter this?
03:34 The current situation is that
03:36 deepfakes cannot be countered by AI.
03:41 The current situation is that
03:42 it is still human dependency.
03:44 For example, if a deepfake comes,
03:46 we see what is its source.
03:48 We see if there are any lighting issues in it.
03:51 There are many artifacts that show up while the person is speaking.
03:54 Because deepfakes have not reached that level yet.
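A minimal sketch of the human checklist described above, in Python; the specific checks, weights, and threshold are hypothetical stand-ins for manual review, not a real detector:

```python
# Hypothetical checklist for manual deepfake triage, as described above.
# Each check is a human observation recorded as True/False; the weights
# and threshold are invented for illustration.

CHECKS = {
    "unknown_source": 0.4,         # no credible origin for the clip
    "lighting_inconsistent": 0.3,  # shadows or skin tone don't match the scene
    "speech_artifacts": 0.3,       # lip sync or voice glitches while speaking
}

def suspicion_score(observations: dict) -> float:
    """Sum the weights of the checks a human reviewer flagged."""
    return sum(w for name, w in CHECKS.items() if observations.get(name))

obs = {"unknown_source": True, "speech_artifacts": True}
score = suspicion_score(obs)
print(f"suspicion {score:.1f}:", "flag for review" if score >= 0.5 else "pass")
```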
03:56 But still, it is able to create so much effect.
03:58 Like you talked about the Ukrainian president.
04:00 In Slovakia, a deepfake was launched
04:02 and a huge impact was created through it.
04:05 And in fact, it cuts both ways.
04:07 For example,
04:08 we had an internet outage yesterday and the day before.
04:11 So suppose, in that situation,
04:14 I want to say
04:15 that I am not available there,
04:17 but I still want my video to go out,
04:19 so that my followers can listen to it.
04:22 I can use a deepfake to do that as well.
04:24 Kind of like,
04:25 someone else can use it against me,
04:27 and I can also use it myself.
04:28 There is another aspect to it.
04:29 I would like to know about it.
04:31 Because, see,
04:32 if a deepfake comes,
04:33 you can counter the deepfake itself,
04:36 but how many people are sharing it?
04:38 How many people are getting affected by it?
04:40 Can something be done to stop its spread?
04:43 See, the problem with spread is that,
04:45 in the age of social media and virality,
04:47 the biggest challenge is that,
04:49 by the time you start stopping something,
04:51 it has already reached millions of WhatsApp groups.
04:54 It has already had millions of views.
04:55 On Twitter.
04:56 That is why it is very difficult.
04:57 Because there are so many platforms.
04:59 Exactly.
05:00 And social media has become an AI engine.
05:03 For example, if I had made an isolated video
05:06 that did not go viral,
05:07 it would have been easier to catch.
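As a rough illustration of why virality outruns takedowns, here is a toy branching calculation in Python; the forwarding rate is an invented number:

```python
# Toy branching model of message forwarding, with invented numbers:
# each group that receives a clip forwards it to 5 new groups per hour.

groups = 1          # start from one WhatsApp group
forwards_per_hour = 5

for hour in range(1, 10):
    groups *= forwards_per_hour
    print(f"hour {hour}: ~{groups:,} groups reached")
# After 9 hours: ~1,953,125 groups -- millions before a takedown even starts.
```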
05:09 Then, should we do what China does?
05:10 Should we go behind a firewall?
05:11 What do you think?
05:12 What happens is that countries shut down the internet.
05:15 Why?
05:16 Because they say that
05:17 it is the only way to stop the virality.
05:19 Why?
05:20 Because, for example,
05:21 whether it is true or false can be debated later, right?
05:22 Is it true or not?
05:23 But stopping the internet is also a solution.
05:25 But it comes under human rights violations.
05:27 Right now, you don't have…
05:28 Access to information is a right of the people.
05:30 I wanted to know one thing.
05:32 Isn't it better to make it transparent?
05:34 Like Elon Musk also talks about
05:36 regulating it.
05:38 Is anyone working on regulation?
05:40 See, the world is working on regulations.
05:42 In fact, there are elections in the US and UK.
05:44 So, there is a major discussion about this.
05:46 That it should be regulated.
05:47 The problem is that
05:48 you can regulate to a limit.
05:50 It is not easy to regulate the entire internet.
05:52 For example,
05:53 we regulate the news through media channels.
05:56 We know that fake news doesn't come from there.
05:58 But how will you regulate social media,
06:00 where the content is made by individuals themselves?
06:02 Some authentic sources should be there.
06:04 Because AI is all about data.
06:06 Who is feeding this data?
06:07 Aren't those authentic sources?
06:09 We can verify.
06:10 We can verify.
06:11 But then again,
06:12 the job of the government
06:13 or the FIA or different organizations
06:15 is to catch such deepfakes.
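One basic building block for the kind of source verification mentioned here is file hashing: the authentic source publishes a checksum, and anyone can confirm a copy is unmodified. A minimal Python sketch, where the filename and published hash are hypothetical:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Compute the SHA-256 checksum of a file, reading in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical values: the official source would publish this hash
# alongside the original clip.
published = "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b"
if sha256_of("statement_video.mp4") == published:
    print("checksum matches the official release")
else:
    print("file differs from the official release -- treat as suspect")
```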
06:17 There is another aspect to this.
06:19 For example,
06:20 the question is being raised
06:21 that what should be the legislation?
06:23 The point is,
06:24 take the example of Britain.
06:26 The existing harassment laws
06:29 for ordinary people,
06:31 they have applied the same to social media,
06:33 the same to digital media.
06:35 If AI is used to manipulate people,
06:38 the same is done there:
06:40 they are traced and tracked.
06:42 Will this pattern be followed all over the world?
06:44 Or will it be done in some other way?
06:46 No, it is being done.
06:47 They are trying their best.
06:48 But unfortunately,
06:49 they are not able to do it.
06:50 The reason is,
06:51 for example,
06:52 someone has sent me messages on WhatsApp,
06:54 I have made a copy of it
06:55 and screenshots of it
06:56 and passed them around to different places.
06:57 So, how will you verify that?
06:59 Unless you pick up that person,
07:01 go through his WhatsApp history,
07:02 put him in jail,
07:03 check the internet,
07:04 dig out his old chats,
07:05 which by now can be deleted.
07:06 Everything can be done.
07:07 But that much effort...
07:08 For the culprit, it is very safe.
07:09 And it takes days.
07:11 It has become very safe for them.
07:12 So, it means that, technologically, right now
07:14 we do not have good enough counter-technology,
07:17 to be honest,
07:18 internationally as well,
07:19 to solve this problem.
07:20 That is why what governments are doing
07:22 is trying to increase digital literacy:
07:23 if you get a WhatsApp video,
07:25 then please do not trust it blindly.
07:26 Yes, that is the most important thing.
07:28 Be patient,
07:29 do not give a reaction to it.
07:30 The problem is,
07:31 confirmation bias is a thing.
07:33 If my leader is saying this,
07:36 or something is happening,
07:37 or I do not like a person,
07:38 and he suddenly says something wrong,
07:40 then I will immediately share it.
07:41 Why?
07:42 Because I hate him anyway.
07:44 And whether he is true or false,
07:45 I will share it.
07:46 It can also happen that
07:47 there are two contestants,
07:49 and one appears to say,
07:50 "I am withdrawing in his favour,"
07:53 when actually he is not doing that.
07:55 So, it can influence people.
07:56 If we look at it at a domestic level,
07:59 this can be very dangerous for families as well.
08:02 If a fake video of someone is shared,
08:03 then whom will that person sit and explain it to?
08:05 Who will he explain it to?
08:06 Have you seen,
08:07 a deepfake video of a lawyer
08:09 came out in India like this.
08:11 Then she was explaining
08:12 that it was not her,
08:13 that a deepfake had been made.
08:14 But one thing I would like to say here,
08:15 among many things:
08:16 it is a myth,
08:17 some people think
08:18 that these apps can read our minds.
08:19 It is not like that.
08:20 The data that you have provided,
08:22 whether you gave it through an app
08:24 or shared your information yourself,
08:25 that data is with them.
08:26 You must have heard people say
08:27 that we were just talking,
08:29 and suddenly an ad came up on Facebook.
08:31 How did Facebook know?
08:32 The app has the mic on.
08:34 The mic is on,
08:35 and your voice is being recognized by Facebook.
08:37 Based on voice recognition,
08:38 it shows you ads.
08:40 So, if you go to the app's settings
08:42 and turn off that access,
08:43 then it will stop.
08:44 But one thing I have seen,
08:45 you know,
08:46 I am surprised
08:47 that in many places,
08:48 now that I have gained some knowledge about it,
08:50 I go and check their permissions,
08:53 and I see that
08:54 they have access to my photos,
08:56 they have access to the mic,
08:58 they have access to the camera.
09:00 So I was thinking,
09:01 these are apps
09:02 which apparently have no connection with any of this.
09:05 Right?
09:06 So, this is...
09:07 But there is one reason for this:
09:08 in many cases
09:09 it happens that
09:10 the access is also needed;
09:13 if you close the access,
09:14 that app will not work.
09:16 What should we do about that?
09:17 You can't do anything about that right now.
09:19 You can't do anything.
09:20 The problem is that...
09:21 What should we do with our phones,
09:22 put tape over them,
09:23 like Mark Zuckerberg does?
09:24 Regulation,
09:25 again, this is the state's job,
09:26 which apps it allows,
09:27 and which apps it doesn't allow.
09:28 Like you must have heard,
09:29 in China,
09:30 you can't use WhatsApp,
09:31 or multiple things.
09:32 They have a different version of TikTok.
09:33 So, ideally,
09:34 the state should
09:36 make regulations.
09:37 Like,
09:38 you must have heard of
09:39 the loan apps that appeared for a while,
09:40 through which so many people were scammed.
09:42 So, the state shut them down,
09:44 and now you can't see them.
09:45 So, the state has to play an active role.
09:47 One more thing:
09:48 there was a lot of talk about EVMs,
09:50 Electronic Voting Machines,
09:52 that they should have been used.
09:54 Do you think
09:55 we have a mechanism of cyber security and safety?
09:57 It seems we don't.
09:59 So, if EVMs had been used,
10:01 could they have been hacked
10:03 through cyber attacks?
10:05 See, there are multiple things.
10:06 First, they said they would run it offline;
10:08 if it is not connected to the internet,
10:10 then cyber attacks are comparatively less likely.
10:12 But you still make local networks.
10:13 And the problem is,
10:14 when technology comes,
10:15 vulnerability also comes.
10:17 Vulnerability means
10:18 there is a possibility
10:19 that if I connect to the internet, even via WhatsApp,
10:22 then any...
10:23 I mean, I have opened a channel
10:25 for hacking.
10:26 Once that channel is opened...
10:27 Then, for example,
10:28 it also happens that,
10:29 if you use blockchain technology
10:30 and things like that,
10:31 it can be a security measure.
10:34 But at the same time,
10:35 because,
10:36 for example,
10:37 today I don't have the entire data of Pakistan,
10:39 but when electronic voting machines come
10:41 and all the data is digitized
10:42 and everything is centralized,
10:43 then I have access to it as well.
10:44 So, on the flip side,
10:46 if I really can hack
10:48 or change things,
10:49 then I can change the entire election result
10:51 with just one button.
10:53 Right?
10:54 So, you have to be careful
10:55 about how you adopt the technology,
10:56 because, one way or another,
10:57 it has to connect somewhere.
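Since blockchain was mentioned as a possible safeguard, here is a minimal Python sketch of the underlying idea, a hash-chained, tamper-evident log; it illustrates the general technique only, not any actual EVM design:

```python
import hashlib

def record(chain, data: str) -> None:
    """Append a block whose hash covers the previous block's hash,
    so editing any earlier entry breaks every later link."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    digest = hashlib.sha256((prev + data).encode()).hexdigest()
    chain.append({"prev": prev, "data": data, "hash": digest})

def verify(chain) -> bool:
    """Recompute every link; any tampering makes this return False."""
    prev = "0" * 64
    for block in chain:
        expected = hashlib.sha256((prev + block["data"]).encode()).hexdigest()
        if block["prev"] != prev or block["hash"] != expected:
            return False
        prev = block["hash"]
    return True

chain = []
for vote in ["ballot:A", "ballot:B", "ballot:A"]:
    record(chain, vote)
print(verify(chain))            # True

chain[1]["data"] = "ballot:A"   # tamper with one recorded vote
print(verify(chain))            # False -- the chain exposes the edit
```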
10:58 There are ways:
10:59 there are multiple security layers,
11:01 multiple processes,
11:02 through which you strengthen the system.
11:04 It is not like before.
11:05 For example,
11:06 earlier you must have heard
11:07 that many Facebook accounts used to be hacked easily;
11:08 now it is not happening.
11:09 Gmail used to be hacked;
11:10 now it is not happening.
11:11 They have done multiple verifications,
11:12 added layer upon layer of verification.
11:13 So, this is how you try to protect technology
11:15 from all these things.
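The "multiple verifications" mentioned here typically mean multi-factor login, such as time-based one-time passwords. A minimal Python sketch of the standard TOTP scheme (RFC 6238); the secret is a made-up example:

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Time-based one-time password (RFC 6238) using HMAC-SHA1."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval          # 30-second time step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Made-up example secret; a real service generates one per account.
print(totp("JBSWY3DPEHPK3PXP"))  # the second factor alongside the password
```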
11:16 Thank you so much,
11:17 Umair.
11:18 I am very glad to have got this information from you.
11:20 (speaking in foreign language)