The House Homeland Security Committee holds a hearing on artificial intelligence.

Transcript
00:00:00 Without objection, the Chair may declare the committee in recess at any point.
00:00:03 The purpose of this hearing is to receive testimony from private sector stakeholders
00:00:07 relating to the opportunities and challenges presented by the emergence of artificial
00:00:13 intelligence and discuss how the Department of Homeland Security can develop and implement
00:00:17 certain AI technologies in support of the Homeland Security mission.
00:00:21 I now recognize myself for an opening statement.
00:00:28 In this era of rapidly advancing technology, I'm especially proud to live in a nation of innovators,
00:00:36 some of whom join us today. Today, American ingenuity is paving the way once again.
00:00:43 Artificial intelligence, or AI, promises to transform the global economy and national
00:00:47 security landscape as we know it. AI has the potential to create new jobs, catalyze productivity
00:00:53 in Americans' daily lives, and of course, protect our men and women in uniform and law enforcement.
00:00:58 Throughout this Congress, committees in both chambers have convened numerous hearings to
00:01:02 understand the countless opportunities and challenges AI presents. Like cybersecurity,
00:01:08 AI's impact is a complex and cross-cutting issue that cannot be handled by one jurisdiction alone.
00:01:15 Therefore, we're here to examine what I believe to be one of the most promising areas in which
00:01:21 to expand our use of AI, the security and defense of our homeland.
00:01:25 The Committee on Homeland Security has an oversight obligation to make sure we harness
00:01:30 AI technologies right. As with any new technology, AI presents new risks,
00:01:35 and we must take the time to understand them. This includes prioritizing safety and security
00:01:41 throughout AI development, deployment, and use. It also requires us to treat AI with appropriate
00:01:49 nuance so that we understand the impact of proposed regulatory measures on our businesses.
00:01:54 Today's full committee hearing follows up on a productive cybersecurity and infrastructure
00:01:59 protection subcommittee hearing led by Chairman Garbarino last December. The subcommittee
00:02:05 specifically considered the role of DHS and CISA in securing AI, a topic we will continue to explore
00:02:12 today. As that hearing reaffirmed, the threats facing our country are increasingly complex,
00:02:19 and DHS plays a critical role in keeping Americans safe and our country secure. DHS has a broad
00:02:26 mission and has explored and even implemented AI for specific purposes aligned with its unique
00:02:32 missions. For example, U.S. Customs and Border Protection has used AI-powered systems to monitor
00:02:39 border areas using drones and cameras, which help identify suspicious activity and unauthorized
00:02:45 crossings in real time. The Transportation Security Administration is currently examining
00:02:51 the ways in which AI can enhance its security screening, including using AI to augment its
00:02:57 X-ray imaging of travelers' carry-on luggage. TSA may soon look to AI algorithms,
00:03:05 and particularly AI-powered facial recognition, to identify security threats among the
00:03:12 traveling public and enhance the prescreening process. While these AI-powered systems offer
00:03:21 the promise of increased security and efficiency, they also bring significant risks that Congress
00:03:26 must carefully assess. For instance, AI-powered facial recognition systems capture and store
00:03:32 images of Americans and foreign travelers, which present substantial privacy concerns.
00:03:37 We must ensure that the use of AI-powered facial recognition by TSA is balanced with strong
00:03:44 protections of privacy, civil liberties, and ethical standards. Furthermore, U.S. Immigration
00:03:50 and Customs Enforcement is using AI to help identify and track illegal activities,
00:03:54 such as human trafficking and smuggling, by analyzing large datasets and detecting patterns.
00:04:01 And the Cybersecurity and Infrastructure Security Agency, CISA, is carefully examining the risks and
00:04:06 opportunities presented by AI and the way it can be leveraged to enhance our nation's resilience
00:04:11 against cyber threats. In the years ahead, CISA will play a critical role in addressing and
00:04:16 managing risks at the nexus of AI, cybersecurity, and critical infrastructure. Considering the
00:04:23 widespread push for AI adoption within DHS, it is critical that the Department collaborates
00:04:28 with Congress and with relevant stakeholders, including those from the private sector,
00:04:33 to manage AI's complexities and risks. In addition to the domestic concerns relating
00:04:39 to the emergence of AI, we must also consider the broader strategic implications.
00:04:44 Our nation's primary strategic adversary, the People's Republic of China, has made AI development
00:04:49 a national priority and is investing heavily in its research, talent, and infrastructure.
00:04:56 The Communist regime's aggressive pursuit of AI poses a significant challenge to the United States,
00:05:01 not only economically, but also in terms of our national security. In fact, DHS's 2024 Homeland
00:05:08 Threat Assessment warns that "malicious cyber actors have begun testing the capabilities
00:05:14 of AI-developed malware and AI-assisted software development, technologies that have the potential
00:05:20 to enable larger-scale, faster, efficient, and more evasive cyber attacks against targets,
00:05:26 including pipelines, railways, and other U.S. critical infrastructure." This is extremely
00:05:31 concerning. As complex as these threats are, our nation's efforts to combat them will be
00:05:37 even more challenging if our adversaries lead in AI research, development, and innovation.
00:05:42 For these reasons, it is important for Congress, DHS, and the private sector to work together
00:05:48 to ensure that we remain at the forefront of AI innovation while safeguarding our national
00:05:54 security, economic competitiveness, and civil liberties. Today, we will hear from a panel of
00:05:59 experts who will provide insights into the current state of AI for homeland security and the steps
00:06:03 that we can take to trust that the AI we use will be secure. To our witnesses, thank you for being
00:06:11 here today, for your efforts to educate members of this committee and the American people on how
00:06:16 we can responsibly advance AI innovation. I look forward to your testimony. I now recognize the
00:06:24 ranking member, the gentleman from Mississippi, Mr. Thompson, for his opening statement.
00:06:27 Thank you very much, Mr. Chairman. Good morning to our witnesses. I would like to thank you for
00:06:33 holding this important hearing on the intersection of artificial intelligence and homeland security.
00:06:39 Artificial intelligence is not new. The Department of Homeland Security and its components
00:06:46 have a long history of trying to understand how to most appropriately leverage the capacity AI
00:06:53 provides. The release of ChatGPT in November 2022 made clear AI's transformative potential,
00:07:03 and it accelerated efforts by the administration and Congress to ensure the United States
00:07:09 continued to lead the world on the responsible development and the use of AI. As we consider
00:07:17 how to deploy AI to better secure the homeland, we must keep three critical principles in mind.
00:07:23 First, we must ensure that AI models we use and the data to train them do not reinforce
00:07:32 existing biases. That requires that AI used by the government be developed pursuant to specific
00:07:39 policies designed to eliminate bias and is tested and retested to ensure it is not having that
00:07:47 effect. Eliminating bias from AI also requires a diverse AI workforce comprised of people from a
00:07:56 variety of backgrounds who can identify potential biases and prevent biases from being encoded into
00:08:04 the models. Second, the government must rigorously assess appropriate use cases for AI and ensure
00:08:12 that the deployment of AI will not jeopardize the civil rights, civil liberties, or privacy of the
00:08:19 public. Law enforcement and national security agencies in particular must implement an exacting
00:08:26 review of potential infringements on those fundamental democratic principles. Moreover,
00:08:34 it is essential that the workforce be included in decision-making processes on how AI will be
00:08:41 deployed. The workforce is in the best position to understand capability gaps and where
00:08:50 AI can be effective. AI is also a tool the workforce will use to carry out their jobs
00:08:59 more effectively. It is not and should not ever be a replacement for people. Finally, the AI tools
00:09:07 we use must be secure. In many respects, existing cybersecurity principles can be adapted to secure
00:09:15 AI. I commend the Cybersecurity and Infrastructure Security Agency, commonly called CISA, for working
00:09:23 with the private sector to ensure the adoption of secure by design principles in the development of
00:09:30 AI. Moving forward, we must determine vulnerabilities unique to AI and work together
00:09:37 to address them. I commend President Biden on last year's executive order on AI, which put the
00:09:45 federal government on the path of developing and deploying AI in a manner consistent with these
00:09:52 principles. As DHS continues to assess how it will use AI to carry out its vast mission set,
00:10:01 from cybersecurity to disaster response to aviation security, I'm confident that it will do so in a
00:10:09 manner that incorporates feedback from the workforce and protects civil rights, civil liberties,
00:10:15 and privacy. We cannot allow our optimism about the benefits of AI to short circuit how we evaluate
00:10:23 this new technology. At the end of the day, bad technology is bad for security. As we consider
00:10:32 the potential benefits AI presents for DHS's mission, we must also consider the new threats
00:10:39 it poses. AI in the hands of our adversaries can jeopardize the security of federal and critical
00:10:48 infrastructure networks, as well as the integrity of our elections. We know that China, Russia,
00:10:55 and Iran have spent the past four years honing their abilities to influence our elections,
00:11:01 sow discord among the American public, and undermine confidence in our election results.
00:11:09 Advances in AI will only make their job easier, so we must redouble our efforts to identify
00:11:17 manipulated content and empower the public to identify malicious foreign influence operations.
00:11:24 I look forward to a robust conversation about how the Department of Homeland Security
00:11:30 can use AI strategically to carry out its mission more effectively. I look forward to the witnesses'
00:11:37 testimony, Mr. Chairman, and I yield back the balance of my time. I want to thank the ranking
00:11:43 member for his comments. Other members of the committee are reminded that opening statements
00:11:47 may be submitted for the record. I'm pleased to have a distinguished panel of witnesses before
00:11:53 us today, and I ask that our witnesses please stand and raise your right hand.
00:11:58 Do you solemnly swear that the testimony you will give before the Committee on Homeland Security
00:12:06 of the United States House of Representatives will be the truth, the whole truth, and nothing
00:12:09 but the truth, so help you God? Let the record reflect that the witnesses have answered in
00:12:13 the affirmative. Thank you. Please be seated. I would now like to formally introduce our
00:12:18 witnesses. Mr. Troy Demmer is the co-founder and chief product officer of Gecko Robotics,
00:12:24 a company combining advanced robotics and AI-powered software to help ensure the availability,
00:12:29 reliability, and sustainability of critical infrastructure. Today, Gecko's innovative
00:12:34 climbing robots capture data from the real world that was never before accessible,
00:12:39 from pipelines, ship hulls, missile silos, and other critical asset types. Gecko's AI-driven
00:12:46 software platform then enables human experts to contextualize that data and translate it into
00:12:51 operational improvements. Mr. Michael Sikorski is Palo Alto Networks' Unit 42 chief technology
00:12:57 officer and vice president of engineering. He has over 20 years of experience working on high-profile
00:13:03 incidents and leading research and development teams, including at Mandiant and the National Security
00:13:07 Agency. Mr. Sikorski is also an adjunct assistant professor for computer science at Columbia
00:13:14 University. Mr. Ajay Amlani currently serves as the president and head of the Americas for iProof,
00:13:32 a global provider of biometric authentication products used for online enrollment and
00:13:36 verification. In addition, he's a well-recognized identity technology expert serving as a strategic
00:13:42 advisor for industry leaders working in artificial intelligence, identity technology, and e-commerce.
00:13:46 Mr. Jake Laperruque is the deputy director of the Center for Democracy and Technology's Security and
00:13:53 Surveillance Project. Prior to joining CDT, Jake worked as senior counsel at the Constitution
00:13:59 Project at the Project On Government Oversight. He also previously served as a program fellow
00:14:05 at the Open Technology Institute and law clerk at the Senate Subcommittee on Privacy,
00:14:10 Technology, and the Law. Again, I thank our witnesses for being here,
00:14:14 and I now recognize Mr. Demmer for five minutes to summarize his opening statement.
00:14:18 Good morning, Chairman Green, Ranking Member Thompson, and members of the committee.
00:14:25 Thank you for the opportunity to join you today. My name is Troy Demmer, and I am the co-founder of
00:14:30 Gecko Robotics, a Pittsburgh-based technology company that uses robots, software, and AI to
00:14:36 change how we understand the health and integrity of physical infrastructure. Back in 2013, Gecko
00:14:42 started with a problem. A power plant near where my co-founder and I went to college had to keep
00:14:47 shutting down due to critical assets failing. The problem was obvious from the first meeting
00:14:52 with the plant manager. They had almost no data. The data that had been collected was
00:14:56 collected very manually with a gauge reader and a single point sensor measurement. Furthermore,
00:15:03 that data was often collected by workers working off of ropes at elevated heights or in confined
00:15:09 spaces, resulting in few measurement readings. That meant the power plant had to make reactive
00:15:16 decisions rather than being proactive. That was the genesis of the idea behind Gecko,
00:15:21 using rapid technological advances in robots, software, and AI to get better data and generate
00:15:28 better outcomes. We do this by building wall-climbing robots armed with various sensor
00:15:32 payloads that can gather 1,000x more data at 10 times the speed of traditional methods.
00:15:39 We also have a software platform that takes that data and combines it with other data sets
00:15:44 to build a first-order understanding of the health of critical infrastructure.
00:15:48 Where are the vulnerabilities? What do we need to do to be proactive
00:15:51 and fix it before a problem occurs? How do we prevent catastrophic disasters?
00:15:58 Those are the problems Gecko is solving today for some of the most critical infrastructure
00:16:03 that protects the American way of life. We're helping the Navy reduce their maintenance
00:16:07 backlog and build the next generation of military equipment smarter. We're helping the U.S. Air
00:16:11 Force create the digital baseline for the largest infrastructure program since the Eisenhower
00:16:17 Interstate Highway Project. And we're working with various other critical public and commercial
00:16:22 infrastructure projects across the country. In every one of these cases, the missing link
00:16:27 is the data. And that brings us to the conversation we're having in today's hearing.
00:16:30 AI models are only as good as the inputs they are trained on. Trustworthy AI requires trustworthy
00:16:37 data inputs: data inputs that are auditable and interrogatable, data inputs that provide the
00:16:42 complete and undiluted answer to questions we are asking. When it comes to our critical
00:16:47 infrastructure, infrastructure that powers the American way of life and protects our homeland,
00:16:52 those data inputs for the most part do not exist. And without better data, even the most
00:16:56 sophisticated AI models will be at best ineffective and at worst harmful. Yet the way America collects
00:17:03 data on infrastructure today hasn't fundamentally changed in 50 years. Today, despite advances in
00:17:09 data collection technologies like robots, drones, fixed sensors, and smart probes that can detect
00:17:14 corrosion through layers of concrete, we are still largely gathering data manually, even on our most
00:17:19 critical infrastructure like dams, pipelines, bridges, power plants, railroads, and even
00:17:24 military assets. To give you one example from our work, we have one major national security partner
00:17:30 who collects data on critical assets both manually and with our robots. Their manual process collected
00:17:35 by handheld sensors and skateboards creates 3,000 individual data points. Our robots on that same
00:17:41 asset collect more than 8 million. That's more than 2,600 times the data, data that multiplies
00:17:47 the power of AI models and their predictive value. That's the scale of the difference between new
00:17:53 technology-driven processes and manual processes that we still largely rely on to secure our
00:17:58 critical infrastructure. Without better data collection, AI will never meet its potential
00:18:02 to secure the critical infrastructure that protects our homeland and the American way of life.
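A minimal sketch of what "auditable, interrogatable" data inputs could look like in practice: each sensor reading is stored with provenance metadata and a content hash, so an auditor can later verify what was measured, by which sensor, and that the record was not altered. All names and fields below are hypothetical; this is illustrative, not Gecko's actual system.

```python
# Hypothetical sketch: recording each inspection reading with provenance
# metadata plus a content hash so the data feeding an AI model is auditable.
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class Reading:
    asset_id: str        # which piece of infrastructure was measured
    sensor_id: str       # which robot/sensor payload produced the value
    location_mm: tuple   # position on the asset surface
    thickness_mm: float  # e.g., remaining wall thickness
    captured_at: str     # ISO-8601 timestamp

def fingerprint(reading: Reading) -> str:
    """Content hash that lets an auditor verify the record was not altered."""
    payload = json.dumps(asdict(reading), sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

reading = Reading(
    asset_id="boiler-07",
    sensor_id="crawler-12/ut-probe-3",
    location_mm=(1250, 430),
    thickness_mm=9.42,
    captured_at=datetime.now(timezone.utc).isoformat(),
)
record = {**asdict(reading), "sha256": fingerprint(reading)}
print(record)  # stored alongside the raw measurement for later audit
```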
00:18:06 As I conclude, I want to touch briefly on how we think about robotics and AI in the workforce
00:18:12 construct. At Gecko, we talk a lot about upskilling the workforce. That's a big priority for many of
00:18:17 our partners. We've responded by hiring a significant number of former manual inspector
00:18:22 workers and then training them to operate our robots. More than 20% of our employees don't
00:18:27 have four-year degrees. Workers who once hung from ropes at dangerous heights are now
00:18:33 operating robots. Workers who built their careers inspecting assets are now helping
00:18:37 our software developers build the tools to make that work better. Thank you again for the
00:18:42 opportunity to join you today and I look forward to your questions. Thank you, Mr. Demmer. I now
00:18:48 recognize Mr. Sikorski for five minutes to summarize his opening statement. Chairman Green,
00:18:53 Ranking Member Thompson, and distinguished members of the committee, thank you for the opportunity to
00:18:57 testify on the critical role that artificial intelligence plays in enhancing cybersecurity
00:19:03 defenses and securing the homeland. My name is Michael Sikorski and I am the Chief Technology
00:19:08 Officer and Vice President of Engineering for Unit 42, which is the Threat Intelligence and
00:19:14 Incident Response Division of Palo Alto Networks. For those not familiar with Palo Alto Networks,
00:19:19 we are an American headquartered company founded in 2005 that has since become the global
00:19:25 cybersecurity leader. We support 95 of the Fortune 100, the U.S. federal government,
00:19:32 critical infrastructure operators, and a wide range of state and local partners. This means
00:19:38 that we have deep and broad visibility into the cyber threat landscape. We are committed to being
00:19:44 trusted national security partners with the federal government. As my written testimony outlines,
00:19:51 AI is central to realizing this commitment. We encourage everyone to embrace AI's transformative
00:19:58 impact for cyber defense. A reality of today's threat landscape is that adversaries are growing
00:20:04 increasingly sophisticated and AI will further amplify the scale and speed of their attacks.
00:20:11 However, this fact only heightens the importance of maximizing the substantial benefits that AI
00:20:16 offers for cyber defense. The AI impact for cyber defense is already very significant.
00:20:23 By leveraging Precision AI, each day Palo Alto Networks detects 2.3 million unique attacks that
00:20:31 were not present the day before. This process of continuous discovery and analysis allows
00:20:38 threat detection to stay ahead of the adversary, blocking 11.3 billion total attacks each day
00:20:45 and remediating cloud threats 20 times faster. The bottom line is that AI makes security data
00:20:51 actionable for cyber defenders, giving them real-time visibility across their digital
00:20:56 enterprises and the ability to prevent, detect, and respond to cyber attacks quickly.
00:21:02 Accordingly, Palo Alto Networks firmly believes that to stop the bad guys from winning,
00:21:08 we must aggressively leverage AI for cyber defense. My written testimony highlights a
00:21:14 compelling use case of AI-powered cybersecurity that is showing notable results: upleveling and
00:21:21 modernizing the Security Operations Center, also known as the SOC. For too long, the cybersecurity
00:21:29 community's most precious resources, our people, have been inundated with alerts to triage manually.
00:21:36 This creates an inefficient game of whack-a-mole, while critical alerts are missed and vulnerabilities
00:21:42 remain exposed. We have seen transformative results from customer use of AI-powered SOCs.
00:21:49 This includes the following: a reduction of mean time to respond from two to three days
00:21:54 down to under two hours, a five times increase in incident closeout rates,
00:22:00 and a four times increase in the amount of security data ingested and analyzed each day.
00:22:04 Outcomes like these are necessary to stop threat actors before they can
00:22:10 encrypt systems or steal sensitive information. And none of this would be possible without AI.
00:22:16 This new AI-infused world we live in also necessitates what we like to call
00:22:22 secure AI by design. Organizations will need to, one, secure every step of the AI application
00:22:29 development lifecycle and supply chain, two, protect AI data from unauthorized access and
00:22:36 leakage at all times, and three, oversee employee AI usage to ensure compliance with internal policies.
00:22:43 These principles are aligned with and based on the security concepts already included in the NIST
00:22:51 AI risk management framework and should be promoted for ecosystem-wide benefit.
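As one concrete illustration of those principles, the third in particular (overseeing employee AI usage), here is a minimal sketch of a gateway that forwards prompts only to approved AI services and blocks prompts containing obviously sensitive markers. The endpoint and patterns are invented for illustration; this is not Palo Alto Networks' implementation.

```python
# Hypothetical sketch of overseeing employee AI usage: a gateway that only
# forwards prompts to approved AI services and blocks prompts containing
# obviously sensitive markers. Policy details here are invented.
import re

APPROVED_AI_ENDPOINTS = {"https://ai.internal.example.com/v1/chat"}
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # SSN-like strings
    re.compile(r"(?i)\bconfidential\b"),   # labeled documents
]

def allow_request(endpoint: str, prompt: str) -> bool:
    """Allow only approved destinations with no sensitive marker in the prompt."""
    if endpoint not in APPROVED_AI_ENDPOINTS:
        return False
    return not any(p.search(prompt) for p in SENSITIVE_PATTERNS)

print(allow_request("https://ai.internal.example.com/v1/chat", "Summarize this RFC"))  # True
print(allow_request("https://chat.other.example.com", "Summarize this RFC"))           # False
```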
00:22:56 Now, the issues we're discussing today are important to me also on a personal level.
00:23:01 I'm honored to have spent decades both as a cybersecurity practitioner partnering with
00:23:06 governments to stop threats and as an educator training the cyber workforce of tomorrow.
00:23:13 It's with that background that I can say confidently that homeland security,
00:23:18 national security, and critical infrastructure resilience are being enhanced by AI-powered
00:23:24 cyber defense as we speak. And we must keep the pedal to the metal because our adversaries are
00:23:30 not, certainly not, sitting on their hands. Thank you again for the opportunity to testify,
00:23:35 and I look forward to your questions and continuing this important conversation.
00:23:39 Thank you, Mr. Sikorski. I now recognize Mr. Amlani for his five minutes of opening statement.
00:23:46 Good morning, Chairman Green, Ranking Member Thompson, and members of the committee. My name
00:23:50 is Ajay Amlani, and I've been building innovative solutions to help organizations assure people's
00:23:54 identities for the last 20 years. I serve as President, Head of Americas at iProof.
00:23:59 I started my federal service as a White House Fellow and Senior Policy Advisor to Secretary
00:24:02 Tom Ridge, the first Secretary of Homeland Security, in the aftermath of the 9/11 attacks,
00:24:07 at a time in which the federal government was rethinking
00:24:13 how to manage its national security missions. And a large part of that included finding new
00:24:17 ways to benefit from the innovation happening in the private sector. For the past 20 years,
00:24:22 I have forged partnerships with the federal government and the commercial sector that
00:24:27 facilitate the utilization of commercial technology to augment national security initiatives.
00:24:32 Today, this committee is considering how to harness the power of AI as part of a
00:24:38 multi-layered defense against our adversaries. To best answer that question, we need to start
00:24:44 with understanding how AI enables threat actors. What capabilities can DHS and its component
00:24:49 agencies develop to combat these threats? What actions can the department take to better work
00:24:55 with industry as it promotes standards for AI adoption? AI exponentially increases the
00:25:02 capabilities and the speed to deploy new fraud and cyberattacks on the homeland.
00:25:07 It enables new threat technology developers to dramatically shorten their innovation cycles.
00:25:12 Ultimately, AI technologies are unique in the way that they upskill threat actors. The actors
00:25:19 themselves no longer have to be sophisticated. AI is democratizing the threat landscape by providing
00:25:26 any aspiring cybercriminal easy-to-use, advanced tools capable of achieving sophisticated outcomes.
00:25:33 Crime-as-a-service offerings on the dark web are very affordable.
00:25:37 The only way to combat AI-based attacks is to harness the power of AI in our cybersecurity
00:25:44 strategies. At iProof, we developed AI-powered biometric solutions to answer a fundamental
00:25:52 question. How can we be sure of someone's identity? iProof is trusted by governments
00:25:57 and financial institutions globally to combat cybercrime by verifying that an individual is
00:26:02 not only the right person but also a real person. Our technology is monitored and enhanced by an
00:26:08 internal team of scientists specialized in computer vision, deep learning, and other
00:26:13 AI-focused technologies. Novel attacks are identified, investigated, and triaged in real
00:26:18 time, and technology enhancements are continuous. This combination of human experts and AI technology
00:26:25 is indispensable to harness AI in defending and securing the homeland. But equally important is
00:26:31 the need for AI-based security technologies to be inclusive and uphold privacy mandates by design.
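A minimal sketch of the "right person, real person" check described above: a verification passes only when a face-match score and a liveness score both clear their thresholds. Scores and thresholds are illustrative, not iProof's actual algorithm. The design point is the conjunction: neither signal alone is sufficient.

```python
# Hypothetical sketch of "right person, real person": verification passes only
# when a face-match score AND a liveness (anti-spoof/deepfake) score both
# clear their thresholds. Values here are illustrative.
from dataclasses import dataclass

@dataclass
class VerificationResult:
    match_score: float     # similarity to the enrolled face, 0..1
    liveness_score: float  # confidence the subject is a live human, 0..1

MATCH_THRESHOLD = 0.90
LIVENESS_THRESHOLD = 0.95

def verify(result: VerificationResult) -> bool:
    right_person = result.match_score >= MATCH_THRESHOLD
    real_person = result.liveness_score >= LIVENESS_THRESHOLD
    return right_person and real_person

# A perfect facial match still fails if liveness suggests a replay or deepfake.
print(verify(VerificationResult(match_score=0.99, liveness_score=0.40)))  # False
print(verify(VerificationResult(match_score=0.97, liveness_score=0.99)))  # True
```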
00:26:37 DHS and its component agencies have prioritized transparency and accountability,
00:26:41 including performing routine self-assessments and collecting public input on matters of privacy
00:26:47 protection and limitations on data use. Those actions serve as a great model for how DHS and
00:26:52 other agencies should treat AI capabilities, both in regulating and promoting AI adoption.
00:26:58 The U.S. government has used biometrics in a growing number of programs over the past decade
00:27:02 to improve operational efficiency and traveler experience. With Gen AI, biometrics take on an
00:27:09 expanded role of helping to ensure that someone is who they claim to be in digital ecosystems.
00:27:13 For example, deepfakes and synthetic identities have recently become so realistic that they are
00:27:19 imperceptible to the human eye. Because of this, biometric verification plays a critical role in
00:27:26 the nation's security posture. To best assist DHS and its components, Congress should support the
00:27:32 creation of more useful standards for systems and testing and give access to the best talent
00:27:37 developing new technology tools with the agility necessary to respond to the changing threat
00:27:42 landscape. The Silicon Valley Innovation Program is a very powerful model for both acquiring the
00:27:47 expertise of the nation's best engineering minds while also creating a collaborative test bed for
00:27:52 proving new technologies. iProof has worked with S&T in all phases of the SVIP program
00:27:58 and can testify firsthand to the powerful impact that this program could have if expanded to scale
00:28:04 with a broader group of stakeholders. Another example, the Maryland Biometric Test Facility,
00:28:09 could be expanded upon to incorporate a wider range of perspectives as biometric technologies
00:28:14 work to address future threats. In conclusion, we at iProof are completely focused on pioneering
00:28:19 capabilities which can counter identity fraud while collaborating with federal stakeholders
00:28:24 to advance innovation. We seek to play a constructive role in AI practices and hope
00:28:30 the committee will see us as a resource as you consider a path forward. Thank you. I look forward
00:28:36 to your questions. Thank you, Mr. Amlani. I now recognize Mr. Laperruque for five minutes to
00:28:43 summarize his opening statement. Chairman Green, Ranking Member Thompson, and members of the
00:28:49 House Homeland Security Committee, thank you for inviting me to testify on the important topic of
00:28:54 artificial intelligence and how we can ensure that its use aids America's national security
00:28:59 as well as our values as a democracy. I'm Jake Laperruque, Deputy Director of the Security and
00:29:04 Surveillance Project at the Center for Democracy and Technology. CDT is a non-profit, non-partisan
00:29:10 organization that defends civil rights and civil liberties in the digital age. We've worked for
00:29:15 nearly three decades to ensure that rapid technological advances such as AI promote our
00:29:20 core values as a democratic society. AI technologies can only provide security if they are used in a
00:29:26 responsible manner and, as Chairman Green said, treated with appropriate nuance. This is not only
00:29:32 critical for keeping America safe, it is also necessary for protecting our constitutional
00:29:36 values. Today I'd like to offer a set of principles for the responsible use of AI
00:29:41 as well as policy recommendations to promote such use in the national security space.
00:29:45 We must be wary that for AI technologies, garbage in will lead to garbage out. Too often AI is
00:29:53 treated as a sorcerer's stone that can turn lead into gold, but in reality AI only performs as well
00:29:58 as the data that it is given. Reckless deployment of AI technologies, such as using input data that
00:30:04 is low quality or well beyond the bounds of what any given system was designed to analyze,
00:30:08 will yield bad results. In the national security space this could have dire consequences:
00:30:14 wasted resources, leading investigations astray, or triggering false alarms that leave genuine
00:30:19 threats unattended to. Ensuring that AI is used responsibly is also critical to protecting our
00:30:25 values as a democracy. AI is often framed as an arms race, especially in terms of national security,
00:30:31 but we must take care in what we're racing towards. Authoritarian regimes in China, Russia,
00:30:37 and Iran have shown how AI technologies such as facial recognition can throttle dissent,
00:30:42 oppress marginalized groups, and supercharge surveillance. The United States must not use AI
00:30:47 so callously. Truly winning the AI arms race does not simply mean the fastest buildup on the
00:30:52 broadest scale. It requires uses that uphold civil rights and civil liberties.
00:30:58 As Ranking Member Thompson highlighted, responsible use requires exercising care from
00:31:02 creation to input of data into AI systems to the use of the results from those systems.
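One way to make the earlier "garbage in, garbage out" point concrete is an input guard that refuses to run a model on data outside the envelope it was designed to analyze, routing such cases to human review instead. The thresholds below are hypothetical.

```python
# Hypothetical sketch of guarding against "garbage in, garbage out": refuse to
# run a model on inputs outside the parameters it was designed for, rather
# than returning a low-quality answer. Thresholds are illustrative.
MIN_RESOLUTION = (480, 480)   # model assumed trained on images at least this large
MAX_BLUR_SCORE = 0.35         # above this, the image is too blurry to trust

def fit_for_analysis(width: int, height: int, blur_score: float) -> bool:
    """Return True only when the input is within the system's design parameters."""
    in_resolution = width >= MIN_RESOLUTION[0] and height >= MIN_RESOLUTION[1]
    sharp_enough = blur_score <= MAX_BLUR_SCORE
    return in_resolution and sharp_enough

# A degraded frame is flagged for human handling instead of being scored.
print(fit_for_analysis(1920, 1080, 0.10))  # True: analyze
print(fit_for_analysis(320, 240, 0.60))    # False: route to manual review
```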
00:31:07 To facilitate responsible use, government applications of AI should be centered on
00:31:12 the following principles. AI should be built upon proper training data. It should be subject
00:31:18 to independent testing. It should be deployed within the parameters that the technology was
00:31:22 designed for. It should be used by specially trained staff and corroborated by human review.
00:31:27 It should be subject to strong internal governance mechanisms. It should be bound
00:31:31 by safeguards to protect constitutional values. And it should be regulated by institutional
00:31:36 mechanisms for transparency and oversight. Although the degree of secrecy in national
00:31:41 security programs makes upholding these principles especially challenging, we can and must find ways
00:31:46 of promoting responsible use of AI. CDT proposes two policies in furtherance of this goal.
00:31:53 First, Congress should establish an oversight board for the use of AI in national security
00:31:58 contexts. This board would be a bipartisan independent entity within the executive branch,
00:32:04 with members and staff given access to all use of AI within the national security sphere.
00:32:08 The board would act as an overseer within classified settings to promote responsible
00:32:12 use of AI. So this would support both compliance with existing rules as well as lead to improved
00:32:18 practices. The board's role would also allow for greater public knowledge and engagement.
00:32:23 This would serve as a conduit for harnessing outside expertise and building public trust in
00:32:28 government's ongoing use of AI. The Privacy and Civil Liberties Oversight Board has demonstrated
00:32:34 how effective this model can be. That board's role in counterterrorism oversight has enhanced
00:32:38 public awareness and improved public policy in a manner that's aided both security and civil
00:32:43 liberties alike. A new board focused on the use of AI in the national security realm would be
00:32:48 similarly beneficial. Second, Congress should enact requirements to enhance transparency for
00:32:54 the use of AI. This should include required declassification review of key documents,
00:32:59 such as AI impact assessments and privacy impact assessments. It should also require
00:33:04 annual public reporting on information such as the full set of AI technologies that agencies
00:33:08 deploy, the number of individuals who are impacted by their use, and the nature of that impact.
00:33:14 While we support prompt adoption of these important measures, AI technologies are far
00:33:18 too wide-ranging and complex to be solved by a silver bullet. Promoting responsible use of AI
00:33:23 will require continual review, engagement, and adaptation. This work should be done in consultation
00:33:29 with a broad set of stakeholders, impacted communities, and experts. Thank you for your
00:33:34 time and I look forward to your questions. Thank you, sir. Members will be recognized by order of
00:33:39 seniority for their five minutes of questioning. An additional round of questioning may be called
00:33:43 after all members have been recognized. I now recognize myself for five minutes of questioning.
00:33:48 Mr. Amlani, in the evolution of AI, would you discuss your perception or opinion on
00:33:56 whether or not you think as the threat evolves, we're evolving ahead of the threat?
00:34:01 Thank you, Mr. Chairman, for the question. This is a very important component of the work that
00:34:11 we do at iProof. At iProof, we have a central operations center where we monitor all the
00:34:16 threats happening globally. We have deployments worldwide, Singapore, UK, Australia, Latin America,
00:34:22 Africa, and the United States. And we're constantly using our people there with
00:34:27 PhDs to be able to assess the threats that are actually trying to attack biometric systems
00:34:33 globally. We're migrating and adapting our AI technology to stay multiple steps ahead of the
00:34:38 adversary. I think this is very critical as we look at AI technology to be able to build in steps
00:34:43 to continue to stay multiple steps ahead of the adversaries and understanding what attacks are
00:34:47 actually happening globally so that we can continue to modify our systems.
00:34:50 Thank you for that. I'm glad to hear that. It makes everybody sigh a little bit
00:34:57 in relief. I want to shift the subject a little bit because we're talking about AI as a tool and
00:35:07 all the positives and then the challenges of it when it's on the attack side. But I want to ask
00:35:12 something about workforce. What do we need to do uniquely? And this, Mr. Sikorsky, would really,
00:35:18 I think, be for you since your company employs a very large number of individuals in this fight.
00:35:24 How do we need to be thinking about developing the workforce for AI as well as cybersecurity
00:35:32 in itself? Yeah, that's a great question, Congressman. I think it's super imperative.
00:35:39 Myself, I've trained cyber agents for the FBI at Quantico, Department of Energy employees,
00:35:46 private companies, and then I've been teaching at the university level for over a decade.
00:35:51 And firsthand, I've seen that we really need to focus in as AI, cybersecurity, it's here to stay,
00:35:59 and we need to really make strides to continue to push down, get people trained up, our workforce,
00:36:05 our children, get them learning these concepts very early on is super important. And then from
00:36:12 a Palo Alto Networks perspective, we're very invested in this as well. So as far as the Unit
00:36:18 42 team goes within Palo Alto Networks, we have what's called the Unit 42 Academy, which takes
00:36:24 undergraduates, gives them internships during the college years, hands on training, then they come
00:36:30 to work with us as a class, like a cohort where they learn a lot of upleveling skills, and their
00:36:36 first year is spent just engaged in learning and growing. And I'm proud to say that 80%
00:36:40 of the Unit 42 Academy in the last year was actually made up of women.
00:36:50 Do me a favor and let's go just a little bit more granular on this question. What are the skills,
00:36:55 specific skills that these individuals need to have? What are we teaching them? I get,
00:37:03 you know, the coding and all that stuff, but what specifically do you need to see in a candidate
00:37:11 that you would want to hire? Yeah, so number one is having a background and understanding how to
00:37:18 engage these technologies. We've spent a lot of time building technologies over the last 10, 20
00:37:25 years in the cybersecurity industry. So knowledge of how they work, what they give them access to,
00:37:30 and how they can leverage them to fight against, you know, evil that's coming into these networks.
00:37:37 I also think about things like understanding of how computer forensics works, malware analysis,
00:37:44 an ability to really dive deep into the technical details so that they could dig out and sift
00:37:50 through the noise. And it also comes into play with AI. Do they have an understanding of AI and
00:37:56 how it is being leveraged in the product to sift through that noise for them? Because one thing we
00:38:00 deal with heavily is these analysts are inundated with just too many alerts from all these products
00:38:06 firing. And in those alerts is the actual bad stuff that's happening. And so by leveraging AI,
00:38:13 you sift through that noise and bubble up things for them to actually dive deep into what's really
00:38:18 there. And essentially the AI is now fighting the AI, basically, right? I mean,
00:38:23 the machines are fighting the machines. Am I getting this right? To put it in simplest terms?
00:38:29 I think to some extent, I think there's definitely that kind of thing going on for sure. But at the
00:38:36 end of the day, the cyber analyst is being upleveled in their ability to fight back.
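A minimal sketch of the triage idea described in this exchange: score each alert and surface only the riskiest few for human review, rather than showing analysts the full stream. The scoring weights are invented; a production SOC would derive them from data.

```python
# Hypothetical sketch of AI-assisted alert triage: rank alerts by a risk score
# and surface the top few, so analysts are not inundated by the full stream.
ALERTS = [
    {"id": 1, "severity": 0.9, "asset_criticality": 0.8, "corroborated": True},
    {"id": 2, "severity": 0.2, "asset_criticality": 0.1, "corroborated": False},
    {"id": 3, "severity": 0.7, "asset_criticality": 0.9, "corroborated": True},
]

def risk_score(alert: dict) -> float:
    score = 0.6 * alert["severity"] + 0.4 * alert["asset_criticality"]
    # Alerts corroborated by a second signal (e.g., endpoint + network) rank higher.
    return score * (1.5 if alert["corroborated"] else 1.0)

# Show analysts only the two highest-risk alerts first.
for alert in sorted(ALERTS, key=risk_score, reverse=True)[:2]:
    print(alert["id"], round(risk_score(alert), 2))
```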
00:38:41 Yeah, we've got to figure out we've got some legislation coming out pretty soon on workforce
00:38:46 development. And I want to just make sure that the AI piece is captured in that. So thank you.
00:38:52 My time has expired. I now recognize the ranking member for his five minutes questioning.
00:38:56 Thank you, Mr. Chairman. You know, one of the things that we've become accustomed to is
00:39:02 if we're filling out something online, as we complete it, they want to figure out if you're
00:39:10 a robot, or if you're a human. And so you got to figure out now, is this a bus? Is this a car? Is
00:39:20 this a light? So all of us are kind of trying to figure this out. So you gentlemen have given us
00:39:28 some comfort that there are some things out here. But let me just give you a hypothetical question.
00:39:37 As we look at this, how do you see the role of government as AI develops? And what do we need
00:39:50 to do to ensure the public that that development does not allow our adversaries to become
00:40:05 even more intimate in what it is?
00:40:07 We'll start with Mr. Demmer.
00:40:11 I thank you, Mr. Ranking Member for the question. I agree the country needs to prioritize building AI
00:40:24 that is safe with the executive order that, you know, where I stand is that we need to collect
00:40:33 highly accurate data that ultimately informs these models. And increasingly, what I think can be done
00:40:40 is to help create testbeds for helping to validate those models and creating the training data sets
00:40:48 that enable, you know, good AI to be created, leveraging only good data sets. And so that's my
00:40:56 position. I think one thing I think of is my history in cybersecurity. We rushed as innovators
00:41:05 to get the Internet out, and we didn't build that while thinking about security and look at the spot
00:41:10 we're in. I think that's why it's really important that we build AI and have it secure by design.
00:41:17 And it's one of these things where it falls into a few different categories, making sure that the
00:41:23 AI that we're building, everybody's rushing to get this technology into their products and out
00:41:27 to consumers. But we need to think about it as we build it. For example, what is the application
00:41:33 development lifecycle? What is the supply chain of that being built? And is that secure? Thinking
00:41:39 about these things as they're running in real time, how are we protecting those applications
00:41:44 so that they don't get manipulated? How about the models we're building? How about the data? Is that,
00:41:49 you know, subject to manipulation? How are we protecting that? And then how do we monitor it?
00:41:54 How do we monitor our employees that are all probably using this today without even our
00:41:58 knowledge? So those are all areas where I would focus. In addition, when building the Internet,
00:42:06 identity was actually not a layer that was originally thought about. So the challenge
00:42:10 that you just described about the changes of buses, I just encountered seaplanes versus normal planes.
00:42:17 And it's very difficult to try to decipher between what's a seaplane and a normal plane in terms of
00:42:20 a CAPTCHA, as the field calls it. Standards and testing are a very important component here. I
00:42:25 think we need to continue and constantly test all the tools to make sure that they're inclusive,
00:42:30 and making sure that they're accurate. Standards are another important component here that comes
00:42:35 out of testing, but also depends on leveraging organizations like NIST, and
00:42:40 continuing to invest in organizations like NIST. Talent development is the other component that I
00:42:44 would heavily focus on. And much of that resides in the private sector and partnerships with private
00:42:49 sector companies. At Defense Innovation Unit, we surveyed the top 25 engineering schools about
00:42:54 where they wanted to work after they graduated. And there was no government agency other than NASA
00:42:59 on that whole list. There was no defense or government contractor on that whole list,
00:43:03 other than SpaceX. And so as we start to think about this, how do we actually get access to
00:43:09 the top engineers across society? And that is actually through partnerships with the commercial
00:43:13 world. Thank you. Yeah, I would echo much of what's been said already. We need well-trained
00:43:21 systems. We need high standards for procurement to make sure that we're using good systems. We
00:43:26 need proper data inputs to be going into AI systems, and proper treatment of what's coming
00:43:30 out with human review. I would also emphasize that beating our adversaries in this field means that
00:43:35 we do not end up imitating our adversaries. Right now, facial recognition, as an example,
00:43:40 I've already harped on, that is used in a frightening way in regimes like China, Russia,
00:43:47 Iran. Right now, federally, we do not have any regulations on law enforcement use of facial
00:43:51 recognition. And although the cases are limited, there are documented cases of it being used against
00:43:56 peaceful protesters in the United States. That's a type of use that we should be prohibiting.
00:44:00 Thank you very much. Mr. Chairman, maybe I'll submit for the record. We have elections coming
00:44:10 up in November. The last time there was some involvement by Russia, China, specifically,
00:44:20 with our elections. Are you in a position to say whether or not
00:44:27 we are robust enough to defend against our adversaries for elections? Or would you
00:44:36 encourage us to be a little more attentive to any aspect of our elections?
00:44:43 That's a great question. I think that certainly generative AI makes it easier for malicious
00:44:56 actors to actually come after us in that way. We've actually already seen them in the cyber arena
00:45:01 start to build more efficient phishing emails. So things like typos, grammar mistakes, all that
00:45:06 kind of stuff, that's a thing of the past. That won't be something we encounter anymore.
00:45:11 In other words, there won't be any more typos. Right. And they could also read someone's inbox
00:45:19 and then talk like that individual. Right. They could leverage it in that way. I do think that
00:45:24 CISA, we're a member of the JCDC, and they're taking election security as a big priority for
00:45:31 them. And so we're assisting in those ways. And I think that it's definitely super concerning
00:45:38 and something we need to lean into with the election cycle coming up.
00:45:41 Anyone else want to address that?
00:45:45 Good question. I'll let the gentleman's time continue. Anybody else want to answer?
00:45:53 I think from an identity perspective, this also is very important with regards to who
00:45:59 is it that's actually posting online and being able to discuss. So from an identity perspective,
00:46:06 making sure that it's the right person, it's a real person that's actually posting and
00:46:12 communicating and making sure that that person is in fact right there at that time is a very
00:46:16 important component to make sure that we know who it is that's actually generating content online.
00:46:21 There is no identity layer to the internet currently today. We have a lot of work that's
00:46:25 being done on digital credentials here in the United States. Our country is one of the only
00:46:31 in the Western world that doesn't have a digital identity strategy. We had some work that's
00:46:36 actually been done in the National Cyber Security Strategy, Section 4.5, but it hasn't really been
00:46:41 expanded upon. And I think that's some work that we should think about encountering and doing.
00:46:46 I have some questions on that too that I might submit in writing because this digital identification
00:46:54 thing is, you know, as banking and all of that goes into the wallet on the phone,
00:47:00 it's this digital ID is a critical issue. So I'll send some questions too.
00:47:04 Thank you, Ranking Member. I now recognize Mr. Higgins, the gentleman from Louisiana,
00:47:11 for his five minutes questioning. Thank you, Mr. Chairman. Mr. Chairman,
00:47:16 I've worked extensively with my colleagues on the House Oversight Committee regarding
00:47:22 artificial intelligence. And I'm not necessarily opposed to the emerging technology; even if I were,
00:47:31 it'd be like opposing the incoming tide of the ocean. It's happening. I think it's important
00:47:39 that Congress provides a framework so that AI cannot be leveraged in any manner that's contrary
00:47:49 to Americans' individual liberties, rights, and freedoms. I introduced the Transparent Automated
00:47:57 Governance Act, the TAG Act, as a first step to set limits on government application of artificial
00:48:04 intelligence. As a whole, the bill seeks to ensure federal agencies notify individuals
00:48:11 when they're interacting with or subject to decisions made using AI or other automated
00:48:19 systems and directs federal agencies to establish a human review appeal process to ensure
00:48:26 that human beings have supervision of AI-generated decisions that can impact the lives
00:48:39 of Americans, and specifically our freedoms. So I have concerns about this tech, but we may as well
00:48:49 embrace it, because I think it's crucial that America lead in the emergence of AI
00:49:01 technologies and how it interfaces with our daily lives. And may I say we could have hours and hours
00:49:13 of discussion about this, but I have five minutes. So I'm going to ask you gentlemen,
00:49:20 regarding the use of AI as it could contribute towards more effective counterintelligence or
00:49:29 criminal investigations, as those criminal investigations and counterintelligence efforts
00:49:36 relate to Homeland Security. Trying to focus in on this committee, our committee here,
00:49:42 and our responsibilities. What are your thoughts, gentlemen, on how we can best deploy AI
00:49:53 with our existing law enforcement endeavors for border security and criminal investigations
00:50:05 that result from our effort to secure and re-secure our border? Mr. Sikorsky, I'd like to start with you.
00:50:12 Yeah, that's a great question. I think one thing I look towards is the way we're already
00:50:20 leveraging AI for cyber defenses. As I spoke about in my statement, we've been taking really
00:50:27 large amounts of data and distilling it into very few actionable items that a human... You say taking
00:50:33 large amounts of data, where are you getting that data from? So for us as a cyber security company,
00:50:39 we are focused on processing security data, which means ones and zeros, the malware that is found
00:50:44 on the systems, the vulnerability enumeration of those systems, low-level operating system
00:50:50 information that now enables us to tease out what's the actual threat, which is very distinct from...
00:50:57 So you're drawing your raw data from open sources? How are you accessing the raw data that you're analyzing?
00:51:06 Yeah, so that's a great question. Some of the data that we're getting is from
00:51:11 the customer collection that we have in our products that are spread throughout the world.
00:51:14 From the Fortune 100, you said your company works with 95 of the top 100? That's right.
00:51:23 Okay, so those companies have agreements with their users. That's that part that we don't read.
00:51:29 As it comes up on your phone and you renew the agreement and nobody reads it, we go to the
00:51:36 bottom and click yes. So in the 95 companies, there's agreements to share that data. You're
00:51:42 using that to analyze through AI for the purposes of what? Yeah, in order to find new attacks.
00:51:53 So one of the things we're doing is it's firewall data, network level telemetry, host system level
00:51:58 telemetry. It's very different than, you know, say personal information or something like that.
00:52:03 And instead, we're focused on that lower level data, bringing that together and then leveraging
00:52:07 AI to say, how is that network attack related to what's happening on the actual computer,
00:52:13 bringing that together quickly so that the analyst can find the threat and eliminate it very quickly.
00:52:18 And you're specifically referring to cyber security. But, Mr. Chairman, my time has expired.
00:52:24 I have many questions I'm going to submit to these gentlemen in writing. And thank you for
00:52:28 convening this hearing. It's very important. Absolutely. Thank you. I now recognize Mr.
00:52:33 Carter, the gentleman from Louisiana. Thank you, Mr. Chairman. And thank you to the witnesses for
00:52:39 joining us today. Advancements in AI technology and the demands for its use pose significant
00:52:48 challenges. We must mitigate the associated risks and threats by leveraging AI to improve our
00:52:53 national security. I urge my colleagues to support my bill, H.R. 8348, the CISA Securing AI
00:53:00 Task Force Act, which proposes the creation of a dedicated task force within CISA. This task
00:53:08 force will focus on addressing safety, security and accountability challenges posed by AI.
00:53:14 Today's discussion is crucial for the American people as we work to tackle these pressing issues.
00:53:21 AI is not new. We know that. It's relatively new to the general public. And some of its
00:53:27 applications have enormous value. Some of them can be quite frightening. National security,
00:53:34 obviously, is major. So I'd like to ask each of you to take a crack at answering the questions
00:53:40 relative to civil rights, civil liberties, and privacy in general for the American people.
00:53:48 How is this contemplated as we delve further into AI?
00:53:55 Mr. Demmer? Well, I'd defer to my fellow witnesses on this issue, given their expertise,
00:54:04 but it is certainly worth a spirited conversation about how we balance national security and civil
00:54:08 liberties. What I can say is GECO is focused on building the data sets on the most critical
00:54:13 infrastructure in a way that promotes national security. So I'm a cybersecurity practitioner,
00:54:20 as I said, so I'm not, you know, don't know all the ins and outs of the policy itself. But my
00:54:26 sense is that when we think about AI and regulation, we got to think of the purpose.
00:54:30 So defining what is high risk, right? And then saying, you know, what are the AI use cases that
00:54:36 are actually high risk and focusing on the unique security requirements for them. On the cybersecurity
00:54:43 side, I think that security data, those ones and zeros I was talking about earlier, are a little
00:54:47 bit not high risk compared to a lot of other technologies that are being rolled out.
00:54:52 At iProof, we take inclusivity by design very seriously. We do regular self-assessments on
00:55:03 our own technology. We work with every single independent assessment organization that's out
00:55:09 there. We're encouraging more independent assessment organizations. We're encouraging
00:55:13 them to stay ahead of the technology. Many times these independent organizations focus on
00:55:18 threats or challenges in the past. We need to stay up to speed. We need to stay above and go beyond.
00:55:25 We build inclusivity by design, as I mentioned. That includes obviously skin tone, but also
00:55:30 cognitive and physical capabilities, making sure that those are taken into consideration for
00:55:34 inclusivity. Socioeconomic class as well. Many technology tools are expensive to
00:55:39 attain. I just got the iPhone 15 Pro, and I can attest to that. Age is also a very important
00:55:47 component, as well as gender. And so making sure that all of those different characteristics of
00:55:52 inclusivity are also embedded into the design of AI is an extremely important component of success.
00:56:01 So there certainly is a range. Some systems, like, for example, damage assessment systems that DHS
00:56:07 uses, present lower risks, maybe no real risks, to civil rights and civil liberties. Other technologies
00:56:13 such as facial recognition, mobile device analytics, or automated targeting systems,
00:56:18 all of which DHS employs, present very significant risks to civil rights and civil liberties.
00:56:22 The OMB memorandum on AI places a welcome emphasis on care for civil rights and civil liberties. We
00:56:30 hope that's a good step towards development of strong internal guidelines within agencies.
00:56:34 But this is an area where transparency and oversight are essential to evaluating the
00:56:39 effectiveness of those rules, evaluating whether more steps are needed, and as necessary,
00:56:45 prompting Congress to step in and make their own rules. As I mentioned, in the field of facial
00:56:50 recognition, there are no current federal laws or regulations governing law enforcement use. We think
00:56:56 that's something that needs to change. So the information out is only as good as the information
00:57:01 in. So the data becomes paramount. So how do we take into account cultural nuances, natural biases,
00:57:09 so that we're not replicating biases that humans have, and they then become a part of the bias
00:57:16 that's in AI? Mr. Amlani? At iProov, we have a worldwide deployment. And so we operate in
00:57:27 Singapore, we operate in Australia, in the UK, in South Africa, in many African countries,
00:57:34 as well as Latin American countries, as well as North America. We take very careful consideration
00:57:39 of basically making sure that we're seeing all of the threats coming in from all those locations,
00:57:43 but also taking into account all of the data and making sure that we have a representative database
00:57:48 that we train on. We go over and above when it comes to making sure that our training data is,
00:57:53 in fact, representative of the overall population. And I would encourage you to be able to include
00:57:59 standards and testing and invest in those to make sure that other providers are also doing the same.
00:58:04 So from my standpoint, obviously, as you've heard from the questions here, you'll continue to hear
00:58:11 that Congress has real concerns about making sure that we learn from the, mistakes may not be the
00:58:17 right word, but we learn from how fast the internet came, Facebook, Instagram, and all
00:58:24 of those things, and they're constantly bringing about new challenges, the good, the bad, and the
00:58:29 ugly. How do we learn from that to make sure that we are managing this AI as it unfolds and becomes
00:58:37 more prevalent in our society? And I realize that my time has expired. So, Mr. Chairman,
00:58:41 if you would allow them to answer that, I will yield. Who's going to answer the question?
00:58:46 Sure. That's directed at me, I assume? Yes, or anyone else who cares to take a crack at it.
00:58:52 Thank you, Congressman. Staying on top of all of the different technologies is very important in
00:58:58 making sure that we have independent organizations within the federal government that can
00:59:03 have different companies submit their AI for testing, and making sure that we have the right
00:59:10 people staffed within those organizations who can stay on top of all of the latest opportunities and
00:59:15 threats in the market. Yes, there is a lot of interest across this overall industry in making
00:59:24 sure that we have well-represented databases and common standards that we can all adhere to.
00:59:30 And I think making sure that those accurate solutions can be in front of the customers
00:59:33 for biometric verification is also a very important component. Biometric verification
00:59:38 is also something that's very different from biometric recognition. And I want to make sure
00:59:43 that we call out the differences between the two. The gentleman's time has expired. The
00:59:50 chair has been flexible and generous with time, but I ask all members to watch the clock. The
00:59:56 gentleman from Mississippi, Mr. Guest, is recognized for five minutes for questioning.
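To make concrete the distinction Mr. Amlani draws just above, here is a minimal sketch of 1:1 biometric verification (checking a probe against one claimed identity) versus 1:N biometric recognition (searching a probe against an entire gallery). The embedding vectors, names, and match threshold are hypothetical illustrations, not iProov's system.

```python
# Hedged sketch: 1:1 verification vs. 1:N recognition over toy embeddings.
import numpy as np

THRESHOLD = 0.8  # hypothetical similarity threshold

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe: np.ndarray, enrolled: np.ndarray) -> bool:
    """1:1 verification: does the probe match the ONE claimed identity?"""
    return cosine_similarity(probe, enrolled) >= THRESHOLD

def recognize(probe, gallery):
    """1:N recognition: search the probe against everyone enrolled."""
    best_id, best_score = None, THRESHOLD
    for identity, emb in gallery.items():
        score = cosine_similarity(probe, emb)
        if score >= best_score:
            best_id, best_score = identity, score
    return best_id

rng = np.random.default_rng(0)
alice = rng.normal(size=128)
gallery = {"alice": alice, "bob": rng.normal(size=128)}
probe = alice + rng.normal(scale=0.05, size=128)   # noisy re-capture of alice
print(verify(probe, gallery["alice"]))  # True: matches the claimed identity
print(recognize(probe, gallery))        # "alice": found by searching everyone
```

The civil-liberties weight differs accordingly: verification answers "is this the person they claim to be?", while recognition searches a whole enrolled population.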
01:00:01 Thank you, Mr. Chairman. I want to thank all of our guests for joining us today.
01:00:05 We have seen the incredible growth of AI. We know that AI can be used to gather and
01:00:13 analyze data. We know that AI has both offensive and defensive capabilities and with all technology
01:00:20 that AI can be used for good and evil. But specifically as we drill down today on homeland
01:00:27 security and looking at the role that AI is playing in homeland security, I think it's easy
01:00:34 for many of us to see the role that AI plays in cyber security. And we've heard testimony about
01:00:41 the offensive and defensive capabilities of AI in the cyber world. But I want to talk a little bit
01:00:49 about the role that AI may play in securing our physical borders. One of the things that this
01:00:55 committee has been focused on is trying to secure our southwest border. We know that last year there
01:01:04 were 3.2 million encounters on the southwest border, a record number of encounters, and we are
01:01:09 on track to break that record again this year. We know from statements that the Secretary of
01:01:16 Homeland Security is reported to have made that 85% of those individuals encountered
01:01:22 are at some point released into the interior. And so my question is, how can we use AI to better
01:01:33 identify the individuals that are being encountered? Because I have a great fear that we are not
01:01:41 identifying and properly vetting those individuals before they are released into the interior.
01:01:48 And I know, and I'll start with you, Mr. Amlani, you talk a little bit in your written testimony
01:01:56 about biometrics and the use of biometrics and the Maryland Test Facility
01:02:02 and the things that y'all are doing. And so I would ask maybe first if you could start off,
01:02:07 and then if anyone else would like to join in, how can we within the Department of Homeland Security
01:02:15 better use AI to identify the numerous individuals that we are encountering on a daily basis along
01:02:23 the southwest border so that we're not allowing people into the country that would cause ill will,
01:02:30 people who may have criminal records, criminal backgrounds. We see people all the time who are
01:02:34 apprehended who may have a tie to some sort of terrorist organization, or individuals who have
01:02:40 previously been arrested and convicted of violent crimes in other countries. And so how can we use
01:02:47 AI to better vet those individuals to do so in a more timely fashion before they're released?
01:02:53 And so I'll start with you, allow you to kick that off, and then would ask anyone else who
01:02:59 would like to join in to please continue in this discussion. Thank you, Congressman,
01:03:07 for the question. This is a very important question in my mind. We at iProov obviously
01:03:12 cannot speak on behalf of DHS, but I can speak on behalf of my experience in 2003 at the Department
01:03:19 of Homeland Security, originally launching new technologies like biometrics at the border.
01:03:23 Secretary Tom Ridge did assess biometrics as one of the core technologies to be able to improve the
01:03:28 department's capabilities, improving throughput as well as being able to improve security at the
01:03:32 same time. I was actually first introduced to biometrics in 2003, with the
01:03:37 US-VISIT program, when it was rolled out at airports for the first time for people coming
01:03:41 into the country that were not citizens. We actually used fingerprint technology at the
01:03:45 borders, and it was very eye-opening for me, seeing people walking up to a Customs and Border Protection
01:03:51 agent who had a look of fear in their eyes, about to be asked a significant set of intrusive
01:03:56 questions and disclosing a lot of private information, who then put their fingerprint
01:04:00 down on the device and to have the Customs and Border Protection agent say, "Welcome to the United
01:04:04 States." That sole interaction for me was something that lit a fire within me over the last 20 years
01:04:13 to recognize that this was not just a security tool and a capability that was focused on law
01:04:18 enforcement, but actually a tool for consumers that allows them to
01:04:21 protect their own privacy and have better experiences. Anyone else like to add in? Yes,
01:04:28 I would just add that this is, I think, a very good example of how input data, what you're putting
01:04:33 into a system and the quality can make such a difference. In biometrics and facial recognition
01:04:38 fields, a photo that is taken in good lighting, close up, with a clean profile, such as a DMV photo or a photo
01:04:47 during processing and booking, that is much more likely to yield accurate results than, say, if
01:04:52 you're taking something in the field from a distance. If I just took my smartphone and tried
01:04:57 to click a picture of you now, or if someone was taking a photo at night. So it's, I think,
01:05:02 a prime example of when you're talking about something like during processing, why those
01:05:06 different situations can make such a difference about whether AI is reliable or not. It really is
01:05:12 highly situational. For our other two witnesses, I'll give both of you an opportunity
01:05:17 before I yield back if anyone would like to add to the discussion.
01:05:19 Mr. Chairman, I'm over time, so I yield back.
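As a small illustration of the input-quality point Mr. Laperruque makes above, a minimal sketch that gates captures on resolution and brightness before any face matching is attempted; the thresholds, file name, and choice of the Pillow library are assumptions for illustration only.

```python
# Hedged sketch: reject captures that are too small or too dark to match well.
from PIL import Image, ImageStat

MIN_WIDTH, MIN_HEIGHT = 480, 640   # hypothetical minimum resolution
MIN_LUMA, MAX_LUMA = 60, 200       # hypothetical mean-brightness band (0-255)

def capture_quality_ok(path: str) -> bool:
    """Return True only if the image is plausibly usable for matching."""
    img = Image.open(path)
    if img.width < MIN_WIDTH or img.height < MIN_HEIGHT:
        return False                      # e.g. a distant field capture
    luma = ImageStat.Stat(img.convert("L")).mean[0]
    return MIN_LUMA <= luma <= MAX_LUMA   # night shots / blown-out shots fail

if __name__ == "__main__":
    # "booking_photo.jpg" is a hypothetical controlled-conditions capture.
    print(capture_quality_ok("booking_photo.jpg"))
```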
01:05:25 Gentleman yields. The gentleman from Maryland, Mr. Ivey, is recognized for five minutes for
01:05:31 question. Thank you, Mr. Chairman. I appreciate that. And to my Republican colleagues, if you'll
01:05:37 relay to Chairman Green my appreciation for having this hearing. I think this is
01:05:41 an outstanding topic for us. In fact, we might even need to have additional hearings on this
01:05:47 topic because the time goes so quickly. But thanks again for having it. And to Mr. Amlani,
01:05:54 welcome from Maryland. I represent Prince George's County. You're just outside of our district.
01:05:59 Hopefully when that lease comes up, we can convince you to move a little further south, but
01:06:05 we'll take it one step at a time. I did want to ask the whack-a-mole question to you, Mr. Sikorski,
01:06:11 because that's something I've been kind of worrying about quite a bit. We've identified a lot of
01:06:18 threats. And the challenge, I think, is that sometimes they can replicate very quickly,
01:06:24 faster than certainly a litigation approach can remedy those. Let's take the deepfake types of
01:06:34 imagery. We've had some of those pop up. Sometimes they're aimed at
01:06:37 embarrassing individuals. Sometimes, you know, revenge porn kind of things.
01:06:42 So since I don't know that litigation is fast enough to do it, and I think you mentioned that
01:06:48 your company's taken some steps to try and deal with that moving forward, I'd like to hear a
01:06:55 little bit more about it. But in addition to that, how can the government create incentives for the
01:07:00 private sector to do more on that front? Because I think it might have been the ranking member
01:07:06 mentioned, you know, we've got AI fighting AI. I think that's going to be better than government
01:07:12 fighting AI, or certainly the courts. How can we go about it in a way that allows us to keep pace?
01:07:18 Yeah, that's a great question, Congressman. I think there's some things that are already
01:07:23 happening that are really great with the government. A few things that I'm actually
01:07:27 pretty passionate about myself, things like collaborative defense. So back when I worked
01:07:31 for the NSA, you didn't tell anybody you worked there. Now there's the Cybersecurity Collaboration
01:07:35 Center that reaches out to industry and realizes that we can go much further with defending the nation if
01:07:41 we work together rather than in silos. And so a continued push, like we've seen
01:07:48 with CISA and the JCDC, for example, has been very successful and is moving the needle. So
01:07:55 pushing hard on that front, I think, is imperative. I also think that it's important to think about
01:08:02 cyber hygiene and what entities are doing. So companies that are out there, how are they
01:08:08 protecting themselves? I think CISA offers a great range of cyber hygiene and vulnerability
01:08:13 scanning services, for example, which is great. But one thing we lack somewhat is, like, what's
01:08:20 the report card on how effective somebody's cybersecurity is, especially when you talk about
01:08:25 critical infrastructure, health care, and so forth. So maybe we should roll out metrics that we could
01:08:31 actually track over time, like mean time to detect, mean time to respond. How quickly are we actually
01:08:37 responding to attacks and sifting through all that noise? All right. And I want to monitor my time,
01:08:45 but just to follow up, if you could respond in writing, perhaps, but the Unit 42 Academy was
01:08:52 very interesting to me. I was wondering how that might be replicated. Perhaps are there ways that
01:08:57 the government could encourage other private entities or, you know, colleges and universities
01:09:02 that might be willing to do it, but find ways to expand that effort, too. And the too many
01:09:07 alerts points that you made earlier is another one I'd like to find out a little bit more about.
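For concreteness, a minimal sketch of the report-card metrics Mr. Sikorski floats above, mean time to detect (MTTD) and mean time to respond (MTTR), computed from a hypothetical incident log; the field names and timestamps are invented for illustration.

```python
# Hedged sketch: MTTD and MTTR from a hypothetical incident log.
from datetime import datetime
from statistics import mean

incidents = [
    {"occurred": datetime(2024, 5, 1, 8, 0),
     "detected": datetime(2024, 5, 1, 9, 30),
     "contained": datetime(2024, 5, 1, 13, 0)},
    {"occurred": datetime(2024, 5, 7, 22, 15),
     "detected": datetime(2024, 5, 8, 6, 45),
     "contained": datetime(2024, 5, 8, 10, 0)},
]

def hours(delta) -> float:
    return delta.total_seconds() / 3600

mttd = mean(hours(i["detected"] - i["occurred"]) for i in incidents)
mttr = mean(hours(i["contained"] - i["detected"]) for i in incidents)
print(f"MTTD: {mttd:.1f}h  MTTR: {mttr:.1f}h")  # trend these over time
```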
01:09:13 With respect to talent development, I appreciate the fact that, you know, there's efforts going on,
01:09:23 and Mr. Sikorski, I think your company mentioned it, and Mr. Amlani, I think you did as well.
01:09:27 And I think that's going to be a good solution for the intermediate and long term. In the short run,
01:09:33 you know, I think we're importing a lot of the talent. And in some instances, we've had people
01:09:40 come and testify before this committee that they're coming from hostile states, frankly.
01:09:45 And so one of the things that I'm wondering about, since the government monitors it to some extent
01:09:52 on the way in through the, you know, the immigration process,
01:09:56 which has its challenges for sure, even with respect to these types. But the other is,
01:10:04 once these guys get in, and they go to these companies, how do we know that the company's
01:10:10 doing a good job of monitoring these individuals to make sure that they're staying on the right
01:10:14 track, they're not misusing their position, there's no economic espionage going on?
01:10:19 Should we be confident that these companies are doing a good enough job to make sure that the
01:10:24 United States is protected, and that their industries are protected from those sorts of
01:10:28 potential attacks? And I apologize to the chair for running over the time, but I appreciate the
01:10:35 chair's indulgence on this. Anyone who'd like to answer? Yeah, that's a great question. I think,
01:10:40 you know, I've been doing incident response for almost 20 years. And it varies. It's not just
01:10:45 nation-state threats, it's not just ransomware. Another big threat is the insider.
01:10:50 We see insider threats, when Unit 42 is responding, where, you know, it's not just the
01:10:57 threat of them maybe putting something into the supply chain and getting it out the door, and that
01:11:01 kind of threat, which we know nation states are embedding employees in that way. But also, we've
01:11:07 seen where they go out the door, and then they have stolen data, and then they start engaging in
01:11:13 ransomware-like behavior, especially if their country doesn't have the ability to make money,
01:11:19 and has their economy cut off. So those are just some ideas I had. Anyone else?
01:11:26 Well, Mr. Chairman, thank you, and I yield back. Yeah, of course, and I appreciate it. I did see
01:11:32 your kind words when I was out in the lobby section out there, so I appreciate that. And I
01:11:37 will tell you that I think, honestly, and I would ask members for feedback on this, we need more of this;
01:11:44 you know, the five minutes in a hearing room just aren't getting it, right? So what
01:11:48 I may do is have like a town hall type thing where we're just asking questions,
01:11:54 and I think that it would be more informative,
01:11:59 and maybe some presentations, so to speak, on data poisoning for AI and all that kind of stuff,
01:12:06 so that we understand a little better and can ask more informed questions. So
01:12:09 thank you for saying that. Work with me, and we'll get some more stuff on the books for
01:12:13 this. I appreciate that, Mr. Chairman. I now recognize Mr. Ezell, the gentleman from Mississippi,
01:12:19 for his five minutes. Thank you, Mr. Chairman, and thank you all for being here today and
01:12:23 helping us out here. This is something that we all are concerned about, and we appreciate
01:12:31 you being here today. The capabilities of AI are quickly advancing. This technology, which when I look
01:12:38 at it feels futuristic, holds the power to significantly improve the way we work and go
01:12:44 about our daily lives. However, we also know that advancements in technology create new opportunities
01:12:52 for bad actors to operate effectively. I agree Congress must be closely monitoring this powerful
01:12:58 tool to ensure the application is not misused, but the government cannot get in the way of American
01:13:05 leaders in this sector and their ability to improve the product. The Chinese Communist Party
01:13:10 has intense ambitions to win the AI race. As I talk with people in the private sector,
01:13:16 it's clear we're in a global AI arms race. Our adversaries are innovating quickly, and it's
01:13:23 critical that we do not restrict the capabilities of American business. I'd like to just direct this
01:13:29 to the entire panel. How can we ensure America's leadership in AI, and what government actions
01:13:38 could jeopardize this? Start with anybody who wants to answer.
01:13:41 Thank you so much for the question. This is a really critically important point for me. I think
01:13:55 continuing to stay ahead of our adversaries on technology requires both investment in our talent
01:14:05 and workforce. I just took my 15-year-old son on college visits, and I can tell you it's actually
01:14:12 very difficult to be able to get into an engineering university today. I think that
01:14:15 there's an unprecedented demand for people wanting to study AI and wanting to study cybersecurity
01:14:22 and wanting to study other types of engineering that are being left out of the workforce that are
01:14:27 at the college stage. They get intimidated by software engineering. And being able to make
01:14:33 that a part of the high school curriculum leading into college I think will also help in creating
01:14:39 more educational opportunities for individuals wanting to be able to get into the workforce
01:14:43 and learn those skills, not just at the college age but also going forward as they are progressing
01:14:49 through their careers. In particular, investing in companies and making sure that we are actually
01:14:55 hiring companies that have the best talent is another component. Those companies themselves
01:15:00 can recruit the best talent. They provide entrepreneurial worlds that allow individuals
01:15:07 to be able to create and solve problems in settings that are fun environments to be able to do that.
01:15:14 And I think actually making a concerted effort through organizations like the Silicon
01:15:18 Valley Innovation Program to hire the right companies to be able to solve some of our
01:15:23 massive government problems is an important component of staying ahead.
01:15:28 Thank you very much. Anybody else like to say anything?
01:15:31 I would say that encouraging, whether it's via procurement, other incentives,
01:15:38 responsible development of tools and proper use of those tools is key. And making sure that we're
01:15:44 not simply trying to barrel ahead with reckless development or reckless applications. That's
01:15:49 not going to yield good results. As I said before, winning the AI arms race doesn't just mean the
01:15:55 fastest and broadest buildup. It means the most efficient and effective one. It's also, I think,
01:16:01 a matter of looking at our values. When we look at countries like China, Russia, etc., it is a
01:16:09 deployment of AI that is draconian in nature and that has supported those authoritarian regimes.
01:16:15 So I think it's important that we be not just a global leader in innovation, but a global leader
01:16:19 in values in this space and making sure that when we're promoting the use of AI, we're promoting
01:16:24 a use that is in conjunction with democratic values.
01:16:28 Thank you very much. Mr. Sikorski, China limits its citizens' access to information through
01:16:36 censorship, and I assume they'll continue these restrictions on AI. How does America's freedom
01:16:42 and the capitalist economy help to attract global investment around AI?
01:16:50 Yeah, I think that's a great question. I think that one way that we do is just by that fact,
01:16:57 we're a little bit more open with regards to how we're able to innovate and in what regard
01:17:02 we're able to innovate. I think as sort of putting that together with your previous question,
01:17:08 I think about what are the high-risk AI models we're building that are impactful to our citizens,
01:17:15 things like maybe somebody applying for credit or for a university, right? Those kinds of things
01:17:21 are high-risk and maybe something we need to hunker down on as far as, you know,
01:17:26 how are they being built up versus on the cybersecurity side, I think the risk is a lot
01:17:31 different and therefore like really pouring in a ton of innovation, which we're doing in industry
01:17:38 and we see organizations like CISA doing. And so us coming together with the government to really
01:17:42 iterate fast and thinking about the charge of how do we sift through the noise, which I've talked
01:17:47 about a few times, of all these security alerts and actually make sense of it to find, you know,
01:17:53 the next big attack. Thank you, Mr. Chairman. I yield back and thank you all again. The gentleman
01:17:59 yields. Now, as chairman, I'll take a privilege here just a second and shamelessly plug the academic
01:18:04 institutions in Tennessee. Vanderbilt University just hired
01:18:09 General Nakasone to come lead cyber, and the University of Memphis and University of Tennessee
01:18:16 are doing some exceptional research with Oak Ridge and Redstone, and of course Tennessee Tech,
01:18:22 one of our engineering programs. Your son should check that one out too; it's outstanding. They go
01:18:27 out to high schools and middle schools, and they're starting cyber at a very early age. Great, great
01:18:32 schools in Tennessee. Sorry, I just had to do that, guys. I now recognize Mr. Magaziner for
01:18:36 five minutes of questioning. He snuck in on you. I'm sorry, but he's, Mr. Thanedar, I'm sorry,
01:18:45 I looked at the name plate. Well, my name is Shri Thanedar, I represent Michigan and Detroit,
01:18:58 and my question is for Mr. Sikorski. You mentioned in your testimony the need to secure
01:19:07 every step of the AI app development life cycle. As we saw with the
01:19:14 SolarWinds hack, a supply chain vulnerability can have far-reaching consequences.
01:19:25 Can you speak in more details about what steps Palo Alto Networks
01:19:29 takes to minimize supply chain vulnerabilities? Yeah, that's a great question. Actually,
01:19:35 I was heavily involved and actually led the team that reverse engineered the SolarWinds backdoor,
01:19:40 which was pretty exciting to get to brief Homeland Security about that as we were unboxing it. I
01:19:47 think about the, you know, that is one of the biggest concerns we have when we think about
01:19:52 cyber threats is the supply chain attack. There have been numerous other examples since
01:19:58 SolarWinds, which is now a few years in the past, where our adversaries are able to get into that
01:20:03 supply chain, deploy code, and then it gets shipped out to customers, which then essentially gives
01:20:09 them a backdoor into networks. At Palo Alto Networks, we're really focused on thinking about
01:20:16 how to eliminate that as you're building your code and as you're shipping out your products.
01:20:21 So, as I mentioned, there are multiple different tiers that we're thinking about when it comes to
01:20:24 AI security, but one of the biggest is absolutely supply chain, because people are moving very quickly to
01:20:29 develop this. They're pulling in technologies that are out there on the internet or otherwise,
01:20:34 and that makes a really good opportunity for an adversary to slip something in. And so,
01:20:39 we adopt technologies, and our AI is really focused on eliminating those threats.
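One concrete control of the supply-chain kind described here is refusing to build with any third-party artifact whose digest has drifted from a pinned value. A minimal sketch, assuming a hypothetical JSON lockfile that maps artifact paths to SHA-256 digests; this is illustrative, not Palo Alto Networks' actual tooling.

```python
# Hedged sketch: verify pinned SHA-256 digests before artifacts enter a build.
import hashlib
import json
import sys

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_lockfile(lockfile: str) -> bool:
    """Fail if any artifact's digest drifts from its pin (possible tampering)."""
    with open(lockfile) as f:
        pins = json.load(f)  # e.g. {"vendor/libfoo-1.2.tar.gz": "ab12..."}
    ok = True
    for artifact, expected in pins.items():
        actual = sha256_of(artifact)
        if actual != expected:
            print(f"digest mismatch for {artifact}: {actual} != {expected}")
            ok = False
    return ok

if __name__ == "__main__":
    # "deps.lock.json" is a hypothetical lockfile name.
    sys.exit(0 if verify_lockfile("deps.lock.json") else 1)
```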
01:20:43 Thank you. And can you elaborate on what shortfalls the AI industry has demonstrated
01:20:50 when it comes to securing AI app development life cycles?
01:20:55 I'm sorry, I didn't catch that. Can you elaborate on what shortfalls the AI industry
01:21:03 has demonstrated when it comes to securing AI app development life cycles?
01:21:09 Yeah, I think one of the things I think of is, you know, when we're rushing out new technologies,
01:21:18 we're often not taking the time because, you know, often what happens is there's companies
01:21:23 that are pitted against each other; whoever can get this AI technology out to their customers first
01:21:28 will win the industry. Like, that's what we're talking about, because of how fast AI development
01:21:32 is moving, and we've all talked about how much it's going to change our lives over time,
01:21:36 and it's inevitable. And I think that when you do that, people end up rushing, doing things,
01:21:43 and cutting corners to get those technologies out, not thinking about security. Shocker,
01:21:49 that's the thing we dealt with with a lot of innovation over time. And I think that making
01:21:54 sure that we're actually putting in mechanisms to think through what people's security
01:22:00 posture is. Are they doing the right things as they're shipping out applications? Because to
01:22:05 a large extent, software development in our country is a national security issue
01:22:10 when we talk about the supply chain. And this is a question for the whole panel.
01:22:14 The world has not yet seen a major AI-driven cyber attack on the scale of attacks like
01:22:24 NotPetya or WannaCry. What is the likelihood that we see an attack on this scale,
01:22:36 or worse, in the next five years? Anybody. I'll take that one as well. You know,
01:22:46 as a cybersecurity guy. So one thing I think about is that is an absolute threat and huge
01:22:53 concern I have, because we saw ransomware spread like wildfire and take down systems with NotPetya,
01:23:00 like you mentioned. The shipping industry was impacted by that. And I remember seeing
01:23:05 grocery stores for one of the big chains completely knocked out. People couldn't even
01:23:10 buy groceries. And then I also think about my experience with SolarWinds and the fact that,
01:23:15 just imagine that Russia was using AI in that attack. They would have been able to focus more
01:23:23 on getting their hooks into systems for which the SolarWinds attack gave them access. And if they
01:23:29 could have done that in an automated way, rather than using their human army, it would have been
01:23:34 much more of an efficient attack. All right. Anybody else? Thank you.
01:23:38 It's a serious challenge because it's a type of scenario where AI can be the weapon or it could
01:23:45 be the target. So as we've discussed a bit here, it might be something that's used to facilitate
01:23:50 a phishing attempt. You could use something like a deepfake to augment a phishing attempt,
01:23:55 maybe instead of just sending an email, then you get a fake call, voice message, or even a video
01:24:01 saying, "Oh, open up that email I just sent you," that makes someone less suspicious and helps
01:24:05 facilitate that. It's also potentially a target point. Critical infrastructure, they use lots of
01:24:10 AI systems in ways that might not at all be related to national security. But if there's
01:24:15 a vulnerability there, there's some sort of manipulation of AI systems, then that could
01:24:20 also be a vulnerability point. Perhaps how we might use AI to distribute power along the grid,
01:24:27 or how we use it to check people coming into a stadium or large-scale event; if those AI systems
01:24:33 and the algorithms behind them were a target of an attack or subject to manipulation that could
01:24:38 raise a host of risks. Thank you so much. And Mr. Chair, I'll take my own seat next time.
01:24:43 Sorry for the confusion. Yeah, they got it. Poisoned the data there. Thank you. And sorry,
01:24:51 I got your name wrong. I now recognize the gentleman from Texas, Mr. Pfluger, for five
01:24:56 minutes of questions. Thank you, Mr. Chairman. I think it's an important discussion. I'm glad
01:25:00 we're having it. Thanks to the witnesses for being here. The public-private partnership that we have
01:25:06 throughout the United States between government, you know, including Department of Homeland
01:25:10 Security and other agencies, and then private industry, including, and I think importantly,
01:25:14 critical infrastructure, I don't know that it's ever been as important as it is today. And so I
01:25:20 thank the witnesses for being here because I think the goal is for us to be able to keep up.
01:25:25 One of the ways that we can keep up and one of the ways that we can continue to train the next
01:25:29 generation is actually happening in my hometown at Angelo State University. They are a cyber center
01:25:34 of excellence. They are a Hispanic-serving institution. I think that's very important.
01:25:37 It's a very rural area, but they also have a very strong focus in addition to just general
01:25:44 cyber issues on AI. In San Angelo, we have Goodfellow Air Force Base, which is a joint
01:25:52 intelligence training base. And so you pair these up together, you've got university work being done,
01:25:57 you have the military that trains our intel professionals, the majority of them train
01:26:02 at that location. And so I want to put a plug in there for that. And I may ask some questions on
01:26:06 that, but let me go to Mr. Sikorski. In your testimony, you talk about how adversaries are
01:26:13 leveraging AI. And I'd just kind of like to get your take: whether it's phishing or some other older
01:26:18 technique or whether it's a new technique, maybe talk to me about how you see adversaries
01:26:23 increasing the scope, the scale, and actually the threat using AI.
01:26:28 Yeah, that's a great question, Congressman. I think that, you know, it goes to the question
01:26:33 earlier, right? Like the what if, if some of the attacks we've seen in the past leveraged AI
01:26:38 in a large-scale attack. I also think about the evolution, right? So the first thing: the
01:26:46 attackers, we're monitoring them; you know, one of the things Unit 42 does is threat
01:26:50 intelligence, or monitoring the dark web. We're seeing what they're talking about in their forums,
01:26:55 what they're selling to each other, the access to networks, but also talking about how to leverage
01:27:01 this technology to create really efficient attacks. The big focus so far has been on
01:27:06 social engineering attacks, which means things like phishing, which we talked about,
01:27:10 gets you to click on something you wouldn't otherwise. And then also manipulating you to get past
01:27:15 multi-factor authentication, you know, where you need that extra token to log in.
01:27:19 They're really focusing there as well. Where we start to see them, you know, poking around is
01:27:25 using AI to be able to do better reconnaissance on the networks they're attacking to know,
01:27:31 you know, what holes are in networks across the world. And then also starting, they're starting
01:27:36 to wade into how can they develop malware efficiently and variations of it so that
01:27:42 it's not the same attack you see over and over again, which goes back to the point of how do
01:27:47 you fight against that, which is you need to develop technologies that are really efficient
01:27:51 using AI to see those variations. I want to ask all of you in the last two minutes to speak to
01:27:58 this model that Angelo State University is using, where they're training students, and those
01:28:04 students may go into the NSA, they may go into the military, they may go, you know, in the private
01:28:10 sector. What should Angelo State and other universities be doing? We'll start with Mr.
01:28:14 Laperruque and then go down. We've got about a minute and a half. I mean, I'm not a hard science person,
01:28:21 so I probably have less to add than my colleagues, but I would just say I think I found in my field
01:28:25 it's invaluable for people in the policy space like me to learn from people who understand the
01:28:30 tech and are developing the tech, and for them, in turn, to learn from people who are
01:28:34 experienced in policy, experienced in protecting civil rights, civil liberties, and values.
01:28:40 You know, it helps with sort of how we can translate across to each other. It also helps as
01:28:43 you're designing these systems to think about what's the best way to do it and how we can
01:28:48 actually do it in a way that's going to promote what we care about as a society. Thank you. Mr.
01:28:52 Amlani? First of all, I think there's actually a vocabulary around cybersecurity that
01:28:57 is a very important component to be able to educate our youth on. Everything from spear phishing,
01:29:01 you know, you mentioned all the other types of attacks. I think these are important
01:29:07 things to be able to teach people about to make sure that they themselves are not hacked,
01:29:10 but also so they understand what attacks are actually happening and how to guard against them.
01:29:15 Identity is a big, big component of all cyber attacks. About 91 percent of all cyber
01:29:19 attacks actually originate with a credential that's been stolen, that's been hacked, that's
01:29:25 been compromised in some way, shape, or form, and then malware gets implanted within a system that
01:29:30 can then be activated afterwards on an ongoing basis. And so having better identification
01:29:34 mechanisms to make sure that individuals don't have to change their password every day,
01:29:39 they don't have to have multiple things that are very hard to remember. Would you say that
01:29:42 the universities should be researching this, that they should be doing it in maybe even a
01:29:46 classified setting, that they should be working on these types of techniques and partnering with
01:29:51 government agencies as well? Government agencies as well as commercial sector. Okay, very good.
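As one example of the credential-hardening Mr. Amlani alludes to, a minimal standard-library sketch of RFC 6238 time-based one-time passwords, the "extra token" mentioned earlier in the hearing. The shared secret and the clock-skew window are hypothetical; real deployments derive the secret from a provisioned, base32-encoded seed.

```python
# Hedged sketch: RFC 6238 TOTP generation and verification, stdlib only.
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, at=None, step: int = 30, digits: int = 6) -> str:
    counter = int((time.time() if at is None else at) // step)
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify(secret: bytes, submitted: str, window: int = 1, step: int = 30) -> bool:
    """Accept codes from adjacent time steps to tolerate clock skew."""
    now = time.time()
    return any(hmac.compare_digest(totp(secret, now + k * step), submitted)
               for k in range(-window, window + 1))

secret = b"hypothetical-shared-secret"
code = totp(secret)
print(code, verify(secret, code))  # e.g. "492039 True"
```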
01:29:56 My time has expired. I'm sorry, Mr. Demmer, I didn't get a chance to talk to you, but
01:30:01 I yield back. The gentleman yields, Mr. Suozzi. I think Mr. Menendez has asked that you go ahead
01:30:08 of him, and so you are recognized for five minutes. Thank you, Mr. Chairman. Thank you
01:30:12 so much for doing this hearing. I'm very interested in the idea of you doing some
01:30:15 sort of off-site retreat. We could delve into this in more depth. Thank you to the witnesses.
01:30:19 When I listen to you and I think more and more about AI, the more terrified I become,
01:30:25 quite frankly. I know there's great opportunities, but I'm scared. In 1974, I was in seventh grade,
01:30:34 and I remember Sister Ruth at St. Patrick's grade school saying, you know, technology's moving so
01:30:40 fast these days that we don't have a chance to figure out how it's affecting us, how it's
01:30:45 affecting our values, how it's affecting our families, how it's affecting our society,
01:30:50 how it's affecting – it's just moving too fast. And that was, you know, because of the space race
01:30:54 and electronics. Atari had just come out with Pong in 1972 or something. So, you know, think about
01:31:00 how fast everything's moving now. And, you know, think of all the impacts we've seen from social
01:31:05 media on our young people, where these polls come out and say, you know, 35 percent of kids are
01:31:11 patriotic, whereas, you know, for boomers like me, it's 70-something percent. Nobody believes in any
01:31:17 religion anymore, any institutions, and we have all these kids with mental health issues related
01:31:23 to their self-esteem. So everything's moving so quickly, and I'm confident that you're going to
01:31:29 figure out how to protect, as Mr. Sikorski talked about; he represents 90 of the Fortune 500.
01:31:35 And we're going to fight our way through to protect our businesses, to make sure that they're
01:31:40 looking out for their security and interest. And hopefully we'll think of more things with
01:31:47 our government. But I'm worried about our kids. I'm worried about our senior citizens
01:31:52 getting scammed. I'm worried about people using our voices to say things we didn't say.
01:31:58 I'm worried about our society that's so divided and our foreign adversaries,
01:32:02 Chinese Communist Party, Russia, Iran, trying to divide us more by using social media and our
01:32:08 freedom, and certainly using, I think, AI and deepfakes to try and divide us even more.
01:32:16 So I'm worried about this divisiveness. I'm worried about fraud. So I'm going to give you
01:32:24 the rest of my time, and I'm going to ask each one of you. You talked about values, Mr. Laperruque.
01:32:28 I'm worried about the values in our country, just like people that believe in our values. Forget
01:32:32 about the fact that there are other countries that don't believe in our values at all. I'm worried
01:32:36 about our own country and our values and promoting those values. So each of you, just give me 30
01:32:42 seconds each, if you can. What's the number one thing on the big picture you think we need to be
01:32:47 focused on right now to address not the positive parts of AI, but the scary parts of AI? So what's
01:32:56 number one on your mind? Mr. Laperruque, you go first. I would say it has to be a comprehensive
01:33:02 approach from the creation of systems to input of data and inputting in proper situations to
01:33:08 retrieval and reuse of results. And that's something that there have to be a lot of factors
01:33:13 to bring together in the national security space because it's so often built in secrecy.
01:33:18 That means we have to find mechanisms for transparency and oversight because you don't
01:33:22 typically have that sort of light that you can shine, as you potentially can on private-sector
01:33:26 business practices or even other parts of government. So I think we have to find ways to
01:33:30 promote that oversight, making sure that we're upholding those principles for responsible use,
01:33:35 even when you don't always have the same level of public insight. So oversight to watch the input
01:33:40 of data? Oversight and everything from procurement to what data is being input to how you're using
01:33:46 data, what kind of human review and cooperation you have, how much you rely on it. It has to be.
01:33:51 That sounds like an awfully big effort. Okay. Mr. Amlani?
01:33:54 I mean, as a father of three children, all of whom use digital tools regularly, I think first off,
01:34:03 we're placing a large responsibility on parents to be able to monitor the digital communications
01:34:08 of our children. There aren't the age verification mechanisms currently today to be able to provide
01:34:15 a gateway to make sure that the right people are accessing the right systems at any point in time.
01:34:20 I think it goes back to the identity layer of the internet that I mentioned before.
01:34:23 As you mentioned, there's all kinds of online bullying, extremism, recruiting, other things
01:34:30 like that that are happening online. It's all being done not just in the dark web, but actually
01:34:34 out in the open. It's because we don't have the ability to be able to properly recognize
01:34:39 who people are online. That creates difficulties for making sure that we can
01:34:46 actually have stronger values enforced and allow parents to be empowered.
01:34:51 Okay. Better IDing people. Mr. Sikorski?
01:34:55 Yeah, I agree with both those takes so far. But the other one that I would consider and put out
01:34:59 there is the education piece. I talked heavily already about cyber education and the workforce
01:35:04 that's going to defend the nation next as being paramount. But there's also an education piece
01:35:09 for the rest of society when it comes to security and AI and disinformation. Knowing that what
01:35:18 you're seeing on your phone may or may not be real. In an age where people are getting
01:35:23 their news by just scrolling through their phone second after second, I think that's something that
01:35:28 really needs to be considered. And then how do we eliminate those kinds of things that are not real?
01:35:33 Mr. Chairman, can we let Mr. Demmer answer? He hasn't been able to answer a question for a while.
01:35:37 Go ahead, Mr. Demmer.
01:35:38 Thank you for the question, Congressman. I agree with you on the societal and technological
01:35:43 advancements creating some downstream impacts that are unintended. I personally believe that
01:35:51 technology advancement that is happening today creates a promising future for all workers to
01:35:59 be able to upskill, have more fulfillment in their work and to be enabled with these tools
01:36:03 and technologies. But it all starts with the critical data. If we don't have good data going
01:36:08 into these systems, then garbage in, garbage out. Thank you. Thank you, Mr. Chairman.
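A minimal sketch of the "garbage in, garbage out" gate Mr. Demmer describes: validating records before they reach a training pipeline. The schema, field names, and physical bounds are hypothetical, loosely styled after infrastructure-inspection data rather than any actual Gecko Robotics format.

```python
# Hedged sketch: reject implausible records before model training.
def validate_reading(record: dict) -> list:
    """Return a list of problems; an empty list means the record is usable."""
    problems = []
    if not record.get("asset_id"):
        problems.append("missing asset_id")
    thickness = record.get("wall_thickness_mm")
    if thickness is None:
        problems.append("missing wall_thickness_mm")
    elif not 0.0 < thickness < 500.0:          # physically implausible values
        problems.append(f"out-of-range thickness: {thickness}")
    if record.get("timestamp") is None:
        problems.append("missing timestamp")
    return problems

readings = [
    {"asset_id": "boiler-7", "wall_thickness_mm": 12.4, "timestamp": 1716400000},
    {"asset_id": "", "wall_thickness_mm": -3.0, "timestamp": None},  # garbage
]
clean = [r for r in readings if not validate_reading(r)]
print(f"kept {len(clean)} of {len(readings)} records")  # kept 1 of 2 records
```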
01:36:14 The gentleman yields. I now recognize the chairman of the Cyber Subcommittee, Mr. Garbarino,
01:36:21 for five minutes of questioning.
01:36:23 Thank you, Mr. Chairman. Thank you all the witnesses for being here.
01:36:25 My colleague just before, he was talking about 1974 and Sister Ruth,
01:36:29 and I was like, where was I? And I was like, oh, wait, I wasn't alive. 1984, though.
01:36:33 Yeah. Mr. Demmer, recently, one of our biggest concerns has been the prepositioning of Chinese
01:36:46 state actors in U.S. critical infrastructure, meaning they are postured for a cyber attack if
01:36:51 conflict with the United States were to arise. The U.S. intelligence community has said that AI has
01:36:56 been helping them detect this activity, given it is much more challenging to decipher than
01:37:01 traditional tactics that involve malware injection. How have you been thinking about this challenge?
01:37:07 Absolutely. So the threats are dual. There's the cyber threat that, I think,
01:37:13 you know, others on this panel of witnesses can best answer. On the physical
01:37:20 infrastructure, specifically our critical assets, there is vulnerability. And we've seen that.
01:37:25 You know, Gecko is a proven partner for the U.S. industrial base, both public and private sector.
01:37:31 And we need to ensure, you know, energy security, you know, critical infrastructure like roadways,
01:37:37 bridges, dams, these are all susceptible to physical attacks. And ultimately, you know,
01:37:44 creating wins for the industrial base enables us to fight some of these other existential threats
01:37:49 as well. Thank you. I appreciate it. Mr. Sikorski, I understand that you teach cyber
01:37:55 security at Columbia University, which undoubtedly means that you have a front-row seat to our
01:38:00 nation's up-and-coming cyber talent. In your view, what does the government need to do
01:38:08 to bring more, to bring sufficient numbers of people into the cyber workforce? Yeah,
01:38:13 it's a great question. I think back to my experience, right, when I was in school,
01:38:18 it wasn't until I was a senior in college in New York City and 9/11 happened that I was like,
01:38:24 I want to go work for the NSA. Like, that's what I want to go do. But I didn't really think
01:38:30 about that before. It was things like video games and stuff like that, right? And I think getting
01:38:35 people excited earlier on is a really good place that we can focus, right, of like, there's cyber
01:38:43 security hacking clubs, like at universities and high schools, and this gets people really excited
01:38:48 to gamify it and play along. So I think, you know, while we look at the workforce gap, as far as
01:38:55 Palo Alto Networks goes, I already mentioned our workforce capability, where we have
01:39:00 the Unit 42 Academy; we're bringing people in who don't maybe have the hands-on experience,
01:39:06 we're giving it to them with on the job training. And then I also think about government programs,
01:39:11 like I was in when I went to the NSA, I was in a technical development program there,
01:39:16 where I got ramped up on all the things that I didn't learn during my education previous to that.
01:39:21 So those types of programs that are feeders, I think are really powerful in getting people to
01:39:26 level up the skills that maybe they didn't get during college. So you actually, you know,
01:39:30 pretty much already answered my second question. Well, not technically, not all the
01:39:35 way. You have these clubs, you have these feeder organizations. What's Congress's role in further
01:39:43 improving those? Or do we have a role? I mean, there are over half a million
01:39:48 open cyber jobs in the US. I mean, that's what keeps me up at night, you
01:39:55 know, that we don't have the workforce to defend against the cyber attacks. So, and AI can only
01:39:59 bring us so far, we still need that human element. So do we have a role? And what is it, if
01:40:05 you've thought that far? Yeah, I think we absolutely do. I think there's an ability to create these
01:40:11 types of programs, right, where it makes it really easy for people to apply and get into,
01:40:16 to the point made earlier about, hey, it's hard to get into schools that have these programs
01:40:24 available. And I think we often think that, oh, well, it needs to be a very specific cyber
01:40:30 program that they're doing. Some of it is they can learn those types of skills on the job when
01:40:34 they get in. And it's more about building out the broad base of technical capability
01:40:39 in our workforce. And I think that that's, you know, one great area. I do think that there's
01:40:45 a lot of government agencies like CISA that have information out there where people can learn and
01:40:50 train up. I think there's a lot of virtual education kinds of things going on that are
01:40:54 very powerful. So I think just thinking about how to draw in that workforce, and some of that
01:40:59 is even earlier on, right. So we're talking sometimes about giving people skills. And I remember
01:41:03 when I taught the FBI, it was like, all of a sudden, these people were cyber agents, and they
01:41:08 had none of the background in cyber, and no technical computer science background. And it was
01:41:13 really challenging. So it's not just a snap your fingers, train them up in a month. It starts
01:41:17 earlier. Yeah, and I think I'm out of time. But companies, I think, need to start focusing on
01:41:22 skills-based hiring instead of degree-based hiring. I think that'll help too. Chairman, I yield back.
01:41:27 The gentleman yields. I now recognize the congressman who represents Palo Alto, I think.
01:41:34 Mr. Swalwell, five minutes. Great. And Chair, thanks for holding this hearing. It's so important.
01:41:40 Absolutely. And I think we're at our best as a committee when we're taking on issues like this.
01:41:47 And I've enjoyed working with the gentleman from Long Island, Mr. Garbarino on the cyber committee.
01:41:52 I think we're doing a lot of good work there. Mr. Laperruque, I was hoping to talk to you a little bit
01:41:57 about something that Californians are very concerned about and creatives everywhere.
01:42:05 You know, the entertainment industry is the second largest jobs engine in California. And it's not
01:42:11 just folks who are on screen. It's folks who are writers, editors, a part of the production teams
01:42:19 off screen. And AI, certainly, it's the future. There's no ignoring it. There's no putting it
01:42:25 back in the bottle. It's the future. And so we have to embrace it. We have to shape it,
01:42:30 put guardrails on it and contours around it. But when it comes to the creative community,
01:42:38 you know, the example over the weekend of what happened to Scarlett Johansson
01:42:42 with her voice essentially being stolen from her for an AI product. What should we be thinking
01:42:49 about to make sure that we protect artists and creators from this, but still, as I said,
01:42:58 embrace AI and recognize that this is where we're going?
01:43:03 Well, I think it's going to be important that we find ways to sort of try to be proactive
01:43:08 and anticipating what people's rights are going to need to be. I mean, a situation like that,
01:43:13 I mean, you know, probably not something that was even imagined or contemplated when the movie
01:43:19 where she played an AI came out, I think four or five years ago. And this is something
01:43:23 that's come up a lot, I think, in the recent writers' strike and actors' strike:
01:43:28 how do we build in those protections now, not just for how AI is being used right now in
01:43:33 this industry, but also how's it going to be used in five years? How's it going to be used in 10
01:43:36 years? So, you know, looking at workers' rights is a little outside of my field, but from just
01:43:42 the technology standpoint, it moves so fast. I think it's important to be proactive and thinking
01:43:47 about not just current risks and factors to care about, but what do we need to care about down the
01:43:52 line that we might not be ready for when it comes up. And when you talk to creatives, they're not
01:43:57 opposed to AI, and that's such a basic hot take, which is, oh, they oppose AI. They don't oppose
01:44:03 AI. They just want rights, and they want, you know, and they're willing to engage around their
01:44:09 likeness and their voices, but they should be compensated. And the majority of people who are
01:44:14 writers and actors are people you've never heard of, but this is their livelihood. And in California,
01:44:21 we're especially sensitive to that. I wanted to ask Mr. Amlani, because we're in the same
01:44:27 backyard in the Bay Area, and the chairman alluded, you know, to how our tech culture has created,
01:44:34 you know, so many opportunities in the Bay Area. But I do worry about with AI, and I have a
01:44:42 community in my district called Pleasanton. It's one of the wealthiest communities in America,
01:44:46 and you've heard of it. I have other places in my district, like San Lorenzo, and Hayward,
01:44:53 and Union City, and they have some of the poorest communities in the country with schools that don't
01:45:00 have enough resources. And those kids have dreams as big as kids in Pleasanton. And so I just fear
01:45:08 that at, you know, a child's earliest days in schools, that there's going to be two classes
01:45:15 created, those who are given access to AI in their classrooms, and those who are not. And so what can
01:45:22 the private sector do, because you often really have some of the best solutions out there, to partner
01:45:29 with school districts to make sure that you're imparting your knowledge and skills on places
01:45:35 that need it, while also recognizing that you're going to need to recruit talent down the
01:45:40 track as well. Sure, thank you so much for the comments and questions. Congressman, obviously
01:45:50 this is a pretty personal issue with me, but I think with regards to actually allowing people to
01:45:55 have access to the technology, and in particular, it's interesting the way that AI is actually
01:46:01 democratizing itself, and it's making itself available to everybody. It's as much of a concern
01:46:08 to make sure that actually everyone has access to it and is actually able to have these tools,
01:46:14 but also people that have gone through large universities and master's degrees and PhDs,
01:46:20 now a lot of that content and knowledge is available at people's fingertips who have
01:46:26 never gone through any of those programs. And so, you know, with different AI tools that are now
01:46:32 available at people's fingertips, you can now code, you can now write apps, you can now create
01:46:36 content. You know, I've got my 12-year-old creating music right now. This type of democratization of
01:46:42 the AI capabilities is both an opportunity, but also a massive threat. It really does upskill
01:46:49 many cyber criminals around the globe to be able to attack systems, people that are not as well-off
01:46:56 potentially and would love to have the ability to be able to create malware that could potentially
01:47:01 create a ransom payment. And so those types of opportunities to be able to educate the youth
01:47:06 and make sure that they know how to use it responsibly and for the right aspects are
01:47:11 something that I think our society needs to embrace and do a lot more of. Great. Thanks.
01:47:16 Thanks again, Chairman. The gentleman yields. I now recognize Mr. D'Esposito, the gentleman from
01:47:22 New York, for five minutes of questioning. Well, thank you, Mr. Chairman, and thank you for taking
01:47:26 the committee in this direction. I think that it is an issue that really affects every corner of
01:47:32 the United States of America and obviously our world and has real promises for the future.
01:47:37 Just last week during Police Week, I chaired a joint Emergency Management and Technology Subcommittee
01:47:44 hearing with the Law Enforcement and Intelligence Subcommittee about the use of drones in
01:47:50 the first responder world. And we heard a lot about the expanding field and how law enforcement
01:47:56 agencies are utilizing drones to assist them in doing their job. And I have to say, as someone
01:48:01 who spent a career in the NYPD, it was good to hear and promising that they're embracing technology
01:48:07 about how to handle the issues that are plaguing so many cities across this country. Obviously,
01:48:14 as we embrace technology and as the field expands, we meet challenges and we find challenges. So
01:48:21 listening to all of you speak about the power of AI to assist the United States from attacks
01:48:28 from our enemies, it seems that there may be a space for AI in these drones. So generally speaking,
01:48:35 any of you could answer, is AI already being used in drones, either by those
01:48:40 in law enforcement, the government or privately?
01:48:42 Congressman, thank you for the question. Being in a related field, doing wall-climbing
01:48:52 robots primarily, I can say that AI is useful for using these smart tools properly. Everything
01:49:00 from localizing data, ensuring that the data point is synced to the location on a real world asset,
01:49:06 to processing millions of data points, or in this case, visual images. We heard a little bit earlier
01:49:12 about drones being used as well to secure the border. So there are definitely applications here for that.
01:49:18 And you mentioned data. So obviously, drones are utilized by law enforcement to collect that data,
01:49:24 whether it's audio, visual, location data, GPS, among others. So that information that's
01:49:31 collected, we need to make sure that it's collected correctly and obviously kept private.
01:49:38 So is there a role that AI can play in ensuring that this information remains secure and doesn't
01:49:45 give the bad guys access? Absolutely. And I defer to my fellow witnesses here on specific
01:49:52 policy recommendations for cybersecurity. But Gecko collects data, massive amounts of information
01:49:58 on the physical built world. And we take very seriously the responsibility of securing that data,
01:50:04 following requirements like NIST SP 800-171 and SP 800-172 as standards for securing the data,
01:50:12 and also providing training to the entire workforce so that they know how to properly
01:50:16 handle any type of information, including classified information. Any of the other witnesses have
01:50:22 recommendations with regards to that? Yeah, I could just add that securing the models,
01:50:31 and the data that's used to build them, is critical. One of the things we see the most when doing
01:50:36 incident response is that ransomware attacks have actually shifted. They no longer
01:50:42 encrypt files on disk; they just focus on stealing data and then use that to extort victims.
01:50:48 And so securing the crown jewels of data, which is what most entities have, is paramount.
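One concrete illustration of protecting those crown jewels so that exfiltrated files are useless to an extortionist is encryption at rest. The sketch below assumes the Python `cryptography` package and a toy record; it shows the general idea, not any witness's specific product or recommendation.

```python
# Illustrative sketch: encrypt sensitive records at rest so stolen files
# are unreadable without the key (assumes `pip install cryptography`).
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in practice, hold this in a KMS/HSM, not on disk
vault = Fernet(key)

record = b"patient_id=123; diagnosis=..."
ciphertext = vault.encrypt(record)   # this is all an attacker would be able to steal

assert vault.decrypt(ciphertext) == record  # only key holders can recover the data
```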
01:50:54 During the... I'm sorry. Mr. Amlani, did you have something?
01:51:00 Sure. At the Defense Innovation Unit, for the autonomous systems portfolio, part of my role
01:51:11 years ago was actually to manage autonomous systems, including drones. And I think one of the
01:51:15 biggest concerns is actually Chinese-made technology within drones, and making sure that we create a
01:51:20 list of different drones that people could potentially use in law enforcement and other
01:51:25 avenues, so that we have a safe list of drones, which is something that the Defense
01:51:29 Innovation Unit actually did create. So that's actually, you're leading into my next question. It's almost
01:51:34 like we planned this. So part of the conversation that we had last week from the NYPD was the fact
01:51:40 that they currently utilize Chinese technology in their drones, and they're working to eliminate
01:51:45 them from the fleet because of the issues and the concerns that we have. And my colleague from New
01:51:50 York, Ms. Stefanik, has introduced legislation recently that would help law enforcement make
01:51:55 sure that they only purchase American-made technology, American-made drones. But obviously,
01:52:00 those Chinese drones are still in our airspace, still being utilized by first responder and
01:52:06 law enforcement agencies. So just quickly, how can AI help us mitigate the threats that they pose?
01:52:11 There's also a significant amount of work being done by the Defense Innovation Unit and other
01:52:20 agencies on drone mitigation and counter-drone work. So AI used for counter-drone work is also
01:52:27 another way to be able to mitigate those threats. Excellent. My time's expired. Mr. Chairman,
01:52:31 I yield back. The gentleman yields. I now recognize the gentleman from California,
01:52:35 from Los Angeles, Mr. Garcia. Thank you, Mr. Chairman. I appreciate you holding this hearing,
01:52:41 and I want to thank all of our witnesses for being here as well. It's truly amazing how fast
01:52:46 AI has evolved and continues to evolve every day. And it used to be something, of course,
01:52:51 a lot of us would see in movies or read about in science fiction novels, and that's
01:52:56 changed so much. And just the progress of AI in just the last six months to a year has been
01:53:03 startling, but also I think shows the promise of what AI can actually do in our lives and
01:53:08 how we can also improve our lives as Americans and as a world. Like a lot of folks here,
01:53:14 I have concerns as it relates to our own security, our own homeland security challenges we
01:53:19 have here in this country. But I also want to focus on the challenges it poses to our own
01:53:23 democracy and our elections process. A lot of my colleagues have brought that up in the past,
01:53:28 and there's bills to address that, of course, here in the Congress as well. You know, just in the
01:53:33 last 24 hours, DHS issued a warning ahead of the 2024 elections of threats that AI poses to our
01:53:40 very elections. Earlier this year, the World Economic Forum announced that AI threatens and
01:53:45 continues to threaten global democracy, not just here in the US, but across the globe. And this
01:53:50 assessment was actually echoed just last week by our own intelligence community. ODNI
01:53:58 officials have testified in the Senate that since 2016, when Russia first tried to meddle in our
01:54:02 elections, the threats from foreign actors have only grown, as we know today, with
01:54:08 Russia, China, Iran, and others competing to influence not just their own countries, but what's happening
01:54:14 in our own elections here in the US. And that's actually a very real danger to all of us and a concern
01:54:21 for us as well. Now, this past January, we know that voters in New Hampshire were exposed to one
01:54:24 of the most high profile recent cases of AI. We know that there was a robocall, for example,
01:54:30 impersonating President Joe Biden, telling people not to vote. That attempt, of course, was
01:54:34 identified, stopped, and investigated, which I think we're all very grateful for. But we can see
01:54:40 that those types of efforts could cause enormous havoc, especially in elections that were close,
01:54:47 that were targeted in states or in communities across the country. We already know that there's
01:54:51 been unsuccessful and successful attempts to undermine our elections in 2016 and 2020, and
01:54:58 folks are trying to do it again in 2024. And so the rise of AI, deep fakes, and other technology
01:55:03 is very concerning, I think, to the Congress and to all of us. I'm also especially concerned because
01:55:10 one of the advantages in the US is that a lot of our elections are decentralized.
01:55:16 Our elections are not run generally by the federal government. They're run by states,
01:55:21 counties, towns, cities, oftentimes small communities without a lot of resources or the
01:55:27 technology to know how to actually deal with the oncoming wave of AI attempts to actually meddle
01:55:37 in all these elections. And so I'm really concerned about how these smaller agencies,
01:55:40 these city clerks, actually are able to take on AI and have the resources to do so. We know that
01:55:47 there's the new DHS Artificial Intelligence Task Force coming out of Homeland Security, which
01:55:52 I think is really important, but this is a huge responsibility of this committee: how
01:55:56 we provide and get assistance to these clerks and county recorders across the country. Mr. Laperruque,
01:56:05 I know that CDT has done a great deal of work examining some of these risks. What additional
01:56:10 safeguards do you think need to be put in place as it relates to this election piece, and how can
01:56:15 Congress better support all these election organizations and officials across the country
01:56:22 that have a decentralized system, and how are they supposed to get support? That's, I think,
01:56:27 very concerning. As you're emphasizing, the decentralized nature of our elections, which
01:56:32 in some ways provides a lot of advantages, is also a significant challenge when you're facing threats
01:56:38 like this. So we, fortunately, over the last decade have built up a robust network of federal
01:56:43 entities that provide support for election security from things such as cyber threats. I
01:56:48 think we should supplement that by having those same entities serve as means for highlighting AI
01:56:54 threats, whether that's of a specific type of attack or a misinformation campaign that's going
01:56:59 around using that to disseminate information, but also for more general awareness and education that
01:57:05 can be brought from that small set of federal entities and disseminated outwards to that
01:57:11 broad network, trying to educate about, here are the types of AI technologies, here's how they might
01:57:15 be used for everything from trying to use AI to create spam FOIA requests to overload your office,
01:57:22 to building up fake materials about polling information, to using AI for spear phishing.
01:57:27 And I'll say just finally, even though our elections, obviously,
01:57:32 for president or U.S. senator, are federal elections, they rely on data from local towns
01:57:39 and counties and registrars that just send their voting information up through
01:57:43 the process to the state, and then eventually, of course, states will certify elections. And so
01:57:48 the real concern is that you can micro-target, of course, precincts, you can micro-target small
01:57:53 towns and cities with these AI disruptions that can have real impacts in certain states
01:58:00 to presidential elections. And I think that's something that I think we have to really consider
01:58:04 and think about as we move forward and how we get these small towns and city clerks the technology,
01:58:10 but also the education that's needed to take on these deep fakes and AI concerns. So with that,
01:58:16 I yield. Thank you. Gentleman yields. I now recognize the gentlelady from Florida,
01:58:21 Ms. Lee, for five minutes of questioning. Good afternoon, Mr. Sikorsky. I would like to go back
01:58:28 to your testimony where you mentioned that our cyber adversaries are utilizing AI to really
01:58:37 enable some of their malicious behavior, like creating more sophisticated phishing attacks,
01:58:43 finding new attack vectors, enhancing the speed of lateral movements once they do intrude upon
01:58:50 a network. I'm interested in hearing more about how we are using machine learning to build and
01:58:58 enhance our cyber defenses against these kinds of attacks. You mentioned in your written testimony
01:59:04 Precision AI, some other tools, including some revisions to how the SOCs are operating. Would
01:59:10 you elaborate for us on how machine learning can enhance our cyber defenses? Yeah, it's a great
01:59:16 question. And actually, Palo Alto Networks and I have both been involved on this machine
01:59:22 learning and AI journey for over 10 years. While ChatGPT and other technologies like that
01:59:28 have gotten really popular really quickly, we've been leveraging that type of technology for quite
01:59:33 some time, myself specifically on anti-malware technology to be able to detect
01:59:40 malware on a system. And so we've been training models to do that for quite some time.
01:59:47 And that detects variation in what we're seeing, making sure that variants of malware
01:59:53 that come out will be stopped due to that training.
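As a toy illustration of the kind of trained detection being described (not Palo Alto Networks' actual technology), the sketch below fits a classifier on byte n-gram features so that near-variants of a known sample still score as malicious. The samples, labels, and model choice are fabricated for the example; production anti-malware models are far more sophisticated.

```python
# Toy sketch: classify files as malicious/benign from byte n-gram features,
# so small variations of known malware still score as malicious.
# Assumes scikit-learn; training data here is fabricated.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.ensemble import RandomForestClassifier

def featurize(samples: list[bytes]):
    hexed = [s.hex() for s in samples]  # stable text form of the raw bytes
    return HashingVectorizer(analyzer="char", ngram_range=(4, 6),
                             n_features=2**16).transform(hexed)

train = [b"MZ\x90evil_payload_v1", b"MZ\x90evil_payload_v2",
         b"PK\x03benign_doc", b"PK\x03benign_img"]
labels = [1, 1, 0, 0]  # 1 = malicious, 0 = benign (toy labels)

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(featurize(train), labels)
print(model.predict(featurize([b"MZ\x90evil_payload_v3"])))  # unseen variant -> likely [1]
```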
02:00:00 Then there's also the idea of leveraging AI to get rid of the noise. And that's really the more recent evolution that we've been focused on
02:00:07 at Palo Alto Networks, where we're trying to say everybody's inundated with all these tools.
02:00:12 I go back to the SolarWinds example. I did numerous incident responses after that came out,
02:00:20 going onto corporate networks. And one of the big problems those organizations had was that they actually
02:00:25 detected the attack. They detected the malware being dropped on a system. They detected the
02:00:30 lateral movement. But they didn't know it was there because they were flooded with so much noise.
02:00:36 And so what we're doing is we're taking our technology and very much focusing on how to
02:00:40 put these alerts together, reduce the amount of information so that the brains of the analysts,
02:00:46 who by the way are getting burned out by having to look at so much data that makes no sense to them,
02:00:51 instead it gives them a chance to zero in on the attack, figure it all out,
02:00:55 and then actually move the needle.
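To make the noise-reduction idea concrete, here is a minimal sketch, with fabricated alerts, that collapses raw events on the same host within a time window into a single correlated incident for the analyst. This is the general pattern only, not the product logic being described.

```python
# Illustrative sketch: collapse a flood of raw alerts into per-host incidents
# by grouping events on the same host that occur within a 10-minute window.
from collections import defaultdict

WINDOW = 600  # seconds

def correlate(alerts):
    """alerts: list of (timestamp_sec, host, signal) tuples."""
    incidents = defaultdict(list)          # host -> list of incidents (event lists)
    for ts, host, signal in sorted(alerts):
        current = incidents[host][-1] if incidents[host] else None
        if current and ts - current[-1][0] <= WINDOW:
            current.append((ts, signal))   # same story: fold into the open incident
        else:
            incidents[host].append([(ts, signal)])  # new incident for this host
    return incidents

alerts = [(0, "srv1", "malware dropped"), (120, "srv1", "lateral movement"),
          (150, "srv2", "port scan"), (9000, "srv1", "beaconing")]
for host, incs in correlate(alerts).items():
    print(host, "->", len(incs), "incident(s)")  # srv1 -> 2, srv2 -> 1
```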
02:01:00 >> You also mentioned in your testimony a point about the unintended consequences of disclosure. And I'd like to go back to that,
02:01:06 particularly raise a concern that public disclosures that require information about how
02:01:14 we are using AI to train and defend networks, that requiring disclosures of a certain type could
02:01:22 unintentionally create a roadmap for the cyber adversaries and reduce our overall security.
02:01:28 I'm interested if you were a policymaker, how would you balance disclosure requirements
02:01:35 with not alerting our adversaries to the type of information that we don't want them to have?
02:01:41 >> Yeah, that's also a great question. I think it's all about what your end goal is
02:01:48 with respect to the AI that you're trying to get to customers or to protect the network. I think
02:01:54 you've got to think about the risk level involved there and think about the tradeoff, right? The
02:01:59 more that we regulate things and really put a lot of pressure
02:02:05 on oversight, it could slow down the innovation and the protection. I think it's the appropriate
02:02:10 thing to do when we're talking about somebody applying for a home loan or something like that,
02:02:14 thinking about every step of the security process with that. I think when we start to talk about
02:02:19 cybersecurity, we got to focus on what is the data and is the data, the ones and zeros, the malware,
02:02:25 the detections to be able to eliminate attacks and how important that is to make sure that we
02:02:30 continue to make a difference with the technologies that we're building on the cyber war that we're
02:02:36 all out there fighting day in and day out. >> What you mentioned reminds me of the
02:02:41 concept of secure by design. I know it's something that we need to be
02:02:46 contemplating as we analyze internally what to regulate and in what manner. Share with us if
02:02:52 you would a little bit more about secure by design and what that should mean to us as federal
02:02:58 policy makers. >> I think it goes back to the point that AI is here to stay no matter what any of us
02:03:03 do. It's coming and it's sort of like the internet. But we didn't plan security into the internet.
02:03:10 We didn't plan security into all of the applications and everything that was built out.
02:03:14 Instead, what we're stuck doing, and this especially comes up for us as a cybersecurity company, is dealing with the fact that we missed
02:03:21 out on an opportunity to build things in a secure way. And that's where, when it comes to securing AI
02:03:28 by design you think about what are you building, how are you building it, what are you pulling in
02:03:33 from the supply chain into that build, how are you protecting the application as it's running,
02:03:37 how are you protecting the models, the data, everything as it's actually flowing out to
02:03:42 customers or otherwise. And I think that's where a really big focus on building things in a secure
02:03:48 way is really important.
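One small, hedged illustration of building security in: verify that supply-chain artifacts such as model weights match a pinned manifest of digests before they are ever loaded. The file name and digest below are placeholders; real systems would sign and version the manifest itself.

```python
# Illustrative sketch: refuse to load supply-chain artifacts (model weights,
# dependencies) whose SHA-256 digest does not match a pinned manifest.
import hashlib
from pathlib import Path

PINNED = {  # placeholder name/digest for illustration only
    "classifier.weights": "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def sha256(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def load_artifact(path: Path) -> bytes:
    digest = sha256(path)
    if PINNED.get(path.name) != digest:
        raise RuntimeError(f"integrity check failed for {path.name}: {digest}")
    return path.read_bytes()  # only now is it safe to hand to the model loader
```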
02:03:54 >> Thank you, Mr. Chairman. I yield back. >> The gentlelady yields. I now recognize Mr. Menendez for five minutes of questioning. >> Thank you, Chairman, and thank you
02:04:00 Ranking Member for having us here today. Thank you to all the witnesses. I first want to start and
02:04:04 build off some of the questions that my colleagues have asked about training our cybersecurity
02:04:09 workforce. And first want to acknowledge Hudson County Community College which is New Jersey's
02:04:14 eighth congressional district. It is designated as a national center of academic excellence in
02:04:19 cybersecurity by the NSA. And they were recently awarded $600,000 to expand their renowned
02:04:25 cybersecurity program which is getting more women into the cybersecurity field. So incredibly proud
02:04:30 of the work that they're doing. Mr. Sikorsky, thank you for emphasizing the need to educate
02:04:34 and train our cyber workforce. I'm wondering in your experience what are you seeing as the most
02:04:39 prominent barriers to participation in the cyber workforce? >> I think there's a few parts to that.
02:04:48 I think the first is desire: getting people at a much younger age, you know, focused on
02:04:56 and excited about these technologies and wanting to get involved in what's happening.
02:05:02 And it goes back to what we discussed earlier which is how do we make sure they have proper
02:05:06 access and actually are talking about AI and cybersecurity at a very early age. And then I
02:05:12 think back to the point of what's happening in your district is very focused on how do we build
02:05:18 the university system such that they're really feeding the engine of all of these cybersecurity
02:05:24 workforce shortage jobs that we need to actually fill. So I think it's also working with
02:05:30 industry to make sure that they're lining up for the jobs, because, you know, a lot of cybersecurity
02:05:36 companies actually struggle to find talent to hire into these jobs, which go into, you know,
02:05:42 securing networks around the world, collaborating with CISA and other things that we're doing,
02:05:46 and there's just not enough people to pull from. >> Yeah, because across public and private
02:05:50 sectors everyone's looking to enhance their cybersecurity capabilities which includes
02:05:55 adding cybersecurity professionals, whether it's the Port Authority of New York and New Jersey or
02:05:59 whether it's a private entity that is concerned about these issues. Just quickly following
02:06:05 up, you mentioned a desire at a younger age for people to engage in this field. I believe you're
02:06:10 an advisor to Columbia's cyber student organization. Are you seeing anything sort of in that student
02:06:16 demographic that draws them to cyber that maybe we should be sort of using to highlight and amplify
02:06:22 at a younger age? >> Yeah, that's a great question. I think that one thing I think about is the
02:06:29 gamification of it. I think of myself personally, I wanted to be a video game programmer originally
02:06:35 because that was really cool back in the Nintendo days, right, and I had the tech skills for it,
02:06:39 but then once I realized that, you know, cybersecurity is like this good versus evil
02:06:45 kind of situation going on in the real world and it's only going to get bigger, I started to get
02:06:50 really excited, and then there's hacking competitions and, like, you know, driving that
02:06:57 into getting people to participate more because of the fun that can be had and the team building
02:07:02 that can be had working towards that. Many people don't realize that exists out there, but that's what actually
02:07:06 happens. At Columbia University, there's a cybersecurity club where they focus on that,
02:07:11 they go to competitions together, and it rallies them into a common goal, and I think those types
02:07:17 of things are great. >> That's great. I appreciate it. I'm going to shift real quick to Mr. Demmer.
02:07:21 In your written testimony, you touch on how AI can be used to better secure physical infrastructure,
02:07:28 and you specifically note the collection of high-quality, high-quantity data. I also sit
02:07:32 on the Transportation and Infrastructure Committee. We're overseeing the major investment
02:07:37 that's being utilized by the Infrastructure Investment and Jobs Act, which is building
02:07:43 our future infrastructure. I was commissioner at the Port Authority of New York and New Jersey.
02:07:46 One of my favorite projects was Restore the George, where they went through and replaced cables,
02:07:50 and it was a completely intricate project, but necessary. As we think about investing in our
02:07:55 infrastructure so we don't have to make these huge investments of replacing our existing
02:07:59 infrastructure, how could the use of AI and the data collection that you touch on be used to
02:08:04 better upkeep our existing infrastructure? >> Thank you, Congressman. This is something
02:08:11 we care deeply about, of course, protecting the infrastructure we have today, capturing the right
02:08:16 data sets so that we can ensure that they are here for us when we need them, not vulnerable to,
02:08:21 you know, just wear old age or some external threat. But increasingly, we also see the
02:08:27 opportunity with these data sets to help us build more intelligently in the future. So as we bring
02:08:33 new systems online, how do we instrument them in ways where we can do that monitoring and
02:08:39 telemetry of what's going on with that equipment, so that we don't have the failures or the
02:08:44 vulnerabilities, and hopefully lower that cost of maintenance? Because two-thirds of the cost of
02:08:49 critical infrastructure is usually consumed after the initial build. >> That's right. And what type
02:08:55 of dollar amount of investment could the federal government make to quickly scale up this technology
02:08:59 and put it to use? >> So there are, you know, very much technology-ready solutions out
02:09:08 there for national security. And I'd love to follow up with some
02:09:14 guidance on that in terms of programs that could be utilized to help bring those technologies
02:09:19 to the forefront and accelerate their adoption, whether it be investment in hardware and
02:09:25 technologies, as well as, you know, ensuring that the policy side recognizes that not all
02:09:33 solutions are created equal. And today we do a lot of things that seemingly placate us into
02:09:41 thinking that we have a handle on what's going on, but that are actually woefully inadequate in terms of
02:09:46 how we're actually managing and maintaining. >> Sure. Would appreciate continuing this
02:09:50 conversation, but I am woefully out of time. So I yield back. >> The gentleman yields. I now
02:09:55 recognize the gentleman from North Carolina, Mr. Bishop. >> Mr. Chairman, I yield my time to you.
02:09:59 >> Thank you. Quick question on creating a sense of security for the public on AI.
02:10:08 There's all the fear that's out there. And are there requirements that we can place in the system
02:10:14 that would give people a sense of security, kill switches, always having a person in the system?
02:10:19 What are some things that we can do to give the public a sense of security that we're not going
02:10:24 to create, you know, the Terminator or something like that? Anyone? You're smiling. >> I would say
02:10:34 human review and control is one of several factors that's critical for something like this. Again,
02:10:40 I think you need strong principles to ensure responsible use all the way from creation to
02:10:45 what data you're putting into systems and what systems you're using AI for to what data you're
02:10:51 taking out and how you're using it. As you said, you know, for human review, one of those steps
02:10:56 on the outside is that there should be human corroboration of AI results. It shouldn't
02:11:01 just be AI making its own decisions. And we have to know how reliable AI is in certain circumstances.
02:11:08 Sometimes it can provide small bits of insight. Sometimes it can be very reliable. Sometimes it
02:11:14 gives an answer you have to treat with a bit of skepticism, but that maybe still provides a bit
02:11:21 of value. Along those lines, you need not just human review, but specially trained staff. That's why it's so
02:11:26 important for folks to overcome general automation bias. That's the tendency for
02:11:33 individuals to always just assume that automated systems work better than humans. That's
02:11:37 been documented in many cases. But you need to understand what type of biases or what type of
02:11:42 reliability you might want to apply to any given AI system in any situation. >> Mr. Amlani, even
02:11:49 Mr. Laperruque has said some negative things about facial recognition earlier, I think, if I heard it,
02:11:56 maybe not negative, but concerning: the fairness, the use of it against large groups,
02:12:03 populations, law enforcement. What's your thoughts on the reliability of facial recognition?
02:12:11 And since this is your field: I'm aware of a company in my district that
02:12:18 uses the three-dimensional vasculature of the hand, for example, which apparently according
02:12:23 to them hasn't failed where facial recognition has failed. What are your thoughts on that? Because I think
02:12:30 you're right, it sort of begins and ends with being able to make sure the person is the person
02:12:34 in the system to make sure that, so what are your thoughts on that? >> Thank you, Mr. Chairman.
02:12:44 This is actually a really important question. Even going back to your prior question about
02:12:50 building people's confidence in systems and AI, I think it ties to that as well. Fundamentally,
02:12:56 post-9/11, after the terrorist attacks, one of the steps we took is we
02:13:02 actually federalized the workforce and created TSA. Having federal agents at the checkpoints
02:13:08 actually made people feel safer to get back on airplanes again. That used to just be a private
02:13:13 contracted workforce paid for primarily by the airports and the airlines, but having federal
02:13:17 agents there made people feel more comfortable. So these steps are really important to be able
02:13:21 to put into place. People do not feel comfortable right now with identification and authentication
02:13:26 steps that are necessary right now to access systems. Passwords are antiquated. People forget
02:13:31 passwords. The most secure place to actually store passwords, according to all of the major
02:13:35 cybersecurity experts in the field right now, is actually in a personal book that you keep
02:13:41 with you, because nobody overseas is actually trying to steal your personal book of passwords.
02:13:47 They can access your systems, they can steal from centralized places where you store passwords,
02:13:52 but your book of passwords is something that is very difficult for them to get access to.
02:13:56 And so leveraging better authentication mechanisms builds confidence in digital tools
02:14:02 and capabilities. People have become very comfortable with Face ID to be able to actually
02:14:07 secure their systems. People have no confidence in what Ranking Member Thompson mentioned with
02:14:14 regards to CAPTCHAs, right, and other silly systems like one-time passcodes that get sent to your
02:14:22 phone, which are very easily compromised. And so giving people that confidence with facial
02:14:27 matching and facial recognition and facial verification is an important component.
02:14:32 Let me jump in now, because in the time remaining, I have a quick question for Mr.
02:14:36 Sikorsky. Data poisoning: how big of a threat is that for the AI systems that are using that data
02:14:44 to make very quick decisions, particularly in the defense world? And if we need to go somewhere else
02:14:50 to have that conversation, we can postpone it till then. But if you could just real quickly
02:14:54 share a few concerns about that, if you have any. Yeah, I think it goes back to the secure AI by
02:14:59 design, right? As we're building out these applications, how are we securing that
02:15:03 information? How are we securing the models themselves that make the decisions, as well as
02:15:08 the training data that goes there? And there is a lot of research and a lot of thought about what
02:15:16 attackers could do if they can manipulate that data, which would then in turn not even necessitate
02:15:21 an attack against the technology itself. It's an attack against the underlying data with which it's
02:15:26 trained. And that's definitely a concern that needs to be taken into account and addressed as
02:15:32 the technology is being built out. Thank you. I yield.
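As a sketch of one defense against the poisoning scenario just described: record a fingerprint of the training set at curation time and refuse to train if the data has since changed. The examples are fabricated, and real defenses would add provenance tracking and label audits on top of this.

```python
# Illustrative sketch: detect tampering with curated training data by
# comparing a current fingerprint against one recorded at curation time.
import hashlib, json

def fingerprint(examples):
    """Order-independent digest of (text, label) training pairs."""
    digests = sorted(hashlib.sha256(json.dumps(e, sort_keys=True).encode()).hexdigest()
                     for e in examples)
    return hashlib.sha256("".join(digests).encode()).hexdigest()

curated = [{"text": "invoice overdue", "label": "phishing"},
           {"text": "team lunch friday", "label": "benign"}]
expected = fingerprint(curated)  # stored securely when the dataset is approved

poisoned = curated + [{"text": "wire funds now", "label": "benign"}]  # flipped label slipped in
if fingerprint(poisoned) != expected:
    raise SystemExit("training data changed since curation; refusing to train")
```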
02:15:39 And Mr. Goldman, the gentleman from New York, is recognized for five minutes. Thank you, Mr. Chairman. Thank you all for being here.
02:15:44 The 2024 Worldwide Threat Assessment warns that our most sophisticated adversaries, Russia, Iran,
02:15:53 and China, see the upcoming presidential election as an opportunity to sow disinformation,
02:15:58 divide Americans, and undercut our democracy. And many of our law enforcement and intelligence
02:16:05 agencies have confirmed that they are also seeing upcoming threats. Of course, it's not abstract.
02:16:10 We know Russia used social media and cyber to interfere in our election in 2016. We know that
02:16:20 Iranian actors posed as Proud Boys in an online operation aimed at intimidating voters in 2020.
02:16:27 And just recently, we learned that China is experimenting with AI to enhance their
02:16:33 influence operations. Within the Department of Homeland Security, CISA is charged with protecting
02:16:40 the security of our elections. And Mr. Sikorsky, I know you work closely with CISA on some of these
02:16:49 issues. And I'd love just to ask you a relatively open-ended question, which is how is CISA prepared
02:16:56 or how is our government writ large prepared to address the use of AI by foreign adversaries to
02:17:05 undermine and interfere in our elections? That's an excellent question, as we want to have a very
02:17:11 secure election, obviously. And I think CISA is doing a great job with the JCDC, which is
02:17:17 the collaboration with private industry, to work with us, Palo Alto Networks, and many other private
02:17:22 entities on thinking through like what are we actually seeing in the threat landscape. So one
02:17:27 of the things I'm tasked with as running the Threat Intelligence Division at Palo Alto Networks is
02:17:33 how do we take all of the information that we're getting from other private entities, from government
02:17:38 agencies around the world, bring that all together, and then share that back with the government
02:17:44 itself to say this is where we're seeing these threat actors go, right, whether it be what China
02:17:50 is up to today, what Russia is doing in the war in Ukraine, staying on top of those threats and
02:17:55 finding the new ones. For example, we saw, you know, a novel attack where Russia was sending
02:18:02 phishing emails to Ukrainian embassies, and we actually made that discovery and
02:18:09 showed how that actually went down. So that hyper-collaboration is definitely going to move
02:18:14 the needle. One of the biggest threats is deepfakes. We know our intelligence
02:18:22 agencies said that they had spotted Chinese and Iranian deepfakes in 2020 that were not used,
02:18:30 but the FBI has recently identified that recent elections in Slovakia were impacted by the use
02:18:40 of deepfakes. How prepared are we to prohibit or prevent the use of deepfakes that might have a
02:18:51 significant impact on our election? Yeah, so I think what we've seen with AI, it really lowers
02:18:59 the bar for attackers to be able to generate believable communications, whether that's
02:19:04 emails, like I talked about earlier, phishing, whether it be voice or even deepfake technology.
02:19:12 So I think that that lowering of the bar makes it a lot easier to make believable things.
02:19:17 At Palo Alto Networks, we're not hyper-focused on, like, you know,
02:19:23 eliminating deepfake technology, but I think that the impact of inauthentic content is really
02:19:30 concerning and something we need to explore. Anyone else have any insight into this? Yes.
02:19:35 At iProov, we are actually obsessed with detecting deepfakes. That is actually what we do. We use
02:19:42 screen lighting from a cell phone screen or a desktop screen that reflects against live human
02:19:47 skin, calculating 3D geometry, making sure that it's a live human being, including skin translucency,
02:19:52 all simultaneously while you're actually recording a video or doing a biometric match
02:19:58 and verifying that it's a live human being and the right person at the right time. You know,
02:20:02 being there at the creation of the video is an important component: you can then tag the
02:20:09 video and verify that it's in fact not a deepfake.
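iProov's actual technique is proprietary; purely to illustrate the flashing-screen idea in the abstract, this toy sketch emits a random brightness sequence and checks that the tint measured from camera frames correlates with it, something a pre-recorded or synthetic video cannot anticipate. All numbers and thresholds are fabricated.

```python
# Toy sketch of challenge-response liveness: the screen flashes a random
# brightness sequence; reflected light measured from camera frames should
# track it. A replayed or synthetic video cannot anticipate the sequence.
import random

def correlation(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

challenge = [random.random() for _ in range(30)]                         # emitted per frame
live_face = [0.2 + 0.5 * c + random.gauss(0, 0.02) for c in challenge]   # tracks challenge
replayed  = [random.random() for _ in range(30)]                         # cannot track it

for name, measured in [("live", live_face), ("replay", replayed)]:
    verdict = "pass" if correlation(challenge, measured) > 0.8 else "fail"
    print(name, verdict)  # live -> pass; replay -> (almost surely) fail
```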
02:20:16 And are you coordinating with CISA on this? No, we have not been asked by CISA. Mr. Laperruque? Yeah, beyond just the strict cyber risks, I mean,
02:20:22 deepfakes is something that our elections team and misinformation generalists highlight that
02:20:27 one of the risks is the liar's dividend: you know, in addition to misinformation going out
02:20:31 there, when you're evaluating truthful information, someone could
02:20:35 just say, no, that was just a deepfake, no, that was just misinformation. It's not just the initial
02:20:41 falsehood itself; misinformation is trying to create an entire ecosystem of uncertainty. So,
02:20:45 just that's to emphasize why it's such a significant threat and why we need to make
02:20:50 sure that we have clear information and ways of trying to debunk this that are reliable.
02:20:55 Mr. Chairman, I know that you and, oh, he's not here, but you and many of the members on this
02:21:01 committee are concerned about election security. And I would just urge you to encourage some of
02:21:09 your colleagues who are trying to interfere with law enforcement's efforts to coordinate
02:21:16 election security and prevent election interference with the cyber companies who,
02:21:24 through which the adversaries do try to influence. And I hope that we don't see any more members of
02:21:34 the Republican Party trying to cut funding for CISA, and that we work closely with CISA to make
02:21:39 sure that our elections are safe. And I yield back. Mr. Goldman yields. The chair now recognizes
02:21:44 Mr. Crane from Arizona for his five minutes. Thank you, Mr. Chairman. I realize that this
02:21:51 hearing is about utilizing AI to bolster homeland security. I want to bring up an interesting
02:21:59 meeting I just had in my office before coming to the committee. I was with one of the largest tech
02:22:05 companies in the United States and arguably global companies. And they were talking about
02:22:12 major cybersecurity attacks that they're seeing. I was told that they used to see attacks in the
02:22:19 range of tens of thousands a day. They're now seeing attacks to the tune of about 150,000 a day.
02:22:28 Mr. Demmer, Mr. Sikorsky, is that consistent with what you all are seeing and hearing
02:22:34 in your space as well, an increase in cyber attacks?
02:22:39 Yes, that's a great question. We've actually seen a great increase in cyber attacks.
02:22:47 And the number of pure attacks that we're stopping in a per day basis across all of our customers,
02:22:55 which is 65,000 customers, is, you know, in the billions. Now, actual net-new attacks, it's in the
02:23:03 millions. But that still gives you a sense of like, you know, how many new attacks are going on.
02:23:07 And then we see the cadence of ransomware attacks and extortion continuing to increase as all these
02:23:14 ransom gangs have evolved. This company also told me that not only are the numbers of attacks on a
02:23:22 daily basis increasing drastically, but they also said the brazenness of some of these foreign
02:23:30 actors is becoming more and more hostile and increased as well. Is that something that you
02:23:37 all can verify that you're seeing as well? Yes, I think the days of a ransomware attack,
02:23:44 where they just encrypt files, and you're just paying for a key, we actually miss those days.
02:23:50 Because now they're stealing the data and harassing our customers to get ransomware
02:23:57 payments. So they're taking that data, which has their customer information, patient data,
02:24:02 in some instances, threatening to leak that out, and really going to what I call a dark place in
02:24:08 the level of harassment they're willing to go to; they're even sending flowers to
02:24:12 executives. And they're going after companies' customers, pretending to be the company spamming them,
02:24:20 when in fact they're the threat actor harassing those customers, and then that leads to getting the
02:24:27 payment that they're after. My real question to you guys is, what do you attribute this stark and
02:24:34 drastic rise in aggression and the amount of cyber attacks that we are now seeing against
02:24:41 our own corporations and also our infrastructure? Mr. Demmer, I'll start with you.
02:24:48 So thank you, Congressman, for the question. My expertise really lies on the physical
02:24:55 infrastructure, the critical assets that we help maintain and protect. I can say that, you know,
02:25:00 the threats are arising on our critical energy systems on our infrastructure, like bridges,
02:25:07 and roadways and dams. And although we haven't seen the, you know, the pace of attacks that we
02:25:14 are seeing on the cyber side, it is a real vulnerability and a risk as our infrastructure
02:25:18 ages and has more vulnerabilities. And so it's something that our company is...
02:25:22 What about you, Mr. Sikorsky? I think the threat actors,
02:25:27 specifically in crimeware, when we talk about ransomware, have become a business where it's
02:25:32 actually not the same hacker who's breaking in and then doing the ransomware and everything else.
02:25:38 It's actually one group breaks in and then they sell that access on the dark web to ransomware
02:25:43 gangs, who are almost like franchises, like McDonald's or something, where they pop up,
02:25:48 they have a reputation score of how likely they're there to do what they say they're going
02:25:53 to do about giving the data back. And that enables them to build the relationships and get the access
02:25:59 they need. And so this almost... It's operated like a nine-to-five job for some of these ransomware
02:26:05 operators. Let me ask you something more pointedly. Do you believe that some of these
02:26:12 nation states and some of these foreign actors, do you think that they sense weakness here in
02:26:16 the United States? Do you think that has anything to do with it?
02:26:19 So I don't think that I could speak to if they sense weakness or not. I think it's more an
02:26:28 opportunistic thing from what I see. We've seen them leverage vulnerabilities.
02:26:32 Did those opportunities... Sir, did those opportunities... Were they not present a
02:26:37 couple of years ago? I would say those opportunities have always been present.
02:26:43 There's always been vulnerabilities in systems. However, the availability and opportunity for
02:26:48 them to figure out how to get in has increased. They are better enabled. And now that they're
02:26:53 operating in this model, it makes them more efficient and able to pull off more attacks.
02:27:00 Do you think we're doing enough offensively to make sure that individuals that would utilize
02:27:12 these types of technologies to attack corporations and U.S. infrastructure,
02:27:18 what do you think we could do a better job of to make sure that they pay a heavy price
02:27:22 if they're going to carry out these type of attacks against our country?
02:27:27 Yeah, that's a great question. I think when I think about... I'm not a policymaker as far as
02:27:35 thinking what's the best stick to have when it comes to dealing with a cyber threat. One of the
02:27:41 things that I'm always focused on is on the defensive side, and how do we make sure we're
02:27:46 doing everything we can to secure, and then opening all the lanes for sharing on the threat
02:27:50 intelligence front, where we have made big steps and strides in recent years. The last few years,
02:27:55 all of this collaboration that's happening is moving the needle. And I think that'll help us a
02:27:59 lot to stay in front of the adversary. That being said, we're in an arms race right now
02:28:04 on AI, right? And the defenders need to win here, because one opportunity we have is we could use
02:28:11 AI to remove all those vulnerabilities I was talking about before the threat actors can use
02:28:15 AI to find them. Thank you. Thank you for the extension. Mr. Chairman, I yield back.
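The "fix the holes before attackers find them" point can be made concrete even without AI: the sketch below audits installed dependencies against an advisory list of vulnerable version ranges. The advisory data is fabricated for the example; real tooling would pull a live feed, and AI-assisted approaches extend the same loop to novel flaws in source code.

```python
# Illustrative sketch: flag installed dependencies that fall inside
# known-vulnerable version ranges (advisory data fabricated for the example).
ADVISORIES = {  # package -> (first vulnerable version, first fixed version)
    "examplelib": ((1, 0, 0), (1, 4, 2)),
    "otherpkg":   ((2, 0, 0), (2, 1, 0)),
}

INSTALLED = {"examplelib": (1, 3, 9), "otherpkg": (2, 1, 3)}

def audit(installed):
    findings = []
    for pkg, version in installed.items():
        if pkg in ADVISORIES:
            vuln_from, fixed_in = ADVISORIES[pkg]
            if vuln_from <= version < fixed_in:   # tuple comparison = version order
                findings.append((pkg, version, fixed_in))
    return findings

for pkg, ver, fix in audit(INSTALLED):
    print(f"{pkg} {ver} is vulnerable; upgrade to {fix} or later")
# -> examplelib (1, 3, 9) is vulnerable; upgrade to (1, 4, 2) or later
```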
02:28:20 The gentleman being last gets some extra grace, as did Mr. Goldman. The gentleman yields. I now
02:28:29 recognize Mr. Kennedy from New York for his five minutes of questioning.
02:28:33 Thank you, Chairman, and thank you to the Ranking Member, and thank you to the panel today for your
02:28:39 testimony. We're hearing a lot about advancements in AI and the upcoming election. It's a historic
02:28:47 election. We want to make sure that it is secure. And as November approaches, there's more and more
02:28:56 concern about those that would seek to undermine our election. As a matter of fact, just last week,
02:29:01 a report from the Department of Homeland Security warned, I'll quote, "As the 2024 election cycle
02:29:07 progresses, generative AI tools likely provide both domestic and foreign threat actors with
02:29:14 enhanced opportunities for interference by aggravating emergent events, disrupting
02:29:20 election processes, or attacking election infrastructure." So I worry about the rapid
02:29:27 advance as having an impact on the election, as we're discussing here today. The Cybersecurity
02:29:34 and Infrastructure Security Agency is responsible for election security. Mr. Laperruque, how can this
02:29:42 agency best support election officials and United States voters so that they can trust the authenticity
02:29:51 of the content that they're seeing when they see information online? Well, as I said before,
02:30:00 I think this is an area where we can learn a lot of lessons from what we've taken in
02:30:04 cybersecurity space over the last five to ten years, using these types of federal agencies to
02:30:09 distribute information out to the huge range of local election administrators,
02:30:17 as well as the public. And that can come from both providing education about how these technologies
02:30:23 work, how threats work, what to be aware of, as well as information about specific threats.
02:30:29 If there is something on our radar, some sort of common deepfake technique, some sort of
02:30:34 new type of attack, providing information in that field. It's something where,
02:30:39 because our election system is so decentralized, and because there is a lot of information out
02:30:44 there, sort of acting as that hub of getting out good and useful information and warnings can be
02:30:49 very important. And what do you foresee as a role for the Cybersecurity and Infrastructure Security Agency
02:30:55 in authenticating content online in regards to the upcoming election?
02:31:01 I think that would be a much more challenging question if we're talking about just content from
02:31:07 any layperson as opposed to a message that may be targeted at an election administrator.
02:31:14 That's something that our elections team, which is separate from my work, does a lot of research
02:31:20 into. So I would encourage continued work with them. They're always happy to provide
02:31:25 thought and detailed research into this space. Great. Thank you. And then, Chairman, I know
02:31:34 Chairman Green earlier mentioned his community, but I want to plug the University at Buffalo,
02:31:41 which established the UB Institute for Artificial Intelligence and Data Science
02:31:46 to explore ways to combine machines' ability to ingest and connect data. And just this past year,
02:31:53 the New York State budget included $275 million for the University at Buffalo's new AI Center. So,
02:32:03 Mr. Amlani, when you're searching for universities with your 15-year-old,
02:32:10 you should look at Buffalo and the engineering school that they have there right in the heart
02:32:14 of my community. But how can we better harness the institutes of higher ed, especially our public
02:32:19 institutions, for getting at this cutting-edge technology? And while they're training up our
02:32:27 youth in this new technology, how do we make sure that they're developing it in a safe way regarding
02:32:34 AI? So, thank you so much for your question, Congressman. My son and I would love to come to
02:32:45 Buffalo. We would also love to be able to visit Niagara Falls. You're always welcome. Thank you.
02:32:50 Fundamentally, I think there is a level of distrust in some of the AI content created,
02:32:58 and much of that distrust is not knowing who or where the content came from. And so,
02:33:03 it comes down to identity, properly identifying the creator and properly verifying the content
02:33:09 has not been tampered with after it was created by the initial creator. And so, making sure that
02:33:15 you're able to identify the initial creator is a very important component on trusting all of this
02:33:20 content, and also for intellectual property concerns and other aspects of things that we've
02:33:25 already discussed here today. And so, using proper identification tools, things that can verify that
02:33:31 it's the right person, it's the real person, and it's right now, to allow that individual to have
02:33:36 ease of use to be able to identify themselves and verify they are who they say they are,
02:33:41 and the content is coming from them. Thank you. I yield my time.
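A minimal sketch of the "verify the creator and that content wasn't tampered with" idea, using a detached digital signature over the raw bytes. It assumes the Python `cryptography` package and deliberately simplified key handling; real provenance schemes such as C2PA add certificates and signed metadata on top of this primitive.

```python
# Illustrative sketch: a creator signs content at creation time; anyone with
# the creator's public key can later verify authorship and integrity.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

creator_key = Ed25519PrivateKey.generate()   # held privately by the creator
public_key = creator_key.public_key()        # published / distributed

video_bytes = b"...raw video content..."
signature = creator_key.sign(video_bytes)    # shipped alongside the content

try:
    public_key.verify(signature, video_bytes)          # untampered: passes
    public_key.verify(signature, video_bytes + b"x")   # tampered: raises
except InvalidSignature:
    print("content was altered after signing, or is not from this creator")
```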
02:33:43 Gentleman yields. I now recognize myself for five minutes. So, for the purpose of, you know,
02:33:52 this committee, I think a lot of the questions are centered around the border and how AI can
02:33:58 help us be able to identify fraud, also protect entities within the United States. That's where
02:34:05 I want to go here initially. The ability of your company to help bolster border security,
02:34:12 for those of you that are representing the free market, I really, Mr. Sikorsky,
02:34:18 you had alluded to, I've got some information here, where AI-powered co-pilots, AI-powered
02:34:27 co-pilots, my understanding it's a natural language assistance that can yield, you guys state,
02:34:34 you know, multiple operational advantages. I'm actually curious as to when we're doing our
02:34:39 initial credible fear interviews, asylum claims, if our border security officials can utilize that
02:34:47 tool to better inform the cases of the future so that as people are being coached for those
02:34:54 interviews, what can be, you know, language barrier can exist and they can be told to say
02:35:00 certain key words. Are you all or anyone on the panel aware of how we can make sure that we can
02:35:07 get to the depth of fraud for our officers to be able to use AI to be able to get past a language
02:35:14 barrier that can be used as a defensive mechanism to the benefit of those who are applying,
02:35:18 help us get to the truth of the matter? Yeah, that's a great question. I think I'll take from
02:35:24 my experience helping to create that technology at Palo Alto Networks. So when we say co-pilot,
02:35:31 what we mean is you have a human who's using technology to find new threats, right? That's
02:35:36 what we sell. But it's very difficult to use. So what we end up doing is we built these technologies
02:35:43 called co-pilots for our products so that users can engage very quickly to ask questions
02:35:50 in plain language, and then the AI will process the question and give them the answer, so
02:35:54 that they don't have to figure out all of the complexities of our product. I think
02:36:00 what you can do is you can take what we've done in that capacity and apply that to a use case like
02:36:06 what you're talking about, where they get a co-pilot that's paying attention to
02:36:11 what information they're collecting, and the AI maybe says to them, hey, you know, I saw it
02:36:17 differently than you saw it when I feed all this in and put it against my model. And then you can
02:36:23 put the two side by side and it's sort of like Iron Man where you put on the suit and you're
02:36:27 more powerful than you were before. And that's sort of one area I would think to go into.
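A rough sketch of the co-pilot pattern just described: pair recent telemetry with the analyst's plain-language question and hand both to a language model. `llm_complete` is a hypothetical stand-in for whatever model API is actually used; nothing here is the implementation being discussed.

```python
# Rough sketch of the co-pilot pattern: wrap product data plus a plain-language
# question into a prompt for a language model, then return its answer.

def llm_complete(prompt: str) -> str:
    """Hypothetical stand-in for a real model API call."""
    return "Summary: one anomalous login pattern detected on host srv1."

def copilot_ask(question: str, telemetry: list[str]) -> str:
    context = "\n".join(telemetry[-50:])  # most recent events only, to bound prompt size
    prompt = (
        "You are a security product assistant. Using only the telemetry below, "
        f"answer the analyst's question.\n\nTelemetry:\n{context}\n\nQuestion: {question}"
    )
    return llm_complete(prompt)

events = ["03:12 srv1 login from new geo", "03:13 srv1 mfa bypass attempt"]
print(copilot_ask("Did you see anything I should look at first?", events))
```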
02:36:32 So to elaborate, if I'm understanding you correctly, you do think that CBP
02:36:36 officials could be empowered in those moments to help inform the case.
02:36:44 But I'm assuming there's also the defensive mechanism: someone crossing illegally
02:36:49 with a credible fear asylum claim that would be fraudulent could also use that technology to
02:36:54 time delay, read the question, and then, I'm assuming as the technology moves there,
02:37:01 get an AI-generated response to help their case as well. Yeah, I'm definitely
02:37:07 not an expert on, you know, the border and how those policies are to figure out, you know, what
02:37:13 the best way is on immigration and other things. However, the thought process is that the types
02:37:19 of things we're doing with AI to enable customers or prevent cyber threats follow a model that is
02:37:27 similar for really anybody who's collecting data to make better decisions that they might not
02:37:33 otherwise make by themselves. Let me move on to the rise of AI and the significant questions about,
02:37:39 you know, constitutional and election concerns. AI can be very dangerous. We know with China,
02:37:47 social credit scores and the ability to have facial recognition AI powered that then may
02:37:53 produce a different outcome depending upon whether you adhere to the government's positions or not.
02:37:58 So there's a real fear, and it's a credible fear, out there for those of us that think this could
02:38:05 be weaponized against a nation's citizenry. I want to ask, from your perspective, the significant
02:38:12 concerns you have regarding the AI of the future and if anybody wants to talk about
02:38:18 election fraud, specifically candidates that can be created to look like they're saying something
02:38:25 that they're not actually saying and how that can be generated; and the ability for a user, I'm
02:38:32 talking about government regulation here, but also for a user to have a fact check, something that could
02:38:38 be encrypted where I could have the ability to determine whether or not that was fabricated or if
02:38:43 that's an actual video. Anybody want to respond to that? Sure, I think the identity of the creator
02:38:53 is actually a very important component up front, Mr. Chairman, Mr. Congressman, and I believe that
02:39:01 right now that is not actually associated with most of the videos. It is shared openly,
02:39:06 it is shared on open sites, and you can never really tell who created the original video.
02:39:12 If you had the ability to be able to do so, at least you would have some confidence that the
02:39:17 video itself was actually created originally by the initial creator. But there are some watermarking
02:39:23 tools and other technologies being used by other companies, and there's some investment in that
02:39:29 space currently today, to be able to assess it. Interesting. I want to honor the time here with
02:39:34 that. I yield to the gentlelady from New York, Ms. Clark, for her five minutes.
02:39:39 Thank you, Mr. Chairman, and I thank our ranking member in absentia for holding this important
02:39:48 hearing. Let me also thank our panel of witnesses for bringing their expertise to bear today.
02:39:55 The rapid advancements in artificial intelligence and cybersecurity represent significant new
02:40:01 opportunities and challenges with respect to securing the homeland. Every day, developers
02:40:06 are creating new AI tools to better identify and respond to the increasing number of cyber attacks.
02:40:11 But while AI is a critical tool in our cyber defenses, it is also still created and deployed
02:40:19 by human beings. AI systems are trained on data sets which often replicate human biases.
02:40:25 And thus, the bias is built into the very technology created to improve our lives and
02:40:31 keep us safe. For example, AI designed for law enforcement purposes that was trained on historic
02:40:38 police or crime data may serve to reproduce or expand existing inequities in policing, as those
02:40:48 are not an accurate reflection of crime, but rather of police activity and crime reporting,
02:40:55 which can be fraught with bias. While artificial intelligence and cybersecurity will remain
02:41:01 important elements defending the country, these are not just technological issues, but critical
02:41:07 civil and human rights issues as well. To that end, Mr. Chairman, I ask unanimous consent to
02:41:13 enter into the record an insightful article on this topic published in WIRED.
02:41:18 Without objection, so ordered. Thank you, Mr. Chairman. This article,
02:41:23 written by Nicole Tisdale, a former staffer of this committee in the National Security Council,
02:41:28 provides valuable context on the societal impacts of cyber threats and the need for inclusive
02:41:33 cybersecurity strategies. Developers and deployers of artificial intelligence, whether in the realm of
02:41:39 securing the homeland or providing a service to consumers, must be thoughtful and intentional in
02:41:45 the creation and rollout of this technology. Similarly, we as policymakers must be deliberate
02:41:51 and meticulous in our choices as we work towards major legislative efforts, such as creating a
02:41:57 framework for the use of AI in securing the homeland, as well as in crafting a comprehensive
02:42:03 data privacy standard, which remains foundational to our work on AI and the range of other issues.
02:42:09 We must all take care to ensure that civil rights and the principles of our democracy are baked
02:42:14 into this emerging technology, because once it gets out, there will be no putting the genie back
02:42:20 in the bottle. Mr. Sikorsky, you've encouraged the government to consider different standards
02:42:26 for different use cases, recognizing that certain uses of AI may not pose the same civil liberties
02:42:33 concerns as others. Can you elaborate on why you think such differentiation is important,
02:42:39 and how you think the federal government should identify which uses of AI should trigger heightened
02:42:44 scrutiny while keeping American civil rights top of mind? That's an excellent question, Congresswoman.
02:42:52 I think about, you know, whenever you're applying security to something, people see it as an
02:42:59 inconvenience. They don't want to do it. So, and especially when it comes to innovation as well,
02:43:05 people are moving very fast with this technology, and we need to think about security as we do it,
02:43:10 rather than just rushing to get it out there, because like you said, it's unstoppable once
02:43:15 it's out, right? The genie's out of the bottle. So, I think that when I look at cyber security
02:43:21 specifically, and the defense challenges we have, we spoke earlier about the amount of threats and
02:43:27 the amount of attacks going up over time, and that our adversaries, we also touched upon,
02:43:35 are leveraging AI and trying to figure out how they could produce more attacks leveraging AI.
02:43:40 So, it's very important, I think, to stay, keep innovating very quickly on the security side,
02:43:46 and focus on, well, that data is the ones and the zeros, the malware, what's happening on a system,
02:43:53 what is vulnerable, and using that as the inputs to make the decisions. I think when it comes to
02:43:58 things like you mentioned, policing, you could go to employment, credit, education, college
02:44:05 application, wherever this ends up going, it's important to really take a look at those, you know,
02:44:11 high-risk cases, and take an approach to regulation that has a purpose behind why you're doing it,
02:44:19 because in those cases it's really impacting people's lives, in a way where with cyber
02:44:25 security, we're helping people's lives by preventing them from getting attacked.
02:44:30 Very well, thank you. Mr. Laperruque, do you agree that we should have different standards for
02:44:39 different use cases, and what use cases do you believe pose the greatest civil rights and privacy
02:44:45 risks? And other witnesses are also welcome to chime in. There are certainly some types of
02:44:53 oversight that should be universal, you know, the principles I mentioned for ensuring accurate
02:44:58 results and efficacy, you'd want to apply across the board, but at the same time, there are certain
02:45:02 forms of AI that do create heightened risk to individual rights and liberties that we do need
02:45:07 to have heightened standards and safeguards for. Any type of AI, for example, that might be used to
02:45:13 help make a designation that an individual should be the target of an investigation,
02:45:18 subject to surveillance, or arrested, that's an obvious example where there's severe rights
02:45:23 impacting ramifications to a use of AI that would warrant heightened scrutiny and a number of extra
02:45:32 safeguards you would probably want to have in place to make sure that uses are efficacious and
02:45:36 that you are not having problems such as the biases that you've mentioned.
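One way to operationalize that heightened scrutiny, sketched below with fabricated records, is to routinely compare error rates, here false positive rates, across demographic groups before a rights-impacting system is deployed; large gaps are a signal to halt and investigate.

```python
# Illustrative sketch: compare false positive rates across groups for a
# rights-impacting classifier (all records fabricated for the example).
from collections import defaultdict

records = [  # (group, model_flagged, actually_positive)
    ("group_a", True, False), ("group_a", False, False), ("group_a", True, True),
    ("group_b", True, False), ("group_b", True, False), ("group_b", False, True),
]

false_positives = defaultdict(int)
negatives = defaultdict(int)
for group, flagged, positive in records:
    if not positive:                  # only true negatives can be falsely flagged
        negatives[group] += 1
        false_positives[group] += flagged

rates = {g: false_positives[g] / negatives[g] for g in negatives}
print(rates)  # e.g. {'group_a': 0.5, 'group_b': 1.0}; the gap warrants review
```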
02:45:42 Gentlelady yields. With that, I want to thank the witnesses for your valuable testimony before
02:45:48 this committee today. We would ask the witnesses to respond to any additional questions in writing
02:45:57 pursuant to committee rule 7d. The hearing record will be open for 10 days and without objection,
02:46:04 this committee now stands adjourned.
