Public Perceptions of Artificial Intelligence for Homeland Security
Artificial intelligence (AI) systems could be crucial in supporting the U.S. Department of Homeland Security's (DHS's) core missions. DHS already uses AI in homeland security missions, and it seeks to further integrate emerging AI capabilities into other applications across DHS components. However, the full potential of DHS use of emerging AI technologies is subject to several constraints, one of which is how people view government use of those technologies.

Public perception of government use of technology is important for several reasons: it helps establish trust in and legitimacy of the government, facilitates necessary funding and legislative support from Congress, and fosters collaboration with technology companies and operational partners. Some of these key stakeholders have raised concerns about DHS use of AI technologies, including risks that DHS applications violate privacy and civil liberties, exacerbate inequity, and lack appropriate oversight and other safeguards. These concerns could shape or restrict DHS use of technology, so it is important that DHS understand the extent to which the public agrees with the department's approach to addressing these concerns.

Researchers sought to evaluate public perception of the benefits and risks of DHS use of AI technologies. They developed a survey in 2020 with questions about current and planned DHS use of AI technologies, with a focus on four types of technologies: face recognition technology (FRT), license plate-reader technology, risk-assessment technology, and mobile phone location data. The survey was fielded using the RAND American Life Panel, a nationally representative panel of the American public.