The Automated Virtual Agent for Truth Assessments in Real Time (AVATAR), developed at the University of Arizona’s National Center for Border Security and Immigration (BORDERS), has been undergoing tests with the Canada Border Services Agency (CBSA). Prior to the recent tests with CBSA, AVATAR had been tested at a U.S.–Mexico SENTRI lane in Nogales, Arizona in 2011 and 2012.
What is AVATAR?
AVATAR is a kiosk-like system designed and spearheaded by the BORDERS project group, led by then-Ph.D. student Aaron Elkins. The system was developed to automate the credibility assessment interviews that customs authorities conduct at ports of entry, such as those used in visa processing or asylum requests, as well as in personnel screening.
The creators of AVATAR say that by utilizing artificial intelligence and non-invasive sensor technologies, the tool can flag suspicious or anomalous behavior for further investigation by trained human agents. The system makes this determination by sifting through large volumes of data, including voice, eye movement, facial expression, and posture. The data is collected by cameras and other non-invasive sensors integrated into the kiosk. The designers believe that by automating interviews, document and biometric collection, and screening tasks, the tool could enhance DHS and CBP operations by freeing up personnel for other mission-critical tasks.
CBP Field Tests
From 2011 to 2012, BORDERS conducted two proof-of-concept field tests of the AVATAR system in Nogales, Arizona. The pilot studies were funded by the Department of Homeland Security Science and Technology Directorate (DHS S&T). BORDERS met with several CBP officials to secure approval for testing the tool at a U.S. port of entry, with the end goal of collecting data to enhance the system’s operational efficiency. Specifically, BORDERS wanted to test the system at a trusted traveler application processing center and determine what tools or improvements were needed to move AVATAR beyond the proof-of-concept stage. CBP officials authorized deployment of the system at the Nogales Enrollment Center at the DeConcini port of entry. The team was given six weeks to integrate the kiosk with existing CBP processes and collect data. These were the first tests of the system in a real-world setting.
Throughout the tests, a CBP officer was present to monitor the AVATAR-administered interviews. Each interview consisted of 20 questions, which were conducted and recorded by the kiosk. The attending officers viewed the results of the interview on a tablet connected to the AVATAR system: a color-coded record of the interaction, with normal responses coded green and anomalous or suspicious behaviors coded red. Over the course of the pilot, the system conducted more than 250 interviews. The team collected valuable data on the system’s limitations for further improvement. These included difficulty analyzing speech when interviewees began answering before AVATAR had finished asking a question, as well as interference from ambient noise.
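The color-coded triage described above can be sketched as a simple fuse-and-threshold step over per-channel anomaly scores. The channel names, weights, and threshold below are illustrative assumptions for the sketch, not details of the actual AVATAR system:

```python
# Hypothetical sketch of per-question flagging: each response receives an
# anomaly score fused from several sensor channels, and scores above a
# threshold are coded red for officer review. All names, weights, and
# thresholds are assumptions, not the real system's parameters.
from dataclasses import dataclass


@dataclass
class Response:
    question: str
    # Normalized anomaly scores per channel: 0.0 (typical) to 1.0 (anomalous)
    voice: float
    eyes: float
    face: float
    posture: float


def fuse_score(r: Response) -> float:
    """Weighted average of the channel scores (weights are assumptions)."""
    return 0.3 * r.voice + 0.3 * r.eyes + 0.2 * r.face + 0.2 * r.posture


def color_code(r: Response, threshold: float = 0.5) -> str:
    """Map a fused score to the green/red coding shown to officers."""
    return "red" if fuse_score(r) >= threshold else "green"


interview = [
    Response("Are you carrying any prohibited items?", 0.1, 0.2, 0.1, 0.1),
    Response("Have you visited a farm recently?", 0.8, 0.7, 0.6, 0.4),
]
for r in interview:
    print(f"{color_code(r):5s} {r.question}")
```

In practice a system like this would learn its fusion weights and decision threshold from labeled interview data rather than fixing them by hand; the sketch only illustrates the green/red presentation the officers saw.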
The initial pilot at Nogales gave the researchers a wealth of data that they have since incorporated into updates to the AVATAR system, including better handling of additional languages (primarily Spanish) and faster response recording. Since the pilot, the system’s deception-detection accuracy is estimated at 60% to 75% on average, better than the human rate of 54% to 60%. Aaron Elkins has also moved the project from the University of Arizona to San Diego State University, where he accepted a position as an assistant professor in 2016. He is currently building a new lab there for further research and development of the AVATAR tool.