On June 4, 2014, a group of Lockheed Martin Space Systems research leaders visited UAH’s ISEEM department for a presentation on current research being performed at the Manned/Unmanned Collaborative Systems Integration Laboratory. ISEEM and the US Army Research Laboratory (ARL) are collaborating to study both theory and implementation in areas such as manned/unmanned teaming, cognitively tailored interfaces, and Visual Media Reasoning. Jeffrey Hansberger and Jared Sapp of ARL presented their research to Lockheed’s representatives in an effort to encourage future collaboration among Lockheed, ISEEM, and the Army.
The major topics discussed were the following:
1. One primary goal for the Manned/Unmanned Collaborative Systems Integration Laboratory is to research the tactics, techniques, and procedures used by Soldiers and to determine the mental resources they need to manage their attention, coordinate crew activities, and communicate across the battlefield. The laboratory currently has a network of nine workstations connected via Virtual Battlespace 2 (VBS2). One workstation serves as a scenario command station, which can monitor the activity of all participants and modify the virtual environment in real time. This high-fidelity simulation environment is used to explore methods of increasing the effectiveness of Soldier teaming, decision making, and overall performance, as well as to develop methods of controlling future unmanned systems. An experiment leveraging the VBS2 environment is currently being developed to determine how well gunners can judge the relative distance and orientation of targets at range.
2. The tactical and operational environment for the Army is changing, with an ever-growing emphasis on, and need for, information. This has created information-overload challenges for the Soldier. Most information and Soldier systems are viewed through a computer interface, but these interfaces are typically underdeveloped or not treated as a vital component of the Soldier system. People differ widely in how they perceive, store, and process information. Instead of ignoring this diversity, there is great potential in understanding and capitalizing on it: the way an interface organizes and presents information could be tailored to each individual and his or her cognitive style. This research effort attempts to understand and measure these differences and to design a system that tailors itself to the information-processing strengths and weaknesses of each individual user and Soldier.
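To make the tailoring idea concrete, the sketch below shows one minimal way such a system might dispatch on a measured cognitive-style profile. The profile fields, score ranges, and presentation formats here are illustrative assumptions for this article, not ARL's actual model or software.

```python
# Hypothetical sketch: selecting a presentation format from a user's
# measured cognitive-style profile. All names and thresholds below are
# illustrative assumptions, not the ARL research system.
from dataclasses import dataclass


@dataclass
class UserProfile:
    """Measured information-processing preferences for one user."""
    visual_score: float  # 0..1, preference for graphical presentation
    verbal_score: float  # 0..1, preference for textual presentation


def choose_presentation(profile: UserProfile) -> str:
    """Present the same underlying information in the format that
    matches the user's stronger processing style."""
    if profile.visual_score > profile.verbal_score:
        return "map_overlay"   # spatial/graphical layout
    return "text_summary"      # narrative/textual layout


# A user who scores higher on visual processing gets the graphical view.
print(choose_presentation(UserProfile(visual_score=0.8, verbal_score=0.4)))
```

A real system would, of course, measure many more dimensions and adapt continuously rather than choosing between two fixed layouts; the point is only that the interface, not the user, absorbs the difference in cognitive style.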
3. Adversaries often take photos and videos to claim responsibility for events or to illustrate capabilities. This media is sometimes collected by the DoD from a variety of devices, including laptops, cellphone cameras, and memory cards. The volume of this visual media is quickly outpacing our ability to review, let alone analyze, the contents of every image. DARPA's Visual Media Reasoning (VMR) is a software system that lets users pose queries about photo content, such as "What make and model of vehicle is that?", "Is this person on our terrorist watch list?", or "Where is this building located?" The Army Research Laboratory is experimenting with new interface designs and methods of interaction with large quantities of visual media to support the VMR analysis capability and drastically reduce the time and effort required of intelligence analysts.