Applying ML to Cybersecurity
This project uses natural language processing (NLP) and machine learning (ML) techniques on open-source intelligence to guide predictive analysis of compound exposures in contextualized settings. Work is underway to develop strategies for the automatic extraction and synthesis of functional exploit patterns from vulnerability descriptions in open-source vulnerability repositories such as the National Vulnerability Database (NVD). The goal is to provide modeling and analysis support for intelligence that contextualizes threats using attack graphs and other relevant structures. The next stage of research aspires to develop new machine learning-based analyses centered on the notion of attack motifs, which can be thought of as “micro-attack graphs” that exhibit common, repeated, and composable behavior in the attack space. Mentor: Dr. John Hale, firstname.lastname@example.org
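As a minimal sketch of the kind of extraction described above, the snippet below matches simple keyword rules against an NVD-style vulnerability description to tag candidate exploit patterns. The sample record, pattern labels, and regexes are illustrative assumptions only; the project's actual approach applies NLP/ML models to real NVD data.

```python
import re

# Hypothetical sketch: tagging coarse exploit patterns in an NVD-style
# CVE description with keyword rules. The record and rules below are
# illustrative assumptions, not the project's extraction pipeline.
SAMPLE_CVE = {
    "id": "CVE-0000-0000",  # placeholder identifier
    "description": (
        "A buffer overflow in the login service allows a remote "
        "attacker to execute arbitrary code via a crafted packet."
    ),
}

# Each rule maps a regex over the description to a coarse pattern label.
PATTERN_RULES = {
    "memory_corruption": r"buffer overflow|use.after.free|out.of.bounds",
    "code_execution": r"execute arbitrary code|remote code execution",
    "remote_vector": r"\bremote\b|via a crafted",
}

def extract_patterns(cve):
    """Return the set of exploit-pattern labels matched in a CVE description."""
    text = cve["description"].lower()
    return {label for label, rx in PATTERN_RULES.items() if re.search(rx, text)}

print(sorted(extract_patterns(SAMPLE_CVE)))
# → ['code_execution', 'memory_corruption', 'remote_vector']
```

Tagged descriptions like these could then be clustered or composed into the recurring "attack motifs" the project envisions.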
Kernel-Level Security Policy Management
This project seeks to define an open standard for enterprise-level security policy management with kernel-level enforcement. The architecture uses the Berkeley Packet Filter (BPF) and the Linux Security Module (LSM) to support the remote management of low-level Mandatory Access Controls for devices over a network. The effort aspires to support a range of policies and models while relieving administrators of tedious and error-prone manual security policy management processes. Mentor: Dr. John Hale, email@example.com
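To make the remote-management idea concrete, here is a hedged sketch of how a Mandatory Access Control policy update might be serialized for delivery over the network to a kernel-side BPF/LSM enforcement agent. The message schema, field names, and rule format are assumptions for illustration; defining the actual open standard is the goal of the project.

```python
import json

# Hypothetical policy-distribution message for remote MAC management.
# The schema below is an illustrative assumption, not the standard
# the project aims to define.
def make_policy_update(policy_id, version, rules):
    """Serialize a set of MAC rules into a JSON message that a
    kernel-side BPF/LSM enforcement agent could consume."""
    for rule in rules:
        # Each rule names a subject, an object, and the access to allow/deny.
        assert {"subject", "object", "access", "effect"} <= rule.keys()
    return json.dumps({
        "policy_id": policy_id,
        "version": version,   # lets agents discard stale updates
        "rules": rules,
    }, sort_keys=True)

msg = make_policy_update("web-tier", 3, [
    {"subject": "httpd", "object": "/etc/shadow",
     "access": "read", "effect": "deny"},
])
```

An administrator would author rules once at the enterprise level, and each managed device would translate received rules into its local LSM hooks.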
Assessing and Improving Organizational Cybersecurity Hygiene and Culture in the Perimeterless World
There is no shortage of challenging cybersecurity problems at the interface of technology, people, and the organization. Promoting a strong security culture is a widely professed goal of many organizations, yet evaluating whether these objectives are being met can be hard. Thus, this project will focus on developing evidence-based security hygiene and culture assessment by identifying and evaluating different data sources and developing metrics to capture existing organizational security hygiene and security culture. With an understanding of the security challenges of the workforce and end-users and a baseline of the current security culture, organizations need clear and proven methods to effect measurable and lasting security hygiene and culture improvements. Accordingly, the research team will develop and empirically validate security hygiene and security culture change methodologies. This project will work with real organizations to achieve quantifiable results supported by both objective and qualitative evidence. Additionally, the global pandemic has radically changed how and where work is conducted. This project will also examine the impact of the current and emerging remote/flex workforce on employee cybersecurity behaviors and activities. One area to investigate centers on the cybersecurity challenges that arise from the flexibility given to workers across the organizational hierarchy to potentially work from anywhere. In this case, organizations must assess and likely adapt their view of adequate security hygiene and the impact on their security culture.
Mentors: Dr. Sal Aurigemma – firstname.lastname@example.org, Dr. Tyler Moore – email@example.com, Dr. Bradley Brummel – firstname.lastname@example.org
VR Training Simulation Framework
Over the past two decades, there has been an increase in the development of virtual reality (VR) training solutions for a variety of fields such as medicine, construction, natural disaster response, and education. With VR technologies becoming more affordable and mainstream, the demand for VR training is projected to increase dramatically in the coming years. While VR training can provide an effective and immersive real-world training experience through virtual environments, adoption has been slow. The inhibiting factors for VR training solutions are the expensive upfront costs of custom training and the extensive development process. This project aims to create a viable framework for developing different types of VR training simulations. Unlike current quest-based frameworks for games, this objective-based framework will allow educators to develop simulations that have multiple correct answers, ordered and unordered series of steps, and scored accuracy for each procedure. Built as an extension for an industry-standard game engine, this VR objective-based framework will dramatically decrease the development time for creating training simulations. Educators who are familiar with game development tools will also gain the ability to quickly develop their own training solutions. Mentors: Akram Taghavi-Burris – email@example.com, Dr. John Hale – firstname.lastname@example.org
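The objective-based model described above (multiple correct answers, ordered and unordered series of steps) can be sketched in a few lines. The class and field names below are assumptions for illustration; the actual framework would live inside a game engine rather than plain Python.

```python
# Hypothetical sketch of the "objective" abstraction: an objective accepts
# an ordered or unordered series of steps, and multiple objectives can
# encode multiple correct answers for the same training goal.
class Objective:
    def __init__(self, name, steps, ordered=True):
        self.name = name
        self.steps = list(steps)   # the expected procedure
        self.ordered = ordered     # must the steps occur in sequence?

    def is_satisfied(self, performed):
        """Check a learner's performed steps against this objective."""
        if self.ordered:
            # Expected steps must appear in order; extra steps are allowed.
            it = iter(performed)
            return all(step in it for step in self.steps)
        # Unordered: every expected step must appear at least once.
        return set(self.steps) <= set(performed)

# Two objectives for a hypothetical medical training scenario.
sterilize = Objective("sterilize", ["glove", "swab", "inject"], ordered=True)
inventory = Objective("inventory", ["count", "log"], ordered=False)
```

An educator would author objectives like these through the engine extension; the simulation then evaluates the learner's actions against them, rather than against a single scripted quest path.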
Trusted AI Through Personalized Explanations: Perx & EXPLORE
As AI/ML systems become increasingly prevalent in our daily lives, non-experts will demand that these systems explain the options and recommendations they offer. Building on prior research on trusted human-AI collaboration and transparent decision-making in our MASTERS group, we will develop two complementary frameworks in the increasingly critical area of Explainable Artificial Intelligence (XAI): Personalized Explanation Systems (Perx) and Explaining Options & Recommendations (EXPLORE). While Perx makes explanations accessible, meaningful, and adapted to the needs and preferences of the user, EXPLORE develops new guidelines and tools for designing and developing AI systems with built-in explanation capabilities. Though these systems can work independently, ideally a Perx-based front end will be paired with an EXPLORE-based back end to optimize the user experience.
Mentor: Dr. Sandip Sen – email@example.com
Leveraging Attack Graph State Estimation for Cyber Defense
Attack graphs document how a system can be compromised. Recent efforts have shown that it is possible to estimate the states in the attack graph of a system based on currently observable system characteristics. This can be leveraged to build and deploy cyber defense tools that continuously monitor the system and can adapt to changing conditions. The attacker’s potential targets and next steps are estimated from this set of states, and recommendations for responding are generated based on this information. These could include automatic modifications to the system to counter threats or to influence how the attackers progress (e.g., to collect better evidence or to contain damage). Initially, a testbed will be developed to allow control over what data are observable, in order to determine the minimum set of observable parameters needed to make quality recommendations so the system can adapt. Efforts to score or quantify the benefit of each component to making these recommendations will be investigated and incorporated into the attack graph structure. This will enable the system to adapt itself to provide more information and improve the attack graph state estimation. Methods to generate recommendations to modify the system may require generating new components of the attack graph to model and identify any compromises introduced by the adaptations. The generation algorithms and attack graph structure will be modified to keep the expansion localized and to provide recommendations in a timely fashion. Mentor: Dr. Peter J. Hawrylak – firstname.lastname@example.org
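A minimal sketch of the estimation-and-recommendation loop described above: given a set of observed indicators, estimate which attack graph states have likely been reached, then surface the unreached successor states as candidate points for defensive response. The graph, indicator names, and thresholding rule are illustrative assumptions, not the project's model.

```python
# Hypothetical three-state attack graph:
# state -> (indicators that evidence it, successor states)
ATTACK_GRAPH = {
    "foothold":   ({"phish_click", "new_process"}, {"escalate"}),
    "escalate":   ({"suid_exec"}, {"exfiltrate"}),
    "exfiltrate": ({"large_upload"}, set()),
}

def estimate_states(observed, threshold=0.5):
    """Estimate a state as 'reached' if at least `threshold` of its
    indicators were observed (a stand-in for a real estimator)."""
    reached = set()
    for state, (indicators, _) in ATTACK_GRAPH.items():
        if indicators and len(indicators & observed) / len(indicators) >= threshold:
            reached.add(state)
    return reached

def next_targets(reached):
    """Successors of reached states that are not themselves reached:
    the attacker's likely next steps, and candidates for defensive action."""
    targets = set()
    for state in reached:
        targets |= ATTACK_GRAPH[state][1]
    return targets - reached

reached = estimate_states({"phish_click", "suid_exec"})
```

Here the observables implicate "foothold" and "escalate", so "exfiltrate" emerges as the state to defend next. The testbed work would determine how small the observed set can be while keeping such recommendations reliable.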
An Interpretable and Trustworthy AI Framework for Smart Grid Cyberattack Detection and Recovery
The emerging new technologies that promote consumer participation in power networks rely on advanced communication protocols, sensors, and data-intensive algorithms. Such transformations will increase the complexity of power systems, which will require effective cybersecurity measures to mitigate the adverse effects of cyber threats. In this project, we propose a novel interpretable and trustworthy machine learning framework that detects fault and cyberattack incidents and recovers the electric grid from these critical system events in real time. The proposed model captures a sparse set of spatiotemporal features of power system measurements by incorporating dictionary learning into deep generative modeling. The sparse features are then used by discriminative deep neural networks to detect faults and cyberattacks and recover the system in real time. Moreover, an interpretable attention-based technique will be developed to find the spatial and temporal features most relevant to the detection and recovery tasks. Hence, the research provides an interpretable knowledge base that enhances the reliability of the proposed framework. Mentor: Dr. Mahdi Khodyar – email@example.com
Detecting Natural Gas Emissions to the Atmosphere
Natural gas is a growing energy source that produces lower global warming emissions than coal and oil. Unfortunately, natural gas infrastructure (pipelines and facilities) can generate methane emissions that may offset the climate benefit of replacing oil and coal with natural gas. Methane lost to the atmosphere has a global warming potential estimated to be 86 times that of CO2 over a 20-year period. Leak detection and repair (LDAR) programs are enforced by many regulatory entities, making the detection process essential for the industry.
This project seeks to build a platform to help detect and locate natural gas emissions. The platform will be a distributed network of embedded systems where each node uses multiple sensors (gas, temperature, pressure, wind speed, geolocation) and has wireless capabilities for communication.
Software will be developed to allow sensors, covering a potentially large geographical area, to form a mesh network that interconnects the different nodes. The project envisions the participation of three Ph.D. students, one in computer science (CS), one in electrical and computer engineering (ECE), and one in petroleum engineering (PE). The CS student will be in charge of the network design and configuration. The ECE student will design and build the embedded systems used by the project. Finally, the PE student will be in charge of the sensor calibration and running models to detect and locate the methane emissions. Mentors: Dr. Peter J. Hawrylak – firstname.lastname@example.org, Dr. Mauricio Papa – email@example.com, Dr. Eduardo Pereyra – firstname.lastname@example.org
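A minimal sketch of the mesh idea: each node holds a sensor reading and floods it to its neighbors, with a seen-message guard to stop loops. Node identifiers, the reading fields, and the flooding scheme are illustrative assumptions; the real network design is the CS student's research topic.

```python
# Hypothetical mesh node that floods a sensor reading to its neighbors.
# A simple seen-set prevents messages from circulating forever.
class Node:
    def __init__(self, node_id):
        self.node_id = node_id
        self.neighbors = []   # directly reachable nodes (wireless links)
        self.seen = set()     # message ids already handled (loop guard)
        self.readings = []    # readings delivered to this node

    def link(self, other):
        """Create a bidirectional wireless link between two nodes."""
        self.neighbors.append(other)
        other.neighbors.append(self)

    def receive(self, msg_id, reading):
        if msg_id in self.seen:
            return            # already handled: stop the flood here
        self.seen.add(msg_id)
        self.readings.append(reading)
        for peer in self.neighbors:
            peer.receive(msg_id, reading)

# A small three-node mesh: a <-> b <-> c. Field names are placeholders.
a, b, c = Node("a"), Node("b"), Node("c")
a.link(b)
b.link(c)
a.receive("m1", {"gas_ppm": 12.5, "lat": 36.15, "lon": -95.99})
```

With geolocation and wind data attached to each reading, the PE student's models could then triangulate a likely emission source from readings collected anywhere in the mesh.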