CAREER: Black-Box Learning of Web Application Authorization Policies

Authorization policies ensure that different application users can access only the resources to which they are entitled. Broken authorization has consistently been one of the most frequent vulnerabilities in web applications. Unfortunately, most web applications do not provide an adequate specification of the authorization policies that they enforce, making it challenging for deployers and end users of those applications to ensure the safety and security of their data. In this project, we propose to learn authorization policies from web applications by observing their authorization behavior, without relying on access to their code or other internal details. Such learned policies can then be used for further analysis of application privacy and security. This project has been funded by the National Science Foundation (NSF).
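As a rough illustration of the black-box setting, consider probing an application's access decisions and recording which requests it permits. The `is_allowed` oracle below is a hypothetical stand-in for a deployed application's responses, and the exhaustive probe loop is only a sketch; a real learner must generalize from far fewer observations.

```python
def is_allowed(user_role, action, resource_type):
    # Hypothetical stand-in for the web application under observation.
    return user_role == "admin" or (action == "read" and resource_type == "public")

def learn_policy(roles, actions, resource_types):
    """Probe every (role, action, resource) combination and record the
    tuples the application permits -- the observed authorization policy."""
    observed = set()
    for role in roles:
        for action in actions:
            for rtype in resource_types:
                if is_allowed(role, action, rtype):
                    observed.add((role, action, rtype))
    return observed

policy = learn_policy(["admin", "guest"], ["read", "write"], ["public", "private"])
```

The learned set of permitted tuples can then be compressed into rules (e.g., "admin may do anything") and analyzed for unintended exposure.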
Read More →

Modeling information exposure in online social networks

In online social networks (OSNs), the privacy of users is impacted by the exposure of information about those users to other users of the system. Various factors, including system design and user behavior, may affect the degree to which information about users is exposed. We propose the notion of knowledge exposure, which measures the probability that information about users will be seen by others. We argue that such a measure can give OSN users and designers insight into how privacy is affected by system design and user behavior. We present exposure as a promising notion that can complement, rather than replace, existing privacy control mechanisms in an OSN, such as access control.
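To make the idea of an exposure measure concrete, here is a deliberately simplified model (our assumption for illustration, not the project's actual formulation): a piece of information may reach a viewer through several independent channels, and its exposure is the probability that at least one channel delivers it.

```python
def exposure(channel_probs):
    """P(info is seen) = 1 - P(no channel delivers it), assuming the
    channels (feed, profile visit, reshare, ...) act independently."""
    p_none = 1.0
    for p in channel_probs:
        p_none *= (1.0 - p)
    return 1.0 - p_none

# A post reachable via news feed (0.5), profile visit (0.2), reshare (0.1):
e = exposure([0.5, 0.2, 0.1])
```

Even this toy model shows how adding channels (a new reshare feature, say) monotonically increases exposure, which is the kind of design insight the measure is meant to provide.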
Read More →

Mining complex access control policies

An access control policy determines the authorized actions of subjects/users on objects/resources in a system. Modern access control models provide abstract concepts such as roles and condition-based rules for flexible specification of the policy. What if we already have an access control policy implementation (e.g., access control lists) and want to adopt a more modern and flexible model (e.g., attribute-based access control)? Manually replicating an existing authorization policy in terms of a new policy model is laborious, error-prone, and can lead to inefficient policies. The policy mining problem concerns developing algorithms to perform this translation automatically and efficiently. In this project, we look into mining policies in the context of flexible access control models such as attribute-based access control and relationship-based access control. One of the exciting subproblems that we are investigating is mining policies containing conflicting positive and negative authorization rules.
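A toy version of ABAC mining can be sketched as a search over candidate attribute conditions that reproduce a given ACL exactly. This is only an illustrative brute-force sketch over single-condition rules; real miners explore a much richer rule language and optimize for policy size and generalization.

```python
def covers(rule, user_attrs, res_attrs):
    """All (user, resource) pairs granted by a rule of the form
    user.uk == uv AND resource.rk == rv."""
    uk, uv, rk, rv = rule
    return {(u, r)
            for u, ua in user_attrs.items() if ua.get(uk) == uv
            for r, ra in res_attrs.items() if ra.get(rk) == rv}

def mine_rules(acl, user_attrs, res_attrs):
    """Enumerate single-condition candidate rules; keep those whose
    granted pairs match the input ACL exactly."""
    found = []
    for ua in user_attrs.values():
        for uk, uv in ua.items():
            for ra in res_attrs.values():
                for rk, rv in ra.items():
                    rule = (uk, uv, rk, rv)
                    if covers(rule, user_attrs, res_attrs) == acl:
                        found.append(rule)
    return found

user_attrs = {"alice": {"dept": "hr"}, "bob": {"dept": "it"}}
res_attrs = {"payroll": {"type": "hr-doc"}, "server": {"type": "it-asset"}}
acl = {("alice", "payroll")}
rules = mine_rules(acl, user_attrs, res_attrs)
```

Here the miner recovers the attribute rule "users in dept hr may access resources of type hr-doc" from the raw ACL entry.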
Read More →

Verifying relationship-based access control policies

Relationship-based access control (ReBAC) policies can express intricate protection requirements in terms of relationships among users and resources (which can be modeled as a graph). Such policies are useful in domains beyond online social networks. However, given the evolving graph of users and resources in a system and the expressive conditions in access control policy rules, it can be very challenging for security administrators to envision what can (or cannot) happen as the protection system evolves. For example, if we use ReBAC in a medical domain, can we reason that at any time in the future all physicians involved in a treatment case will have full access to the treatment data? We introduce the security analysis problem for this class of policies, which we use to answer security queries about future states of the system graph and the authorizations decided accordingly.
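A minimal sketch of the ReBAC idea, under a deliberately simplified semantics of our own choosing: a rule authorizes access when a path of required relationship labels connects the requester to the resource in the system graph. Real ReBAC policy languages, and the security analysis over evolving graphs, are far more expressive than this.

```python
def authorized(graph, start, target, required_labels):
    """Check whether some path from start to target traverses the
    required relationship labels in order."""
    frontier = {start}
    for label in required_labels:
        frontier = {dst
                    for src in frontier
                    for (lbl, dst) in graph.get(src, [])
                    if lbl == label}
    return target in frontier

# Toy medical graph: a physician is assigned to a case, which has a record.
graph = {
    "dr_lee": [("assigned_to", "case42")],
    "case42": [("has_record", "record42")],
}
ok = authorized(graph, "dr_lee", "record42", ["assigned_to", "has_record"])
```

The security analysis problem asks whether such queries keep holding (or can start holding) under all permitted future edits to the graph, which is much harder than evaluating one query on one snapshot.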
Read More →

Anonymizing social network data

The study of social networks is growing in domains such as academia, business, and even government, in order to identify interesting patterns at either the node or the network level. In many social network datasets, the exact identity of the people involved does not matter for the purpose of the study. Yet such datasets may carry sensitive information, and hence adequate measures should be in place to ensure protection against reidentification. Recent work in the literature has shown that structural patterns can assist in reidentification attacks on naively anonymized social networks. Consequently, there have been proposals to anonymize networks structurally to avoid such attacks. However, such methods usually introduce a large amount of distortion to the social network datasets, thus raising serious questions about their utility for useful social network analysis. This research focuses on improving the utility of anonymization methods without sacrificing privacy guarantees.
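One well-known structural criterion from this literature, shown here purely as an illustration, is k-degree anonymity: every degree value in the graph must be shared by at least k nodes, otherwise an adversary who knows a target's degree may narrow down (or pinpoint) the target in the "anonymized" graph.

```python
from collections import Counter

def is_k_degree_anonymous(adjacency, k):
    """True iff every degree that occurs in the graph is shared by
    at least k nodes."""
    degree_counts = Counter(len(neighbors) for neighbors in adjacency.values())
    return all(count >= k for count in degree_counts.values())

# A 4-cycle: every node has degree 2, so the graph is 4-degree anonymous.
cycle = {"a": ["b", "d"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c", "a"]}
```

Enforcing such a property requires adding or rewiring edges, and that edit distance is exactly the utility-distorting cost this project aims to reduce.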
Read More →

PANACEA: Personalized AutoNomous Agents Countering Social Engineering Attacks

Modern computing systems have become significantly more secure. Attackers have therefore turned to compromising a typically weaker (less secure) element of our systems: humans. Through social engineering techniques, human users can be persuaded to help attackers achieve their goals, be it by supplying their authentication credentials or divulging sensitive corporate information. We develop intelligent solutions to detect and counter social engineering attacks based on machine learning and natural language processing techniques. This project has been funded by the Defense Advanced Research Projects Agency (DARPA).
Read More →

Anonymizing Location-Rich Data

Many systems today collect and leverage location information and movement traces, ranging from search engines that retrieve results relevant to your location to online social networks for explicitly sharing your location. However, your whereabouts can reveal a lot about you. An adversary may reidentify you in a location-rich dataset based on your location even if the data is anonymized. We have explored preserving user privacy in two areas: anonymizing location-based queries submitted to Location-Based Services (LBSs), and anonymizing datasets collected by geosocial networking systems (GSNSs).
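One basic technique in this space, sketched here under assumed parameters (the 0.01-degree cell size, roughly 1 km, is our illustrative choice): spatial cloaking snaps an exact GPS point to a coarse grid cell and sends the cell instead, so the query is indistinguishable from queries by other users in the same cell.

```python
import math

def cloak(lat, lon, cell=0.01):
    """Return the lower-left corner of the grid cell containing (lat, lon),
    which is reported to the LBS in place of the exact coordinates."""
    return (math.floor(lat / cell) * cell,
            math.floor(lon / cell) * cell)

# Two nearby users in lower Manhattan map to the same cloaked cell.
anchor = cloak(40.7128, -74.0060)
```

Fixed grids alone do not guarantee k-anonymity (a cell may contain only one user); practical cloaking schemes adapt the region size to the local user density.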
Read More →

Cybersecurity and Forensics

Do you encounter cyberattacks such as email phishing? => Yes. We have solutions for you! This research focuses on detecting and preventing phishing attacks by leveraging lightweight search features and secure authentication techniques to improve browser security. The approach includes a system that employs efficient search methods for phishing detection and a Bluetooth Low Energy (BLE) authentication scheme to safeguard against phishing attempts through browser extensions. Furthermore, the study investigates the use of volatile memory forensics for real-time phishing detection.
Read More →

Bias-aware Gaze Uniformity Assessment in Group Images

Do you capture photos in group settings? => Yes. Are there multiple individuals taking pictures of the group? => Often. Do all members of the group face the same camera or gaze in the same direction? => Aahhh... No. Don't worry, we have GARGI for you! Since the advent of the smartphone, the number of group images taken every day has risen dramatically. A group image contains more than one person. While taking a group picture, photographers usually struggle to make sure that every person looks in the same direction, typically toward the camera. This occurs more often when multiple photographers take pictures of the same group. The direction in which a person looks is called the gaze. Gaze uniformity in group images is an important criterion for determining their aesthetic quality. The objective of this research is to develop a method to achieve uniform gazes among individuals in group photographs, enhancing their overall aesthetic quality.
Read More →

SecureCSuite: Secure Computations over Untrusted Cloud Servers

Do you use cloud servers for storing and processing your data? Your data could potentially be misused. We offer a solution that enables secure processing of encrypted data, ensuring both security and privacy. It is now a prevalent practice to delegate both small- and large-scale data storage and computational responsibilities to third-party high-performance computing servers such as cloud data centers. While these solutions offer highly scalable and virtualized resources for efficient service execution, concerns regarding security and privacy arise due to the potential untrustworthiness of these third-party service providers. In this context, this project introduces SecureCSuite, a framework for secure cloud-based computation. The primary objective of SecureCSuite is to execute the required tasks directly on encrypted data, thereby upholding the security and privacy of the data. We have instantiated this framework for various tasks, including image/video scaling (SecureCScale), enhancement (SecureCEnhance), document editing (SecureCEdit), PDF merging (SecureCMerge), searching (SecureCSearch), emailing (SecureCMail), volume data visualization (SecureCVolume), and social networking (SecureCSocial).
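To give a flavor of computing on data that untrusted servers never see in the clear, here is a sketch of additive secret sharing, one classic building block for such systems. This is an illustration of the general technique only, not SecureCSuite's actual protocol.

```python
import random

MOD = 2**32  # all arithmetic is modulo a fixed modulus

def share(x):
    """Split x into two random-looking shares, one per server.
    Neither share alone reveals anything about x."""
    r = random.randrange(MOD)
    return r, (x - r) % MOD

def reconstruct(s1, s2):
    """Recombine the two shares to recover the secret."""
    return (s1 + s2) % MOD

# Each server adds its own shares locally; the reconstructed result is the
# sum of the secrets, yet neither server ever saw 10 or 32 in the clear.
a1, a2 = share(10)
b1, b2 = share(32)
total = reconstruct((a1 + b1) % MOD, (a2 + b2) % MOD)
```

Addition comes for free in this scheme; richer operations (multiplication, comparisons, media transforms) need additional protocol machinery, which is where the real design effort lies.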
Read More →

Privacy-aware Multimedia Surveillance for Public Safety

Are you concerned about being monitored through electronic surveillance, such as CCTV cameras? Your privacy could be at risk. No worries, we design and develop methods that offer effective automated surveillance, ensuring safety without compromising privacy. Due to the rise in terrorism, electronic surveillance using video cameras, audio sensors, and social media has become widely used to monitor activities and behaviors. Although these surveillance technologies have proven to be highly useful from a security perspective, they have raised significant concerns among people regarding privacy safeguards. Traditional privacy methods focus on explicit identity leaks, such as facial information, but often overlook implicit channels where identity can be inferred through behavior and temporal information. This research project aims to develop effective surveillance methods that can automatically detect suspicious behaviors and actions while preserving privacy by considering both implicit and explicit channels.
Read More →

Characterizing, Verifying and Mitigating Disinformation on Social Media

Do you trust what you see on social media? => Hmmm, yes and no. Would you like to see a credibility score with each piece of media content? Of course! That's our goal—to create fact-checking tools to provide those scores! Falsifying multimedia assets is a form of social hacking designed to change readers' points of view, and it may lead them to make misinformed decisions. This project focuses on identifying disinformation content on media-rich social media platforms, such as Facebook and X (formerly Twitter). The goal is to invent novel methods to characterize and verify a given piece of media-rich disinformation content on social media, and to evaluate the social acceptance of such disinformation in order to develop mitigation strategies.
Read More →

GeoSecure: A Location-Privacy-Aware Framework for Location-Based Services

Do you use GPS-driven LBS like fitness trackers? Your location data can pose serious security and privacy risks. No worries, we have the GeoSecure solution for you. Location-based services (LBS) have become an essential part of everyday life: smartphones, GPS-enabled devices, and related services are widely used, with examples including cab service apps, navigation maps, fitness trackers, and autonomous vehicles. LBS rely on tracking the user's GPS location, which is stored in the cloud. GPS data can reveal sensitive information about users, such as home and work locations, shopping habits, religious and political affiliations, and health conditions. Therefore, it is crucial to protect this information. The core idea of our research is to design algorithms that provide location-based services without revealing users' locations. This approach ensures the safety of users' information while still allowing service providers to offer their services.
Read More →