I'm currently a PhD student in computer science (2022 - ?) at the University of Illinois Urbana-Champaign working with Prof. Adam Bates and Prof. Gang Wang. My research interests are in security, privacy, and Internet measurement.
Previously, I was a research associate (2020 - 2022) in Prof. Roya Ensafi's lab working on projects related to security and network censorship measurement. I graduated from the University of Michigan in 2020 with a BSE in computer science. During undergrad, I also did research in computer-aided diagnosis (Michigan Medicine: CAD-AI Lab), automotive security (UMTRI: ESG), and analysis of C. elegans (Life Sciences Institute: S. Xu Lab).
I also enjoy running, bass, and chess! I have taught and competitively played chess for many years and am a National Master.
How does Endpoint Detection use the MITRE ATT&CK Framework?
conference USENIX Security Symposium, August 2024
Apurva Virkud Muhammad Adil Inam Andy Riddle Jason Liu Gang Wang Adam Bates
MITRE ATT&CK is an open-source taxonomy of adversary tactics, techniques, and procedures based on real-world observations. Increasingly, organizations leverage ATT&CK technique "coverage" as the basis for evaluating their security posture, while Endpoint Detection and Response (EDR) and Security Information and Event Management (SIEM) products integrate ATT&CK into their design as well as marketing. However, the extent to which ATT&CK coverage is suitable to serve as a security metric remains unclear: Does ATT&CK coverage vary meaningfully across different products? Is it possible to achieve total coverage of ATT&CK? Do endpoint products that detect the same attack behaviors even claim to cover the same ATT&CK techniques?
In this work, we attempt to answer these questions by conducting a comprehensive (and, to our knowledge, the first) analysis of endpoint detection products' use of MITRE ATT&CK. We begin by evaluating 3 ATT&CK-annotated detection rulesets from major commercial providers (Carbon Black, Splunk, Elastic) and a crowdsourced ruleset (Sigma) to identify commonalities and underutilized regions of the ATT&CK matrix. We continue by performing a qualitative analysis of unimplemented ATT&CK techniques to determine their feasibility as detection rules. Finally, we perform a consistency analysis of ATT&CK labeling by examining 37 threat entities for which at least 2 products include specific detection rules. Combined, our findings highlight the limitations of overdepending on ATT&CK coverage when evaluating security posture; most notably, many techniques are unrealizable as detection rules, and coverage of an ATT&CK technique does not consistently imply coverage of the same real-world threats.
@inproceedings{virkud2024endpoint,
title={How does Endpoint Detection use the MITRE ATT&CK Framework?},
author={Apurva Virkud and Muhammad Adil Inam and Andy Riddle and Jason Liu and Gang Wang and Adam Bates},
booktitle={USENIX Security Symposium},
year={2024}
}
Network Responses to Russia's Invasion of Ukraine in 2022: A Cautionary Tale for Internet Freedom
conference USENIX Security Symposium, August 2023
Reethika Ramesh Ram Sundara Raman Apurva Virkud Alexandra Dirksen Armin Huremagic David Fifield Dirk Rodenberg Rod Hynes Doug Madory Roya Ensafi
PDF · Code · Best Practical Award at FOCI 2024
Russia's invasion of Ukraine in February 2022 was followed by sanctions and restrictions: by Russia against its citizens, by Russia against the world, and by foreign actors against Russia. Reports suggested a torrent of increased censorship, geoblocking, and network events affecting Internet freedom.
This paper is an investigation into the network changes that occurred in the weeks following this escalation of hostilities. It is the result of a rapid mobilization of researchers and activists, examining the problem from multiple perspectives. We develop GeoInspector, and conduct measurements to identify different types of geoblocking, and synthesize data from nine independent data sources to understand and describe various network changes. Immediately after the invasion, more than 45% of Russian government domains tested blocked access from countries other than Russia and Kazakhstan; conversely, 444 foreign websites, including news and educational domains, geoblocked Russian users. We find significant increases in Russian censorship, especially of news and social media. We find evidence of the use of BGP withdrawals to implement restrictions, and we quantify the use of a new domestic certificate authority. Finally, we analyze data from circumvention tools, and investigate their usage and blocking. We hope that our findings showing the rapidly shifting landscape of Internet splintering serve as a cautionary tale, and encourage research and efforts to protect Internet freedom.
@inproceedings{ramesh2023network,
title={Network Responses to Russia's Invasion of Ukraine in 2022: A Cautionary Tale for Internet Freedom},
author={Ramesh, Reethika and Raman, Ram Sundara and Virkud, Apurva and Dirksen, Alexandra and Huremagic, Armin and Fifield, David and Rodenburg, Dirk and Hynes, Rod and Madory, Doug and Ensafi, Roya},
booktitle={USENIX Security Symposium},
year={2023}
}
Advancing the Art of Censorship Data Analysis
workshop Free and Open Communications on the Internet, February 2023
Ram Sundara Raman Apurva Virkud Sarah Laplante Vinicius Fortuna Roya Ensafi
A decade of research into collecting censorship measurement data has resulted in the introduction and continued operation of several censorship measurement platforms that collect large-scale, longitudinal censorship data. However, collecting data is only part of the process of understanding Internet censorship phenomena; interpreting this data requires a large amount of effort in data analysis, including removing false positives, adding information from external sources, and exploring aggregated data. The lack of a standardized data analysis process that performs such operations leads to incomplete and inaccurate characterizations of censorship. In this work, we present a detailed breakdown of the challenges involved in analyzing censorship measurement data, supported by examples from public censorship datasets such as OONI and Censored Planet. The key challenges identified in this paper encompass finding accurate measurement metadata and accounting for unexpected causes of network interference other than Internet censorship, and we highlight findings from previous work that suffer from these challenges. To address these challenges, we design and implement an open-source data analysis pipeline for a currently active censorship measurement platform, Censored Planet, and motivate and validate each component of the pipeline by demonstrating censorship case studies that can be accurately characterized using the pipeline. We hope that our paper sheds light on the complexity of censorship data analysis and brings systematization to the process.
@article{raman2023advancing,
title={Advancing the art of censorship data analysis},
author={Raman, Ram Sundara and Virkud, Apurva and Laplante, Sarah and Fortuna, Vinicius and Ensafi, Roya},
booktitle={FOCI},
year={2023}
}
A Large-scale Investigation into Geodifferences in Mobile Apps
conference USENIX Security Symposium, August 2022
Renuka Kumar Apurva Virkud Ram Sundara Raman Atul Prakash Roya Ensafi
Recent studies on the web ecosystem have been raising alarms on the increasing geodifferences in access to Internet content and services due to Internet censorship and geoblocking. However, geodifferences in the mobile app ecosystem have received limited attention, even though apps are central to how mobile users communicate and consume Internet content. We present the first large-scale measurement study of geodifferences in the mobile app ecosystem. We design a semi-automatic, parallel measurement testbed that we use to collect 5,684 popular apps from Google Play in 26 countries. In all, we collected 117,233 apk files and 112,607 privacy policies for those apps. Our results show high amounts of geoblocking with 3,672 apps geoblocked in at least one of our countries. While our data corroborates anecdotal evidence of takedowns due to government requests, unlike common perception, we find that blocking by developers is significantly higher than takedowns in all our countries, and has the most influence on geoblocking in the mobile app ecosystem. We also find instances of developers releasing different app versions to different countries, some with weaker security settings or privacy disclosures that expose users to higher security and privacy risks. We provide recommendations for app market proprietors to address the issues discovered.
@inproceedings{kumar2022geodifferences,
title={A Large-scale Investigation into Geodifferences in Mobile Apps},
author={Renuka Kumar and Apurva Virkud and Ram {Sundara Raman} and Atul Prakash and Roya Ensafi},
booktitle={USENIX Security Symposium},
year={2022}
}
Prediction of Disease Free Survival in Laryngeal and Hypopharyngeal Cancers Using CT Perfusion and Radiomic Features: A Pilot Study
journal Tomography, February 2021
Sean Woolen Apurva Virkud Lubomir Hadjiiski Kenny Cha Heang-Ping Chan Paul Swiecicki Francis Worden Ashok Srinivasan
(1) Purpose: The objective was to evaluate CT perfusion and radiomic features for prediction of one year disease free survival in laryngeal and hypopharyngeal cancer. (2) Method and Materials: This retrospective study included pre- and post-therapy CT neck studies in 36 patients with laryngeal/hypopharyngeal cancer. Tumor contouring was performed semi-automatically by the computer and manually by two radiologists. Twenty-six radiomic features including morphological and gray-level features were extracted by an internally developed and validated computer-aided image analysis system. The five perfusion features analyzed included permeability surface area product (PS), blood flow (flow), blood volume (BV), mean transit time (MTT), and time-to-maximum (Tmax). One year persistent/recurrent disease data were obtained following the final treatment of definitive chemoradiation or after total laryngectomy. We performed a two-loop leave-one-out feature selection and linear discriminant analysis classifier with generation of receiver operating characteristic (ROC) curves and confidence intervals (CI). (3) Results: 10 patients (28%) had recurrence/persistent disease at 1 year. For prediction, the change in blood flow demonstrated a training AUC of 0.68 (CI 0.47–0.85) and testing AUC of 0.66 (CI 0.47–0.85). The best features selected were a combination of perfusion and radiomic features including blood flow and computer-estimated percent volume changes: training AUC of 0.68 (CI 0.5–0.85) and testing AUC of 0.69 (CI 0.5–0.85). The laryngoscopic percent change in volume was a poor predictor with a testing AUC of 0.4 (CI 0.16–0.57). (4) Conclusions: A combination of CT perfusion and radiomic features is a potential predictor of one-year disease free survival in laryngeal and hypopharyngeal cancer patients.
@Article{tomography7010002,
AUTHOR = {Woolen, Sean and Virkud, Apurva and Hadjiiski, Lubomir and Cha, Kenny and Chan, Heang-Ping and Swiecicki, Paul and Worden, Francis and Srinivasan, Ashok},
TITLE = {Prediction of Disease Free Survival in Laryngeal and Hypopharyngeal Cancers Using CT Perfusion and Radiomic Features: A Pilot Study},
JOURNAL = {Tomography},
VOLUME = {7},
YEAR = {2021},
NUMBER = {1},
PAGES = {10--19},
URL = {https://www.mdpi.com/2379-139X/7/1/2},
PubMedID = {33681460},
ISSN = {2379-139X},
DOI = {10.3390/tomography7010002}
}
Standardization in Quantitative Imaging: A Multicenter Comparison of Radiomic Features from Different Software Packages on Digital Reference Objects and Patient Data Sets
journal Tomography, June 2020
M. McNitt-Gray S. Napel A. Jaggi S.A. Mattonen L. Hadjiiski M. Muzi D. Goldgof Y. Balagurunathan L.A. Pierce P.E. Kinahan E.F. Jones A. Nguyen A. Virkud H.P. Chan N. Emaminejad M. Wahi-Anwar M. Daly M. Abdalah H. Yang L. Lu W. Lv A. Rahmim A. Gastounioti S. Pati S. Bakas D. Kontos B. Zhao J. Kalpathy-Cramer K. Farahani
Radiomic features are being increasingly studied for clinical applications. We aimed to assess the agreement among radiomic features when computed by several groups by using different software packages under very tightly controlled conditions, which included standardized feature definitions and common image data sets. Ten sites (9 from the NCI's Quantitative Imaging Network positron emission tomography–computed tomography working group plus one site from outside that group) participated in this project. Nine common quantitative imaging features were selected for comparison including features that describe morphology, intensity, shape, and texture. The common image data sets were: three 3D digital reference objects (DROs) and 10 patient image scans from the Lung Image Database Consortium data set using a specific lesion in each scan. Each object (DRO or lesion) was accompanied by an already-defined volume of interest, from which the features were calculated. Feature values for each object (DRO or lesion) were reported. The coefficient of variation (CV), expressed as a percentage, was calculated across software packages for each feature on each object. Thirteen sets of results were obtained for the DROs and patient data sets. Five of the 9 features showed excellent agreement with CV < 1%; 1 feature had moderate agreement (CV < 10%), and 3 features had larger variations (CV ≥ 10%) even after attempts at harmonization of feature calculations. This work highlights the value of feature definition standardization as well as the need to further clarify definitions for some features.
@Article{j.tom.2019.00031,
AUTHOR = {McNitt-Gray, M. and Napel, S. and Jaggi, A. and Mattonen, S.A. and Hadjiiski, L. and Muzi, M. and Goldgof, D. and Balagurunathan, Y. and Pierce, L.A. and Kinahan, P.E. and Jones, E.F. and Nguyen, A. and Virkud, A. and Chan, H.P. and Emaminejad, N. and Wahi-Anwar, M. and Daly, M. and Abdalah, M. and Yang, H. and Lu, L. and Lv, W. and Rahmim, A. and Gastounioti, A. and Pati, S. and Bakas, S. and Kontos, D. and Zhao, B. and Kalpathy-Cramer, J. and Farahani, K.},
TITLE = {Standardization in Quantitative Imaging: A Multicenter Comparison of Radiomic Features from Different Software Packages on Digital Reference Objects and Patient Data Sets},
JOURNAL = {Tomography},
VOLUME = {6},
YEAR = {2020},
NUMBER = {2},
PAGES = {118--128},
URL = {https://www.mdpi.com/2379-139X/6/2/118},
ISSN = {2379-139X},
DOI = {10.18383/j.tom.2019.00031}
}
This is why we don't shout "Bingo": Analyzing ATT&CK Integration in Endpoint Detection Rulesets
talk ATT&CKcon, October 2024
Apurva Virkud
In spite of early and frequent warnings not to shout “Bingo”, ATT&CK technique coverage continues to be touted by security products and is used by organizations and purchasers as the basis for evaluating security posture. In coverage-based assessments, having at least one detection rule for as many techniques as possible is prioritized over the depth or quality of detections. But why is this such a bad idea? To understand the implications of coverage-based assessments, we examine the ATT&CK technique annotations in four major endpoint detection rulesets: Carbon Black, Splunk, Elastic, and Sigma. We find that large regions of the Enterprise ATT&CK Matrix are unimplemented in all rulesets (53 Techniques), in part due to the fact that many techniques are unrealizable as endpoint detection rules. We go on to consider how consistently different rulesets apply technique annotations – even when attempting to detect the same malicious entity, products completely disagree about the appropriate ATT&CK technique annotations 51% of the time, while fully agreeing just 2.7% of the time. Put another way, “covering” one technique may not even suggest protection from the same threat across different products. These findings underscore the dangers of coverage-based ATT&CK assessments.
Censored Planet Webinar
talk Censored Planet Community Webinar, October 2021
Roya Ensafi Ram Sundara Raman Apurva Virkud Elisa Tsai Armin Huremagic
1. Introducing Censored Planet (Roya Ensafi)
2. Censored Planet measurements and Data (Ram Sundara Raman)
3. Censored Planet data analysis pipeline (Ram Sundara Raman)
4. Introducing the Censored Planet dashboard (Apurva Virkud)
5. Censored Planet's Machine Learning approach (Elisa Tsai)
6. Q&A (Armin Huremagic)
Poster: How do Endpoint Detection Products Make Use of MITRE ATT&CK?
poster IEEE Security & Privacy, May 2023
Apurva Virkud Muhammad Adil Inam Andy Riddle Gang Wang Adam Bates
MITRE ATT&CK is an open-source taxonomy of adversary tactics, techniques, and procedures based on real-world observations. Increasingly, organizations leverage ATT&CK as the basis for evaluating their security posture, while Endpoint Detection & Response (EDR) products have integrated ATT&CK into their design and marketing. However, the extent to which this integration has improved real-world security remains unclear -- Does increasing your organization's coverage of ATT&CK improve its security? In this work, we attempt to answer this question by conducting a comparative analysis of EDR products' use of the MITRE ATT&CK knowledge base. We begin by evaluating 3 ATT&CK-annotated EDR detection rule sets from major commercial providers (Carbon Black, Splunk, Elastic) to identify commonalities and underutilized regions of the ATT&CK matrix. We continue by performing a complete qualitative analysis of ATT&CK techniques to determine their feasibility as detection rules. Our initial findings indicate potential limitations of using ATT&CK coverage as an evaluation metric for EDR tools, as we identify several techniques that do not have viable endpoint detection strategies.
Standardization in Quantitative Imaging: A Multi-Center Comparison of Radiomics Feature Values Obtained by Different Software Packages on Digital Reference Objects and Patient Datasets
abstract Radiological Society of North America Meeting (RSNA), December 2019
Michael F. McNitt-Gray Sandy Napel Jayashree Kalpathy-Cramer Akshay Jaggi Nastaran Emaminejad Mark Muzi Dmitry Goldgof Hao Yang Ella F. Jones Muhammad W. Wahi-Anwar Yoganand Balagurunathan Mahmoud Abdalah Binsheng Zhao Lubomir M. Hadjiiski Apurva Virkud Heang-Ping Chan Larry A. Pierce II Keyvan Farahani
PURPOSE Radiomics features are being increasingly proposed for clinical applications such as predicting patient response to therapy or prognosis. The purpose of this work was to investigate the agreement among these features when computed by several groups utilizing different software packages with standardized feature definitions and common image datasets designed to identify possible differences. METHOD AND MATERIALS Nine sites from the NCI's Quantitative Imaging Network PET-CT working group participated in this project. Nine common quantitative imaging features were selected for comparison including features that describe morphology, intensity, shape and texture. A standard lexicon developed by the International Biomarker Standardisation Initiative (IBSI) was adopted as the feature definition reference. The common image data sets were: (a) two sets of 3D Digital Reference Objects (DROs) developed specifically for this effort (200 mm and 50 mm diameter objects): a uniform sphere, a sphere with intensity variations, and a complex shape object with uniform intensity; and (b) 10 patient image scans from the LIDC dataset using a specific lesion in each scan. To eliminate variation in feature values caused by segmentation differences, each object (DRO or lesion) was accompanied by a Volume of Interest (VOI), from which the features were calculated. Feature values for each object (DRO or lesion) were reported. The percent coefficient of variation (CV) was calculated across software packages for each feature on each object. RESULTS 10 sets of results were obtained for the DROs. Six of the nine features demonstrated excellent agreement with CV < 1%. Larger variations (CV ≥ 13%) were observed for the remaining three features. Only 2 sets of results from patient datasets were obtained so far, but similar trends were observed with the exception being kurtosis, which showed higher CV than in the DROs.
CONCLUSION By computing common radiomics features on a common set of objects using the same VOIs for each object, we have shown that while several features agree strongly across software packages, others do not. This highlights the value of feature definition standardization as well as the need to further clarify definitions for some features. CLINICAL RELEVANCE/APPLICATION Remaining disagreement in the community as to radiomic feature definitions and implementation details should be resolved before radiomic analysis becomes part of routine practice.
Exploratory Study for Identifying Predictors for Persistent Disease and Tumor Reoccurrence After Treatment of Head and Neck Cancers
abstract Radiological Society of North America Meeting (RSNA), December 2019
Sean Woolen Lubomir Hadjiiski Apurva Virkud Heang-Ping Chan Francis Worden Paul Swiecicki Ashok Srinivasan
PURPOSE Laryngeal cancer is treated with organ preservation therapy or total laryngectomy. However, little is known about which tumors will persist or recur after definitive therapy. The objective of our study is to investigate the feasibility of using radiomic and perfusion features as predictors to determine tumors that will persist or recur at 1 year after treatment. METHOD AND MATERIALS Retrospective analysis of pre- and post-therapy CT neck scans was performed in 36 patients diagnosed with laryngeal cancer in this IRB approved study. Contouring of the tumors was performed by the computer and tumor features were generated on an internally developed/validated computer-aided detection (CAD) system. Twenty-six radiomic features including morphological and gray-level features were extracted from the computer. Five perfusion features including permeability surface area product (PS), blood flow (flow), blood volume (BV), mean transit time (MTT), and time-to-maximum (Tmax) were extracted from the computer. One year persistent/recurrent disease data were obtained from the time starting after the last treatment of definitive chemoradiation or after total laryngectomy surgery. We performed a two-loop leave-one-out feature selection using linear discriminant analysis classifier for radiomic and perfusion features. Receiver operating characteristic curves and standard deviation were generated. RESULTS All 36 lesions examined were primary laryngeal cancers. Out of the 36 patients, there were 10 patients (28%) that had recurrence/persistent disease at 1 year. Percent change in volume was the best predictive feature with an area under the curve (AUC) of 0.63 +/- 0.09. Selecting two features had a testing area under the curve (AUC) of 0.69 +/- 0.09. The best features selected were a combination of radiomic and perfusion features including percent change in volume and percent change in blood perfusion.
CONCLUSION Our pilot study indicates that a combination of radiomic and perfusion features is a good predictor of tumor recurrence/persistent disease after treatment with definitive radiation or total laryngectomy. Our next step is to expand our data set with additional patients. CLINICAL RELEVANCE/APPLICATION Predicting tumors that will recur or persist after traditional treatments is an important tool for head and neck cancer management. Good predictors can help providers determine prognosis and patients decide between therapeutic options.
Security Analysis of ADAS and Automated Driving Systems
poster MI ITS Meeting, September 2019
Apurva Virkud Sam Lauzon
2019 MI ITS Student Poster Winner
Automotive sensors provide information about the status of vehicle components, environmental data, and driver and passenger activity. We are especially interested in how these sensors function within systems for assisted and autonomous driving. Current driver assistance systems integrate detection, warning, and active response to safety risks such as potential collisions. We created a database to catalogue information about automotive sensors, components, manufacturers, and suppliers to facilitate security analysis of assisted and autonomous driving systems. Over 700 sensors and 300 systems have been entered into the database. This information is being used to predict potential threats and attacks as well as propose defenses against them. Next steps involve using the database to spoof available sensors and carry out these attacks. We have collected information on several camera sensors and ranging sensors central to object and pedestrian detection systems and can determine how these sensors communicate with safety systems through fuzz testing.
A low cost bio-imaging system incorporating machine learning algorithms for automatic analysis of animal behavior
abstract International C. elegans Conference, June 2017
poster UM UROP Symposium, April 2017
Adam Iliff Apurva Virkud Shawn Xu
The overall goal of this project is to develop an imaging system with machine learning capabilities to aid in the study of how genes and neural circuits give rise to animal behavior. Our secondary mission was to create a complete imaging system that was low enough in cost for labs to use many devices in parallel, or for high school and college classrooms to be able to conduct imaging-based biological experiments. Imaging equipment has increased in quality and decreased in cost to a point in which we were able to build an ultra-low-cost imaging system for recording animal behavior which could accomplish our objectives. Specifically, the system is optimized for recording locomotion of the genetic model organism C. elegans on a near-flat translucent surface. We utilized the free programming language Python with machine learning packages to incorporate automatic analysis of the recorded videos. Several machine learning algorithms for classifying and annotating animal behavior were tested against the performance of human experts, and the top performing algorithms are implemented in the final software. This system has the potential to save researchers time and money and allow them to quickly determine how manipulating genes and neural circuits alters animal behavior. Future plans include adapting the system for other organisms and more complex behaviors.
Censored Planet September 2020 - July 2022
I was a developer for Censored Planet, a remote measurement platform for Internet censorship. I helped maintain the observatory codebase and documentation, and worked on an open-source pipeline (in collaboration with Jigsaw) for large-scale analysis of the data. I also worked on rapid response projects and a study exploring users' mental models, processes, and needs related to censorship data.
Observatory
raw data
readthedocs
github (docs)
Analysis Pipeline
github
User Study
overview
Geodifferences in Mobile Apps September 2019 - June 2021
I wrote measurement code for this project, which we used to collect Google Play metadata, APKs, and privacy policies for 5000+ apps from vantage points in 26 countries. We've made the code (with documentation) and data available for download. See our USENIX '22 paper for more on the results!
Resources
github
data