Neil Zhenqiang Gong
Associate Professor, Duke University
Verified email at duke.edu
Title | Cited by | Year
Local Model Poisoning Attacks to Byzantine-Robust Federated Learning
M Fang, X Cao, J Jia, NZ Gong
USENIX Security Symposium, 2020
1170 | 2020
Stealing Hyperparameters in Machine Learning
B Wang, NZ Gong
IEEE Symposium on Security and Privacy, 2018
619 | 2018
FLTrust: Byzantine-robust Federated Learning via Trust Bootstrapping
X Cao, M Fang, J Liu, NZ Gong
ISOC Network and Distributed System Security Symposium (NDSS), 2021
580 | 2021
MemGuard: Defending against Black-Box Membership Inference Attacks via Adversarial Examples
J Jia, A Salem, M Backes, Y Zhang, NZ Gong
ACM Conference on Computer and Communications Security (CCS), 2019
412 | 2019
On the feasibility of internet-scale author identification
A Narayanan, H Paskov, NZ Gong, J Bethencourt, E Stefanov, ECR Shin, ...
IEEE Symposium on Security and Privacy, 2012
401 | 2012
Joint link prediction and attribute inference using a social-attribute network
NZ Gong, A Talwalkar, L Mackey, L Huang, ECR Shin, E Stefanov, ER Shi, ...
ACM Transactions on Intelligent Systems and Technology (TIST) 5 (2), 27, 2014
327* | 2014
Evolution of Social-Attribute Networks: Measurements, Modeling, and Implications using Google+
NZ Gong, W Xu, L Huang, P Mittal, E Stefanov, V Sekar, D Song
ACM Internet Measurement Conference (IMC), 2012
271 | 2012
Mitigating Evasion Attacks to Deep Neural Networks via Region-based Classification
X Cao, NZ Gong
Annual Computer Security Applications Conference (ACSAC), 2017
253 | 2017
Poisoning Attacks to Graph-Based Recommender Systems
M Fang, G Yang, NZ Gong, J Liu
Annual Computer Security Applications Conference (ACSAC), 2018
237 | 2018
SybilBelief: A Semi-supervised Learning Approach for Structure-based Sybil Detection
NZ Gong, M Frank, P Mittal
IEEE Transactions on Information Forensics and Security 9 (6), 2014
226 | 2014
Backdoor Attacks to Graph Neural Networks
Z Zhang, J Jia, B Wang, NZ Gong
ACM Symposium on Access Control Models and Technologies (SACMAT), 2021
214 | 2021
AttriGuard: A Practical Defense Against Attribute Inference Attacks via Adversarial Machine Learning
J Jia, NZ Gong
USENIX Security Symposium, 2018
200 | 2018
PromptBench: Towards Evaluating the Robustness of Large Language Models on Adversarial Prompts
K Zhu, J Wang, J Zhou, Z Wang, H Chen, Y Wang, L Yang, W Ye, ...
arXiv preprint arXiv:2306.04528, 2023
194 | 2023
FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clients
Z Zhang, X Cao, J Jia, NZ Gong
ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), 2022
190 | 2022
FLCert: Provably Secure Federated Learning against Poisoning Attacks
X Cao, Z Zhang, J Jia, NZ Gong
IEEE Transactions on Information Forensics and Security, 2022
177* | 2022
You Are Who You Know and How You Behave: Attribute Inference Attacks via Users' Social Friends and Behaviors.
NZ Gong, B Liu
USENIX Security Symposium, 2016
174 | 2016
TrustLLM: Trustworthiness in Large Language Models
L Sun, Y Huang, H Wang, S Wu, Q Zhang, C Gao, Y Huang, W Lyu, ...
International Conference on Machine Learning (ICML), 2024
167* | 2024
BadEncoder: Backdoor Attacks to Pre-trained Encoders in Self-supervised Learning
J Jia, Y Liu, NZ Gong
IEEE Symposium on Security and Privacy, 2022
167 | 2022
Influence Function Based Data Poisoning Attacks to Top-N Recommender Systems
M Fang, NZ Gong, J Liu
Proceedings of The Web Conference, 2020
166 | 2020
Stealing Links from Graph Neural Networks
X He, J Jia, M Backes, NZ Gong, Y Zhang
USENIX Security Symposium, 2021
163 | 2021
Articles 1–20