Assistant Professor

Before joining UCF, I worked as a Senior Research Scientist at the Samsung Research AI Center. I received my Ph.D. and M.S. degrees from Indiana University Bloomington. My research focuses on improving the efficiency, privacy, and security of intelligent systems, with the goal of making them more accessible to the public. I am also dedicated to advancing interdisciplinary research spanning computer vision, natural language processing, and various scientific tasks.
Prospective Students: We are recruiting two Ph.D. students and one postdoc. Please email me at qian.lou@ucf.edu with your CV, research experience/interests, and English proficiency scores (TOEFL/IELTS, GRE). To apply, submit your application through the CS department and list my name as a possible advisor.
Have questions? Check out my answers to the PhD Advisor Guide.

09 / 2024 | Encrypted Data Pruning for Confidential Training of Deep Neural Networks is accepted by NeurIPS 2024.
09 / 2024 | Two papers about Jailbreaking and Fairness Backdoor are accepted by EMNLP 2024.
09 / 2024 | DataSeal: Ensuring the Verifiability of Private Computation on Encrypted Data is accepted by IEEE S&P 2025.
08 / 2024 | HEBridge: Connecting Arithmetic and Logic Operations in FV-style HE Schemes is accepted by CCS WAHC 2024.
07 / 2024 | Trinity: A General Purpose FHE Accelerator is accepted by MICRO 2024.
07 / 2024 | WBP: Training-time Backdoor Attacks through Weight Bit Poisoning is accepted by ECCV 2024.
07 / 2024 | SSL-Cleanse: Trojan Detection and Mitigation in Self-Supervised Learning is accepted by ECCV 2024.
06 / 2024 | BoostCom: Towards Efficient Universal Fully Homomorphic Encryption by Boosting the Word-wise Comparisons is accepted by PACT 2024.
05 / 2024 | Congratulations to Jiaqi Xue and Yancheng Zhang, who started their internships at Samsung Research America.
05 / 2024 | CR-UTP: Certified Robustness against Universal Text Perturbations is accepted by ACL 2024.
03 / 2024 | TrojFSP: Trojan Insertion in Few-shot Prompt Tuning is accepted by NAACL 2024 (Oral in the main conference).
09 / 2023 | TrojLLM: A Black-box Trojan Prompt Attack on Large Language Models is accepted by NeurIPS 2023.
02 / 2023 | TrojViT: Trojan Insertion in Vision Transformers is accepted by CVPR 2023.
02 / 2023 | Primer: Privacy-preserving Transformer on Encrypted Data is accepted by DAC 2023.
01 / 2023 | TrojText: Test-time Invisible Textual Trojan Insertion is accepted by ICLR 2023.
10 / 2022 | Weighted value decomposition on language model is accepted by EMNLP 2022.
03 / 2022 | LITE-MDETR is accepted by CVPR 2022.
02 / 2022 | MATCHA is accepted by DAC 2022.
01 / 2022 | Language Model Compression is accepted by ICLR 2022.
01 / 2022 | DictFormer is accepted by ICLR 2022.
08 / 2021 | CryptoGRU is accepted by EMNLP 2021.
05 / 2021 | HEMET is accepted by ICML 2021.
05 / 2021 | Qian received a Luddy Outstanding Research Award.
09 / 2020 | Three papers were accepted by NeurIPS 2020.
NSF: Panelist
ISCA: Program Committee
DAC: Program Committee
ISCA 2023: Local Area Chair
Tiny and Fair ML Design Contest Organizer at ESWEEK 2023
IEEE Transactions on Information Forensics and Security: Reviewer
AI for Content Creation (AI4CC) Workshop @ CVPR 2024: Area Chair
AAAI: Senior Program Committee
NeurIPS: Reviewer
ICML: Reviewer
ACL/NAACL: Reviewer
ICLR: Reviewer
CVPR: Reviewer
ECCV: Reviewer
2022 Fall: CDA 5106 Advanced Computer Architecture
2023 Spring: CAP 6614 Current Topics in Machine Learning
2023 Fall: CDA 5106 Advanced Computer Architecture
2024 Spring: CAP 6614 Current Topics in Machine Learning
[S&P 25] DataSeal: Ensuring the Verifiability of Private Computation on Encrypted Data
[NeurIPS 24] HEPrune: Fast Private Training of Deep Neural Networks With Encrypted Data Pruning
[EMNLP 24] BadFair: Backdoored Fairness Attacks with Group-conditioned Triggers
[EMNLP 24] Jailbreaking LLMs with Arabic Transliteration and Arabizi
[MICRO 24] Trinity: A General Purpose FHE Accelerator
[CCS-WAHC 24] HEBridge: Connecting Arithmetic and Logic Operations in FV-style HE Schemes
[CCS-LAMPS 24] TrojFair: Trojan Fairness Attacks
[CCS-LAMPS 24] CryptoTrain: Fast Secure Training on Encrypted Data
[ACL 24] CR-UTP: Certified Robustness against Universal Text Perturbations on Large Language Models
[ISLPED 24] OFHE: An Electro-Optical Accelerator for Discretized TFHE
[PACT 24] BoostCom: Towards Efficient Universal Fully Homomorphic Encryption by Boosting the Word-wise Comparisons
[ECCV 24] WBP: Training-time Backdoor Attacks through Weight Bit Poisoning
[ECCV 24] SSL-Cleanse: Trojan Detection and Mitigation in Self-Supervised Learning
[Mathematics 24] Unveiling Fall Triggers in Older Adults: A Machine Learning Graphical Model Analysis
[NAACL 24] TrojFSP: Trojan Insertion in Few-shot Prompt Tuning
[NeurIPS 23] TrojLLM: A Black-box Trojan Prompt Attack on Large Language Models
[CVPR 23] TrojViT: Trojan Insertion in Vision Transformers
[ECAI 23] TrojBits: A Hardware Aware Inference-Time Attack on Transformer-Based Language Models |
[ISQED 23] PriML: An Electro-Optical Accelerator for Private Machine Learning on Encrypted Data |
[ADMA 23] Cryptography-Inspired Federated Learning for Generative Adversarial Networks and Meta Learning |
[ICLR 23] TrojText: Test-time Invisible Textual Trojan Insertion
[DAC 23] Primer: Privacy-Preserving Transformer on Encrypted Data |
[NANOARCH 22] CryptoLight: An Electro-Optical Accelerator for Fully Homomorphic Encryption |
[CVPR 22] Lite-MDETR: A Lightweight Multi-Modal Detector |
[ICLR 22] DictFormer: Tiny Transformer with Shared Dictionary |
[ICLR 22] Language model compression with weighted low-rank factorization |
[EMNLP 22] Numerical Optimizations for Weighted Low-rank Estimation on Language Model |
[ICLR 21] SAFENet: A Secure, Accurate and Fast Neural Network Inference |
[IJCAI 21] Automatic Mixed-Precision Quantization Search of BERT |
[DAC 22] MATCHA: A Fast and Energy-Efficient Accelerator for Fully Homomorphic Encryption over the Torus |
[DATE 22] coxHE: A Software-Hardware Co-Design Framework for FPGA Acceleration of Homomorphic Computation
[ICML 21] HEMET: A Homomorphic-Encryption-Friendly Privacy-Preserving Mobile Neural Network Architecture |
[EMNLP 21] CryptoGRU: Low Latency Privacy-Preserving Text Analysis With GRU |
[NeurIPS 20] AutoPrivacy: Automated Layer-wise Parameter Selection for Secure Neural Network Inference |
[NeurIPS 20] Falcon: Fast Spectral Inference on Encrypted Data |
[NeurIPS 20] Glyph: Fast and Accurately Training Deep Neural Networks on Encrypted Data
[PACT 20] Helix: Algorithm/Architecture Co-design for Accelerating Nanopore Genome Base-calling |
[ICLR 20] AutoQ: Automated Kernel-Wise Neural Network Quantization |
[DATE 20] LightBulb: A Photonic-Nonvolatile-Memory-based Accelerator for Binarized Convolutional Neural Networks |
[ASP-DAC 20] MindReading: An Ultra-Low-Power Photonic Accelerator for EEG-based Human Intention Recognition
[DATE 19] HolyLight: A Nanophotonic Accelerator for Deep Learning in Data Centers
[NeurIPS 19] SHE: A Fast and Accurate Deep Neural Network for Encrypted Data
[ICCAD 18] 3DICT: A Reliable and QoS Capable Mobile Process-In-Memory Architecture for Lookup-based CNNs in 3D XPoint ReRAMs
[CAL 18] BRAWL: A Spintronics-Based Portable Basecalling-in-Memory Architecture for Nanopore Genome Sequencing |
[NVMSA 17] Runtime and Reconfiguration Dual-Aware Placement for SRAM-NVM Hybrid FPGAs
[US20240080423A1] Fusion techniques for combining most significant bits and least significant bits of image data in image processing or other applications |
[US20230177338A1] Small and fast transformer model for multi-modal or other tasks |
[US20230104491A1] Small and fast transformer with shared dictionary |
[US20230106213A1] Machine learning model compression using weighted low-rank factorization |
[US20220121947A1] Method and system for secure, accurate and fast neural network inference |