KiYoon Yoo (유기윤)
Publications
- International Conferences
[1] KiYoon Yoo, Nojun Kwak, "Backdoor Attacks in Federated Learning by Poisoned Word Embeddings", Workshop on Federated Learning for Natural Language Processing (FL4NLP) at ACL 2022, May 2022.
[2] KiYoon Yoo, Jangho Kim, Jiho Jang, Nojun Kwak, "Detection of Word Adversarial Examples in Text Classification: Benchmark and Baseline via Robust Density Estimation", Findings of ACL 2022.
[3] Jangho Kim, KiYoon Yoo, Nojun Kwak, "Position-based Scaled Gradient for Model Quantization and Sparse Training", Thirty-fourth Conference on Neural Information Processing Systems (NeurIPS 2020), Dec. 2020, Online.
- International Journals
[1] Hojun Lee, Donghwan Yun, Jayeon Yoo, KiYoon Yoo, Yong Chul Kim, Dong Ki Kim, Kook-Hwan Oh, Kwon Wook Joo, Yon Su Kim, Nojun Kwak, Seung Seok Han, "Deep Learning Model for Real-Time Prediction of Intradialytic Hypotension", Clinical Journal of the American Society of Nephrology, Accepted.
- Open Archive
[1] KiYoon Yoo and Nojun Kwak, "Backdoor Attacks in Federated Learning by Rare Embeddings and Gradient Ensembling", arXiv, April 2022. (Extended)
[2] SeongUk Park, KiYoon Yoo and Nojun Kwak, "On the Orthogonality of Knowledge Distillation with Other Techniques: From an Ensemble Perspective", arXiv, Sep. 2020.
[3] Sang-ho Lee, KiYoon Yoo, Nojun Kwak, "Asynchronous Edge Learning using Cloned Knowledge Distillation", arXiv, Oct. 2020.