Latest revision as of 16:34, 2 July 2023

Chaerin Kong (공채린)

Research Interest: Generative Models, Vision-Language, Data-efficient Learning

Work Experience
- Research Intern at Twelve Labs, 2023.06 ~
- Research Intern at NAVER, Language and Vision, 2022.04 ~ 2022.09
- Research Intern at AIIS, 2020.07 ~ 2020.08
- Intern at Bemypet, 2020.01 ~ 2020.02
Publication
- International Conferences
- Seungwoo Lee, Chaerin Kong, Donghyeon Jeon, Nojun Kwak, "AADiff: Audio-Aligned Video Synthesis with Text-to-Image Diffusion" (https://arxiv.org/abs/2305.04001), CVPR 2023 Workshop on AI for Content Creation.
- Chaerin Kong, Nojun Kwak, "Analyzing Multimodal Objectives Through the Lens of Generative Diffusion Guidance" (https://reyllama.github.io/MGG), ICLR 2023 Workshop on Multimodal Representation Learning (Spotlight).
- Jiho Jang*, Chaerin Kong*, Donghyeon Jeon, Seonhoon Kim, Nojun Kwak, "Unifying Vision-Language Representation Space with Single-tower Transformer" (https://reyllama.github.io/OneR/), AAAI 2023 (Oral).
- Chaerin Kong, Donghyeon Jeon, Ohjoon Kwon, Nojun Kwak, "Leveraging Off-the-shelf Diffusion Model for Multi-attribute Fashion Image Manipulation" (https://reyllama.github.io/Fashion-Diffusion/), WACV 2023.
- Jiho Jang, Seonhoon Kim, KiYoon Yoo, Chaerin Kong, Jangho Kim, Nojun Kwak, "Self-Distilled Self-Supervised Representation Learning", WACV 2023.
- Yeji Song, Chaerin Kong, Seo Young Lee, Joonseok Lee, Nojun Kwak, "Towards Efficient Neural Scene Graphs by Learning Consistency Fields", BMVC 2022.
- Chaerin Kong, Jeesoo Kim, Donghoon Han, Nojun Kwak, "Few-shot Image Generation with Mixup-based Distance Learning", ECCV 2022.
- Under Review
- Chaerin Kong, Nojun Kwak, "Conservative Generator, Progressive Discriminator: Coordination of Adversaries in Incremental Few-shot Image Synthesis", June 2022.