
Seunghyeon Seo (서승현)

Integrated M.S./Ph.D. Program, Interdisciplinary Program in Artificial Intelligence, Seoul National University (Mar. 2021 ~ )
B.S., Regional Information Major / Data Science for the Humanities (Combined Minor), Department of Agricultural Economics and Rural Development, Seoul National University (Feb. 2021)


Tel: (+82)31-888-9579
e-mail: zzzlssh@snu.ac.kr
Research Page | Google Scholar | LinkedIn | Github

Research Interests

Deep Learning, Computer Vision, Human Pose Estimation, Neural Rendering

Work Experience

  • Jun. 2020 ~ Nov. 2020:
    Research Assistant at ThinkforBL Consulting Group, Korea.
  • Sep. 2019 ~ Feb. 2020:
    Intern at the Committee on World Food Security, Food and Agriculture Organization of the United Nations (FAO), Italy.

Education

  • Mar. 2021 ~ Present:
    M.S./Ph.D. Student, Interdisciplinary Program in Artificial Intelligence, College of Engineering, Seoul National University, Korea.
  • Feb. 2021:
    B.S., Regional Information Major / Data Science for the Humanities (Combined Minor), Seoul National University, Korea.
  • Jan. 2019 ~ Jun. 2019:
    Exchange Student, Economics / International Relations, Sciences Po (Paris Campus), France.

Publications

International Conferences


Under Review
  • Seunghyeon Seo, Yeonjin Chang, Jayeon Yoo, Seungwoo Lee, Hojun Lee, Nojun Kwak, "HourglassNeRF: Casting an Hourglass as a Bundle of Rays for Few-shot Neural Rendering".
  • Donghoon Han*, Seunghyeon Seo*, Eunhwan Park, SeongUk Nam, Nojun Kwak, "Unleash the Potential of CLIP for Video Highlight Detection", * indicates equal contribution.

Projects

  • Artificial intelligence research on cross-modal dialogue modeling for one-on-one multi-modal interactions, May 2022 ~ Jul. 2023
  • Development of real-time multi-camera object tracking and identification technology, Jun. 2021 ~ Dec. 2021
  • Development of multimodal sensor-based intelligent systems for outdoor surveillance robots, Jan. 2021 ~ Aug. 2021