We are veteran headhunters.
Job Information
Position: Machine Learning Engineer (applications close 4/18)
2026-04-08 [Position] Realtime Perception/Streaming Engineer (Machine Learning)
* 3+ years of experience
[Responsibilities]
- Build multi-camera video streaming systems (GStreamer/DeepStream-based pipeline design and development; multi-camera sync and timestamp alignment)
- Build real-time AI inference pipelines (serving 2D/3D vision models such as detection, segmentation, and depth; designing multimodal model inference systems)
- Develop Edge AI systems: GPU optimization on Jetson Orin and similar platforms (TensorRT, CUDA); edge ↔ cloud inference architecture design
- Optimize video latency: end-to-end latency profiling and bottleneck removal; zero-copy, batching, pipeline parallelization
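As a rough illustration of the multi-camera sync and timestamp-alignment work described above, here is a minimal, hypothetical sketch (pure Python, no GStreamer; the function name and tolerance are assumptions, not part of the posting): frames from two cameras are paired by nearest capture timestamp within a tolerance.

```python
from bisect import bisect_left

def align_frames(ts_a, ts_b, tol=0.010):
    """Pair each timestamp in ts_a with the nearest timestamp in ts_b
    (both sorted, in seconds); drop pairs farther apart than tol.
    Hypothetical sketch of multi-camera timestamp alignment."""
    pairs = []
    for t in ts_a:
        i = bisect_left(ts_b, t)
        # Candidates are the neighbors around the insertion point.
        candidates = ts_b[max(i - 1, 0):i] + ts_b[i:i + 1]
        if not candidates:
            continue
        best = min(candidates, key=lambda c: abs(c - t))
        if abs(best - t) <= tol:
            pairs.append((t, best))
    return pairs

# Two 30 fps cameras; camera B's clock runs 4 ms behind camera A's.
cam_a = [k / 30 for k in range(5)]
cam_b = [k / 30 + 0.004 for k in range(5)]
print(align_frames(cam_a, cam_b))
```

In a real GStreamer pipeline the timestamps would come from buffer PTS values, and the tolerance would typically be a fraction of the frame interval.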
[Qualifications]
- Experience developing video streaming and real-time systems
- Hands-on experience in at least one of: GStreamer- or DeepStream-based pipeline development; multi-camera streaming and synchronization; video encoding/decoding (RTSP, RTP, etc.)
- AI model serving experience (PyTorch / TensorRT / ONNX Runtime inference; model latency optimization)
- Experience analyzing and fixing GPU / memory / IO bottlenecks; real-time processing experience
- Development experience in Python or C++
[Preferred]
- NVIDIA Jetson (Orin, Xavier, etc.) development experience
- DeepStream + TensorRT optimization experience
- Experience building ROS2-based perception pipelines
- Experience serving multimodal models (VLMs, etc.)
- Experience with distributed inference / edge-cloud hybrid architectures
- Experience building high-FPS / low-latency video processing systems
[Location] Seocho-gu, Seoul (headquarters)
[Compensation] Negotiable (upward adjustment from current salary will be considered)