Hello! I'm Ching Yang (Austin) Huang, a dedicated and passionate Master's student specializing in Robotics at Johns Hopkins University. With a robust foundation in engineering principles and a keen interest in the cutting-edge fields of Automation, Mechatronics, Robotics, Virtual Reality (VR), and Computer Vision, I am on a mission to contribute to the advancement of these technologies.
| Category | Skills |
| --- | --- |
| Programming Languages | |
| ML, VR, & Robotics | |
| Development Tools & DevOps | |
| Design, Simulation & CAD Tools | |
| Version Control Systems | |
| Documentation & Project Management | |
- Named a finalist for the Best Project Award presented by the Engineering Research Center for Computer-Integrated Surgical Systems and Technology (CISST ERC), led by Dr. Russell H. Taylor, widely regarded as the father of medical robotics.
- VRPelviSim provides an effective, risk-free virtual reality environment that emulates real operating rooms, transforming clinical data collection and surgical training in fluoroscopic surgery. Features include user-friendly control panels, intuitive and precise virtual C-arm operation, scalable servers for widespread access, anonymized patient models derived from real CT data, and realistic digitally reconstructed radiograph (DRR) images generated with DeepDRR (a minimal server sketch follows below the figures).
- This project builds upon the groundbreaking research by Killeen et al., detailed in their publication *Stand in surgeon's shoes: virtual reality cross-training to enhance teamwork in surgery*.
- Link to poster
- Tools utilized: Unity, MRTK, C#, Python, ZeroMQ, Docker, AWS/Azure, Notion, Jira.
- Affiliated skills: VR, User-centric development, Kinematics, Network and Server Deployment, Project Management, Documentation, and Presentation Skills.
Figures: VRPelviSim team with mentors; VRPelviSim environment. Demo video link.
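The bullets above mention a server backend that renders DeepDRR images for VR clients over ZeroMQ. As a rough illustration of that request-reply pattern only, here is a minimal sketch; the port, JSON message shape, and `render_drr` helper are hypothetical stand-ins, not the actual VRPelviSim protocol.

```python
# Minimal ZeroMQ request-reply sketch: a Python backend that a VR client could
# query for simulated DRR frames. The port, JSON message shape, and render_drr()
# helper are hypothetical placeholders, not the real VRPelviSim protocol.
import json

import numpy as np
import zmq


def render_drr(c_arm_pose):
    """Placeholder for a DeepDRR-style renderer; returns a dummy image."""
    return np.zeros((256, 256), dtype=np.float32)


def main():
    context = zmq.Context()
    socket = context.socket(zmq.REP)
    socket.bind("tcp://*:5555")  # VR clients connect to this port

    while True:
        # The client sends the virtual C-arm pose as JSON, e.g. {"alpha": 30, "beta": 0}.
        pose = json.loads(socket.recv().decode("utf-8"))
        image = render_drr(pose)
        # Reply with raw image bytes; shape and dtype would be agreed separately.
        socket.send(image.tobytes())


if __name__ == "__main__":
    main()
```

The real system handles multiple clients and richer state; this sketch only shows the ZeroMQ plumbing.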
- This project showcases complex robotic control algorithms, including Inverse Kinematics, Jacobian-Transpose control, and Resolved-Rate Control, for executing precise pick-and-place tasks (a resolved-rate sketch follows below the figures).
- By fine-tuning the algorithms' time steps and gains, efficiency and stability were significantly enhanced.
- Challenges like table collisions, singularity issues, and workspace limitations were effectively addressed.
- A highlight of the project was programming the UR5 robot arm to artistically draw the Johns Hopkins University logo, demonstrating both technical mastery and creative application of robotics.
- Link to the report
- Tools utilized: ROS, MATLAB, Ubuntu, Linux.
- Affiliated skills: Kinematics, Control.
Figures: JHU logo drawing demo; JHU logo drawing trajectory in simulation.
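For readers unfamiliar with resolved-rate control, below is a minimal NumPy sketch of the update rule q ← q + dt · J⁺(q) · K · (x_target − x(q)) on a planar two-link arm. The coursework itself was done in MATLAB/ROS on a UR5; the link lengths, gain, and time step here are arbitrary illustration values.

```python
# Resolved-rate control sketch on a planar two-link arm (illustrative stand-in
# for the UR5 coursework, which used MATLAB/ROS). Link lengths, gain, and time
# step are arbitrary example values.
import numpy as np

L1, L2 = 1.0, 0.8  # hypothetical link lengths (m)


def fk(q):
    """Forward kinematics: joint angles -> end-effector position (x, y)."""
    return np.array([
        L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
        L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1]),
    ])


def jacobian(q):
    """Analytic Jacobian of fk with respect to the joint angles."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([
        [-L1 * s1 - L2 * s12, -L2 * s12],
        [L1 * c1 + L2 * c12, L2 * c12],
    ])


def resolved_rate(q0, target, gain=1.0, dt=0.01, tol=1e-4, max_iters=2000):
    """Drive the end effector toward `target` via q_dot = J^+ (gain * error)."""
    q = np.asarray(q0, dtype=float)
    for _ in range(max_iters):
        error = target - fk(q)
        if np.linalg.norm(error) < tol:
            break
        # Truncating small singular values (rcond) keeps the update bounded
        # near singularities, a simple form of damping.
        q = q + dt * np.linalg.pinv(jacobian(q), rcond=1e-3) @ (gain * error)
    return q


print(resolved_rate(q0=[0.1, 0.5], target=np.array([1.0, 1.0])))
```

The pseudo-inverse damping via `rcond` is one simple way to keep joint velocities bounded near singularities, echoing the singularity issues noted above.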
- This project advanced the domain of stereotactic navigation systems by developing and implementing mathematical models and algorithms for precise calibration, distortion correction, and tracking.
- Techniques included 3D Cartesian point rotations, frame transformations, 3D point cloud registration, pivot calibration (a minimal sketch follows this list), and advanced interpolation for distortion correction using Bernstein polynomials.
- The project emphasized correcting distortions within the stereotactic navigation system, enabling accurate determination of the instrument tip location relative to the CT frame and significantly improving the system's accuracy and reliability.
- Validation methods involved detailed statistical analysis and visualization of results, demonstrating the effectiveness of the developed approaches in handling real-world navigation and tracking challenges.
- Link to the report
- Tools utilized: Python.
- Affiliated skills: Interpolation, Calibration, Distortion Correction, Bezier Geometry, Data Processing, Validation.
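One of the calibration steps named above, pivot calibration, reduces to a small least-squares problem: each tracked pose (R_k, p_k) of a tool pivoting about a fixed point satisfies R_k t_tip + p_k = p_pivot. The sketch below is a generic NumPy illustration of that formulation with a synthetic-data check, not the report's exact implementation.

```python
# Pivot calibration sketch: given tracked poses (R_k, p_k) of a tool pivoting
# about a fixed point, every pose satisfies R_k @ t_tip + p_k = p_pivot.
# Stacking R_k @ t_tip - p_pivot = -p_k over all poses gives a linear
# least-squares problem for the tip offset t_tip (tool frame) and the pivot
# point p_pivot (tracker frame). Generic illustration, not the report's code.
import numpy as np


def pivot_calibration(rotations, translations):
    n = len(rotations)
    A = np.zeros((3 * n, 6))
    b = np.zeros(3 * n)
    for k, (R, p) in enumerate(zip(rotations, translations)):
        A[3 * k:3 * k + 3, 0:3] = R
        A[3 * k:3 * k + 3, 3:6] = -np.eye(3)
        b[3 * k:3 * k + 3] = -p
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:3], x[3:]  # t_tip, p_pivot


# Synthetic check: random poses that all place a known tip at the same pivot.
rng = np.random.default_rng(0)
true_tip = np.array([0.0, 0.0, 0.1])
true_pivot = np.array([0.2, -0.1, 0.3])
Rs, ps = [], []
for _ in range(20):
    axis = rng.normal(size=3)
    axis /= np.linalg.norm(axis)
    angle = rng.uniform(0.0, 0.5)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    R = np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)  # Rodrigues
    Rs.append(R)
    ps.append(true_pivot - R @ true_tip)

tip, pivot = pivot_calibration(Rs, ps)
print(np.allclose(tip, true_tip), np.allclose(pivot, true_pivot))
```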
- The project leverages HoloLens 2's Research Mode to enhance object awareness beyond the user's field of view. It features ArUco marker detection using the environmental cameras for real-time object tracking and employs a radar mini-map UI for intuitive navigation (a marker-detection sketch follows below the figures).
- Link to the repo
- Tools utilized: Unity, C#, ArUco.
- Affiliated skills: VR, UI design.
Figures: Perception enhancement UI; perception enhancement demo.
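The marker-detection step itself is standard. Below is a rough Python/OpenCV analogue of what the Unity/C# pipeline does with the HoloLens environment cameras; it assumes the OpenCV ≥ 4.7 ArUco API, and the synthetic test frame and crude bearing calculation are illustrative only.

```python
# Rough Python/OpenCV analogue of the marker-detection step; the actual project
# runs in Unity/C# on the HoloLens 2 research-mode cameras. Assumes the
# OpenCV >= 4.7 ArUco API (ArucoDetector / generateImageMarker).
import cv2
import numpy as np

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

# Build a synthetic test frame: one marker (id 7) on a white background.
marker = cv2.aruco.generateImageMarker(dictionary, 7, 200)
frame = cv2.copyMakeBorder(marker, 50, 50, 50, 50, cv2.BORDER_CONSTANT, value=255)

corners, ids, _ = detector.detectMarkers(frame)
if ids is not None:
    h, w = frame.shape
    for marker_corners, marker_id in zip(corners, ids.flatten()):
        center = marker_corners[0].mean(axis=0)  # pixel centroid of the marker
        # Horizontal offset from the image center as a crude bearing for a
        # radar-style mini-map entry (the real system uses full camera poses).
        bearing = np.degrees(np.arctan2(center[0] - w / 2, w / 2))
        print(f"marker {marker_id}: bearing ~ {bearing:.1f} deg")
```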
- The project explores the enhancement of semantic segmentation for autonomous driving through transfer learning.
- Utilizing the DeepLabv3+ model, pre-trained on Cityscapes and fine-tuned on the JHUStreet dataset, the project aimed to improve segmentation accuracy in diverse environments (a fine-tuning sketch follows below the figures).
- Significant improvements were observed, particularly in detecting cars and traffic signs in urban settings near the Johns Hopkins Homewood campus.
- Link to the repo
- Tools utilized: Python.
- Affiliated skills: PyTorch, Data Processing.
Figures: Segmentation mask; inference examples across epochs.
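The general transfer-learning recipe (keep the pre-trained backbone, swap the classification head, fine-tune on the new data) can be sketched as follows. Note the hedges: torchvision ships DeepLabv3 (not the "+" variant) with COCO-trained weights rather than Cityscapes, and `JHUStreetDataset` is a hypothetical Dataset class, so this only approximates the project's actual setup.

```python
# Transfer-learning sketch: reuse a pre-trained segmentation backbone and swap
# the classification head for a new street-scene label set. Hedges: torchvision
# ships DeepLabv3 (not the "+" variant) with COCO-trained weights rather than
# Cityscapes, and JHUStreetDataset is a hypothetical Dataset class.
import torch
import torch.nn as nn
from torchvision.models.segmentation import (DeepLabV3_ResNet50_Weights,
                                              deeplabv3_resnet50)

NUM_CLASSES = 19  # Cityscapes-style label set

model = deeplabv3_resnet50(weights=DeepLabV3_ResNet50_Weights.DEFAULT)
# Replace the final 1x1 conv of both heads; the backbone keeps its pre-trained
# features, which is where the transfer-learning benefit comes from.
model.classifier[4] = nn.Conv2d(256, NUM_CLASSES, kernel_size=1)
model.aux_classifier[4] = nn.Conv2d(256, NUM_CLASSES, kernel_size=1)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss(ignore_index=255)


def train_one_epoch(loader):
    """One fine-tuning pass; `loader` would wrap the (hypothetical) JHUStreetDataset."""
    model.train()
    for images, masks in loader:  # images: (N, 3, H, W), masks: (N, H, W) long
        outputs = model(images)
        loss = (criterion(outputs["out"], masks)
                + 0.4 * criterion(outputs["aux"], masks))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```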
The examples above showcase a range of my projects. For those interested in exploring more, such as the Iterative Closest Point Algorithm for 3D Surface Matching, Sensor-Based Robot Arm Control, Wireless Climate-Controlled Lighting and Blinds System, Video Content 3D Model Reconstruction by Structure from Motion, Cybersecurity Strengthening with Deep Learning, a Decision Tree implementation, a self-implemented RANSAC package, and other works, please feel free to reach out directly at [email protected] for further details and discussion.
Please note that all projects in this section are subject to non-disclosure agreements (NDAs), so further details or documentation cannot be shared.
- During my internship at Axle Informatics, I engaged in a pioneering project at the National Institutes of Health (NIH) aimed at revolutionizing laboratory automation through the development of the Intelligent Inventory Management System (IIMS) and High-Density Storage (HDS).
- I focused on creating an efficient, automated storage solution, HDS, to address the challenges of manual inventory management in laboratory settings.
- My contributions spanned hardware design, including space-efficient, user-friendly, and robust storage solutions, and the integration of automation to enhance operational efficiency and accuracy.
- The project's success was marked by significant advancements in automating pick-and-place tasks and inventory management in laboratory workflows, demonstrating the potential to drastically reduce human error and accelerate drug discovery.
- Tools utilized: Creo, Femap, Python, C++, Arduino.
- Affiliated skills: Hardware, Mechatronics, Computer Vision, Control.
- As a Research Assistant at the Laboratory for Computational Sensing and Robotics (LCSR) at Johns Hopkins University, under the guidance of Dr. Russell Taylor and in cooperation with other professors and researchers, I significantly reduced the malaria vaccine production system's setup time by integrating deep-learning-based computer vision into the system.
- My work led to a substantial enhancement in ROI detection accuracy, reaching 96% using the YOLOv5 model.
- I implemented Class Activation Mapping in PyTorch by adding a Global Average Pooling layer, enabling us to discern mosquito attributes more effectively and improving vaccine quality with a 95% accuracy rate in predicting vaccine efficacy (a generic CAM sketch follows this list).
- I streamlined the development process by automating data preparation with Python, centralizing configuration through JSON, enforcing version control with Git, and ensuring comprehensive codebase documentation, thereby boosting overall task efficiency for collaboration.
- Tools utilized: Python, PyTorch, YOLO, Git, GitLab.
- Affiliated skills: Deep Learning, Computer Vision, Automation, Data Processing.
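For context, the Class Activation Mapping pattern mentioned above (global average pooling over the final convolutional feature maps, followed by a linear classifier whose weights turn those maps into a per-class heatmap) can be sketched as below. This is a generic illustration with a torchvision ResNet-18 backbone and a placeholder two-class head, not the lab's actual model.

```python
# Class Activation Mapping (CAM) sketch: a conv backbone, global average pooling,
# and a linear classifier whose weights re-weight the final feature maps into a
# per-class heatmap. Generic illustration with a torchvision ResNet-18 backbone
# and a placeholder two-class head, not the lab's actual model.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import ResNet18_Weights, resnet18


class CAMNet(nn.Module):
    def __init__(self, num_classes):
        super().__init__()
        backbone = resnet18(weights=ResNet18_Weights.DEFAULT)
        self.features = nn.Sequential(*list(backbone.children())[:-2])  # conv maps
        self.gap = nn.AdaptiveAvgPool2d(1)                              # global average pooling
        self.fc = nn.Linear(512, num_classes)

    def forward(self, x):
        fmap = self.features(x)                      # (N, 512, h, w)
        logits = self.fc(self.gap(fmap).flatten(1))  # (N, num_classes)
        return logits, fmap

    def cam(self, fmap, class_idx):
        # Weight each feature map by the classifier weight for the chosen class.
        weights = self.fc.weight[class_idx]                   # (512,)
        heatmap = torch.einsum("c,nchw->nhw", weights, fmap)
        return F.relu(heatmap)                                # keep positive evidence


model = CAMNet(num_classes=2)  # placeholder binary attribute head
logits, fmap = model(torch.randn(1, 3, 224, 224))
heatmap = model.cam(fmap, class_idx=logits.argmax(dim=1).item())
print(heatmap.shape)           # (1, 7, 7); upsample to overlay on the input image
```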
- Clash Bots is a highly acclaimed Chinese TV series, aired in 2018, that attracted over one billion views worldwide, featuring international heavyweight (110 kg) robots competing in a unique format that mixed battles, rumbles, and demolition tasks.
- I led a highly skilled team of four, applying agile project management techniques to streamline the robot's design, construction, and testing, and delivered the project within a tight eight-month timeline.
- I designed the impact-resistant mechanism, implemented RF control systems, executed welding and wiring, and assembled electro-mechanical components such as motors, contactors, and torque limiters to enhance combat performance.
- Tools utilized: Creo.
- Affiliated skills: Welding, Electronics, Mechanism Design, Project Management.
Clash Bots demo
What sets me apart is not just my technical expertise but also my passion for learning and innovation. My experiences at NIH and LCSR, alongside numerous projects, have honed my ability to learn quickly and fostered a strong capacity for collaboration. These experiences have equipped me with a problem-solving mindset and the ability to work effectively under pressure.
I am keen on contributing to impactful projects, especially in areas like advanced robotics, automation technologies, and computer vision. I'm ready to leverage my skills to make a significant contribution.
For collaborations or opportunities, feel free to reach out via email at [email protected] or my LinkedIn. Let's explore how we can drive innovation forward together.