14 Years at Ubiquiti: A Legacy of Scaling

At Ubiquiti, my journey has been about more than technical delivery; it's about building the human and technical infrastructure that powers a global networking leader.

🚀 Organizational Scaling (0 to 1)

I have built and scaled multidisciplinary organizations from scratch to mature, high-performing entities:

  • BSP Team: Scaled from 0 to 26+ members.
  • Cloud Services: Built the Taipei cloud organization from 0 to 30 engineers.
  • Mobile/Apps: Established a unified team of 20+ developers.
  • AI Core Development Team: Built from 0 to 15, spanning data engineering, model training, AIOps, AIQA, and edge device delivery, forming a complete AI closed-loop force.
  • Factory Automation Software Team: Built from 0 to 10, focused on intelligent manufacturing software that bridges hardware mass production workflows with digital management, improving yield rates and production efficiency.
  • Total Management: Currently overseeing 250+ professionals across Firmware, Cloud, AI, and QA.

πŸ•ΈοΈ Fabric/Matrix: Cross-Functional Synergy

I believe the true strength of an organization lies not in siloed excellence, but in the multiplier effect of cross-team collaboration. I deliberately cultivate a Fabric-style matrix culture, where AI, Firmware, Cloud, QA, and Mobile App teams interweave like the warp and weft of fabric, reinforcing each other precisely when it matters most and actively preventing the Silo Effect:

  • Cross-team knowledge flow: Regular cross-functional Tech Talks and Design Reviews ensure insights flow horizontally across disciplines.
  • Shared foundational capabilities: AI inference infrastructure, cloud data platforms, and QA automation frameworks are offered as shared services to all teams, eliminating redundant work.
  • Proactively eliminating friction: Regularly mapping cross-team dependencies and integration touchpoints to identify collaboration opportunities and maximize overall effectiveness.

🎯 Coach Leadership: Empower, Don't Command

As an ICF Associate Certified Coach (ACC), I don't aim to be the smartest person in the room; I aim to make every team member and every team the autonomous owner of their own domain:

  • Trust and delegation: Giving engineers true ownership and accountability over their work without micromanagement.
  • Coaching conversations: Asking questions instead of giving answers, guiding team members to discover solutions themselves and build decision-making confidence.
  • Minimizing over-intervention: Resisting the urge to step in unless a critical risk demands it, allowing teams to learn through execution and grow through challenge.
  • The highest achievement is enabling others: When every team can independently carry a critical role in the organization, even beyond my own capabilities, that is when leadership has fulfilled its purpose.

☁️ Cloud Platform Scale & Growth

Transformed and managed the core Central Cloud Services powering the entire hardware ecosystem globally:

  • SSO Identity Platform: Scaled the identity platform to 10 million registrations (9 million verified users), a 3,400× growth factor since launch, with a recent processing peak of ~157k new users/month and 19% YoY growth in 2025.
  • IoT Ecosystem Scale (NCA): Expanded global console orchestration to 3.5 million active online consoles (+150% over 3 years) and 30 million total managed application devices. Guided the architecture through an 85× early-adoption surge into sustained 30%+ YoY growth at market maturity.
  • High Availability (SRE/SLA): Built robust SRE practices, sustaining 99.5% monthly availability across all major cloud services over the past 2 years and limiting critical incident downtime to 10-12 aggregate hours per year globally (out of 8,760 hours).
  • Critical Service Expansions: Oversaw the mass-scaling of the Firmware Update Service and active subscription platforms.

🤖 AI Core Development Division: End-to-End Closed-Loop Architecture

We have established a complete AI Closed-Loop Architecture: a self-reinforcing, continuously evolving system that spans from raw data ingestion all the way to edge device delivery. Each stage is tightly integrated, enabling rapid AI iteration and real-world deployment at scale.

Raw Data → Data Labeling → Model Training → AIOps → AIQA → Edge Device Delivery
  ↑                                                                       |
  └─────────────────────── Feedback Loop ─────────────────────────────────┘

📊 Data (Data Infrastructure)

  • Built diverse data ingestion pipelines spanning camera streams, IoT sensor events, and user behavior logs.
  • Designed a data warehouse architecture with strong emphasis on data quality, traceability, and compliance (GDPR/privacy standards).
  • Introduced data versioning to ensure every model training run maps to a complete and reproducible dataset snapshot.
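The data-versioning idea above can be sketched as content-addressed snapshots: the same records always produce the same snapshot ID, so a training run can pin exactly what it saw. This is a minimal illustration, with hypothetical function and field names, not our production schema:

```python
import hashlib
import json

def snapshot_dataset(records):
    """Create a reproducible, content-addressed snapshot of a dataset.

    Each record is hashed individually; the snapshot ID is the hash of the
    sorted record hashes, so identical data yields an identical ID
    regardless of ingestion order.
    """
    record_hashes = sorted(
        hashlib.sha256(json.dumps(r, sort_keys=True).encode()).hexdigest()
        for r in records
    )
    snapshot_id = hashlib.sha256("".join(record_hashes).encode()).hexdigest()
    return {"snapshot_id": snapshot_id, "num_records": len(records)}

a = snapshot_dataset([{"img": "cam1.jpg", "label": "car"},
                      {"img": "cam2.jpg", "label": "person"}])
b = snapshot_dataset([{"img": "cam2.jpg", "label": "person"},
                      {"img": "cam1.jpg", "label": "car"}])
assert a["snapshot_id"] == b["snapshot_id"]  # order-independent
```

A model card then only needs to store the snapshot ID to make the training data auditable later.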

🏷️ Data Labeling

  • Built an in-house labeling platform and workflow covering image classification, object detection (bounding boxes), and semantic segmentation.
  • Applied Active Learning strategies to prioritize labeling of the most model-impactful hard samples, reducing labeling costs by 40%+.
  • Established QA Audit processes to ensure labeling consistency and accuracy are maintained to a high standard.
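The Active Learning prioritization above boils down to spending the labeling budget on the samples the model is least sure about. A minimal uncertainty-sampling sketch (names and the budget mechanism are illustrative):

```python
def select_for_labeling(predictions, budget):
    """Pick the most uncertain samples for human labeling.

    predictions: dict of sample_id -> model confidence for the positive
    class. Confidence near 0.5 means the model is unsure, so those
    samples are the most informative to label next.
    """
    by_uncertainty = sorted(predictions, key=lambda s: abs(predictions[s] - 0.5))
    return by_uncertainty[:budget]

preds = {"a": 0.98, "b": 0.51, "c": 0.07, "d": 0.45}
assert select_for_labeling(preds, 2) == ["b", "d"]
```

Production systems typically combine several signals (entropy, disagreement between model versions, diversity), but the budget-to-hard-samples mapping is the core of the cost reduction.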

🧠 AI Model Training

  • Led the full training lifecycle from research prototype to production-ready models, spanning Computer Vision (detection, tracking, recognition) and multi-modal LLM integration.
  • Built MLOps training infrastructure supporting distributed training, hyperparameter optimization (HPO), and experiment tracking (MLflow/W&B).
  • Achieved model compression optimized for edge hardware via Quantization, Pruning, and Knowledge Distillation, significantly reducing inference latency and power consumption without sacrificing accuracy.
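Of the compression techniques listed, magnitude pruning is the simplest to illustrate: zero out the weights with the smallest absolute values, since they contribute least to the output. A toy sketch on a flat weight list (real pruning operates per-layer on tensors):

```python
def magnitude_prune(weights, sparsity):
    """Zero out the fraction `sparsity` of weights with the smallest
    absolute value -- the core idea behind magnitude pruning."""
    n_prune = int(len(weights) * sparsity)
    # indices of the n_prune smallest-magnitude weights
    prune_idx = set(
        sorted(range(len(weights)), key=lambda i: abs(weights[i]))[:n_prune]
    )
    return [0.0 if i in prune_idx else w for i, w in enumerate(weights)]

pruned = magnitude_prune([0.9, -0.01, 0.4, 0.02, -0.7, 0.05], sparsity=0.5)
assert pruned == [0.9, 0.0, 0.4, 0.0, -0.7, 0.0]
```

Zeroed weights can then be stored sparsely or skipped by the edge runtime, which is where the latency and power savings come from.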

βš™οΈ AIOps (AI Operations)

  • Built a full model lifecycle management platform covering model versioning, A/B deployment, canary releases, and automated rollback.
  • Implemented Model Drift Detection to monitor live inference quality and trigger automated retraining pipelines.
  • Designed a dual-layer cloud + edge AI inference architecture that dynamically routes workloads based on latency and accuracy requirements.
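One common way to implement the drift detection mentioned above is the Population Stability Index (PSI) over a model's input or score distribution; a PSI above roughly 0.2 is a widely used rule of thumb for significant drift. A self-contained sketch (bin edges and thresholds are illustrative):

```python
import math

def psi(expected, actual, edges):
    """Population Stability Index between a baseline and a live
    distribution, bucketed by the given bin edges."""
    def frac(values):
        counts = [0] * (len(edges) + 1)
        for v in values:
            counts[sum(v > e for e in edges)] += 1
        # small epsilon avoids log(0) for empty buckets
        return [max(c / len(values), 1e-6) for c in counts]

    e, a = frac(expected), frac(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6]   # training-time scores
same = [0.15, 0.25, 0.35, 0.45, 0.55, 0.65]  # live scores, no drift
shifted = [0.8, 0.85, 0.9, 0.95, 0.9, 0.85]  # live scores, drifted
edges = [0.33, 0.66]

assert psi(baseline, same, edges) < 0.2
assert psi(baseline, shifted, edges) > 0.2   # would trigger retraining
```

In the pipeline, crossing the threshold is what fires the automated retraining job rather than a human review.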

✅ AIQA (AI Quality Assurance)

  • Established a systematic AI evaluation framework covering functional metrics (mAP, Recall, Precision), edge case testing, and adversarial stress tests.
  • Defined AI Product Launch Criteria to ensure every AI-enabled product passes rigorous quality gates before shipment.
  • Integrated a CI/CD-for-AI pipeline that embeds model evaluation directly into the development loop, accelerating iteration velocity.
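A launch criterion of the kind described above can be expressed as a simple gate over evaluation metrics; here is a minimal sketch using precision and recall from a detector's confusion counts (the threshold values are illustrative, not our actual production criteria):

```python
def passes_launch_gate(tp, fp, fn, min_precision=0.9, min_recall=0.85):
    """Evaluate one AI quality gate from true/false positive and
    false negative counts. Both metrics must clear their floor."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision >= min_precision and recall >= min_recall

assert passes_launch_gate(tp=180, fp=10, fn=20)       # 0.947 precision, 0.90 recall
assert not passes_launch_gate(tp=180, fp=40, fn=20)   # precision 0.818 fails the gate
```

Wiring such a check into CI means a model that regresses below the floor simply cannot ship, which is the point of the quality gate.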

📦 Edge Device Delivery (Camera & AI Key)

  • Led end-to-end development of the UniFi Camera Gen2/3/4/5/6 series, embedding AI capabilities (face recognition, vehicle detection, anomaly analysis) directly on-device for real-time, low-latency inference.
  • Drove the development and mass production delivery of the AI Key (dedicated AI accelerator hardware), enabling an AI compute upgrade path for existing devices without hardware replacement.
  • Established an AI model OTA update mechanism allowing model-only upgrades independent of system firmware, shortening AI feature iteration cycles to days.
  • Managed the complete production lifecycle from RD → DVT → PVT → MP, ensuring shipping quality targets and delivery commitments for AI hardware were met.
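The model-only OTA mechanism hinges on one decision: should this device pull a new model build, independent of its firmware? A minimal sketch of that check against an update manifest (all field names, the channel key, and the URL are hypothetical, not the actual OTA protocol):

```python
def select_model_update(device, manifest):
    """Decide whether a device should pull a model-only OTA update.

    The manifest maps a device channel to the latest model version,
    the minimum firmware it requires, and a download URL. Versions
    are (major, minor, patch) tuples, which compare lexicographically.
    """
    entry = manifest.get(device["channel"])
    if entry is None:
        return None
    if device["fw_version"] < entry["min_fw"]:
        return None  # firmware too old for this model build
    if device["model_version"] >= entry["model_version"]:
        return None  # already up to date
    return entry["url"]

manifest = {"camera-g5": {"model_version": (2, 4, 0),
                          "min_fw": (3, 1, 0),
                          "url": "https://example.invalid/models/g5-2.4.0.bin"}}
device = {"channel": "camera-g5", "fw_version": (3, 2, 0),
          "model_version": (2, 3, 1)}
assert select_model_update(device, manifest) is not None
```

Decoupling the model version from the firmware version in this way is what allows model iteration in days while firmware keeps its slower release cadence.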

🧩 LLM Internal Services: AI-Powered Intelligent Workflows

Beyond edge AI product R&D and delivery, our AI team simultaneously leads multiple LLM core services targeting both internal operations and external user experiences, translating the power of language models into real, deployable business value.

📣 Social Hearing

  • Built an automated social media data crawling and analysis pipeline covering major platforms (Reddit, Twitter/X, community forums) for user reviews and feedback.
  • Applied LLM-based semantic understanding and sentiment analysis to precisely extract real user pain points regarding features, quality, and service, automatically categorizing them into Actionable Insights.
  • Fed analysis results directly into Product Team iteration prioritization decisions, shortening the cycle from "user voice" to "product improvement."

💥 Device Crash Analysis

  • Built an LLM-powered crash log semantic analysis system that automatically parses Crash Reports, Stack Traces, and system logs to identify Root Cause patterns.
  • Automatically clusters and aggregates crash events into a ranked hotspot list, enabling Product Teams and firmware engineers to precisely target high-frequency issues and significantly improve bug fix efficiency.
  • Integrated with the CI/CD pipeline to automatically monitor crash trends after new releases, enabling early warning detection.
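Before any LLM semantic analysis, crash aggregation usually starts with a deterministic step: normalize each stack trace into a stable signature (strip addresses and line numbers, keep the top frames) and count occurrences. A minimal sketch of that clustering step (the normalization rules are illustrative):

```python
import re
from collections import Counter

def crash_signature(stack_trace):
    """Normalize a stack trace into a stable signature: addresses and
    line numbers vary between crashes of the same bug, so strip them."""
    frames = []
    for line in stack_trace.splitlines():
        line = re.sub(r"0x[0-9a-fA-F]+", "ADDR", line)
        line = re.sub(r":\d+", "", line)
        if line.strip():
            frames.append(line.strip())
    return " | ".join(frames[:3])  # top frames dominate the root cause

def rank_hotspots(stack_traces):
    """Aggregate crashes into a frequency-ranked hotspot list."""
    return Counter(crash_signature(t) for t in stack_traces).most_common()

traces = [
    "segv at 0xdeadbeef\nnet_rx_handler at driver.c:120",
    "segv at 0xcafebabe\nnet_rx_handler at driver.c:120",
    "abort\nwatchdog_fire at wd.c:77",
]
top_sig, count = rank_hotspots(traces)[0]
assert count == 2  # the two segv crashes collapse into one hotspot
```

The LLM layer then sits on top of the ranked clusters, summarizing each hotspot and suggesting root-cause hypotheses, rather than reading every raw log.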

📋 Product Support File Analysis

  • Built an automated Support File parsing engine that uses LLM to analyze device support files (logs, config snapshots, status reports) returned by users, extracting key anomaly signals.
  • Automatically generates issue summaries and resolution recommendations, helping the Technical Support team quickly pinpoint problems and reducing manual analysis time by 60%+.
  • Accumulated analysis results build a knowledge base that empowers the Product Team to precisely understand real user pain points, driving continuous optimization of product design and firmware.

🌐 Company Website & App LLM Backend (Intelligent Support Bot)

  • Led the development of the LLM backend services for the company website and mobile apps, providing a one-stop intelligent conversational interface covering product information queries, feature usage guidance, and troubleshooting diagnostics.
  • Adopted a RAG (Retrieval-Augmented Generation) architecture, integrating product documentation, knowledge bases, and FAQs to ensure Chat Bot response accuracy and timeliness.
  • Designed multi-turn conversation management to support complex scenario context tracking, enabling users to complete configuration, troubleshooting, and service requests without human agent intervention.
  • Continuously optimized models and the knowledge base through user conversation data, forming a positive cycle of "user interaction → knowledge refinement → service quality improvement."
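The RAG flow described above reduces to two steps: retrieve the most relevant knowledge-base snippets, then ground the LLM prompt in them. The sketch below uses simple word overlap as a stand-in for the embedding search a production RAG stack would use; all document text and function names are illustrative:

```python
def retrieve(query, docs, k=2):
    """Rank knowledge-base snippets by word overlap with the query.
    A production system would use vector similarity instead."""
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query, docs):
    """Ground the LLM answer in the retrieved context so the bot
    answers from documentation rather than from model memory."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

kb = [
    "To reset a camera, hold the reset button for 10 seconds.",
    "Subscriptions can be managed from the account page.",
    "Firmware updates are applied automatically overnight.",
]
prompt = build_prompt("how do I reset my camera", kb)
assert "reset button" in prompt  # the relevant snippet was retrieved
```

Because answers are assembled from retrieved documentation, updating the knowledge base immediately improves response accuracy without retraining, which is what closes the "user interaction → knowledge refinement" loop.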