Role Overview: We’re seeking an experienced ML Engineer to design, train, and productionize machine learning models at scale. You’ll own the full lifecycle, from data pipelines and feature stores to model deployment and monitoring.
Key Responsibilities:
Build robust data/feature pipelines (batch and streaming) and automate training with CI/CD for ML.
Train, evaluate, and optimize models (classical ML and deep learning) with an eye on latency, throughput, and cost.
Leverage GPUs efficiently; profile bottlenecks at data-loader, model, and inference layers. Partner with infra to scale serving (GPU/CPU autoscaling, caching, quantization, distillation).
Implement MLOps best practices: experiment tracking, model registry, canary/A-B rollouts, model monitoring, and drift detection.
Requirements:
4+ years of experience in applied ML/MLOps in production.
Strong Python; experience with PyTorch or TensorFlow; solid grasp of statistics and evaluation.
Hands-on experience with ML pipelines (Airflow/Kedro), experiment tracking (MLflow/Weights & Biases), and model serving (Triton, TorchServe, FastAPI). Familiarity with feature stores (Feast/Tecton) and streaming (Kafka/Kinesis).
English: B2-C1.
We're looking for engineers who care about clean code, shared knowledge, and building things that last. To apply, please send your resume to the email address provided.