About Us
Unleash live is one of the most exciting deep tech companies to emerge from Australia. We’re scaling across the globe—with teams in Sydney, the US, and Europe—bringing cutting-edge computer vision to industries where real-time decisions matter most.
Our platform turns live video and sensor feeds into data and actionable insights that help enterprises and governments monitor, respond, and optimise—from bushfire detection and energy grid inspections to transport safety and infrastructure resilience.
We work at the intersection of AI, autonomy, and live data, helping machines not just capture the world but understand it. Whether it’s industrial-scale deployments or creative edge applications, the power and potential of computer vision are only just being tapped. And we’re leading the way.
Inside Unleash live, we’re a team of thinkers, tinkerers, and builders. We’re proudly geeky, fast-moving, and obsessed with solving hard problems. We work hard, have fun doing it, and genuinely care about what we build and who builds it. Our team is culturally diverse, distributed across time zones, and brings together a wide mix of backgrounds, ages, and experiences.
What unites us is curiosity, drive, and a shared belief that good technology should be both intelligent and impactful.
If you’re looking to join a team where you can grow fast, do work that matters, and collaborate with sharp, committed people who genuinely enjoy what they do, we’d love to meet you.
Position Summary
We design for a world where machines see and think.
Our platform captures reality through live video, processes it using machine learning, and turns it into actionable insights that improve safety, efficiency, and resilience. But making this possible at scale requires robust, intelligent AI pipelines, and that’s where you come in.
We are seeking a skilled and versatile AI Data Scientist / Engineer with strong expertise in the AWS ecosystem to help us build and optimise end-to-end AI pipelines for model retraining, benchmarking, and deployment.
This hybrid role will support the delivery of real-time, high-performance computer vision applications by combining applied data science with production-grade data engineering. You will work across the AI lifecycle, from dataset management and annotation workflows to model performance analytics, enabling scalable and insightful AI delivery across critical infrastructure sectors such as energy, transport, and resources.
Required Skills
Education level, knowledge and work/industry experience
Bachelor’s or Master’s degree in Computer Science, Data Science, Engineering, or a related field.
Proven experience in a hybrid Data Science / Data Engineering or Applied ML Analytics role.
Ability to work collaboratively in a global team, with flexibility for time zones and occasional travel.
Technical skills
Strong knowledge of AWS services (Redshift, DynamoDB, SageMaker) and comfort working with cloud-based resources.
Proficient in creating clear, concise visualisations with BI tools such as QuickSight and Grafana; experienced in delivering end-to-end data analytics solutions.
Skilled in both SQL and NoSQL databases; comfortable writing complex, advanced SQL queries (see the SQL sketch after this list).
Experienced in building data transformation pipelines and performing feature engineering on unstructured/semi-structured data.
Familiar with messaging protocols such as MQTT and with general system-integration considerations (see the MQTT sketch after this list).
Proven track record working with large datasets, ML inference workflows, and performance analysis.
Knowledge of API development and integration with third-party platforms.
Familiarity with tools like Airflow, dbt, Kafka, or equivalent orchestration and ETL frameworks (a minimal Airflow sketch follows this list).
Ability to design and consume REST APIs for data services (see the API sketch after this list).
Familiarity with CV model evaluation metrics, inference pipelines, and retraining triggers.
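To make the SQL expectation concrete, here is a minimal, self-contained sketch using Python’s built-in sqlite3 module. The detections table, its columns, and the rolling false-positive-rate query are invented for illustration; any SQLite build from 3.25 onward (window-function support) will run it.

    # Hypothetical sketch: rolling false-positive rate per camera via a
    # SQL window function. Schema and data are invented for illustration.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE detections (
            camera_id TEXT,
            detected_at TEXT,          -- ISO-8601 timestamp
            is_false_positive INTEGER  -- 1 if a reviewer rejected it
        );
        INSERT INTO detections VALUES
            ('cam-01', '2024-01-01T10:00:00', 0),
            ('cam-01', '2024-01-01T10:05:00', 1),
            ('cam-01', '2024-01-01T10:10:00', 0),
            ('cam-02', '2024-01-01T10:00:00', 0),
            ('cam-02', '2024-01-01T10:07:00', 0);
    """)

    # Rolling mean over the current row and the two detections before it,
    # partitioned per camera and ordered by time.
    query = """
        SELECT camera_id,
               detected_at,
               AVG(is_false_positive) OVER (
                   PARTITION BY camera_id
                   ORDER BY detected_at
                   ROWS BETWEEN 2 PRECEDING AND CURRENT ROW
               ) AS rolling_fp_rate
        FROM detections
        ORDER BY camera_id, detected_at;
    """
    for row in conn.execute(query):
        print(row)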
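For the MQTT point, a minimal subscriber sketch, assuming the paho-mqtt 1.x callback API; the broker address and topic pattern are placeholders, not real endpoints.

    # Hypothetical sketch: subscribing to a telemetry topic with paho-mqtt
    # (1.x callback API assumed; broker and topic are placeholders).
    import json
    import paho.mqtt.client as mqtt

    BROKER = "broker.example.com"       # placeholder host
    TOPIC = "site/+/camera/telemetry"   # '+' matches any single level

    def on_connect(client, userdata, flags, rc):
        # Subscribe on (re)connect so the subscription survives reconnects.
        client.subscribe(TOPIC, qos=1)

    def on_message(client, userdata, msg):
        payload = json.loads(msg.payload)
        print(f"{msg.topic}: {payload}")

    client = mqtt.Client()
    client.on_connect = on_connect
    client.on_message = on_message
    client.connect(BROKER, 1883, keepalive=60)
    client.loop_forever()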
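For the orchestration point, a minimal DAG sketch, assuming Airflow 2.4 or later (for the schedule argument); the three task callables are stubs standing in for real pipeline steps.

    # Hypothetical sketch of an orchestration DAG (Airflow 2.4+ assumed).
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def ingest():
        print("pull new annotated frames from storage")

    def transform():
        print("feature engineering and dataset versioning")

    def retrain():
        print("launch a training job")

    with DAG(
        dag_id="cv_retraining_sketch",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        t_ingest = PythonOperator(task_id="ingest", python_callable=ingest)
        t_transform = PythonOperator(task_id="transform", python_callable=transform)
        t_retrain = PythonOperator(task_id="retrain", python_callable=retrain)
        t_ingest >> t_transform >> t_retrain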
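And for the API point, a small read-only data-service endpoint sketched with FastAPI; the route, model name, and metrics payload are invented for illustration.

    # Hypothetical sketch: a tiny read-only metrics endpoint with FastAPI.
    # The in-memory store stands in for a real database (e.g. DynamoDB).
    from fastapi import FastAPI, HTTPException

    app = FastAPI()

    FAKE_METRICS = {
        "tower-defects-v3": {"mAP": 0.71, "false_positive_share": 0.08},
    }

    @app.get("/models/{model_id}/metrics")
    def model_metrics(model_id: str):
        if model_id not in FAKE_METRICS:
            raise HTTPException(status_code=404, detail="unknown model")
        return FAKE_METRICS[model_id]

    # Run with: uvicorn metrics_api:app  (assuming this file is metrics_api.py)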
What You’ll Do
Design and maintain benchmarking frameworks to evaluate AI model performance (e.g. mAP, false-positive rate); a metrics sketch follows this list.
Analyse model outputs to surface patterns, anomalies, and operational insights for internal and external stakeholders.
Develop A/B testing pipelines and automated performance-reporting dashboards (an A/B test sketch follows this list).
Contribute to field validation analytics and performance attribution studies.
Build and manage automated pipelines for data ingestion, processing, and model retraining.
Apply AWS-native services (e.g. S3, SageMaker, Lambda, CloudWatch, Athena) to scale and monitor AI workflows (see the retraining-trigger sketch after this list).
Enforce data schema consistency and versioning across datasets and pipeline stages.
Implement APIs and visual dashboards that surface AI performance and data insights.
Ensure high data quality, reproducibility, and pipeline reliability.
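To ground the benchmarking point, here is a pure-Python sketch of average precision at IoU 0.5 for a single class; the boxes and scores are made up. Because detection has no true negatives, a literal false-positive rate is ill-defined, so the sketch reports the false-positive share of predictions instead.

    # Hypothetical sketch: single-class AP at IoU 0.5, pure Python.
    def iou(a, b):
        # Intersection-over-union of two axis-aligned (x1, y1, x2, y2) boxes.
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        if inter == 0:
            return 0.0
        union = ((a[2] - a[0]) * (a[3] - a[1])
                 + (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / union

    def average_precision(preds, gts, iou_thr=0.5):
        # preds: list of (image_id, score, box); gts: image_id -> list of boxes.
        matched = {img: [False] * len(boxes) for img, boxes in gts.items()}
        n_gt = sum(len(boxes) for boxes in gts.values())
        tp_cum = fp_cum = 0
        ap = prev_recall = 0.0
        for img, score, box in sorted(preds, key=lambda p: -p[1]):
            # Greedily match each prediction (highest score first) to the
            # best still-unmatched ground-truth box above the IoU threshold.
            best_i, best_iou = -1, iou_thr
            for i, gt_box in enumerate(gts.get(img, [])):
                overlap = iou(box, gt_box)
                if not matched[img][i] and overlap >= best_iou:
                    best_i, best_iou = i, overlap
            if best_i >= 0:
                matched[img][best_i] = True
                tp_cum += 1
            else:
                fp_cum += 1
            # Accumulate area under the precision-recall curve step by step.
            recall = tp_cum / n_gt
            precision = tp_cum / (tp_cum + fp_cum)
            ap += (recall - prev_recall) * precision
            prev_recall = recall
        return ap, fp_cum / len(preds)

    gts = {"img1": [(0, 0, 10, 10)], "img2": [(5, 5, 15, 15)]}
    preds = [("img1", 0.9, (1, 1, 10, 10)),
             ("img1", 0.6, (20, 20, 30, 30)),   # no overlap: a false positive
             ("img2", 0.8, (5, 5, 14, 15))]
    ap, fp_share = average_precision(preds, gts)
    print(f"AP@0.5 = {ap:.3f}, false-positive share = {fp_share:.3f}")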
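For the A/B testing point, a minimal sketch comparing reviewer-accepted detection rates of two model versions with a two-proportion z-test; all counts are invented.

    # Hypothetical sketch: is model B's acceptance rate different from A's?
    import math

    def two_proportion_z(success_a, n_a, success_b, n_b):
        p_a, p_b = success_a / n_a, success_b / n_b
        pooled = (success_a + success_b) / (n_a + n_b)
        se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = (p_a - p_b) / se
        # Two-sided p-value from the standard normal CDF.
        p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
        return z, p

    # Model A: 412 accepted of 500 detections; model B: 451 of 520.
    z, p = two_proportion_z(412, 500, 451, 520)
    print(f"z = {z:.2f}, p = {p:.4f}")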
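And for the retraining trigger, a sketch of an AWS Lambda handler that starts a SageMaker training job when new annotated data lands in S3. Every ARN, image URI, bucket, and instance choice is a placeholder, and a production version would add the IAM permissions, validation, and error handling this sketch omits.

    # Hypothetical sketch: S3-event Lambda that kicks off SageMaker training.
    import time
    import boto3

    sagemaker = boto3.client("sagemaker")

    def handler(event, context):
        # S3 put events carry the bucket/key of the object that triggered us.
        record = event["Records"][0]["s3"]
        dataset_uri = f"s3://{record['bucket']['name']}/{record['object']['key']}"

        job_name = f"cv-retrain-{int(time.time())}"
        sagemaker.create_training_job(
            TrainingJobName=job_name,
            AlgorithmSpecification={
                # Placeholder training image URI.
                "TrainingImage": "123456789012.dkr.ecr.ap-southeast-2.amazonaws.com/cv-train:latest",
                "TrainingInputMode": "File",
            },
            # Placeholder execution role.
            RoleArn="arn:aws:iam::123456789012:role/SageMakerTrainingRole",
            InputDataConfig=[{
                "ChannelName": "train",
                "DataSource": {"S3DataSource": {
                    "S3DataType": "S3Prefix",
                    "S3Uri": dataset_uri,
                    "S3DataDistributionType": "FullyReplicated",
                }},
            }],
            OutputDataConfig={"S3OutputPath": "s3://example-models/output/"},
            ResourceConfig={"InstanceType": "ml.g5.xlarge",
                            "InstanceCount": 1,
                            "VolumeSizeInGB": 50},
            StoppingCondition={"MaxRuntimeInSeconds": 3600},
        )
        return {"started": job_name}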
Working environment
Hybrid work arrangement
Fast-paced, dynamic startup environment
Collaborative and supportive global team culture
Opportunity to work on cutting-edge AI technology
Exposure to diverse industries and use cases
Apply Now
Please enter your details and attach your resume using the form below.