Satellite and drone imagery access is on the rise, and traditional image processing methods are struggling to keep up. We've never had more data, yet it's harder than ever to gain meaningful insights.
Our scalable AI platform enables custom model training on global features, providing real-time, on-demand geospatial insights with impressive speed and accuracy. The application turns months of manual work into mere minutes, with much better results. We work with customers across many domains, from intelligence and defence, local and federal governments, to small and large enterprises, which requires a lot of flexibility in how we deploy and maintain our services.
We kicked off in 2020 and have secured $35 million in Series A funding from a lineup of top US and European investors, including Microsoft M12, Point72 Ventures, Maxar, In-Q-Tel, SAFRAN, and ISAI/Capgemini.
We're searching for a Software Engineer to join our ML Training & Inference team, where you'll build the orchestration layer that powers our geospatial AI platform. You'll work at the intersection of distributed systems and machine learning, enabling customers to train custom detection models and run inference at scale on imagery spanning continents.
The minimum salary is €56,000 gross per year. The effective salary depends on qualification and experience and may be significantly higher!
What you'll do
Build and optimize ML orchestration pipelines that coordinate model training and inference across distributed worker pools
Design resilient, high-throughput services that process terabytes of geospatial imagery through GPU-accelerated inference
Develop the APIs and abstractions that allow customers to chain, filter, and compose AI models for complex detection workflows
Collaborate with ML Researchers to bring new models into production
Tackle memory optimization, GPU autoscaling, and resource scheduling challenges unique to large-scale imagery processing
Your profile
Strong practical knowledge of Python with experience building production systems
Experience designing and operating distributed systems or data pipelines
Familiarity with async processing patterns, task queues, and worker pool architectures
Solid understanding of PostgreSQL and data modeling
Strong software engineering fundamentals: testing, CI/CD, observability, reliability
You're outcome-oriented and comfortable navigating ambiguity to deliver results
Ideally:
Experience with ML infrastructure, model serving, or training pipelines
Hands-on experience with Kubernetes in production environments
Familiarity with GPU workloads and the unique challenges of ML at scale
Experience with geospatial data formats (GeoTIFF, COG, STAC) or imagery processing
Background deploying systems in regulated or air-gapped environments
Tech Stack
Python, FastAPI
PostgreSQL, Redis
Docker, Kubernetes (EKS, K3S)
AWS (with on-prem and edge deployment targets)
GPU infrastructure for ML inference
Why join us?
Real impact at scale: Your work powers AI inference across imagery of entire countries, supporting defence, intelligence, and humanitarian applications
Diverse deployment challenges: Build systems that run in AWS, on customer infrastructure, or on a laptop in the field—each with unique constraints
Growth trajectory: Join ahead of our Series B as we expand into new markets and scale the platform
Strong technical culture: Work alongside ML Engineers, GIS specialists, and infrastructure experts solving novel problems in geospatial AI
Healthy work-life balance with flexible working arrangements
Competitive compensation with personalized benefits including learning opportunities, mental well-being programs, and healthcare