Industry / Sector / Domain
An AI-driven insights platform that monitors performance and alerts users to changes
- Maintain a consumer-focused outlook and help build and deliver our data products footprint. Define, design, and build secure, reliable, large-scale, high-transaction, high-performance application architecture, databases, and services, and manage uptime.
- Attract engineering talent from your network and beyond (via recruiters), and provide leadership to the engineering team in India. You will lead the software engineering teams building, managing, and scaling the platform, with a focus on scalability, stability, and reliability. Manage sprint tasks and allocate work across a distributed team.
- You can switch easily between weighing in on complex engineering details, taking a long-term view of the business, growing and developing the next generation of leadership, and operating as a large-scale organization leader. This requires the ability to prioritize work for your team and partner teams, establish goals, and focus attention on the inputs that drive the desired business outcomes.
- Take ownership and spearhead changes to make our database connectors the easiest and best in the industry, focusing on performance and optimization
- Drive engineering excellence through architecture design and reviews for our data pipeline service projects and features
- You will work with the Director of Engineering in the US, the Head of Product, and the CEO to set a roadmap and deliver short-term and long-term wins that meet our customers' needs and support the organisation's success.
- You will inspire the organization to think differently and innovate. You will provide thought leadership and collaborate with colleagues to develop our intelligence platform as best in class.
- We believe in the philosophy of shipping high-quality software; hence, you manage the software that you ship.
- Understand components available on AWS and/or open source for:
- ETL and connectors to data sources
- Data pipelines, aggregations
- Machine learning
- Should have built large-scale distributed systems with an eye toward performance and scale, with strong programming skills and experience in one or more general-purpose programming languages, including but not limited to Java, Python, or Go. Automate and scale reliable pipelines using Big Data technologies such as Spark/EMR.
- Build Python/Spark-based microservices that run pluggable univariate and multivariate anomaly detection models, both open source and custom, such as ARIMA, Prophet, LSTM, and exponential smoothing, to analyse hotspots and identify their contribution to KPI movement.
- Automate the building of data lakes and feature stores over raw customer datasets using AWS Athena, Spark, and PostgreSQL/Druid
- Scale the building of connectors to bring data in from customer environments
- Perform configuration management using ZooKeeper
- Monitor metrics and availability for microservices using Prometheus/Grafana
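For the Prometheus/Grafana monitoring responsibility, a scrape configuration like the following is typical. This is a hypothetical `prometheus.yml` fragment: the job names, ports, and interval are illustrative assumptions, not taken from the role description.

```yaml
# Hypothetical scrape config: service names and ports are examples only.
scrape_configs:
  - job_name: "connector-service"
    scrape_interval: 15s
    static_configs:
      - targets: ["connector-service:8000"]
  - job_name: "pipeline-service"
    scrape_interval: 15s
    static_configs:
      - targets: ["pipeline-service:8000"]
```

Each microservice would expose a `/metrics` endpoint for Prometheus to scrape, with Grafana dashboards built on top of the collected series.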
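To make the anomaly-detection responsibility above concrete, here is a minimal pure-Python sketch of a univariate detector in the spirit of exponential smoothing. The function name, smoothing factor, threshold, warm-up length, and sample KPI series are all illustrative assumptions, not part of the role description; production models like ARIMA, Prophet, or LSTM would replace this baseline.

```python
def detect_anomalies(series, alpha=0.3, threshold=3.0, warmup=5):
    """Flag indices where a point deviates from an exponentially
    smoothed baseline by more than `threshold` times the smoothed
    absolute deviation. alpha/threshold/warmup are illustrative."""
    anomalies = []
    level = series[0]      # smoothed baseline (level)
    deviation = 0.0        # smoothed absolute deviation
    for i, x in enumerate(series[1:], start=1):
        error = x - level
        if i >= warmup and abs(error) > threshold * deviation:
            anomalies.append(i)  # outlier: do not poison the baseline
            continue
        level += alpha * error
        deviation = alpha * abs(error) + (1 - alpha) * deviation
    return anomalies

# Example: a steady KPI series with one sudden spike at index 6
kpi = [10, 11, 10, 12, 11, 10, 50, 11, 10, 12]
print(detect_anomalies(kpi))  # -> [6]
```

Skipping the baseline update on flagged points keeps a single spike from inflating the deviation estimate and masking later anomalies.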