Principal Data Engineer, SaaS Platform

Innovyze · Irvine, United States of America · Development

Company Description

Innovyze is a leading global provider of smart water infrastructure modelling and simulation software solutions for government agencies, engineering consultants, municipalities and utilities, a $1B+ annual market opportunity in the US alone.

Our clients include the majority of the largest UK, Australian, East Asian and North American cities, foremost utilities on all five continents, and ENR top-rated design firms. With unparalleled expertise and offices in North America, Europe, and Asia Pacific, the Innovyze connected portfolio of best-in-class product lines empowers thousands of engineers to competitively plan, manage, design, protect, operate and sustain highly efficient and reliable infrastructure systems, and provides an enduring platform for customer success.

Opportunity for Impact

Innovyze operates in the water infrastructure software market which is experiencing structural growth, driven by multiple factors, including: emerging economies building water infrastructure to accommodate rapid urbanization; developed countries requiring increasingly sophisticated management of aging water resources; increasing levels of storms and floods as a result of climate change; and the need to manage the cost of water infrastructure ownership as total water infrastructure assets continue to grow.

The Principal Data Engineer will participate in the build-out of Innovyze’s cutting-edge SaaS platform.

This is a key position within our SaaS development organization, which is set to handle massive data scale and ground-breaking machine learning solutions.

Position

  • Design and build data provisioning workflows/pipelines, physical data schemas, extracts, data transformations, and data integrations using ETL and API microservices.

  • Analyze, develop, and execute data integration solutions to manage the organization’s information lifecycle needs.

  • Build data architecture and applications that enable reporting, analytics, data science, and data management, and that improve the accessibility, efficiency, governance, processing, and quality of data.

  • Improve speed to market by addressing current data needs while building out long-term strategic data solutions using AWS, Snowflake, MongoDB, SQL, and other modern data technologies.

  • Design and develop programs and tools that support the ingestion, curation, and provisioning of complex enterprise data for analytics, reporting, and data science.

  • Ensure successful deployment and provisioning of data solutions to production and other required environments.

  • Analyze complex technical problems and recommend process improvements that address technology gaps within a business process and improve data reliability, quality, and efficiency.

  • Work in a dynamic, exciting agile environment with Scrum Masters, Product Owners, and team members to develop creative, data-driven solutions within our ETL pipeline that meet business and technical objectives.

  • Actively participate in, and often lead, peer development and code reviews within each agile sprint, with a focus on test-driven development and Continuous Integration and Continuous Delivery (CI/CD).

Requirements

  • Bachelor’s or Master’s degree in Computer Science, Engineering, Statistics, Information Systems, or another quantitative field.

  • 8+ years of professional experience in Data Platform Administration/Engineering or a related field.

  • Highly proficient in data engineering languages and tools, and strong proficiency in general programming languages and frameworks; ability to develop on multiple platforms.

  • Extensive understanding of agile data engineering concepts and processes, such as CI/CD, pipelines, and iterative development and deployment.

  • Demonstrated experience delivering data solutions via agile methodologies using AWS (including S3 and Athena), Snowflake, and MongoDB.

  • Experience migrating ETL processes (not just data) from relational data warehouses to AWS-based solutions.

  • Experience building and utilizing tools and frameworks within the Big Data ecosystem, including Kafka, Spark, and NoSQL databases.

  • Knowledge of data warehouse technology (Unix, Teradata, Ab Initio, Python, Spark, Snowflake, NoSQL).

  • Critical thinking, data analysis, and data modeling experience required (Data Vault experience is a plus).

  • Must be proficient in Python, JavaScript, and SQL.

  • Experience with ETL and knowledge of a variety of data platforms.

  • Demonstrated leadership in, and active pursuit of, optimizing data, CI/CD processes and tools, and testing frameworks and practices.

  • Must be proactive, self-driven, and a logical thinker, with demonstrated initiative.

  • Strong leadership, communication, and collaboration skills, with a track record of taking ownership of solutions.

Other information

As part of GDPR compliance procedures, we have posted our Recruiting Privacy Notice on our website. Please also note that the advertised position is an opportunity with Autodesk, Inc. (https://www.autodesk.com/), as Autodesk recently acquired Innovyze. Processing of your personal information as part of the job application process, and as part of Autodesk employment should a candidate be hired, will be handled by Autodesk pursuant to Autodesk’s Candidate Privacy Statement, available at https://damassets.autodesk.net/content/dam/autodesk/www/content/careers/autodesk_candidate_privacy_statement.pdf.