Analytics Engineer (Remote, UK)
Full-time, London or UK-based.
Salary: £70k
About the client & the role
Our client is a global carbon market leader that distributes ratings via a SaaS product, informing all market participants on how to price and manage risk. Their ratings and research tools support buyers, intermediaries, investors, and carbon project developers.
Founded in April 2020, their 170+ strong team combines climatic and earth sciences, sell-side financial research, earth observation, machine learning, data and technology, engineering, and public policy expertise, working from four continents. Having raised a significant Series B funding round in late 2022, they are growing rapidly as a company, accelerating the Net-Zero transition through ratings.
Job Description
Our client is hiring an Analytics Engineer to join their existing data products and tooling team, which sits within the broader data organisation. The team is focused on developing carbon offset-related data products for their clients, as well as building internal data tools to increase the efficiency of their Ratings teams.
You’ll be their first Analytics Engineer, but will sit within a bigger team of data engineers and scientists. You’ll help them build robust data models for the carbon market domains and be a key contributor to their internal tooling and ratings scalability work. This is a cross-functional role: you will be working with colleagues from their product, ratings, and software engineering teams every day.
To give you a flavour of the kind of work you will be doing, these are some of the projects they’ve recently completed:
- Creating standardised data models for each type of carbon offsetting activity (such as avoided deforestation, renewable energy, and improved cookstoves projects), and developing ingestion pipelines that make this data available in their internal tools and client-facing platform.
- Collaborating closely with their rating analysts to standardise and automate quantitative analyses central to assessing renewable energy offsetting projects.
- Rearchitecting their single dbt project for the analytical warehouse into a modular project set-up, improving developer experience, data integrity and consistency, and reducing failure rates in production.
- Improving data literacy and access across the organisation by training and upskilling colleagues in their analytical skills and developing solutions for data documentation, dashboarding and cataloguing.
If you’re excited by working on such problems and making impactful contributions to data in the climate space, then we’re looking for you.
Tech stack
The data team has a bias towards shipping products, staying close to internal and external customers, and end-to-end ownership of infrastructure and deployments. This is a team that follows software engineering best practices closely. The data stack includes the following technologies:
- AWS serves as their cloud infrastructure provider.
- Snowflake acts as the central data warehouse for tabular data. AWS S3 is used for geospatial raster data, and PostGIS for storing and querying geospatial vector data.
- They use dbt for building SQL-style data models, and Python jobs for non-SQL data transformations.
- Computational jobs are executed in Docker containers on AWS ECS, with Prefect as the workflow orchestration engine.
- GitHub Actions is used for CI/CD.
- Metabase serves as the dashboarding solution for end-users.
Responsibilities:
- You will be an individual contributor in the data products team, focused on designing and building robust data pipelines for the ingestion and processing of carbon offset-related data.
- You will develop robust data models, primarily in Snowflake using dbt, to support their core ratings process, internal tools, client-facing platform, reporting and machine learning.
- You will contribute to prioritising data consistency and governance issues across the ratings data domain.
- You will work with their internal research and ratings teams to integrate the outputs of (analytical) data pipelines into business processes and products.
- You will work with other teams in the business to enable them to be more efficient by building data tools and automations.
You’ll be the ideal candidate if:
- You are a highly collaborative individual who wants to solve problems that drive business value.
- You have at least 2 years of experience building ELT/ETL pipelines in production for data engineering use cases using Python and SQL, and have used dbt in production.
- You are comfortable with general data warehousing concepts, and SQL and data modelling are second nature to you.
- You have hands-on experience with a workflow orchestration tool (e.g., Airflow, Prefect, Dagster), containerisation using Docker, and a cloud platform such as AWS.
- You can write clean, maintainable, scalable, and robust code in Python and SQL, and are familiar with collaborative coding best practices and continuous integration tooling.
- You are well-versed in code version control and have experience working in team setups on production code repositories.
Finally, this client is a remote-friendly company and many employees work fully remotely; however, for this position, they will only consider applications from candidates based in the UK.
If you live in or near London, you are welcome to work in the office, but it’s not required!