Data Engineer

London, UK

Job overview

We are looking for a dynamic Data Engineer to join our growing team of analytics experts. The candidate will be responsible for shaping and developing our data infrastructure, as well as optimising data collection and transformation for cross-functional teams.

The ideal candidate is an experienced pipeline builder and data wrangler who enjoys optimising data systems and building them from the ground up. They will support our software developers, DevOps engineers, analysts and data scientists on information-driven initiatives. Working alongside several other engineers, they will ensure optimal data delivery, integrity and data protection compliance across multiple ongoing projects. They must be self-directed and comfortable supporting the needs of many users, systems and products.

Sliide is a dynamic and multicultural place to work that welcomes those who are smart, passionate, driven and friendly. We're excited to welcome an Engineer who can support our next generation of products and data initiatives whilst understanding the business objectives of the company.

Responsibilities

  • Create and maintain data pipelines, data lake and ETL architecture
  • Assemble and analyse large, complex data sets that meet both functional and non-functional business requirements
  • Identify, design, and implement internal process improvements: automating manual processes; optimising data delivery; re-designing infrastructure for greater scalability; etc
  • Work with stakeholders including the Executive, Tech, Product, Data and Design teams to optimise data-driven decisions
  • Keep our data segregated and secure across national boundaries and multiple cloud platforms
  • Create tools for our analysts and data scientists that help them build and optimise our product into an innovative industry leader
  • Ensure data integrity with thorough test coverage and data validation (see the sketch after this list)
  • Improve and create CI/CD processes and tooling
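
As a concrete illustration of the data validation point above, here is a minimal Python sketch of the kind of check a pipeline step might run before loading data. The field names (user_id, event_ts, event_name) and the UTC assumption are illustrative only, not our actual schema.

    from datetime import datetime, timezone

    REQUIRED_FIELDS = ("user_id", "event_ts", "event_name")

    def validate_events(rows):
        """Drop rows that would break downstream integrity guarantees."""
        valid = []
        for row in rows:
            # Required fields must be present and non-empty.
            if any(not row.get(field) for field in REQUIRED_FIELDS):
                continue
            # Timestamps must parse as ISO 8601 and must not lie in the future.
            try:
                ts = datetime.fromisoformat(row["event_ts"])
            except (TypeError, ValueError):
                continue
            if ts.tzinfo is None:
                ts = ts.replace(tzinfo=timezone.utc)  # assume UTC when no offset is given
            if ts > datetime.now(timezone.utc):
                continue
            valid.append(row)
        return valid

In practice, checks like these would be backed by automated tests so that regressions in data quality are caught before they reach analysts.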

Requirements

  • Advanced SQL
  • Experience setting up Airflow and creating DAGs using operators (see the sketch after this list)
  • Experience with container orchestration systems
  • Advanced experience using Python for "big data" pipelines
  • Infrastructure as Code (IaC), e.g. Terraform
  • A minimum of 2 years of experience in a Data Engineer or DevOps role
  • Experience with AWS cloud services: EC2, S3, EMR, RDS, Redshift, SageMaker
  • Strong project management and organisational skills
  • Good communication skills
  • Self-starter
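
To make the Airflow requirement concrete, below is a minimal sketch of a DAG wired together with operators, assuming Airflow 2.x. The dag_id, schedule and task bodies are placeholders rather than one of our production pipelines.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator
    from airflow.operators.python import PythonOperator

    def extract():
        # Placeholder extract step; a real task would pull from S3, an API, etc.
        print("extracting")

    with DAG(
        dag_id="example_daily_etl",
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        load_task = BashOperator(task_id="load", bash_command="echo loading")

        # Task dependencies are declared with the >> operator.
        extract_task >> load_task

Production DAGs are considerably richer than this (custom operators, sensors, backfills), but this is the level of fluency the requirement refers to.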

Preferred Experience

  • Working knowledge of message queuing, stream processing, and highly scalable "big data" data stores
  • Experience with big data tools such as: Hadoop, Snowflake, Spark, Kafka, etc.
  • Experience with relational SQL and NoSQL databases such as Postgres, BigQuery, DynamoDB and Snowflake
  • Experience with stream-processing systems: Kinesis, Storm, Spark Streaming, etc.

Package

  • Competitive salary + EMI share options

Benefits

  • Buzzing London office with a vibrant culture and well-stocked snack cupboards and drinks fridge
  • Regular all-company socials
  • Weekly company fitness sessions
  • 25 days of holiday + bank holidays
  • Flexible working
  • Full cover private healthcare
  • Up to 8% employer pension contribution
  • Monthly company haircuts
  • Cycle to work scheme
  • Budget for learning resources, courses and conferences

Equal Opportunity

We are hugely committed to equality of opportunity. We employ many nationalities and think it's extremely important to have both genders represented in all functions and levels. All individuals will be treated in a fair and equal manner and in accordance with the law, regardless of age, disability, gender, pregnancy and maternity, marital status, race, religion or sexual orientation.

Apply

Thanks, we'll be in touch