Mondra is a high-growth tech start-up delivering a food system environmental insights platform to accelerate the planet’s progress towards Net Zero carbon emissions.
The Senior Data Engineers are responsible for designing and implementing technical solutions that deliver on Mondra’s business value objectives.
· Build robust systems and reusable code modules to solve problems across the team, with an eye to the long-term maintenance and support of the application.
· Work with the latest open-source tools, libraries, platforms and languages to build data products enabling other analysts to explore and interact with large and complex data sets.
· Partner with cross-functional teams in a collaborative and agile environment.
· Collaborate across Mondra to share best practices and build reusable, scalable tools and code for our analyst community.
· Help establish Data and Technology objectives, reusable patterns, and reference architectures for common problems.
· Mentor peers, both direct reports and the wider team, and develop technical knowledge and skills to keep the enterprise on the cutting edge of technology.
· Assemble large, complex structured and unstructured data sets that meet functional and non-functional business requirements.
· Identify, design, and implement internal process improvements: optimal data pipeline architecture, automating manual processes, optimising data delivery, re-designing infrastructure for greater scalability and performance, etc.
· Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using Apache Spark, Databricks and other Azure ‘big data’ technologies.
· Work with stakeholders including the Executive, Product, Data, Engineering and Operational teams to assist with data-related technical issues and support infrastructure needs.
· Keep our data separated and secure across national and international boundaries, through multiple data centres and Azure regions.
· Create data tools for operational analytics and data science team members that assist them in building and optimising our product into an innovative industry leader.
· Work with data and operational experts to strive for greater functionality in our data systems.
· Know how to work with Power BI and Python data visualisation libraries such as Plotly, Streamlit, and Seaborn.
· Bachelor’s degree in Computer Science, Electrical Engineering, Software Engineering, Computer Information Systems, Engineering, Communications Technology, or a related field of study
· Six (6) years of experience in the position offered or in a related consulting, analyst, or engineering role. Alternatively:
o The experience must include at least six (6) years of experience working with big data systems, or with relational databases and data mining
· Four (4) years of experience with a data engineering programming language (e.g. Python, R, Scala, SQL, and/or SAS)
· Two (2) years of experience with cloud computing and/or DevOps.
· Experience with big data tools: Databricks, Hadoop, Spark, Kafka, etc.
· Experience with relational SQL and NoSQL databases, including MySQL, SQL Server, etc.
· Experience in working with data modelling (Star / Snowflake schemas, SCD Types), data processing pipelines (ETL / ELT with Databricks jobs or Azure Data Factory), and workflow management tools.
· Three (3) years of experience with Power BI
· Fluent in English
· Experience with Azure cloud services (preferred but not essential)
· Experience in dealing with hierarchical data or Graphs data structures (preferred but not essential)
· Experience in Spark / Delta Tables performance optimizations (preferred but not essential)