Data Engineer | On-site | George, Western Cape
Remuneration: Market-related
Location: George
Job level: Mid
Type: Permanent
Reference: #BH-453
Company: Badger Holdings
Job description
Build the data platform you wish you’d inherited
We’re in the middle of a major data platform modernisation and this is your chance to help shape it.
This isn’t about maintaining legacy pipelines. It’s about designing and building a scalable, cloud-native data platform that works: reliable, well-structured, and ready for advanced analytics.
If you enjoy solving complex data problems, working with modern tools, and having a real say in how things are built, you’ll feel at home here.
What you’ll be doing (and why it matters)
You’ll play a key role in building the foundation that powers data across the business.
Working closely with analysts, data scientists, and stakeholders, you’ll design and deliver data solutions that turn raw data into something trusted, usable, and impactful.
A big part of your focus will be helping us move from legacy systems to a modern stack built on Estuary, dbt, GCP, and Snowflake. You won’t just be executing the migration; you’ll help define how we do things going forward.
Your day-to-day will include:
- Designing and building scalable ELT pipelines from multiple data sources
- Developing clean, well-structured data models using dbt
- Optimising data models and workloads within Snowflake
- Transforming raw data into high-quality, analytics-ready datasets
- Implementing data quality checks, monitoring, and observability
- Improving performance and managing cloud cost efficiency
- Applying engineering best practices (CI/CD, testing, code reviews)
- Collaborating with cross-functional teams to deliver practical, scalable solutions
- Contributing to standards, documentation, and mentoring junior engineers
What you bring
You’re someone who takes ownership, enjoys solving problems, and cares about doing things properly, not just quickly.
Requirements:
- 4–6+ years’ experience in Data Engineering or a similar role
- Strong SQL skills and experience working with complex transformations
- Proficiency in Python (or similar) for data processing and automation
- Experience building ELT pipelines in cloud environments (AWS, Azure, or GCP)
- Hands-on experience with modern data warehouses (e.g. Snowflake, Synapse)
- Familiarity with tools like dbt, Airflow, or Dagster
- Experience working with structured and semi-structured data (JSON, Parquet)
- A solid understanding of data governance, security, and best practices
Bonus if you’ve worked with:
- Real-time/streaming technologies (Kafka, Spark Streaming)
- BI tools like Qlik, Power BI, Looker, or Tableau
- Qlik Replicate / Compose or similar tools
- Agile delivery environments (Jira, Azure DevOps)
- Cloud certifications or exposure to DevOps practices
Why this role is worth your time
- You’ll help build (not inherit) a modern data platform
- You’ll work with a forward-looking tech stack (dbt, Snowflake, GCP)
- You’ll have real input into architecture, standards, and best practices
- You’ll be part of a growing data function where your impact is visible
- You’ll have the opportunity to mentor and shape how the team evolves
Ready to build something that actually works?
Apply now and be part of shaping the future of our data platform.
Posted on 04 May 12:14, Closing date 2 Jun