Design and implement scalable data models and transformations using dbt within the Microsoft Fabric ecosystem.
Collaborate with business stakeholders to understand data needs and deliver high-quality datasets.
Ensure data quality, consistency, and governance through testing, documentation, and version control.
Optimize performance of data transformations and queries across large datasets.
Monitor and troubleshoot data workflows and proactively resolve issues.
Contribute to the development of best practices and standards for data modeling, transformation, and source control.
Requirements
3+ years of experience in data engineering, analytics engineering, or a related field.
Proficiency with dbt (Core or Cloud) for SQL-based data transformation and modeling.
Hands-on experience with Microsoft Fabric, including OneLake, Lakehouse, Warehouse, and Power BI artifacts.
Strong SQL skills and experience working with large-scale data warehouses or lakehouses.
Familiarity with Git-based version control and CI/CD practices.
Excellent problem-solving skills and attention to detail.
Strong communication and collaboration skills.
Experience with Power BI semantic models and DAX.
Nice-to-have
Experience with data models for SAP ECC, Oracle EBS, and/or QAD (strongly preferred).
Experience with data models for Salesforce, GetPaid, iNexus, Concur, SAP S/4HANA, SAP SuccessFactors, Hyperion Financial Management, Hyperion Planning (PBCS), or Siemens Teamcenter.
Knowledge of data governance, lineage, and observability tools.
Familiarity with Python or other scripting languages.
Experience working in Agile or DevOps environments.