Job Title: Data Engineer (London Market Insurance)
About the Role
We are seeking a Data Engineer with strong London Market insurance experience to support a Lloyd’s syndicate in delivering a high-quality, scalable data warehouse and analytics platform.
This role focuses on building and maintaining SQL-based data solutions that support underwriting, claims, actuarial, exposure management, and regulatory reporting across the syndicate.
You will play a key role in designing robust data models, building ETL/ELT pipelines, and ensuring accurate, well-governed data flows across complex London Market datasets, including bordereaux, premium, and claims data, in support of Lloyd’s reporting requirements.
Key Responsibilities
- Design and develop SQL-based data warehouse solutions (T-SQL)
- Build and maintain ETL/ELT pipelines using SSIS, Azure Data Factory, and Databricks
- Develop dimensional models (star schema / Kimball approach) for insurance analytics
- Work with London Market datasets including bordereaux, claims, premiums, and policy data
- Support Lloyd’s regulatory reporting (CMR, QMA/QMB, PMDR, etc.)
- Implement data quality checks, reconciliation, and data governance controls
- Support integration of data from underwriting, claims, finance, and external market sources
- Collaborate with underwriting, actuarial, and claims teams to translate requirements into data solutions
- Support migration of legacy systems into Azure-based data platforms (Synapse, Data Lake, ADF)
Required Experience
- Strong experience in London Market insurance (at a Lloyd’s syndicate, MGA, insurer, or broker)
- Solid SQL and data warehouse development experience
- Experience with bordereaux processing and insurance data structures (claims, premiums, policy data)
- Experience building ETL/ELT pipelines and dimensional data models
- Familiarity with Lloyd’s reporting and regulatory data requirements
- Experience working with Azure data services (Synapse, Data Lake, Data Factory)
Desirable
- Experience with Power BI or other BI tools
- Python or PySpark for data processing
- Exposure to CI/CD and DevOps (Azure DevOps)
- Knowledge of data governance, lineage, and data quality frameworks