
Data Engineer, Azure - Remote, Latin America

Bluelight Consulting

Region: Bolivia
Employment Type: Full-Time
Start Date: ASAP


About this Role

Bluelight is a leading software consultancy dedicated to designing and developing innovative technology that enhances users' lives. With a steadfast commitment to delivering exceptional service to our clients, Bluelight excels in its focus on quality and customer satisfaction. Our mission is not only to create cutting-edge applications but also to foster a collaborative and enriching work environment where each team member can grow and thrive. With a presence across the United States and Central/South America, Bluelight is in an exciting phase of expansion, continually seeking exceptional talent to join its dynamic and diverse community.

We are looking for a skilled individual to join our rapidly growing team at Bluelight. This position is ideal for someone who thrives in a fast-paced, dynamic environment where everyone's opinions and efforts are valued and appreciated. You will have the opportunity to contribute to challenging and meaningful projects, developing high-quality applications that stand out in the market. We value continuous learning, personal growth, and hard work, offering a collaborative environment that promotes professional development. If you are passionate about software development and eager to be part of a growing software consultancy, we invite you to apply and join us on this exciting journey.

Key Responsibilities

  • ETL Data Engineering: Develop and maintain ETL processes using Python (PySpark) within Azure Synapse Analytics Notebooks and/or Azure Synapse Analytics Pipelines to ensure efficient data extraction, transformation, and loading.

  • Data Warehousing: Apply your expertise in data warehousing, including star schemas, facts, and dimensions, to design and build effective data storage structures in a Massively Parallel Processing (MPP) SQL Pool.

  • Data Source Expertise: Extract data from various sources, including REST APIs, SQL database tables, and CSV files.

  • Azure Synapse Analytics Expertise: Utilize your deep knowledge of Azure Synapse Analytics to design and optimize data notebooks/pipelines for scalability and performance.

  • Data Fabric Concepts: Contribute to the implementation and understanding of other Data Fabric concepts, such as data lakes, lakehouses, delta lakes, and data cataloging, to enhance data management capabilities.

  • Data Modeling: Collaborate with data architects to create data models and schemas that align with business requirements.

  • Data Quality: Implement data quality checks and validation processes to maintain data accuracy and consistency.

  • Performance Tuning: Identify and resolve performance bottlenecks and optimize ETL data notebooks/pipelines to meet SLAs.

  • Monitoring and Troubleshooting: Monitor ETL jobs, diagnose issues, and implement solutions to ensure data pipeline reliability.

  • Documentation: Maintain comprehensive documentation of ETL data engineering processes, data flows, and data transformations.

  • Collaboration: Work closely with cross-functional teams to understand data requirements and provide support for data-related initiatives.

  • Security and Compliance: Ensure data security and compliance with data governance and privacy standards.
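To make the data-quality responsibility above concrete, here is a minimal, plain-Python sketch of the kind of validation step a pipeline might run before loading. All column names ("order_id", "amount") are hypothetical; in Azure Synapse this logic would typically be expressed with PySpark DataFrame operations rather than plain Python.

```python
# Hypothetical data-quality gate: split extracted rows into valid and rejected.
# Field names are illustrative only; in Synapse this would use PySpark DataFrames.

def validate_rows(rows, required_fields=("order_id", "amount")):
    """Return (valid, rejected) where each reject carries a reason string."""
    valid, rejected = [], []
    seen_ids = set()
    for row in rows:
        missing = [f for f in required_fields if row.get(f) in (None, "")]
        if missing:
            rejected.append((row, f"missing fields: {missing}"))
        elif row["order_id"] in seen_ids:
            rejected.append((row, "duplicate order_id"))
        else:
            seen_ids.add(row["order_id"])
            valid.append(row)
    return valid, rejected

rows = [
    {"order_id": 1, "amount": 9.99},
    {"order_id": 1, "amount": 9.99},   # duplicate key -> rejected
    {"order_id": 2, "amount": None},   # missing measure -> rejected
    {"order_id": 3, "amount": 4.50},
]
valid, rejected = validate_rows(rows)
print(len(valid), len(rejected))  # 2 2
```

Keeping the rejects (with reasons) rather than silently dropping them is what makes the accuracy and consistency checks auditable downstream.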

Required Skills & Qualifications

  • Bachelor’s degree in Computer Science, Information Technology, or a related field; or equivalent work experience, with certifications related to data engineering or data science (e.g. Azure Data Engineer) being a plus.

  • Proven experience in ETL data engineering with significant expertise in using Python (PySpark) to perform data extraction, transformation, and loading from REST APIs, SQL database tables, and CSV files.

  • Proficiency in using Azure Synapse Analytics resources including Notebooks, Pipelines, Linked Services, and Azure Key Vault.

  • Demonstrated ability to write complex SQL queries, optimize query performance, and work with both SparkSQL and MS SQL to effectively extract, transform, and load data.

  • Knowledge of data integration best practices and tools.

  • Experience with version control systems, such as Git (Azure DevOps).

  • Strong problem-solving and analytical skills, with a keen attention to detail.

  • Excellent communication skills, both verbal and written, with the ability to work collaboratively in a team environment with shifting priorities.

  • Familiarity with big data technologies, machine learning, and data analysis preferred.

  • Experience with data visualization tools (e.g. Power BI, Tableau) and Agile Methodologies a plus.
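The star-schema knowledge asked for above boils down to joining a fact table's foreign keys against dimension tables and aggregating measures. A minimal plain-Python illustration follows; table and column names are hypothetical, and in practice this would be a SQL join in a Synapse dedicated SQL pool or a SparkSQL query.

```python
# Illustrative fact/dimension join behind a star schema. Names are hypothetical;
# in practice: SELECT category, SUM(revenue) FROM fact JOIN dim ... GROUP BY category.

dim_product = {  # dimension table keyed by surrogate key
    10: {"name": "widget", "category": "tools"},
    11: {"name": "gadget", "category": "toys"},
}

fact_sales = [  # fact table: foreign keys plus additive measures
    {"product_key": 10, "qty": 3, "revenue": 30.0},
    {"product_key": 11, "qty": 1, "revenue": 15.0},
    {"product_key": 10, "qty": 2, "revenue": 20.0},
]

def revenue_by_category(facts, products):
    """Join each fact row to its dimension row, then aggregate by category."""
    totals = {}
    for row in facts:
        category = products[row["product_key"]]["category"]
        totals[category] = totals.get(category, 0.0) + row["revenue"]
    return totals

print(revenue_by_category(fact_sales, dim_product))  # {'tools': 50.0, 'toys': 15.0}
```

The same shape scales in an MPP SQL pool because the fact table holds only narrow keys and measures, while descriptive attributes live once in the dimensions.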

Company Benefits

  • Competitive salary and bonuses, including performance-based salary increases.

  • Generous paid-time-off policy

  • Flexible working hours

  • Work remotely

  • Continuing education, training, conferences

  • Company-sponsored coursework, exams, and certifications

Being a consultant on our team is a fun, challenging, and rewarding career choice. Your contributions are highly valued by clients, and the work you do often has a direct and significant impact on their business. You will have the opportunity to work on a variety of projects for our incredible clients, which will accelerate your career growth. You'll collaborate with modern technologies and work alongside some of the best professionals in the industry! If you're eager to be part of an exciting, challenging, and rapidly growing consultancy, we encourage you to apply. #LI-Remote
