* The person we are looking for will become part of the Data Science and AI Competency Center, working in the AI Engineering team. The key duties are:
* Design, deliver and scale GenAI solutions
* Delivering practical, innovative LLM/ML/AI automation for scale and efficiency
* Working with Data Science teams to deploy AI agents and machine learning models to production
* Design, delivery and management of industrialized processing pipelines
* Implementing AI/MLOps/LLMOps frameworks and supporting Data Science teams in best practices
* Gathering and applying knowledge on modern techniques, tools and frameworks in the area of ML Architecture and Operations
* Defining and implementing best practices across the ML model life cycle and ML/LLM operations
* Gathering technical requirements & estimating planned work
* Presenting solutions, concepts and results to internal and external clients
* Creating technical documentation
* At least 4 years of data engineering experience, including building data processing pipelines within the last year
* At least 4 years of experience in production-ready Python code development (e.g., microservices, APIs)
* At least 1 year of experience with GenAI (various LLM models, agents, RAG, prompt engineering, MCP, specification-driven development)
* At least 2 years of experience in production-ready ML-related code development
* Additionally for all levels:
* Good understanding of ML/AI concepts: types of algorithms, machine learning frameworks, model efficiency metrics, model life-cycle, AI architectures
* Good understanding of Cloud concepts and architectures, as well as working knowledge with selected cloud services, preferably Azure or GCP
* Experience in designing and implementing data pipelines
* Good communication skills
* Ability to work in a team and support others
* Taking responsibility for tasks and deliverables
* Great problem-solving skills and critical thinking
* Fluency in written and spoken English
* Nice to have skills & knowledge:
* Experience with LangGraph, FastAPI, Cosmos DB, Redis, SpyGlass, Kubernetes
* Experience in designing, programming ML algorithms, and data processing pipelines using Python
* Experience in at least one of the following domains: Data Warehouse, Data Lake, Data Integration, Data Governance, Machine Learning, Deep Learning, MLOps
* Practical experience with MLOps/LLMOps tools such as Azure ML/Azure AI (or GCP equivalents)
* Practical experience with Databricks
* Practical experience with Spark/PySpark and Hive on big data platforms such as Databricks, EMR, or similar
* Good understanding of CI/CD and DevOps concepts, and experience with selected tools (preferably GitHub Actions, GitLab, or Azure DevOps)
* Experience in productizing ML solutions using technologies like Spark/Databricks or Docker/Kubernetes