Senior Machine Learning Engineer

Remote job

Hello, let’s meet!

We are Xebia - a place where experts grow. For nearly two decades now, we've been developing digital solutions for clients from many industries and places across the globe. Among the brands we’ve worked with are UPS, McLaren, Aviva, Deloitte, and many, many more.

We're passionate about Cloud-based solutions. So much so that we have partnerships with three of the largest Cloud providers in the business: Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP). We even became the first AWS Premier Consulting Partner in Poland.

Formerly we were known as PGS Software. In 2021, we joined Xebia Group – a family of interlinked companies driven by the desire to make a difference in the world of technology.

Xebia stands for innovation, talented team members, and technological excellence. Xebia means worldwide recognition and thought leadership. This regularly provides us with the opportunity to work on global, innovative projects.

Our mission can be captured in one word: Authority. We want to be recognized as the authority in our field of expertise.

What makes us stand out? It's the little details, like our attitude, dedication to knowledge, and belief in people's potential, with an emphasis on every team member's development. Obviously, these things are not easy to present on paper – so make sure to visit us to see it with your own eyes!

Now, we've talked a lot about ourselves – but we'd love to hear more about you.

Send us your resume to start the conversation and join #Xebia.

You will be:

  • designing, building, and deploying at-scale infrastructure, with a focus on distributed systems,
  • building and maintaining architecture patterns for data processing, workflow definitions, and system-to-system integrations using Big Data and Cloud technologies,
  • evaluating technical designs and translating them into workable solutions, code, and technical specifications on par with industry standards,
  • driving the creation of reusable artifacts,
  • establishing scalable, efficient, automated processes for data analysis, data model development, validation, and implementation,
  • working closely with analysts/data scientists to understand the impact on downstream data models,
  • writing efficient, well-organized software to ship products in an iterative, continuous-release environment,
  • contributing to and promoting good software engineering practices across the team,
  • communicating clearly and effectively to technical and non-technical audiences,
  • defining data retention policies,
  • monitoring performance and advising on any necessary infrastructure changes.

Requirements

Your profile:

  • ability to start immediately,
  • openness to work daily until 19:00 CET,
  • university or advanced degree in engineering, computer science, mathematics, or a related field,
  • 7+ years' experience developing and deploying machine learning systems into production,
  • experience working with a variety of relational SQL and NoSQL databases,
  • experience working with big data tools: Hadoop, Spark, Kafka, etc.,
  • experience with at least one cloud provider's solutions (AWS, GCP, or Azure) and an understanding of serverless code development,
  • experience with object-oriented and functional programming languages such as Python, Java, C++, or Scala,
  • experience developing predictive models in a production environment, including MLOps and integrating models into larger-scale applications,
  • experience with Machine and Deep Learning libraries such as Scikit-learn, XGBoost, MXNet, TensorFlow or PyTorch,
  • exposure to GenAI and a solid understanding of multimodal AI via Hugging Face, Llama, Vertex AI, AWS Bedrock, or GPT,
  • knowledge of data pipeline and workflow management tools,
  • expertise in standard software engineering methodology, e.g. unit testing, test automation, continuous integration, code reviews, design documentation,
  • working experience with native ML orchestration systems such as Kubeflow, Step Functions, MLflow, Airflow, TFX,
  • very good verbal and written communication skills in English.

Working from the European Union region and a work permit are required.


Nice to have:

  • relevant working experience with Docker and Kubernetes.


Recruitment Process:

CV review – HR call – Interview – Client Interview – Decision

Job Profile

Restrictions

  • Work daily until 19:00 CET
  • Work from the European Union region

Benefits/Perks

  • Innovative projects
  • Professional development
  • Remote work

Tasks
  • Architecture patterns
  • Automated processes
  • Data model development
  • Data processing
  • Data retention policies
  • Good engineering practices
  • Infrastructure design
  • Performance monitoring
  • Reusable artifacts
  • Software development
  • Technical solutions
Skills

Airflow, AWS, AWS Bedrock, Azure, Big Data, C++, Cloud Computing, Communication, Data analysis, Data Pipeline, Data processing, Docker, English, GCP, GenAI, GPT, Hadoop, Hugging Face, Java, Kafka, Kubeflow, Kubernetes, Llama, Machine Learning, MLOps, MXNet, NoSQL, Orchestration, Python, PyTorch, Scala, Scikit-learn, Software Engineering, Spark, SQL, TensorFlow, Testing, TFX, Vertex AI, Workflow Management, XGBoost

Experience

7 years

Education

Advanced degree, Computer Science, Engineering, Mathematics, Related Field, University Degree

Timezones

UTC+1