Solutions Architect, Infrastructure - Research Computing
Remote, NY, United States
Are you an experienced systems architect with an interest in advancing artificial intelligence (AI) and high-performance computing (HPC) in academic and research environments? We are looking for a Solutions Architect to join our higher education and research team! In this role, you will work with universities and research institutions to optimize the design and deployment of AI infrastructure. Our team applies expertise in accelerated software and hardware systems to help enable groundbreaking advancements in AI, deep learning, and scientific research. This role requires a strong background in building and deploying research computing clusters, running AI workloads, and optimizing system performance at scale.
What you’ll be doing:
Serve as a technical advisor for the design, build-out, and optimization of university research computing infrastructures that support GPU-accelerated scientific workflows.
Work with university research computing teams to optimize hardware utilization with orchestration tools such as NVIDIA Base Command, Kubernetes, Slurm, and Jupyter notebook environments.
Implement systems monitoring and telemetry tools to help optimize resource utilization and track the most demanding application workloads at research computing centers (see the telemetry sketch after this list).
Document what you learn. This can include building targeted training, writing whitepapers, blogs, and wiki articles, and working through hard problems with a customer on a whiteboard.
Provide customer requirements and feedback to product and engineering teams.
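For illustration only: a minimal sketch of the kind of GPU telemetry collection the monitoring bullet above refers to, using the NVML Python bindings (pynvml). This is an assumed, hand-rolled example rather than a prescribed tool; production clusters would more typically rely on NVIDIA DCGM or dcgm-exporter.

# Minimal GPU telemetry snapshot via the NVML Python bindings (pynvml).
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        name = name.decode() if isinstance(name, bytes) else name
        util = pynvml.nvmlDeviceGetUtilizationRates(handle)  # .gpu / .memory, in percent
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)          # .used / .total, in bytes
        print(f"GPU {i} ({name}): util={util.gpu}% "
              f"mem={mem.used / 2**30:.1f}/{mem.total / 2**30:.1f} GiB")
finally:
    pynvml.nvmlShutdown()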
What we need to see:
MS or PhD in Engineering, Mathematics, Physical Sciences, or Computer Science (or equivalent experience).
5+ years of relevant work experience.
Strong experience in designing and deploying GPU-accelerated computing infrastructure.
In-depth knowledge of cluster orchestration and job scheduling technologies (e.g., Slurm, Kubernetes, Ansible, and/or Open OnDemand), and experience with container tools (Docker, Singularity, Enroot/Pyxis), including at-scale deployment of containerized environments.
Expertise in systems monitoring, telemetry, and performance optimization of research computing environments, and familiarity with tools such as Prometheus, Grafana, or NVIDIA DCGM (a minimal exporter sketch follows this list).
Understanding of datacenter networking technologies (InfiniBand, Ethernet, OFED) and experience with network configuration.
Familiarity with power and cooling systems architecture for data center infrastructure.
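For illustration only: building on the NVML snapshot above, a toy exporter that publishes per-GPU utilization as a Prometheus gauge using the prometheus_client library. The port (9400) and metric name are assumptions made for this sketch; real deployments usually scrape NVIDIA's dcgm-exporter and chart the series in Grafana.

# Toy Prometheus exporter for per-GPU utilization (illustrative sketch only).
import time
import pynvml
from prometheus_client import Gauge, start_http_server

GPU_UTIL = Gauge("gpu_utilization_percent", "GPU utilization", ["gpu"])

def collect():
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        util = pynvml.nvmlDeviceGetUtilizationRates(handle)
        GPU_UTIL.labels(gpu=str(i)).set(util.gpu)

if __name__ == "__main__":
    pynvml.nvmlInit()
    start_http_server(9400)  # metrics exposed at /metrics for Prometheus to scrape
    while True:
        collect()
        time.sleep(15)       # roughly a typical Prometheus scrape interval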
Ways to stand out from the crowd:
Experience in deploying LLM training and inference workflows in a research computing environment.
Experience working with technical computing customers in academic research computing.
Practical knowledge of high-performance parallel file systems.
Application- and systems-level knowledge of Open MPI and NCCL (see the collective-communication sketch after this list).
Experience with debugging and profiling tools, e.g., Nsight Systems, Nsight Compute, Compute Sanitizer, GDB, or Valgrind.
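For illustration only: a minimal sketch of the application-level NCCL usage referenced above, expressed through PyTorch's torch.distributed package. The script name and launch command are hypothetical; the script simply performs an all-reduce across GPUs, the collective pattern underlying data-parallel LLM training.

# Minimal multi-GPU all-reduce over NCCL via torch.distributed.
# Example launch (hypothetical file name): torchrun --nproc_per_node=4 allreduce_demo.py
import os
import torch
import torch.distributed as dist

def main():
    dist.init_process_group(backend="nccl")     # NCCL backend for GPU collectives
    local_rank = int(os.environ["LOCAL_RANK"])  # set by torchrun
    torch.cuda.set_device(local_rank)

    # Each rank contributes a tensor of ones; after all_reduce every rank
    # holds the element-wise sum across ranks (i.e., the world size).
    x = torch.ones(4, device="cuda")
    dist.all_reduce(x, op=dist.ReduceOp.SUM)
    print(f"rank {dist.get_rank()}: {x.tolist()}")

    dist.destroy_process_group()

if __name__ == "__main__":
    main()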
With highly competitive salaries, a comprehensive benefits package, and an excellent engineering work culture, NVIDIA is widely considered to be one of the industry's most desirable employers.
The base salary range is 148,000 USD - 230,000 USD. Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. You will also be eligible for equity and benefits. NVIDIA accepts applications on an ongoing basis.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.