Performance Test Engineering Technical Lead
Remote - USA
Full Time

Dragos, Inc.
Do you enjoy pushing a platform to its breaking point? Are you analytical, an ace with written and verbal communication, and do you love gathering statistics and presenting them to team members of all levels? Do you enjoy mentoring other performance engineers and helping build a team that is creating the most robust, performant ICS/OT security platform in the world? Would you take pride in knowing that your work is contributing to a greater mission with global impact? How would you like to do all of this from the comfort of your own home?

Dragos has an opportunity for a Performance Test Engineering Technical Lead to join our growing Quality Engineering team and make great contributions to our mission of Safeguarding Civilization. As a Performance Test Engineering Technical Lead, reporting to the Manager of Quality Engineering, you will help lead a team of two Senior Performance Engineers in the design and execution of both application and TCP/IP performance tests, manual and automated. You will design and document tests and dashboards, and report the status of those tests on a regular cadence to Engineering and Product team members. You will collaborate with Product and technical SMEs to define application performance SLAs and KPIs. Many of the performance tests you are responsible for will be integrated into our build pipeline.
Our headquarters is located in Hanover, MD, and you have the flexibility of working either from home or out of our office post-COVID. You must be eligible to work in, and live within, the United States.
Responsibilities
- Manage and mentor a team of test leads to define, plan, coordinate, and execute performance and stability testing across different application and domain teams
- Create performance test plans, test data, SLAs, and analysis reports
- Collaborate with application architects and business and technical SMEs to define and test application performance SLAs and KPIs (see the SLA-check sketch after this list)
- Identify opportunities to improve test capabilities and procedures to achieve increased efficiency
- Provide program-level reporting of performance testing status, issues, and risks to the team and management
- Track key QA milestones at a program level and feed them into overall program milestones
- Be responsible for test artifacts, documentation, and audit compliance in alignment with organizational standards
- Apply proven analytical and problem-solving abilities to effectively prioritize and execute tasks in a high-pressure, time-to-market environment
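To make the SLA work above concrete: a pipeline-gated latency check of the kind this role would own might look like the minimal Python sketch below. The endpoint URL, sample size, and p95 threshold are hypothetical placeholders, not Dragos values.

# Minimal sketch of a CI-gated latency SLA check (all values hypothetical).
import statistics
import sys
import time
import urllib.request

TARGET_URL = "http://localhost:8080/health"  # hypothetical endpoint
SAMPLES = 50                                 # hypothetical sample size
P95_SLA_MS = 250.0                           # hypothetical SLA threshold

def measure_once(url: str) -> float:
    """Return the wall-clock latency of one request, in milliseconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        resp.read()
    return (time.perf_counter() - start) * 1000.0

latencies = [measure_once(TARGET_URL) for _ in range(SAMPLES)]
# quantiles(n=20) returns 19 cut points; index 18 is the 95th percentile.
p95 = statistics.quantiles(latencies, n=20)[18]
print(f"p95 latency: {p95:.1f} ms (SLA: {P95_SLA_MS} ms)")
sys.exit(0 if p95 <= P95_SLA_MS else 1)  # a nonzero exit fails the build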
Requirements
- 8+ years' experience in Performance Engineering & Performance Testing and 2 to 4 years in a Performance Architect/Lead or Manager role
- Experience in tool setup, performance scripting, dashboard integration, and shifting performance testing into the CI/CD pipeline
- Strong UNIX/Linux skills from an administrative/management perspective
- Expertise in performance tuning of microservices and networks
- Demonstrated expertise with and understanding of TCP/IP, including routers, switches, and firewalls, and familiarity with the OSI network model and how it relates to Linux/UNIX components
- Understanding of x86 architecture, hardware/software interactions, and the impact hardware configurations can have on software performance (e.g., NUMA node optimizations, CPU core affinities); a Linux-side sketch follows this list
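As a concrete illustration of the Linux-side awareness called out in the last requirement, the minimal Python sketch below enumerates NUMA nodes via sysfs and pins the current process to a core set. It assumes a Linux host; the core choice {0, 1} is a hypothetical example.

# Minimal Linux-only sketch: inspect NUMA topology and pin CPU affinity.
import os
from pathlib import Path

# Enumerate the NUMA nodes the kernel exposes via sysfs.
for node in sorted(Path("/sys/devices/system/node").glob("node[0-9]*")):
    cpus = (node / "cpulist").read_text().strip()
    print(f"{node.name}: CPUs {cpus}")

# Current affinity mask of this process (PID 0 = the calling process).
print("current affinity:", sorted(os.sched_getaffinity(0)))

# Pin to two cores (hypothetical choice) so a test run is not perturbed
# by the scheduler migrating it across NUMA nodes mid-measurement.
os.sched_setaffinity(0, {0, 1})
print("pinned affinity:", sorted(os.sched_getaffinity(0)))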
Preferred
- Experience with one or more programming languages (e.g., Java); a development background is an advantage
- Experience with the JavaScript and Kotlin programming languages
- Experience with virtualization and hypervisors such as VMware ESX, KVM, Microsoft Hyper-V, and Xen, and container technologies such as Docker and Kubernetes
- Experience with SauceLabs or BrowserStack
- Previous work with an ICS/Internet security product back-end
- Experience with intrusion detection software such as Snort, Zeek (formerly Bro), or Suricata
- Strong experience with traffic generation tools such as Ixia or TRex (a packet-crafting sketch follows this list)
- Experience with administration, monitoring, and tuning of big data application stacks and pipelines, e.g., Elasticsearch, MongoDB, NiFi, Redis, RabbitMQ
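Where dedicated generators like Ixia or TRex are unavailable, lightweight packet crafting can stand in for smoke-level traffic tests. The minimal sketch below uses Scapy (a stand-in for illustration, not one of the tools named above) to emit a burst of TCP SYN packets toward a TEST-NET-1 documentation address; it requires root privileges.

# Minimal packet-crafting sketch with Scapy (pip install scapy).
# Scapy stands in for Ixia/TRex purely for illustration; 192.0.2.1 is
# a TEST-NET-1 documentation address, not a real host.
from scapy.all import IP, TCP, send

pkt = IP(dst="192.0.2.1") / TCP(dport=80, flags="S")  # bare TCP SYN
send(pkt, count=100, inter=0.01)  # 100 SYNs, 10 ms apart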
Performance Objectives
- 30 days: Have a basic understanding of Dragos's platform and its dependencies, know how the Quality Practice works at Dragos, and be able to take ownership of TestRail suites and generate initial reports
- 90 days: Be able to autonomously conduct continued performance evaluations and provide data-driven suggestions to improve platform performance and stability
- 180 days: The team has automated performance tests as part of the build pipeline. You have demonstrated leadership, growing the performance engineers' capabilities and skillsets so they can understand requirements and write tests to meet them. You proactively send reports to interested parties and can answer questions from both technical and nontechnical standpoints, up to the executive level
- 365 days: You are seen as the SME in application and network performance, proactively finding areas of the platform where performance can be improved through process, tooling, and configuration. You partner with Product and Engineering to evaluate and incorporate these changes into the product
Job tags:
Back-end
Big Data
CI/CD
Compliance
Cybersecurity
Data-driven
Docker
Elastic
Java
JavaScript
KPIs
Kubernetes
Linux
Mentoring
Redis
Security
SLA
SME
Statistics
Training
Unix
Job region(s):
North America