Overview
ARKA Group L.P. (“ARKA”) is an advanced technologies company serving the U.S. military, intelligence community, and commercial space industry, delivering next-generation solutions to support the national security space enterprise. Built on more than six decades of excellence, ARKA brings modern approaches and a culture of innovation to the challenges of today. Join the ARKA team to learn how Beyond Begins Here. Discover your next career opportunity now!
Position Overview: The Staff AI/ML Engineer (LLMs) will lead the development of Agentic AI capabilities and other LLM-based capabilities for a multitude of mission management applications.
We offer generous relocation benefits for eligible candidates. In support of work/life balance, many positions are available for a flexible schedule within the pay period. Ask us about the opportunity for flex scheduling if that’s of interest to you.
Responsibilities: Lead and mentor a multidisciplinary team of developers and researchers implementing machine learning algorithms to solve a broad set of challenges for our various customers
Apply LLMs to complex domain-specific problems and operational workflows
Adapt and fine-tune foundation models for specialized use cases
Design and implement retrieval-augmented generation (RAG) systems and semantic search architectures
Build production-grade LLM applications and agentic systems
Deploy scalable AI solutions across cloud, on-prem, and hybrid environments
Analyze large, multi-modal datasets to extract meaningful features and actionable insights
Translate emerging research into applied, mission-relevant capabilities
Communicate technical strategy, status, and risks to internal and external leadership
Required Qualifications: B.S. in machine learning, computer science, mathematics, or related fields
8+ years of experience, preferably in software development or data science, including 2+ years building LLM applications using some of the following:
Fine-tuning foundational models
Steering techniques (e.g. sparse autoencoders, representation tuning)
Building adapters to use foundational models (e.g. PEFT, LLaMA-Factory)
Prompt engineering techniques / Inference time techniques (e.g. chain of thought, tree of thoughts, etc.)
Using Retrieval Augmented Generation techniques to populate and query vector databases (e.g. Weaviate, Pinecone, pgvector)
Using LLM Frameworks (e.g. LangChain, DSPy, Microsoft Agent Framework)
Using AI APIs (e.g. AWS Bedrock, OpenAI)
Using LLM deployment frameworks (e.g. llama.cpp, vLLM, TGI)
Developing UIs with React
Experience leading an interdisciplinary team of researchers and software developers, and working with a program manager to define project scope and schedule to meet milestones defined by our customers
Experience with Python and data science / machine learning libraries (e.g. NumPy, Pandas, Polars, scikit-learn, etc.)
Experience contributing on a team using version control (e.g. git, GitLab, Bitbucket)
Active TS/SCI U.S. Government Security Clearance
Preferred Qualifications: M.S. or PhD in machine learning, computer science, mathematics, or related fields
Experience leading an interdisciplinary team of researchers and software developers
Experience with any of the following:
Large Language Models and experience identifying ways to incorporate them into new domains and applications
Applying Transformer-based architectures to domains outside of Natural Language Processing (NLP), such as computer vision
Natural Language Processing algorithms such as BERT
Reinforcement learning and familiarity with Gymnasium/Gym, OpenEnv, TorchRL, RLlib, and Stable Baselines
Applying clustering algorithms and/or deep neural networks to real-life problems
Implementing tracking and pattern-of-life algorithms
Experience with GenAI Ops techniques (e.g. LLM-as-a-judge) and frameworks (e.g. Langfuse, MLflow, Arize Phoenix)
Experience with Machine Learning libraries and frameworks such as HuggingFace and LangChain
Experience with Linux
Experience with CUDA and Python libraries such as CuPy, Numba, cuSignal, cuDF, etc.
Familiarity with using AWS cloud computing resources such as EC2, S3, Lambda, etc.
Experience with any of the following additional languages: Java, C++, Rust, Go, and/or C#
Experience in application deployment, virtualization, and containerization (e.g. Podman, Docker, Kubernetes)
Company:
ARKA Group LP
Level of experience (years):
Senior (5+ years of experience)