For over 20 years, Global Relay has set the standard in enterprise information archiving with industry-leading cloud archiving, surveillance, eDiscovery, and analytics solutions. We securely capture and preserve the communications data of the world’s most highly regulated firms, giving them greater visibility and control over their information and ensuring compliance with stringent regulations.
Though we offer competitive compensation, benefits, and all the other perks one would expect from an established company, we are not your typical technology company. Global Relay is a career-building company. A place for big ideas. New challenges. Groundbreaking innovation. It’s a place where you can genuinely make an impact – and be recognized for it.
We believe great businesses thrive on diversity, inclusion, and the contributions of all employees. To that end, we recruit candidates from different backgrounds and foster a work environment that encourages employees to collaborate and learn from each other, completely free of barriers.
We encourage you to apply if your qualifications and experience are a good fit for any of our openings.
As a Senior Data Scientist in the AI group, you will build models to detect bad behaviour in the financial sector, with the ultimate goal of detecting and preventing fraud. There is a long road ahead of us and a lot of interesting work to do to get there. Our AI group retains the focus and purpose of Global Relay’s startup days while benefiting from the stability of a mature, medium-sized company. We are looking for great people to join our team and build solutions that have an important impact.
Responsibilities:
Working closely with other data scientists and developers to build and deploy machine learning models for Global Relay’s customers
Being a subject matter expert on current machine translation (MT) techniques
Interacting with product managers on enhancements to our core products
Executing all steps in the data science process from understanding business requirements to deploying models
Producing reports detailing model performance
Requirements:
5+ years of experience solving machine learning problems
Experience working with very large data sets in an enterprise-wide application environment
Python and Bash experience
Knowledge of common machine learning libraries such as scikit-learn, TensorFlow, PyTorch, NLTK, and spaCy
An understanding and appreciation of the tradeoffs between different MT methods
An understanding of different neural network architectures as applied to MT, such as Transformers, LSTMs, RNNs, and CNNs
Strong organizational and communication skills
Nice to Haves:
MSc or PhD in a STEM or Linguistics subject
Data collection and cleaning experience
Data engineering skills
Machine learning experience in different modalities
Experience working with explainable and interpretable models
Experience with specialized machine learning libraries such as Seq2Seq, NeMo, and Fairseq
Many-to-Many multilingual translation
Multi-task learning / transfer learning
Model optimization: pruning, distillation, quantization
Kubernetes and microservices
Working in an agile development environment
Level of experience (years):
Senior (5+ years of experience)
How to apply:
Please mention NLP People as a source when applying