Deep Learning Engineer
Disclaimer: This is an engineering position, not a data-science position. Please apply only if you consider yourself a developer rather than a data scientist. You will not work directly with our clients' data to solve their problems; instead, you will be responsible for building a distributed, scalable back-end that delivers state-of-the-art training.
At deepomatic, we believe artificial intelligence is the way to unlock the world of tomorrow. We believe these technologies should be made accessible to all instead of being the privilege of a few big technology companies. We are developing a web platform used by Global 500 companies such as Airbus, Valeo and Compass Group to solve problems ranging from detecting cancers and making motorways smarter to enabling autonomous cars. We need talented and creative engineers to help us make that new world a reality.
Your main goal will be to design and build the second version of our training and data visualisation pipeline. Here are a few examples of the challenges you might face:
- What's the best way to let our clients ship their custom neural network architectures in our pipeline?
- How do we implement hyperparameter (meta-parameter) optimization efficiently?
- How do we guarantee reproducibility and backward compatibility?
- What is the best heuristic to automatically pre-annotate a dataset with the least amount of training data?
- How do we set up a system that detects outliers in production and feeds them back into the platform for annotation?
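To give a flavour of the second question, the simplest form of hyperparameter optimization is a random search over a parameter space. The sketch below is purely illustrative: the toy objective, the parameter names and the search space are all hypothetical, not deepomatic's actual pipeline.

```python
import random

def random_search(objective, space, n_trials=50, seed=0):
    """Sample hyperparameter configurations at random, keep the best one."""
    rng = random.Random(seed)
    best_params, best_score = None, float("inf")
    for _ in range(n_trials):
        # Draw each hyperparameter uniformly from its (low, high) range.
        params = {name: rng.uniform(lo, hi) for name, (lo, hi) in space.items()}
        score = objective(params)
        if score < best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Toy stand-in for a validation loss (hypothetical).
def val_loss(p):
    return (p["lr"] - 0.01) ** 2 + (p["dropout"] - 0.5) ** 2

space = {"lr": (1e-4, 1e-1), "dropout": (0.0, 0.9)}
best, score = random_search(val_loss, space)
print(best, score)
```

In a production pipeline the trials would be expensive training runs dispatched to workers, and a smarter strategy (e.g. Bayesian optimization) would typically replace pure random sampling.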
Responsibilities:
- Contribute to our deep learning platform by adding exciting new capabilities.
- Build modern and robust ways to manage deep learning training jobs in a distributed, microservice-based environment.
- Improve the scalability of our platform by making its systems more efficient.
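Managing training jobs in a distributed environment boils down to a producer/worker pattern: jobs go onto a queue, workers pull and execute them. This minimal sketch uses only the Python standard library and a fake `run_training` function; in practice the queue and workers would be real services (e.g. Celery workers running containerized training jobs).

```python
import queue
import threading

# Hypothetical stand-in for launching one training run; in a real
# microservice setup this would be a Celery task or a container job.
def run_training(job):
    return f"trained:{job['model']}"

def worker(jobs, results):
    while True:
        job = jobs.get()
        if job is None:  # sentinel value: shut this worker down
            jobs.task_done()
            break
        results.append(run_training(job))
        jobs.task_done()

jobs, results = queue.Queue(), []
workers = [threading.Thread(target=worker, args=(jobs, results)) for _ in range(3)]
for w in workers:
    w.start()
for model in ("resnet", "yolo", "unet"):
    jobs.put({"model": model})
for _ in workers:
    jobs.put(None)  # one sentinel per worker
jobs.join()
print(sorted(results))
```

The same structure scales out by replacing the in-process queue with a message broker, which is where tools like Celery come in.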
Requirements:
- 3+ years of professional experience
- Experience with deep learning (required)
- Experience with Python, TensorFlow and Docker (required)
- Great human qualities and a love for teamwork
- Great oral and written communication skills in English
- Experience with tools for distributed architectures (e.g. Celery, Hadoop) is a plus