Remote Machine Learning Jobs

Last Week

Software Engineer, Behavior Planning
Voyage  
data science machine learning linux cpp May 26
Voyage is delivering on the promise of self-driving cars.

Voyage has built the technology and services to bring autonomous transportation to those who need it most, beginning in retirement communities. Whether residents face mobility restrictions or just want to take a ride, Voyage takes pride in getting all our passengers to their destination safely, efficiently, and affordably. Our journey begins in calmer communities, but we won't stop until anyone, anywhere can summon a Voyage.

The Voyage Behavior Planning Team is responsible for developing algorithms that allow the vehicle to take the best actions. Based on the output of our Motion Prediction module, Behavior Planning’s task is to find the best motion plan that the vehicle should follow in order to make progress, while keeping the trip both safe and comfortable. You will develop models to encode typical vehicle behavior, including models to handle lane changes, intersections, and similar actions. 

As part of the broader Autonomy Team, you will also interact on a daily basis with other software engineers to tackle highly advanced AI challenges. All Autonomy Team members will work on a variety of problems across the autonomy space, contributing to the final goal of building the most advanced autonomous driving technology available for communities around the world.

Responsibilities:

  • Design models to handle how other road users interact with our car. Evaluate the performance of such models on real-world and simulated data sets
  • Dive into data, explore, uncover and understand the behaviors of road users such as cars, bikes, golf carts, and pedestrians; leveraging machine learning and statistics where appropriate
  • Architect and implement decision making algorithms into production-level code
  • Work closely with developers from planning, infrastructure, localization, and perception teams to debug, fine-tune, and deploy production systems

Requirements:

  • 3+ years of industry experience with fluency in C++, including standard scientific computing libraries
  • Experience using modern software engineering tools (e.g., version control, CI, testing)
  • Strong applied math background (linear algebra, statistics, probability)
  • Familiarity with any of: task planning, motion planning, motion prediction, controls
  • Practical experience in data science, modeling, and analysis of large datasets is a huge plus
  • Experience with software system architecture design
  • Experience in Linux environments is desired
We are an equal opportunity employer and value diversity at our company. Women, people of color, members of the LGBTQ community, individuals with disabilities, and veterans are strongly encouraged to apply. 

This Month

Independent & Self-motivated Python Backend Developer
Seez  
python-3.x postgresql linux docker odoo python May 21

Seez is growing and taking on more projects, and we're looking for bright new colleagues to help us with these challenging projects. We are focusing on the backend and are looking for strong colleagues to work on Python development projects. Our entire backend architecture is built heavily with Python, and you're expected to know your way around the language.

We have a core backend pipeline that runs web scrapers on a daily basis, aggregating data from multiple sources in multiple countries. This data is fed through a pipeline that cleans and updates it continuously. The aggregated data can be viewed by our users in the Seez apps. We apply machine learning on multiple projects. The new projects will build upon the current architecture, and some will be completely independent projects that may re-use some of the existing infrastructure.

A quick overview of our backend technology stack:

  1. Python 3 (Flask, FastAPI, Scrapy)
  2. Redis and Celery
  3. PostgreSQL
  4. Docker
  5. Linux
  6. Firebase
  7. Odoo ERP

You will have freedom, you will be challenged, and you will not be shut down by bureaucracy; your voice will be heard. You are expected to handle multiple tasks and shouldn't be afraid to take on challenges. However, you are expected to think challenges through and be able to offer feedback. You will be part of a team and are expected to lean on your teammates when needed, but also to run on your own and actively seek feedback on projects.

We value communication very highly, and you will be expected to participate in and join discussions on product development as well as company direction. Lastly, you are not a code monkey. We have a very free and open-minded culture and are preferably looking for someone with the same mentality.

We value growing our team, and there will be plenty of opportunities for you to jump into our other projects and get familiar with them. You will be part of a distributed team spread across the Middle East and Europe. We will only accept applicants who are within ±2 hours of CEST. Daily communication is done through Slack, and we use Zoom extensively. We have multiple weekly meetings, not counting one-on-ones and project-related interactions.

Our ideal candidate has the following characteristics:

  • You have been coding for a while
  • You can keep a cool head in stressful situations
  • You care about code quality, readability, and maintainability
  • You take pride in what you do, but welcome feedback
  • Good communication skills and willingness to work in a team
  • Prior experience working with Scrum and/or Agile methodologies
  • Excellent reading, writing, and oral skills in English

We're looking forward to hearing from you! Stay safe, and keep practicing social distancing.

Software Engineer, Research
python cpp x86 keras scikit-learn machine learning May 20

Overview

Exciting opportunity to work on significantly advancing the state of the art in cybersecurity tools! Our effort addresses an outstanding software security hole through novel applications of machine learning.

Location: GrammaTech has offices in Ithaca, NY; Bethesda, MD; and Madison, WI, but will consider remote employees when there is a strong match of skills and prior remote work experience outside the recent pandemic. (Remote employees MUST be located in the United States.)

Responsibilities

Under guidance of a principal investigator (PI), a software engineer on a research project will implement innovative prototypes to explore new approaches to problems in software security. A research-oriented software engineer is expected to:

  • Study and implement approaches drawn from academic literature or in-house design.
  • Evaluate the resulting prototype implementation to test its value in addressing the research goals.
  • Report results to the PI and respond by adapting the prototype to better address research goals.
  • Contribute to presentations and written reports that keep research sponsors up to date on project progress.
  • Prepare prototypes for demonstrations and evaluations by research sponsors.

Qualifications

Required

  • BS in Computer Science or equivalent. 
  • Three (3) years or more of industry experience.
  • Significant experience with applying machine learning techniques such as artificial neural networks, support vector machines, and cluster analysis to different problem domains.
  • Experience implementing robust software using open-source machine learning frameworks such as Keras, scikit-learn, and Gensim. Such experience is expected to come from projects beyond standard coursework.
  • Ability to read advanced machine learning publications, and to judge and implement the key ideas.
  • Advanced software application development skills in Python and C++. 
  • Thorough understanding of data structures and algorithms.

Preferred

  • MS or PhD in Computer Science. 
  • Experience with machine code (x86, x64, ARM).
  • Knowledge of the cybersecurity domain.
  • Knowledge of containerization, orchestration, and deployment (docker, Kubernetes, AWS).

GrammaTech, Inc. is an Equal Opportunity/Affirmative Action employer. 
Members of underrepresented groups are encouraged to apply.

Backend Engineer
go microservices rest distributed-computing kubernetes backend May 14

Carbon Relay is a world-class team focused on harnessing the power of machine learning to optimize Kubernetes. Our innovative platform allows organizations to boost application performance while keeping costs down. We recently completed a major fundraising round and are scaling up rapidly to turn our vision into reality. This position is perfect for someone who wants to get in on the ground floor at a startup that moves fast, tackles hard problems, and has fun!

We are looking for a Senior Software Engineer to spearhead the development of our backend applications. You will create a state-of-the-art backend to bridge the gap between the machine learning and Kubernetes teams. This includes defining and implementing efficient, robust and scalable APIs and services that meet current and emerging best practices.

Responsibilities

  • Developing our internal APIs and backend
  • Designing and implementing SaaS-based microservices
  • Collaborating with our infrastructure, machine learning and Kubernetes teams

Required qualifications

  • 10+ years experience in software engineering
  • Proficient in Go
  • Experience shipping and maintaining software products

Preferred qualifications

  • Experience with JavaScript
  • Experience with GCP/GKE
  • Familiarity with Kubernetes and containerization
  • Experience designing, building, and maintaining distributed systems

Why join Carbon Relay

  • Competitive salary plus equity
  • Health, dental, vision and life insurance
  • 401k with matching
  • Unlimited vacation policy (and we do really take vacations)
  • Ability to work remotely
  • Snacks, lunches and all the typical benefits you would expect from a well-funded, fun startup!
Backend Engineer
python security rust backend javascript machine learning May 10

Background

The future will be distributed. This is important because the centralized powers of today extract our data and sell it to the highest bidder, to our detriment. We can fix that. Federated protocols for messaging, microblogging, as well as distributed network architectures for payments, identity and organization are well underway. We are missing a consumer product that enables users to move away from data repositories centralized in the hands of big tech so they can safely and privately interact with digital technology. We need a new personal computing platform that enables this. And we need your help to build that platform.

Our organization is a values-driven cooperative that prioritizes people and our planet over profit. We aim to have a global and positive impact on people’s lives by creating technology that works for people, instead of trying to extract data or money from them. We value open and open-source software. We look for strong, values-driven people who share our vision. We look forward to telling you more about our vision, our values, and what we are building.

Requirements

  • *Nix experience
  • CI/CD development
  • Ability to independently architect an application
  • Proficiency in multiple programming languages (e.g. JavaScript, Python, C)
  • Ability to learn a new language quickly (e.g. Rust)
  • Community participation and management
  • Security minded (ability to architect for security)
  • Analytic communication skills
  • [Optional] Ability to work remote
  • [Optional] Machine learning affinity

Role

We are looking for a fun and skilled full-time backend engineer to join our diverse and distributed team. We are bound by our common vision and strong values. You will help our team design and develop the backend for our privacy-by-design data platform. We are looking at using a graph database in combination with a security-focused language (e.g. Rust) and possibly modules written in other languages (e.g. Python, Swift). You will join an existing organization of researchers in machine learning and distributed technology (e.g. blockchain). We are just transitioning from research to productizing our solution and look forward to working closely with you to see our common vision come alive.

Values

We are strongly values driven. We believe that this provides the structure to scale our organization, innovate our technology and attract top talent as we strive to change the world towards a better future. These values apply to how we work and the philosophy of the solutions we create. 

What we offer

As an early organization we offer a base salary with a large stake in the future upside of our effort. Our leadership has successfully started and sold previous startups. We offer a creative, highly participatory environment without the classical top-down hierarchy. We value that each candidate brings their own unique mix of skills and experience. 

We have a preference to hire in Amsterdam or to offer relocation to Amsterdam. We believe your life outside your professional commitments should be enjoyed, and time to rejuvenate is critical to thriving: we offer 5 weeks paid holiday time.  

Solutions Architect
python data science machine learning api healthcare May 08
At Hyperscience, we use modern machine learning to turn documents into machine-readable data. Our customers receive a wide variety of documents, such as life insurance applications, paystubs, utility bills, and insurance claims, which must be processed quickly and accurately to better serve the people at these organizations and their customers. Amazingly, this is all done manually today. We’re on a mission to change that! Our product is already delivering value to large, blue-chip organizations in financial services and insurance, and we see a massive opportunity to expand to more industries and automate more business processes. We are looking for people who are excited to help us build upon this foundation and vision.

While at first this may not seem like a priority for a machine learning company, we see a huge opportunity in great user experience. Machine learning itself is not a product. While it’s a fundamental piece of our technology, it still must live in an ecosystem and be used by people. Human-in-the-loop is a powerful feature that allows us to do things like process anything, provide feedback loops for generating accuracy reporting, and on-demand model re-training. Our product works with people, and for people.

We're looking to grow our Engineering Team with the addition of a Solutions Architect.

As a Solutions Architect You Will Be:

  • The technical point of contact for the Customer Experience team when deep technical questions or issues arise from installing or using our product.
  • Creatively troubleshooting and solving problems given details from customers even when the initial answer is unclear.
  • Managing around-the-world engineering resources to diagnose and fix high priority escalations across multiple time zones and locations, facilitating information flow between the Customer Experience team and engineering as necessary.
  • Participating in customer calls which require more technical expertise.
  • Involved in architecture discussions and decisions when implementing new product features.
  • Leveraging your systems knowledge to deliver fast and scalable software, starting from the design of the system through development and extension.
  • Helping improve our code quality by writing unit tests and performing thorough code reviews.
  • Designing easy-to-use programmer interfaces and tools that will be leveraged by other developers, including APIs for our clients' developers.

Desired Experience:

  • Degree in Computer Science or related engineering field, or equivalent practical experience.
  • Experience in building web-scale and/or enterprise-grade systems in different environments.
  • Strong ability to reason about data structures, complexity, and possible engineering approaches to a problem.
  • Experience with Python / Django is preferred, but experience with any mainstream language and framework is required.
  • Experience with distributed systems is a huge plus.
  • Experience with database systems, including SQL and/or NoSQL solutions is required.
  • Strong background in data science and mathematics is a plus.
  • Experience with customer service is a plus.
  • Experience with version control systems, preferably Git.
  • Experience troubleshooting remote systems in a customer-owned environment with limited access is a huge plus.

What You Will Achieve:

  • Within your first 30 days:
      • You will become acquainted with, and eventually fully comfortable navigating, the full codebase, the technology stack, the development processes, and the org structure within the company.
      • You will learn the product and make your first significant, user-impacting contributions to one of our products.
      • You will get to know our ML domain, codebase, and practical applications.
      • You will begin to troubleshoot customer escalations as they arise.

  • Within your first quarter and beyond:
      • You will be an integral part of the team: a driven, focused self-starter who can navigate a certain amount of ambiguity and who is not afraid to take a sizable chunk of functionality, analyze it, break it down, implement it, and then assume ownership and responsibility for it.
      • You will take an active role in discussions about possible solutions, different approaches, API designs, and more.
      • You will decide when to bring in additional engineering help to handle escalations and help facilitate information flow between CX and engineering.
      • You will contribute to shaping the way customer issues and requests are handled within Hyperscience.

Benefits & Perks

  • Top notch healthcare for you and your family
  • 30 days of paid leave annually to help nurture work-life symbiosis
  • A 100% 401(k) match for up to 6% of your annual salary
  • Stock Options
  • Paid gym membership
  • Pre-tax transportation and commuter benefits
  • 6 month parental leave (or double salary to pay for your partner's unpaid leave)
  • Free travel for any person accompanying a breastfeeding mother and her baby on a business trip
  • A child care and education stipend up to $3,000 per month, per child, under the age of 21 for a maximum of $6,000 per month total
  • Daily catered lunch, snacks, and drinks
  • Budget to attend conferences, train, and further your education
  • Relocation assistance
Hyperscience provides equal employment opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, sex, national origin, age, disability or genetics. In addition to federal law requirements, Hyperscience complies with applicable state and local laws governing nondiscrimination in employment in every location in which the company has facilities. This policy applies to all terms and conditions of employment, including recruiting, hiring, placement, promotion, termination, layoff, recall, transfer, leaves of absence, compensation and training.

This Year

Software Architect
Numbrs  
aws kubernetes docker java apache-kafka machine learning Apr 28

Numbrs is reshaping the future of the workplace. We are a fully remote company, at which every employee is free to live and work wherever they want.

Numbrs was founded with the vision to revolutionise banking. Therefore from day one Numbrs has always been a technology company, which is driven by a strong entrepreneurial spirit and the urge to innovate. We live and embrace technology.

At Numbrs, our engineers don’t just develop things – we have an impact. We change the way people manage their finances by building the best products and services for our users.

Numbrs engineers are innovators, problem-solvers, and hard-workers who are building solutions in big data, mobile technology and much more. We look for professional, highly skilled engineers who evolve, adapt to change and thrive in a fast-paced, value-driven environment.

Join our dedicated technology team that builds massively scalable systems, designs low latency architecture solutions and leverages machine learning technology to turn financial data into action. Want to push the limit of personal finance management? Join Numbrs.

Job Description

You will work in the Architecture team to support the Head of Technology in all the activities of the Technology department. You will be responsible and accountable for the oversight of all aspects of engineering operations, the architecture and design of Numbrs platform, and the delivery of services and solutions within Technology.

Key Qualifications

  • a Bachelor's or higher degree in a technical field of study, or equivalent practical experience
  • a minimum of 5 years experience architecting, developing, evolving and troubleshooting large scale distributed systems
  • hands-on experience with micro-service based architecture
  • experience with software engineering best practices, coding standards, code reviews, testing and operations
  • hands-on experience with Java
  • knowledge of AWS, Kubernetes, and Docker
  • leadership experience
  • excellent troubleshooting and creative problem-solving abilities
  • excellent written and oral communication and interpersonal skills

Ideally, candidates will also have

  • experience with systems for automating deployment, scaling, and management of containerised applications, such as Kubernetes and Mesos
  • experience with machine learning and big data technologies, such as Kafka, Storm, Flink and Cassandra
  • experience with encryption and cryptography standards

Location: Remote

Senior Data Scientist, Customer Analytics
 
senior machine learning Apr 22
Atlassian is continuing to hire for all open roles with all interviewing and on-boarding done virtually due to COVID-19. Everyone new to the team, along with our current staff, will temporarily work from home until it is safe to return to our offices.

We are looking for a top-tier data scientist to be an integral part of our newly formed customer analytics team in our Mountain View office. This candidate will play a key role in building customer behaviour models and delivering meaningful insights to improve customer experience and retention. They will partner closely with our machine learning team to build and optimize LTV and Customer Health Index (CHI) scoring models.
 
This is an outstanding opportunity for someone who has a proven record of partnering with internal stakeholders to bridge the gap between customer behaviors and cross channel customer views to propose retention, cultivation and win-back strategies.

On the first day, we'll expect you to have

  • 7+ years of experience in analytics or closely related fields
  • Expertise in SQL and proficiency in another data programming language (Python, R, etc.) 
  • Understanding of machine learning techniques such as Regression, ANOVA, Clustering, Decision Trees, Gradient Boosting Machines
  • A strong intuition for crafting raw data and analysis into well-written and persuasive content
  • Passionate about customer experience, and customer analytics as a craft
  • Great communication to share analyses and recommendations to non-technical audiences 
  • Experience in designing and analyzing experiments that drive key product decisions 
  • Strong sense of accountability, self-drive, and the ability to function independently 
  • An undergraduate degree in a technical subject from a top school

It's great, but not required, if you have

  • Experience in both enterprise (or B2B) as well as consumer (or B2C) environments 
  • Track record of presenting at meet-ups and conferences 
  • An advanced degree in a quantitative subject area, such as statistics, economics, mathematics, or computer science
More about our team

The Customer Success Analytics team at Atlassian is tasked with driving insights about our customers throughout their lifecycle with Atlassian. The team is responsible for defining and maintaining a customer health score, providing insights on customer churn and retention, building and managing models to predict potential churn, working cross-functionally to help customers get more value from our product suite, and driving insights on the multi-year effort to transition on-prem customers to Cloud. This team partners very closely with other analytics teams.
We are a highly collaborative team with a high bar for rigorous analyses, and we love to have fun along the way.

More about our benefits

Whether you work in an office or a distributed team, Atlassian is highly collaborative and yes, fun! To support you at work (and play) we offer some fantastic perks: ample time off to relax and recharge, flexible working options, five paid volunteer days a year for your favourite cause, an annual allowance to support your learning & growth, unique ShipIt days, a company paid trip after five years and lots more.

More about Atlassian

Creating software that empowers everyone from small startups to the who’s who of tech is why we’re here. We build tools like Jira, Confluence, Bitbucket, and Trello to help teams across the world become more nimble, creative, and aligned—collaboration is the heart of every product we dream of at Atlassian. From Amsterdam and Austin, to Sydney and San Francisco, we’re looking for people who want to write the future and who believe that we can accomplish so much more together than apart. At Atlassian, we’re committed to an environment where everyone has the autonomy and freedom to thrive, as well as the support of like-minded colleagues who are motivated by a common goal to: Unleash the potential of every team.

Additional Information

We believe that the unique contributions of all Atlassians are the driver of our success. To make sure that our products and culture continue to incorporate everyone's perspectives and experience, we never discriminate on the basis of race, religion, national origin, gender identity or expression, sexual orientation, age, or marital, veteran, or disability status.

All your information will be kept confidential according to EEO guidelines.
Data Scientist
python data science machine learning Apr 21
About Imperfect

Imperfect Foods was founded in 2015 with a mission to reduce food waste and build a better food system for everyone. We offer imperfect (yet delicious) produce, affordable pantry items, and quality meat and dairy. We deliver them conveniently to our customers’ doorsteps and pride ourselves on offering up to a 30% discount compared to grocery store prices. Our customers can get the healthy, seasonal produce they want alongside the grocery staples they rely on, without having to compromise their budget or values. We’re proving that doing the right thing for the planet doesn’t have to cost more, and that shopping for quality ingredients can support the people and resources that it takes to grow our favorite foods.

We're headquartered in San Francisco with operations all over the country. Check our website to see if there is an Imperfect near you!

We're looking for folks who are positive, motivated, and ready to change the world. If that sounds like you, drop us a line!

How we are protecting employees from COVID-19

At Imperfect Foods, employee health and safety is our top priority. We have implemented processes and precautions to prevent the spread of COVID-19 in our facilities. We provide gloves, masks, and hand sanitizer to all essential employees who must report to work. Before entering our warehouse, employees have their temperatures checked. In addition, we take great care to ensure frequently touched surfaces are sanitized throughout the day and all warehouses are fully sanitized weekly.

We have also implemented an Emergency Sick Leave policy providing full-time and part-time employees 2 additional weeks of paid time off and up to 26 weeks paid leave if they have a confirmed case of COVID-19.

About the Role:

Imperfect is looking for an experienced Data Scientist to join the Business Intelligence team. The role will develop and automate algorithms that integrate with various parts of the business to better our customer understanding, improve our customer experience, and drive operational efficiency. Our Data Scientist will collaborate with departments across the company, such as Marketing, Operations, and Engineering, for a wide and deep impact. Some example projects include demand forecasting to help with warehouse labor and inventory planning, optimizations to improve warehouse efficiency, personalization and recommendation algorithms to enable customers to discover relevant products, and product subscription cadence optimization. As an early member of the data science team, your role will influence the tech stack and frameworks we develop. This role requires a strong level of analytical horsepower and communication skills to effectively analyze and tell the story behind the data.

 If you like the idea of swimming in data, fighting food waste, and working with a bunch of pleasant people, come join us!

Responsibilities:

  • Build production grade models on large-scale datasets by utilizing advanced statistical modeling, machine learning, or data mining techniques
  • Provide data-driven analyses and insights for strategic and business initiatives, while maintaining analytics roadmap to prioritize initiatives, communicate timelines, and ensure successful, timely completion of deliverables
  • Collaborate with the teams across the company to identify impactful business problems and translate them into structured analyses, actionable insights, and reports and dashboards
  • Assist with the development and deployment of analytical tools and develop custom models to track key metrics, uncover insights in the data, and automate analyses
  • Contribute to code reviews and software development best practices
  • Effectively communicate with initiative stakeholders, including technical and non-technical audiences. Tell the story behind the data

Skills and Qualifications:

  • 3+ years of professional experience as a data scientist, including deploying and maintaining code in a production environment
  • Experience with machine learning techniques and advanced analytics (e.g. regression, classification, clustering, time series, econometrics, mathematical optimization)
  • Advanced SQL and Python skills, including running advanced analytics in a scripting language. Bonus: experience in R and other languages
  • A solid grasp of basic statistical applications and methods (experimentation, probabilities)
  • Experience working with large data sets and preparing them for analysis

About You:

  • You're able to clearly and effectively communicate the results of complex analyses and transform data into a story for different audiences and various stakeholders
  • You demonstrate intellectual curiosity, and a passion for translating information into actionable insights, with data big, small, structured, and messy
  • You have the insight to take ambiguous problems and solve them in a structured, hypothesis-driven, data-supported way
  • You're a self-starter with the ability to juggle multiple projects at once
  • You’re passionate about our mission to eliminate food waste and create a better food system for all

Details of the Position:

  • Full-time exempt position reporting to the Director of Business Intelligence
  • Candidate can be remotely located within the US
  • Salary and employee stock options commensurate with experience
  • Competitive benefits package including health care, paid vacation, 401K, paid parental leave, and recurring credit towards your Imperfect account!

Physical Requirements:

  • Sedentary work; involves sitting most of the time
  • Occasional movement around the office may be necessary
  • Regular work with computers, including keyboards, mice, and screens
  • Regular use of mobile devices, including smartphones and tablets
Individuals seeking employment at Imperfect Foods are considered without regard to race, color, religion, national origin, age, gender, marital status, ancestry, physical or mental disability, veteran status, or sexual orientation.

U.S. E-Verify Notice: Imperfect Foods participates in E-Verify in the United States. Imperfect will provide the U.S. Social Security Administration (SSA) and, if necessary, the U.S. Department of Homeland Security (DHS), with information from each new employee's Form I-9 to confirm work authorization.
Share this job:
Site Reliability Engineer
Numbrs  
go kubernetes aws docker devops sysadmin Apr 21

Numbrs is reshaping the future of the workplace. We are a fully remote company, at which every employee is free to live and work wherever they want.

Numbrs was founded with the vision to revolutionise banking. From day one, Numbrs has been a technology company, driven by a strong entrepreneurial spirit and the urge to innovate. We live and embrace technology.

At Numbrs, our engineers don’t just develop things – we have an impact. We change the way people manage their finances by building the best products and services for our users.

Numbrs engineers are innovators, problem-solvers, and hard-workers who are building solutions in big data, mobile technology and much more. We look for professional, highly skilled engineers who evolve, adapt to change and thrive in a fast-paced, value-driven environment.

Join our dedicated technology team that builds massively scalable systems, designs low latency architecture solutions and leverages machine learning technology to turn financial data into action. Want to push the limit of personal finance management? Join Numbrs.

Job Description

You will be a part of a team that is responsible for deploying, supporting, monitoring and troubleshooting large scale micro-service based distributed systems with high transaction volume; documenting the IT infrastructure, policies and procedures. You will also be part of an on-call rotation.

Key Qualifications

  • a Bachelor's or higher degree in a technical field of study
  • a minimum of 5 years' experience deploying, monitoring and troubleshooting large scale distributed systems
  • background in Linux administration (mainly Debian)
  • scripting/programming knowledge of at least Unix shell scripting
  • good networking understanding (TCP/IP, DNS, routing, firewalls, etc.)
  • good understanding of technologies such as Apache, Nginx, Databases (relational and key-value), DNS servers, SMTP servers, etc.
  • understanding of cloud-based infrastructure, such as AWS
  • experience with systems for automating deployment, scaling and management of containerised applications, such as Kubernetes
  • quick to learn and fast to adapt to changing environments
  • excellent communication and documentation skills
  • excellent troubleshooting and creative problem-solving abilities
  • excellent communication and organisational skills in English

Ideally, candidates will also have

  • experience deploying and supporting big data technologies, such as Kafka, Spark, Storm and Cassandra
  • experience maintaining continuous integration and delivery pipelines with tools such as Jenkins and Spinnaker
  • experience implementing, operating and supporting open source tools for network and security monitoring and management on Linux/Unix platforms
  • experience with encryption and cryptography standards
Share this job:
Senior software engineer
python react-js docker apache-kafka next.js senior Apr 20

We are looking for software engineers who love to build well-designed systems that stand the test of time and make an impact on the day-to-day operations and lives of consumers and agents in the insurance technology space.

The Software Engineer will have the opportunity to engage in full stack development, managing k8s infrastructure, visualization, user analytics, distributed systems, docker container based services and machine learning algorithms.  You will be working with reactjs, python, git, webpack, k8s, variety of data stores, continuous integration, GCP, AWS, automated testing, docker and always open to speeding up our development process and scaling our system.  You will also be handling and managing a lot of data to inform the business operations. Your work will constantly be driven by end-user-focused development to enhance consumer and agent experience. You will utilize user engagement data and increase usability. You will also have ample opportunity to refactor, reimplement and drive innovation. In the end, you will have the final responsibility of delivering high quality software that is well tested for the end user’s needs (insurance agents, consumers, operators).

We respect people who continuously improve, learn, and take pride in delivering software products that people love to use. You will work closely with the design team and stakeholders to deliver scalable, well-designed front end components and backend services.

Qualifications

  • Proven experience in understanding complex user needs with multiple stakeholders and providing practical solutions that can work in production
  • Always automating solutions, while keeping them clearly understandable to other developers and users
  • Highly collaborative and able to communicate both interpersonally and in your code (we <3 comments and the ability to think about future developers)
  • Not afraid to dive into others’ code, refactoring and rewriting when it’s best for maintainability and the end user
  • Eye for good software engineering practices (i.e. experience in dealing with bad code and improving or refactoring with good design)
  • Experience in designing great APIs
  • Experience collaborating on GitHub
  • Experience in solving SQL performance issues a plus
  • Experience in creating open source modules or participating in development and packaging of open source modules
  • Experience in agile sprints a plus
  • Experience in developing and debugging with javascript and reactjs a plus
  • Experience working effectively in remote teams
  • Desire to continuously learn, improve, and apply new technologies that will increase operational efficiency and effectiveness

We are in the early stage of building a remote team, and looking for someone who will fit in this role and is excited about making a huge impact in insurance tech, while working in a collegial, highly collaborative environment—replete with many of the latest communication tools (Slack, Hangouts, etc.)—with the flexibility of working from anywhere (we are distributed across Seattle, Boston and Europe) or in our office in Boston.

If you are excited to partake in the growth of EverQuote and take on this challenge, please email kwan+careers@everquote.com with links that demonstrate your work and a resume.

Share this job:
Data Scientist
Auth0  
python data science machine learning nlp docker aws Apr 17
Auth0 is a pre-IPO unicorn. We are growing rapidly and looking for exceptional new team members to add to our teams and help take us to the next level. One team, one score.

We never compromise on identity. You should never compromise yours either. We want you to bring your whole self to Auth0. If you’re passionate, practice radical transparency to build trust and respect, and thrive when you’re collaborating, experimenting and learning – this may be your ideal work environment.  We are looking for team members that want to help us build upon what we have accomplished so far and make it better every day.  N+1 > N.

The Data Scientist will help build, scale, and maintain the entire data science platform. The ideal candidate will have deep technical understanding and hands-on experience building Machine Learning models, surfacing valuable insights, and promoting a data-driven culture across the organization. They will not hesitate to wrangle data, will understand the business objectives, and will have a good grasp of the entire data stack. This position plays a key role in data initiatives and analytics projects, and in influencing key stakeholders with critical business insights. You should be passionate about continuously learning, experimenting with, applying, and contributing to cutting-edge open source Data Science technologies.

RESPONSIBILITIES

  • Use Python and the vast array of AI/ML libraries to analyze data and build statistical models to solve specific business problems.
  • Improve upon existing methodologies by developing new data sources, testing model enhancements, and fine-tuning model parameters.
  • Collaborate with researchers, software developers, and business leaders to define product requirements and provide analytical support.
  • Directly contribute to the design and development of automated selection systems.
  • Build customer-facing reporting tools to provide insights and metrics which track system performance.
  • Communicate verbally and in writing to business customers and leadership team with various levels of technical knowledge, educating them about our systems, as well as sharing insights and recommendations.

BASIC QUALIFICATIONS

  • Bachelor's degree in Statistics, Applied Math, Operations Research, Engineering, Computer Science, or a related quantitative field.
  • Proficient with data analysis and modeling software such as Spark, R, and Python.
  • Proficient with using scripting language such as Python and data manipulation/analysis libraries such as Scikit-learn and Pandas for analyzing and modeling data.
  • Experienced in using multiple data science methodologies to solve complex business problems.
  • Experienced in handling large data sets using SQL and databases in a business environment.
  • Excellent verbal and written communication.
  • Strong troubleshooting and problem-solving skills.
  • Thrive in a fast-paced, innovative environment.

PREFERRED QUALIFICATIONS

  • Graduated with a Master's degree or PhD in Statistics, Applied Math, Operations Research, Engineering, Computer Science, or a related quantitative field.
  • 2+ years’ experience as a Data Scientist.
  • Fluency in a scripting or computing language (e.g. Python).
  • Superior verbal and written communication skills with the ability to effectively advocate technical solutions to scientists, engineering teams, and business audiences.
  • Experienced in writing academic-styled papers for presenting both the methodologies used and results for data science projects.
  • Demonstrable track record of dealing well with ambiguity, ability to self-motivate, prioritizing needs, and delivering results in a dynamic environment.
  • Combination of deep technical skills and business savvy to interface with all levels and disciplines within our and our customer’s organizations.

SKILLS AND ABILITIES

  • At least 3 years of relevant work experience
  • Ability to write, analyze, and debug SQL queries
  • Exceptional problem-solving and analytical skills
  • Fluency in implementing logistic regression, random forest, XGBoost, Bayesian, and ARIMA models in Python/R
  • Familiarity or experience with A/B testing and associated frameworks
  • Familiarity with Sentiment Analysis (NLP) and LSTM AI models
  • Experience with the full AI/ML life-cycle, from model development and training to deployment, testing, refining, and iterating
  • Experience with or willingness to learn tools such as Tableau, Apache Superset, Looker, or similar BI tools
  • Knowledge of AWS Redshift, Snowflake, or similar databases
  • Familiarity with tools such as Airflow and Docker is a plus
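One of the skills above is A/B testing; the core computation behind most A/B-testing frameworks is a two-proportion z-test, which fits in a few lines of standard-library Python. This is a generic sketch, unrelated to Auth0's actual tooling:

```python
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)            # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal CDF, via the error function.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Variant B converted 260/2000 vs. A's 200/2000.
z, p = two_proportion_z(conv_a=200, n_a=2000, conv_b=260, n_b=2000)
```

Production frameworks layer power analysis, sequential testing, and multiple-comparison corrections on top of this basic test.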

PREFERRED LOCATIONS

  • #AR; #US;
Auth0’s mission is to help developers innovate faster. Every company is becoming a software company and developers are at the center of this shift. They need better tools and building blocks so they can stay focused on innovating. One of these building blocks is identity: authentication and authorization. That’s what we do. Our platform handles 2.5B logins per month for thousands of customers around the world. From indie makers to Fortune 500 companies, we can handle any use case.

We like to think that we are helping make the internet safer. We have raised $210M to date and are growing quickly. Our team is spread across more than 35 countries and we are proud to continually be recognized as a great place to work. Culture is critical to us, and we are transparent about our vision and principles.

Join us on this journey to make developers more productive while making the internet safer!
Share this job:
Senior Backend Engineer - Recommendations
Medium  
backend senior golang python machine learning aws Apr 16
Medium’s mission is to help people deepen their understanding of the world and discover ideas that matter. We are building a place where ideas are judged on the value they provide to readers, not the fleeting attention they can attract for advertisers. We are creating the best place for reading and writing on the internet—a place where today’s smartest writers, thinkers, experts, and storytellers can share big, interesting ideas. To do that, we create simple and beautiful product experiences that prioritize the user experience.

We are looking for a Senior Backend Engineer who will work on building advanced recommendation systems that help users navigate the vast library of quality content on Medium. As an engineer on the recommendations team, you'll work closely with PMs and ML engineers to create the best version of Medium for every user.

What will you do

  • Work on a large-scale recommendation system with machine learning at its core.
  • Design and build scalable, performant backend services. (We primarily use Golang and Python.)
  • Create data pipelines and high performance online serving infrastructure.
  • Design end to end experiments that optimize for reader satisfaction.
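At its simplest, the core of a recommendation system like the one described can be illustrated by scoring unseen items with user-to-user cosine similarity. This is a toy sketch in standard-library Python with invented story IDs; Medium's actual system is far more sophisticated and its details are not public:

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two sparse rating dicts {item: score}."""
    common = set(u) & set(v)
    dot = sum(u[k] * v[k] for k in common)
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

def recommend(target, others, k=2):
    """Rank stories the target hasn't read by similarity-weighted ratings."""
    scores = {}
    for other in others:
        sim = cosine(target, other)
        for story, rating in other.items():
            if story not in target:
                scores[story] = scores.get(story, 0.0) + sim * rating
    return sorted(scores, key=scores.get, reverse=True)[:k]

reader = {"s1": 5, "s2": 3}
others = [{"s1": 4, "s3": 5}, {"s2": 4, "s4": 2}]
ranked = recommend(reader, others)
```

Real systems replace the explicit similarity loop with learned embeddings and approximate nearest-neighbor retrieval so that scoring stays fast at scale.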

Who are you?

  • You have proven experience building server-side software.
  • You believe in the craft of software engineering, but are pragmatic with engineering tradeoffs.
  • You are passionate about using technology to help Medium readers discover the most relevant content.
  • You are familiar with services architecture and understand its trade-offs.
  • You have experience with AWS, Kafka, Redis, and relational database systems.

We'd particularly love it if

  • You have built services that serve a significant amount of traffic.
  • You are proficient in Golang, Python, and/or Spark.
  • You've worked on production machine learning systems at scale in search, ranking, recommendations, and/or natural language processing.
At Medium, we foster an inclusive, supportive, fun yet ambitious team environment. We value having a team that is made up of a diverse set of backgrounds and respect the healthy expression of diverse opinions. We embrace experimentation and the examination of all kinds of ideas through reasoning and testing. Come join us as we continue to change the world of digital media. Medium is an equal opportunity employer.

Interested? We'd love to hear from you.
Share this job:
DevOps Engineer
devops python machine learning docker aws backend Apr 09

At CrowdStrike we’re on a mission - to stop breaches. Our groundbreaking technology, services delivery, and intelligence gathering together with our innovations in machine learning and behavioral-based detection, allow our customers to not only defend themselves, but do so in a future-proof manner. We’ve earned numerous honors and top rankings for our technology, organization and people – clearly confirming our industry leadership and our special culture driving it. We also offer flexible work arrangements to help our people manage their personal and professional lives in a way that works for them. So if you’re ready to work on unrivaled technology where your desire to be part of a collaborative team is met with a laser-focused mission to stop breaches and protect people globally, let’s talk.

About the Role

We’re building the next-generation infrastructure and security platform for CrowdStrike which includes an application and service delivery platform, massively scalable distributed data storage and replication systems, and a cutting-edge search and distributed graph system. Help us scale CrowdStrike’s infrastructure and products to handle massive growth in traffic and data as we continue to experience dramatic growth in the usage of our products.

We are looking for a skilled DevOps Engineer who can contribute to our team, which is responsible for thousands of Splunk systems that run in AWS as well as in CrowdStrike data centers. We handle infrastructure, the Splunk backend, and some application development, though the latter is handled mostly by a dedicated application development team. Splunk knowledge is an advantage, but if you don't have it, don’t worry – we will teach you.

This position is open to candidates in Bucharest (Office or Romania Remote), Brasov, Cluj, Iasi and Timisoara (Remote).

You will:

  • Build automation tools for infrastructure using Python and various technologies including containers
  • Participate in, and at times lead, backend engineering efforts from rapid prototypes to large-scale applications using Python
  • Brainstorm, define, and build collaboratively with members across multiple teams
  • Obsess about learning, and champion the newest technologies & tricks with others, raising the technical IQ of the team
  • Constantly re-evaluate our product to improve architecture, knowledge models, user experience, performance and stability.
  • Be an energetic "self-starter" with the ability to take ownership and be accountable for deliverables

Key Qualifications:

  • Linux systems administration, bash scripting
  • Configuration management automation using Chef, Puppet or Salt
  • Python scripting, automation through scripts

Nice to have skills:

  • Good knowledge of Splunk search processing language (SPL), reporting, dashboards, and search acceleration techniques
  • Experience administering Splunk, especially clusters and bundle replication, custom search commands, KV Store
  • Javascript
  • AWS Cloud
  • Container technologies such as Docker, Kubernetes
  • Experience programming with the Splunk REST API
  • Regular expressions
  • Good knowledge of Splunk data ingestion, field extraction and post-ingestion processing

Benefits of Working at CrowdStrike:

  • Market leader in compensation
  • Comprehensive health benefits
  • Working with the latest technologies
  • Training budget (certifications, conferences)
  • Flexible work hours and remote friendly environment
  • Wellness programs
  • Stocked fridges, coffee, soda, and lots of treats
  • Peer recognition
  • Inclusive culture focused on people, customers and innovation
  • Regular team activities, including happy hours, community service event

We are committed to building an inclusive culture of belonging that not only embraces the diversity of our people but also reflects the diversity of the communities in which we work and the customers we serve. We know that the happiest and highest performing teams include people with diverse perspectives and ways of solving problems so we strive to attract and retain talent from all backgrounds and create workplaces where everyone feels empowered to bring their full, authentic selves to work.

CrowdStrike is an Equal Opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex including sexual orientation and gender identity, national origin, disability, protected veteran status, or any other characteristic protected by applicable federal, state, or local law.

Share this job:
Senior Cloud Software Engineer
cloud senior golang java python scala Apr 09

At CrowdStrike we’re on a mission - to stop breaches. Our groundbreaking technology, services delivery, and intelligence gathering together with our innovations in machine learning and behavioral-based detection, allow our customers to not only defend themselves, but do so in a future-proof manner. We’ve earned numerous honors and top rankings for our technology, organization and people – clearly confirming our industry leadership and our special culture driving it. We also offer flexible work arrangements to help our people manage their personal and professional lives in a way that works for them. So if you’re ready to work on unrivaled technology where your desire to be part of a collaborative team is met with a laser-focused mission to stop breaches and protect people globally, let’s talk.

About the Role:

The Sr. Software Engineer role is part of the CrowdStrike Romania Engineering team, which builds globally distributed, fault-tolerant, and highly scalable cloud-based critical systems using Golang.

Don't worry if you don't know Golang, we will teach you!

If you are a hands-on engineer who loves to operate at scale, let's talk!

This position is open to candidates in Bucharest (Office or Romania Remote), Brasov, Cluj, Iasi and Timisoara (Remote).

You will:

  • Lead backend engineering efforts from rapid prototypes to large-scale application services across CrowdStrike products
  • Make it possible for internal teams to easily work with data at the petabyte scale
  • Leverage and build cloud based services to support our top rated security intelligence platform
  • Work with security researchers to troubleshoot time-sensitive production issues
  • Keep petabytes of critical business data safe, secure, and available
  • Brainstorm, define, and build collaboratively with members across multiple teams
  • Obsess about learning, and champion the newest technologies & tricks with others, raising the technical IQ of the team
  • Be mentored and mentor other developers on web, backend and data storage technologies and our system
  • Constantly re-evaluate our product to improve architecture, knowledge models, user experience, performance and stability
  • Be an energetic ‘self-starter’ with the ability to take ownership and be accountable for deliverables
  • Use and give back to the open source community

You'll use:

  • Golang
  • Python
  • Cassandra
  • Kafka
  • Elasticsearch
  • SQL
  • Redis
  • ZMQ
  • Hadoop
  • AWS Cloud
  • Git

What You’ll Need:

  • Bachelor's Degree in Computer Science (or commensurate experience in data structures/algorithms/distributed systems)
  • Strong programming skills – Python / Java / Scala or Golang
  • The ability to design scalable and re-usable SOA services
  • The ability to scale backend systems – sharding, partitioning, scaling horizontally are second nature to you
  • The desire to ship code and the love of seeing your bits run in production
  • Deep understanding of distributed systems and scalability challenges
  • A deep understanding of multi-threading, concurrency, and parallel processing technologies
  • Team player skills – we embrace collaborating as a team as much as possible
  • A thorough understanding of engineering best practices from appropriate testing paradigms to effective peer code reviews and resilient architecture
  • The ability to thrive in a fast paced, test-driven, collaborative and iterative programming environment
  • The skills to meet your commitments on time and produce high quality software that is unit tested, code reviewed, and checked in regularly for continuous integration
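One of the requirements above is that sharding and partitioning be second nature. The most basic form, stable hash-based shard routing, can be sketched in a few lines (an illustrative example, not CrowdStrike's actual scheme; the key names are invented):

```python
import hashlib

def shard_for(key: str, num_shards: int) -> int:
    """Stable hash routing: the same key always maps to the same shard."""
    digest = hashlib.md5(key.encode()).digest()
    # Take the first 8 bytes as an integer, then reduce modulo shard count.
    return int.from_bytes(digest[:8], "big") % num_shards

# Route a user record to one of 16 shards, deterministically.
shard = shard_for("user:42", 16)
```

The catch with modulo hashing is that changing `num_shards` reshuffles almost every key, which is why systems like Cassandra use consistent hashing instead.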

Bonus Points awarded for:

  • Contributions to the open source community (GitHub, Stack Overflow, blogging)
  • Existing exposure to Golang, Scala, AWS, Cassandra, Kafka, Redis, Splunk
  • Prior experience in the cybersecurity or intelligence fields

Benefits of Working at CrowdStrike:

  • Market leader in compensation
  • Comprehensive health benefits
  • Working with the latest technologies
  • Training budget (certifications, conferences)
  • Flexible work hours and remote friendly environment
  • Wellness programs
  • Stocked fridges, coffee, soda, and lots of treats
  • Peer recognition
  • Inclusive culture focused on people, customers and innovation
  • Regular team activities, including happy hours, community service events

Bring your experience in distributed technologies and algorithms, your great API and systems design sensibilities, and your passion for writing code that performs at extreme scale. You will help build a platform that scales to millions of events per second and Terabytes of data per day. If you want a job that makes a difference in the world and operates at high scale, you’ve come to the right place.

We are committed to building an inclusive culture of belonging that not only embraces the diversity of our people but also reflects the diversity of the communities in which we work and the customers we serve. We know that the happiest and highest performing teams include people with diverse perspectives and ways of solving problems so we strive to attract and retain talent from all backgrounds and create workplaces where everyone feels empowered to bring their full, authentic selves to work.

CrowdStrike is an Equal Opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex including sexual orientation and gender identity, national origin, disability, protected veteran status, or any other characteristic protected by applicable federal, state, or local law.

Share this job:
Big Data Engineer
big data python data science machine learning aws Apr 09

At CrowdStrike we’re on a mission - to stop breaches. Our groundbreaking technology, services delivery, and intelligence gathering together with our innovations in machine learning and behavioral-based detection, allow our customers to not only defend themselves, but do so in a future-proof manner. We’ve earned numerous honors and top rankings for our technology, organization and people – clearly confirming our industry leadership and our special culture driving it. We also offer flexible work arrangements to help our people manage their personal and professional lives in a way that works for them. So if you’re ready to work on unrivaled technology where your desire to be part of a collaborative team is met with a laser-focused mission to stop breaches and protect people globally, let’s talk.

About the Role

We are looking to hire a Big Data Engineer for the Data Engineering team at CrowdStrike. The Data Engineering team operates within the Data Science organization, and provides the necessary infrastructure and automation for users to analyze and act on vast quantities of data effortlessly. The team has one of the most critical roles to play in ensuring our products are best-in-class in the industry. You will interact with product managers and other engineers in building both internal and external facing services.

This position is open to candidates in Bucharest (Office or Romania Remote), Brasov, Cluj, Iasi and Timisoara (Remote)

You will:

  • Write jobs using PySpark to process billions of events per day
  • Fine-tune existing Hadoop / Spark clusters
  • Rewrite some existing Pig jobs in PySpark

Key Qualifications

You have:

  • BS degree in Computer Science or related field
  • 7+ years of relevant work experience
  • Experience in building data pipelines at scale (Note: We process over 1 Trillion events per week)
  • Good knowledge of Hadoop / Spark / Apache Kafka, Python, AWS, PySpark, and other tools in the Big Data ecosystem
  • Good programming skills – Python
  • Operation experience in the tuning of clusters for optimal data processing
  • Experience in building out ETL jobs at scale
  • Good knowledge of distributed system design and associated tradeoffs
  • Good knowledge of CI / CD and associated best practices
  • Familiarity with Docker-based development and orchestration

Bonus points awarded if you have:

  • Created automated / scalable infrastructure and pipelines for teams in the past
  • Contributed to the open source community (GitHub, Stack Overflow, blogging)
  • Prior experience with Spinnaker, Relational DBs, or KV Stores
  • Prior experience in the cybersecurity or intelligence fields

Benefits of Working at CrowdStrike:

  • Market leader in compensation
  • Comprehensive health benefits
  • Working with the latest technologies
  • Training budget (certifications, conferences)
  • Flexible work hours and remote friendly environment
  • Wellness programs
  • Stocked fridges, coffee, soda, and lots of treats
  • Peer recognition
  • Inclusive culture focused on people, customers and innovation
  • Regular team activities, including happy hours, community service events

We are committed to building an inclusive culture of belonging that not only embraces the diversity of our people but also reflects the diversity of the communities in which we work and the customers we serve. We know that the happiest and highest performing teams include people with diverse perspectives and ways of solving problems so we strive to attract and retain talent from all backgrounds and create workplaces where everyone feels empowered to bring their full, authentic selves to work.

CrowdStrike is an Equal Opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex including sexual orientation and gender identity, national origin, disability, protected veteran status, or any other characteristic protected by applicable federal, state, or local law.

Share this job:
Site Reliability Engineer
golang scala machine learning cloud aws testing Apr 09

At CrowdStrike we’re on a mission - to stop breaches. Our groundbreaking technology, services delivery, and intelligence gathering together with our innovations in machine learning and behavioral-based detection, allow our customers to not only defend themselves, but do so in a future-proof manner. We’ve earned numerous honors and top rankings for our technology, organization and people – clearly confirming our industry leadership and our special culture driving it. We also offer flexible work arrangements to help our people manage their personal and professional lives in a way that works for them. So if you’re ready to work on unrivaled technology where your desire to be part of a collaborative team is met with a laser-focused mission to stop breaches and protect people globally, let’s talk.

About the Role

At CrowdStrike we operate a massive cloud platform that protects our customers from a variety of bad actors: cyber criminals, hacktivists and state sponsored attackers. We process tens of billions of events a day and we store and use petabytes of data. We’re looking for an engineer who is passionate about site reliability and is excited about joining us to ensure our service runs 24/7.

This position is open to candidates in Bucharest (Office or Romania Remote), Brasov, Cluj, Iasi and Timisoara (Remote).

You will:

  • Be responsible for all operational aspects of our platform - Availability, Latency, Throughput, Monitoring, Issue Response (analysis, remediation, deployment) and Capacity Planning with respect to Latency and Throughput. Build tooling to help monitor and analyze the platform
  • Work in a team of highly motivated engineers
  • Use your passion for technology to ensure our platform operates flawlessly 24x7
  • Obsess about learning, and champion the newest technologies & tricks with others, raising the technical IQ of the team. We don’t expect you to know all the technology we use but you will be able to get up to speed on new technology quickly
  • Have broad exposure to our entire architecture and become one of our experts in overall process flow
  • Be a great code reader and debugger; you will have to dive into large code bases, identify issues, and remediate them
  • Have an intrinsic drive to make things better
  • Bias towards small development projects and the occasional larger project
  • Use and give back to the open source community

You'll use:

  • Go(Golang)
  • Python
  • ElasticSearch
  • Cassandra
  • Kafka
  • Redis, Memcached
  • AWS Cloud

Key Qualifications:

You have:

  • Degree in Computer Science (or commensurate experience in data structures/algorithms/distributed systems).
  • Experience as a sustaining engineer or SRE for a cloud-based product.
  • Good understanding of distributed systems and scalability challenges – sharding, partitioning, scaling horizontally are second nature to you.
  • A thorough understanding of engineering best practices from appropriate testing paradigms to effective peer code reviews and resilient architecture.
  • The ability to thrive in a fast paced, test-driven, collaborative and iterative programming environment.
  • Good understanding of multi-threading, concurrency, and parallel processing technologies.
  • The skills to meet your commitments on time and produce high quality software that is unit tested, code reviewed, and checked in regularly for continuous integration.
  • Team player skills – we embrace collaborating as a team as much as possible.
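
The sharding and partitioning intuition called out above can be sketched in a few lines: a minimal hash-based partitioner in Python (the function name and shard count are illustrative only, not part of CrowdStrike's stack):

```python
import hashlib

def shard_for(key: str, num_shards: int) -> int:
    """Deterministically map a key to a shard via a stable hash."""
    digest = hashlib.sha256(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_shards

# Events for the same host always land on the same shard, so
# per-host state (e.g. rate counters) can live on a single node.
hosts = ["host-a", "host-b", "host-a", "host-c"]
placement = [shard_for(h, 4) for h in hosts]
assert placement[0] == placement[2]  # same key, same shard
```

Using a stable hash (rather than Python's builtin `hash`, which is salted per process) is what makes the placement reproducible across machines, which is the property horizontal scaling depends on.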

Bonus points awarded for:

  • Contributions to the open source community (GitHub, Stack Overflow, blogging).
  • Existing exposure to Go, Kafka, AWS, Cassandra, Elasticsearch, Scala, Hadoop, or Spark.
  • Prior experience in the cyber security or intelligence fields.

  • Background or familiarity with File Integrity Monitoring (FIM), Cloud Security Posture Management (CSPM), or Vulnerability Management.

Benefits of working at CrowdStrike:

  • Market leader in compensation
  • Comprehensive health benefits
  • Working with the latest technologies
  • Training budget (certifications, conferences)
  • Flexible work hours and remote friendly environment
  • Wellness programs
  • Stocked fridges, coffee, soda, and lots of treats
  • Peer recognition
  • Inclusive culture focused on people, customers and innovation
  • Regular team activities, including happy hours, community service events

We are committed to building an inclusive culture of belonging that not only embraces the diversity of our people but also reflects the diversity of the communities in which we work and the customers we serve. We know that the happiest and highest performing teams include people with diverse perspectives and ways of solving problems so we strive to attract and retain talent from all backgrounds and create workplaces where everyone feels empowered to bring their full, authentic selves to work.

CrowdStrike is an Equal Opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex including sexual orientation and gender identity, national origin, disability, protected veteran status, or any other characteristic protected by applicable federal, state, or local law.

Share this job:
Sr. Data Engineer
golang python data science machine learning aws Apr 09

About the Role

We are looking to hire a Sr. Data Engineer for the Data Engineering team at CrowdStrike. The Data Engineering team operates within the Data Science organization, and provides the necessary infrastructure and automation for users to analyze and act on vast quantities of data effortlessly. The team has one of the most critical roles to play in ensuring our products are best-in-class in the industry. You will interact with product managers and other engineers in building both internal and external facing services.

This role is open to candidates in Bucharest (Office or Remote), Cluj, Brasov and Iasi (Remote).

What You’ll Need

  • BS degree in Computer Science or related field.
  • 7+ years of relevant work experience.
  • Good knowledge of some (or all) of AWS, Python, Golang, Kafka, Spark, Airflow, ECS, Kubernetes, etc., to build infrastructure that can ingest and analyze billions of events per day.
  • Good knowledge of distributed system design and associated tradeoffs.
  • Good knowledge of CI / CD and associated best practices.
  • Familiarity with Docker-based development and orchestration.
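
At toy scale, the ingest-and-aggregate pattern behind such a pipeline looks roughly like the following Python sketch (the event shape and function names are invented for illustration; in production this kind of rollup would run on Kafka/Spark/Airflow):

```python
from collections import Counter
from datetime import datetime, timezone

def minute_bucket(ts: float) -> str:
    """Truncate a Unix timestamp to its UTC minute (the rollup window)."""
    return datetime.fromtimestamp(ts, tz=timezone.utc).strftime("%Y-%m-%dT%H:%M")

def aggregate(events):
    """Count events per (minute, event_type): the shape of a simple rollup job."""
    counts = Counter()
    for ts, event_type in events:
        counts[(minute_bucket(ts), event_type)] += 1
    return counts

events = [(0.0, "login"), (1.5, "login"), (61.0, "login")]
rollup = aggregate(events)
assert rollup[("1970-01-01T00:00", "login")] == 2
```

The same group-by-window-and-key shape scales from a `Counter` to a partitioned streaming job; what changes at billions of events per day is the infrastructure around it, not the logic.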

Bonus points if you have…

  • Created automated / scalable infrastructure and pipelines for teams in the past.
  • Contributed to the open source community (GitHub, Stack Overflow, blogging).
  • Prior experience with Spinnaker, Relational DBs, or KV Stores.
  • Prior experience in the cybersecurity or intelligence fields.

Benefits of Working at CrowdStrike:

  • Market leader in compensation
  • Comprehensive health benefits + 401k plan (US only)
  • Flexible work hours and remote friendly environment
  • Wellness programs
  • Stocked fridges, coffee, soda, and snacks
  • Peer recognition
  • Inclusive culture focused on people, customers and innovation
  • Regular team activities, including happy hours, community service events

Share this job:
Senior Backend Engineer
backend senior java python machine learning android Apr 09

About the Role

CrowdStrike Falcon Host is a two-component security product. One component is a “sensor”: a driver installed on client machines that observes system activity and recognizes malicious behavior, then provides on-box prevention capability and remote telemetry to the Falcon Host cloud. The sensor processes thousands of events per second to provide deep visibility into operations on the endpoint, and performs rich correlation and computation to identify malicious events and block malicious activity.

The cloud component aggregates sensor telemetry for each customer’s network, correlates malicious behavior across multiple machines, and presents our customers’ operations teams with a prioritized summary of the threats detected in their environments.
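
A caricature of that cloud-side correlation in Python: group detections by host, weight them, and emit a prioritized summary. The severity weights and detection names below are invented for illustration, not CrowdStrike's actual taxonomy:

```python
from collections import defaultdict

# Invented severity weights, purely for illustration.
SEVERITY = {"ransomware": 100, "credential_theft": 70, "recon": 20}

def prioritize(detections):
    """Sum per-host severity and return hosts sorted most critical first."""
    score = defaultdict(int)
    for host, kind in detections:
        score[host] += SEVERITY.get(kind, 10)
    return sorted(score.items(), key=lambda kv: kv[1], reverse=True)

summary = prioritize([
    ("db-01", "recon"),
    ("laptop-7", "ransomware"),
    ("db-01", "credential_theft"),
])
assert summary[0] == ("laptop-7", 100)  # 100 beats 20 + 70
```

The real system correlates across machines and event sequences rather than summing static weights, but the output contract is the same: a ranked list an operations team can triage top-down.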

Join CrowdStrike and become a key leader building the most innovative endpoint security solution in the world. Our sensor development team is responsible for building the endpoint mobile sensor, deployed on multiple platforms including Android and iOS. As a Senior Software Engineer, you will be expected to make significant contributions to the design and implementation of major development projects. You will be required to identify solutions and collaborate with others to implement our features. You will work on stimulating problems borne out of the scale of our deployment and the stringent performance and security requirements of our sensors.

This position is open to candidates in Bucharest (Office and Remote), Brasov, Cluj and Iasi (Remote).

Responsibilities

  • Provide key contributions on the sensor development team to the architecture, implementation, and improvement of next-generation Anti-Virus and Enhanced Detection and Response security software.
  • Develop features from design to delivery including participation in product demo at the end of the sprint.
  • Collaborate with multi-functional teams in various locations.
  • Devise innovative solutions to hard performance or scale problems.
  • Keep the sensor up to date with the latest OS developments and patches.

What You’ll Need

  • Experience designing and producing high quality software
  • Able to lead, mentor, communicate, collaborate, and work effectively in a distributed team.
  • Low-level OS knowledge in either Linux or Android or iOS
  • Extensive experience with C++, Java and System Development

Bonus Points

  • Strong background in scalable systems
  • Familiarity and experience with Agile process
  • OS system expertise for core concepts and subsystems
  • Familiarity with DevOps practices and technologies
  • Familiarity with Python and Bash

Benefits of Working at CrowdStrike:

  • Market leader in compensation
  • Comprehensive health benefits
  • Working with the latest technologies
  • Training budget (certifications, conferences)
  • Flexible work hours and remote friendly environment
  • Wellness programs
  • Stocked fridges, coffee, soda, and lots of treats
  • Peer recognition
  • Inclusive culture focused on people, customers and innovation
  • Regular team activities, including happy hours, community service events

Share this job:
Head of Engineering, Data Platform
 
machine learning cloud Apr 06
Atlassian is continuing to hire for all open roles with all interviewing and on-boarding done virtually due to COVID-19. Everyone new to the team, along with our current staff, will temporarily work from home until it is safe to return to our offices.

Atlassian helps teams everywhere change the world, and we are looking for a high impact Head of Engineering to lead our Data Platform teams in R&D. We are on a mission to unleash the potential of every team, and data is a core component of how we achieve those ambitions. Every day millions of users collaborate within our products, visit our websites, and our data platforms provide the backbone on which we operate. We have a rare opportunity to work directly with global leadership to scale our existing capabilities, while building and executing the strategic and technical roadmaps for data systems across the company. This includes our data lakes, analytics systems, data platforms, and rapidly growing into other areas as we grow. You will provide technology leadership, align roadmaps to business outcomes, and lead a distributed team of highly skilled engineers.

On your first day, we'll expect you to have:

  • 10+ years of experience in Software Development
  • 5+ years of management experience and oversight for operations of data platforms at petabyte scale
  • 2+ years of experience in leading managers with global teams of 50+
  • Extensive experience in architecture design of large-scale data platforms, machine learning, analytics, and data-driven applications
  • Proven experience attracting, retaining, and growing top engineering talent
  • Proven track record of shipping high-quality software on time
  • Experience dealing with challenging priorities in a fast-paced environment
  • Proven experience engaging and influencing senior executives and managing stakeholders

In this role you will:

  • Build a strategy and a roadmap of deliverables balancing short & long-term needs
  • Lead and grow a hardworking team of engineers in global locations
  • Engage closely with the team and provide technical leadership, strategic direction, and guidance to ensure timely delivery with high quality
  • Make build-vs-buy decisions for the software needed for your domain
  • Build a strong relationship with your customers, peers, and stakeholders
  • Be part of Atlassian’s senior technical leadership team providing insight and influence to our cloud products

More about our benefits

Whether you work in an office or a distributed team, Atlassian is highly collaborative and yes, fun! To support you at work (and play) we offer some fantastic perks: ample time off to relax and recharge, flexible working options, five paid volunteer days a year for your favourite cause, an annual allowance to support your learning & growth, unique ShipIt days, a company paid trip after five years and lots more.

More about Atlassian

Creating software that empowers everyone from small startups to the who’s who of tech is why we’re here. We build tools like Jira, Confluence, Bitbucket, and Trello to help teams across the world become more nimble, creative, and aligned—collaboration is the heart of every product we dream of at Atlassian. From Amsterdam and Austin, to Sydney and San Francisco, we’re looking for people who want to write the future and who believe that we can accomplish so much more together than apart. At Atlassian, we’re committed to an environment where everyone has the autonomy and freedom to thrive, as well as the support of like-minded colleagues who are motivated by a common goal to: Unleash the potential of every team.

Additional Information

We believe that the unique contributions of all Atlassians are the driver of our success. To make sure that our products and culture continue to incorporate everyone's perspectives and experience, we never discriminate on the basis of race, religion, national origin, gender identity or expression, sexual orientation, age, or marital, veteran, or disability status.

All your information will be kept confidential according to EEO guidelines.
Share this job:
Senior Software Engineer, Backend
Numbrs  
java backend microservices kubernetes machine-learning senior Mar 25

At Numbrs, our engineers don’t just develop things – we have an impact. We change the way people manage their finances by building the best products and services for our users.

Numbrs engineers are innovators, problem-solvers, and hard-workers who are building solutions in big data, mobile technology and much more. We look for professional, highly skilled engineers who evolve, adapt to change and thrive in a fast-paced, value-driven environment.

Join our dedicated technology team that builds massively scalable systems, designs low latency architecture solutions and leverages machine learning technology to turn financial data into action. Want to push the limit of personal finance management? Join Numbrs.

Job Description

You will be a part of a team that is responsible for developing, releasing, monitoring and troubleshooting large scale micro-service based distributed systems with high transaction volume. You enjoy learning new things and are passionate about developing new features, maintaining existing code, fixing bugs, and contributing to overall system design. You are a great teammate who thrives in a dynamic environment with rapidly changing priorities.

All candidates will have

  • a Bachelor's or higher degree in a technical field of study, or equivalent practical experience
  • experience with high volume production grade distributed systems
  • experience with micro-service based architecture
  • experience with software engineering best practices, coding standards, code reviews, testing and operations
  • hands-on experience with Spring Boot
  • professional experience in writing readable, testable and self-sustaining code
  • strong hands-on experience with Java (minimum 8 years)
  • knowledge of AWS, Kubernetes, and Docker
  • excellent troubleshooting and creative problem-solving abilities
  • excellent written and oral communication in English and interpersonal skills

Ideally, candidates will also have

  • experience with Big Data technologies such as Kafka, Spark, and Cassandra
  • experience with CI/CD toolchain products like Jira, Stash, Git, and Jenkins
  • fluency with functional, imperative and object-oriented languages
  • experience with Scala, C++, or Golang
  • knowledge of Machine Learning

Location: residence in UK mandatory; home office

Share this job:
Senior Embedded Linux Developer with C/C++ -- 100% Remote, Flexible hours
c embedded linux senior cpp python Mar 23

Job Description

Analytics Fire builds custom software for the solar power industry. We’re looking for a senior embedded developer with deep experience developing, testing, and debugging embedded software in a Linux environment to help us expand our services to support manufacturers in clean energy and other high-tech sectors.

Analytics Fire is a small, distributed team working on a range of interesting projects. For example, we recently helped build a custom IoT platform for utility grade solar power plants, sales software for residential solar systems, and a machine-learning powered autonomous cleaning robot. We’re looking for a fun, reliable, and highly collaborative senior developer to join our team.

This is a remote position. We’re flexible about location and hours, so long as your working hours are within a European or US-overlapping time zone. We’re a really great match for a senior developer who wants to work hard on interesting projects, while simultaneously having flexibility around time and geography.

Skills & Requirements

Ideally, you should have 5+ years developing, testing, and debugging embedded software in a Linux environment. You should be comfortable autonomously driving your own high quality / high velocity contributions using a range of technologies.

Required skills:

  • Expert level experience developing, testing and debugging embedded software in a Linux environment
  • Development experience using Python, C, and C++
  • Basic hardware experience (cabling, basic troubleshooting)
  • Basic understanding of web technology
  • Strong verbal and written communication skills

Nice to have:

  • Expert-level network systems experience using connman, BTLE, and dbus
  • Expert-level experience with wireless protocols (e.g. Zigbee, cellular modems, etc.)
  • Intermediate or expert level security engineering experience with current knowledge of security best practices, common exploits, and threat landscape
  • Previous experience creating custom Linux-based systems using Yocto

Analytics Fire has a very polyglot technical culture. Our ideal candidate has expert-level skills in the above categories, but also has a secondary skill set in one of the following areas:

  • Full stack software engineering with React, Angular
  • Scientific computing with C++ and/or Python
  • Computer vision / machine learning (PhD-level)
  • DevOps automation

About Analytics Fire

Analytics Fire was founded by a couple of engineering nerds -- one a PhD in machine learning and the other a former VP of Engineering for an analytics platform -- both of whom are serial entrepreneurs backed by prominent angels and VCs including Y Combinator and 500 Startups.

“Our goal founding Analytics Fire, was to create the engineering department that we always dreamed of working for. For us this meant being able to spend our time working on the hardest and most interesting technical problems that we could find, being part of a small, tightly-knit team of world-class engineers, while simultaneously having flexibility around time and geography.”

Share this job:
Full Stack Engineer
full stack python ruby data science machine learning frontend Mar 18

About Triplebyte

Triplebyte is transforming the way software engineers are hired. Our mission is to build an open, valuable and skills-based credential for all engineers. This is important because millions of people have skills (and deserve good jobs), but don’t fit the profile that recruiters seek. Another way of saying this is that talent is uniformly distributed, but opportunity is not. Our goal is to broaden the distribution of opportunity.

To do this, we have built a background-blind technical assessment and interview process, and we use it to find engineers and help them get jobs at 450+ top companies. Our rich understanding of candidates’ skills and proprietary machine learning models enable us to find the right match between our candidates and partner companies. This is why companies like Apple, Dropbox and American Express trust Triplebyte’s technical assessment to identify the best engineers for their open roles and reduce the time and effort it takes to hire them.

We just raised a $35 million Series B and our team of 65 is growing quickly! Now is a great time to join as we're on an exciting growth trajectory. You will have lots of opportunities for taking on responsibility and developing new skills quickly.

We're an experienced team: the founders have each built and sold companies before. Ammon and Guillaume founded Socialcam (acquired by Autodesk for $60 million), and Harj was the first partner hired at Y Combinator since its founding.

We are rapidly growing our engineering team and are looking for generalist, full-stack, frontend, backend, machine learning, and DevOps engineers!

Building the best product

The Triplebyte engineering team is still rather small, only 8 people. We all went through the Triplebyte process :) We move fast, release new features daily and iterate quickly. Triplebyte is growing very quickly and the engineering team is fully dedicated to supporting that growth, in any way we can. We are a generalist engineering team, we work on anything that helps the company or other teams grow. We cycle through backend, full-stack and frontend work based on the most critical needs. All of us are encouraged to work on all those parts.

Our frontend is mostly in React/Redux. Our backend is in Ruby on Rails, Postgres and Redis. (We also use python with Tensorflow for all our data science work)

It doesn't seem like it, but we have built a LOT of software. We are a truly full-stack company, and we are building a process that needs to be perfect end to end. We have software for engineers, for interviewers, for writers, for companies, for us, etc. If that's any indication of scale, we recently crossed 200 tables in our Postgres database :)

Join us and help us build the best product! We value initiative, productivity, and ownership.

Compensation and Benefits

  • Competitive salary and stock options package
  • Open vacation policy
  • Employer paid health, vision and dental insurance
  • 401(k) plan with matching
  • Pre-tax commuter benefits
  • Daily catered lunches

Our Mission

We believe strongly in building a truly meritocratic, unbiased process for finding great talent. Even the best technology companies today still use where people went to college as a proxy for intelligence and ability. We're building a process that looks only at ability, not credentials, so we can have a future where everyone can focus on just learning and being good at what they do, not how they look on paper.

Every aspect of running a company has been improved over the last decade, except hiring. Most decisions are still made using amorphous terms like "gut feel" or "culture fit". They should be made using crisp data. Only a company specializing on this problem, using data collected from the hiring process at hundreds of companies, can solve it. That's the company we're building. Our mission is creating a scientific method for identifying great talent and intelligently routing it to the best place. Starting with software engineers.

The Company is an equal opportunity employer and makes employment decisions on the basis of merit and business needs. The Company does not discriminate against employees or applicants (in any aspect of employment, including, but not limited to recruiting and hiring, job assignment, compensation, opportunities for advancement, promotion, transfers, evaluation, benefits, training, discipline, and termination), on the basis of any characteristic protected under applicable federal, state, or local laws.

Share this job:
Senior Data Scientist / Backend Engineer
komoot  
aws python data-science machine-learning kotlin backend Mar 16

Millions of people experience real-life adventures with our apps. We help people all over the world discover the best hiking and biking routes, empowering our users to explore more of the great outdoors. And we’re good at it: Google and Apple have listed us as one of their Apps of the Year numerous times—and we are consistently ranked amongst the highest-grossing apps in both Google Play and the App Store.

To help us continue to grow, we are looking for an experienced data scientist dedicated to coding and building production-ready services.

With over 9 million active users, komoot possesses a unique dataset of user-generated content, ranging from GPS data from tours, uploaded photos, and tips, to implicit and explicit user feedback. Using this data as well as various open data sources, you will drive product enhancements forward that will directly impact the user experience. We believe that innovations based on data science will reinforce and extend our leadership in the outdoor market and your role will be decisive for komoot’s success.

Your key responsibilities

  • Work closely with our web and mobile developers, designers, copywriters and product managers
  • Discuss product improvements, technical possibilities and road maps
  • Investigate and evaluate data science approaches for product enhancements
  • Write code that is well structured, well tested and documented
  • Enhance existing components and APIs as well as write new services from scratch
  • Deploy and monitor your code in our AWS Cloud (you can count on the support of experienced backend engineers)

Why you will love it

  • You will be challenged in a wide range of data science tasks
  • You deal with a diverse set of data (user-generated content, analytics data and external data sources)
  • You go beyond prototyping and ship your code to production
  • You contribute to a product with a vision to inspire more people to go outdoors
  • You’ll work in a fast-paced startup with strongly motivated and talented co-workers
  • You’ll enjoy the freedom to organize yourself the way you want
  • We let you work from wherever you want, be it a beach, the mountains, your house or anywhere else that lies in any time zone situated between UTC-1 and UTC+3
  • You’ll travel together with our team to amazing outdoor places several times a year to exchange ideas, learnings and go for hikes and rides

You will be successful in this position if you

  • Have a passion for finding pragmatic and smart solutions to complex problems
  • Have 3+ years of industry experience in data science
  • Have 2+ years of experience in professional programming, preferably in Python or Java
  • Have experience with technologies like pandas, numpy, Jupyter Notebooks, seaborn, scikit-learn, PyTorch and TensorFlow
  • Know your toolkit: git, ssh, bash and docker
  • Have experience in AWS, infrastructure as code and monitoring (a plus)
  • Have strong communication and team skills
  • Have a hands-on attitude and are highly self-driven
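
For flavor, one of the basic primitives behind working with GPS tour data is great-circle distance. A stdlib-only Python sketch (komoot's actual toolchain is the pandas/scikit-learn stack listed above; the coordinates below are just sample points):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))  # 6371 km = mean Earth radius

# Berlin -> Potsdam, roughly 27 km as the crow flies
d = haversine_km(52.5200, 13.4050, 52.3906, 13.0645)
assert 20 < d < 35
```

Summing this over consecutive track points gives a tour's length; comparing point sequences is the starting point for route matching.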

Sounds like you?

Send us the following:

  • Your CV in English
  • A write-up explaining who you are and why you are interested in working at komoot
  • Examples of your work (e.g. GitHub Repositories, PDFs, Slideshare, etc.)
  • Feel free to send us something that shows us a little more about what you’re interested in, be it your Twitter/Instagram account, a blog or something else
Share this job:
Backend Engineer, Data Processing Rust
backend java data science machine learning big data linux Mar 13
About Kraken

Our mission is to accelerate the adoption of cryptocurrency so that you and the rest of the world can achieve financial freedom and inclusion.  Founded in 2011 and with over 4 million clients, Kraken is one of the world's largest, most successful bitcoin exchanges and we're growing faster than ever. Our range of successful products are playing an important role in the mainstream adoption of crypto assets.  We attract people who constantly push themselves to think differently and chart exciting new paths in a rapidly growing industry. Kraken is a diverse group of dreamers and doers who see value in being radically transparent. Let's change the way the world thinks about money! Join the revolution!

About the Role

This is a fully remote role; we will consider applicants based in North America, South America and Europe.

Our Engineering team is having a blast while delivering the most sophisticated crypto-trading platform out there. Help us continue to define and lead the industry.

As part of Kraken's Backend Data team, you will work within a world-class team of engineers building Kraken's infrastructure using Rust. As a Backend Engineer in Data Processing, you will help design and build fraud and security detection systems leveraging big data pipelines, machine learning and Rust.

Responsibilities:

  • Design and implementation of micro-services in Rust
  • Writing reusable, testable, and efficient code
  • Implementation of risk evaluation and anti-fraud systems, or similar scoring and anomaly detection systems
  • Pick and design adequate data processing storage and pipelines
  • Work with our Fraud/Data Science team or provide the Data Science know-how to support Product requirements

Requirements:

  • At least 5 years of experience in software engineering
  • Experience with Rust
  • Experience writing network services or asynchronous code
  • Python, Java or similar work experience
  • Working knowledge using Kafka, Pulsar or similar
  • Experience using a Linux server environment
  • Ability to independently debug problems involving the network and operating system

A strong candidate will also:

  • Be familiar with deployment using Docker
  • Have previous work experience on Risk scoring or anomaly detection systems
  • Have experience with Machine Learning and its ecosystem
  • Have experience with other strongly typed programming languages
  • Have experience using SQL and distributed data solutions like Spark, Hadoop or Druid
  • Be passionate about secure, reliable and fast software
We’re powered by people from around the world with their own unique backgrounds and experiences. We value all Krakenites and their talents, contributions, and perspectives.

Check out all our open roles at https://jobs.lever.co/kraken. We’re excited to see what you’re made of.  

Site Reliability Engineer
Numbrs  
go kubernetes aws docker devops machine learning Mar 11

At Numbrs, our engineers don’t just develop things – we have an impact. We change the way people manage their finances by building the best products and services for our users.

Numbrs engineers are innovators, problem-solvers, and hard-workers who are building solutions in big data, mobile technology and much more. We look for professional, highly skilled engineers who evolve, adapt to change and thrive in a fast-paced, value-driven environment.

Join our dedicated technology team that builds massively scalable systems, designs low latency architecture solutions and leverages machine learning technology to turn financial data into action. Want to push the limit of personal finance management? Join Numbrs.

Job Description

You will be a part of a team that is responsible for deploying, supporting, monitoring and troubleshooting large-scale, micro-service-based distributed systems with high transaction volume, as well as documenting the IT infrastructure, policies and procedures. You will also be part of an on-call rotation.

All candidates will have

  • a Bachelor's or higher degree in a technical field of study
  • a minimum of 5 years' experience deploying, monitoring and troubleshooting large-scale distributed systems
  • a background in Linux administration (mainly Debian)
  • scripting/programming knowledge of at least Unix shell scripting
  • a good understanding of networking (TCP/IP, DNS, routing, firewalls, etc.)
  • a good understanding of technologies such as Apache, Nginx, databases (relational and key-value), DNS servers, SMTP servers, etc.
  • an understanding of cloud-based infrastructure, such as AWS
  • experience with systems for automating deployment, scaling and management of containerised applications, such as Kubernetes
  • the ability to learn quickly and adapt to changing environments
  • excellent troubleshooting and creative problem-solving abilities
  • excellent communication, documentation and organisational skills in English

Ideally, candidates will also have

  • experience deploying and supporting big data technologies, such as Kafka, Spark, Storm and Cassandra
  • experience maintaining continuous integration and delivery pipelines with tools such as Jenkins and Spinnaker
  • experience implementing, operating and supporting open source tools for network and security monitoring and management on Linux/Unix platforms
  • experience with encryption and cryptography standards

Location: Zurich, Switzerland

Director of Sales Engineering - Central Europe
Dataiku  
executive python machine learning big data Mar 11
Dataiku’s mission is big: to enable all people throughout companies around the world to use data by removing friction surrounding data access, cleaning, modeling, deployment, and more. But it’s not just about technology and processes; at Dataiku, we also believe that people (including our people!) are a critical piece of the equation.



Dataiku is hiring a Director of Sales Engineering to oversee our Central Europe team of Sales Engineers. The position should be based in Frankfurt, Berlin or Munich. 

The Sales Engineering function at Dataiku is the primary technical function within the Sales organization, providing technical support (both for presales and post sales) to the Account Executives and directly contributing to Dataiku’s revenue objectives. As the “trusted advisors” in the sales process, the Sales Engineers help to build interest in Dataiku, build the solution to the prospect’s needs, and then build the evaluation to provide the prospect/customer with the proof that they need to make their purchasing decision. 

The Director role is key to growing Dataiku’s business in Central Europe; they will work as an individual contributor and lead the team, supporting objectives related to our ability to deliver compelling, highly technical customer engagements in the field. Key responsibilities in the coming months will be enabling the existing team, hiring and retaining top talent, and ensuring excellence in our execution.

You’ll report directly to the Regional Vice President of Sales Engineering for EMEA.

RESPONSIBILITIES:

  • Lead a team of Sales Engineers helping to ensure technical success throughout the sales process
  • Be the main technical point of contact for the VP of Sales, Central Europe: strategize on opportunities, give reliable visibility into the pipeline, and train/coach the sales team on technical topics
  • Mentor/coach team members during on-boarding and subsequent phases to ensure proper ramping of skills and capabilities
  • Mentor/coach team members on a day-to-day basis: brainstorm on the strategy to adopt for each opportunity and provide constructive feedback
  • Interact with customers and prospects to understand their business challenges and engage in the evaluation process
  • Build strong working relationships with cross functional teams to ensure alignment between pre and post sales activities
  • Work with cross functional teams, product management, R&D, and other organizations to ensure alignment, provide process and product feedback, and resolve critical customer situations

REQUIREMENTS

  • 5+ years' experience in sales engineering of enterprise software products; big data tech experience preferred
  • 2+ years' related Sales Engineering management experience preferred
  • Experience in complex, large-scale enterprise analytics deployments
  • Familiarity with Python and/or R
  • Experience in data storage and computing infrastructure for data of all sizes (SQL, NoSQL, Hadoop, Spark, on-premise, and cloud)
  • Knowledge of machine learning libraries and techniques
  • Experience with visualization and dashboarding solutions
  • Excellent communication and public speaking skills
  • Native level in German and good communication skills in English
  • Ability to travel 10 to 40%
To fulfill its mission, Dataiku is growing fast! In 2019, we achieved unicorn status, grew from 200 to 400 people, and opened new offices across the globe. We now serve our global customer base from our headquarters in New York City as well as offices in Paris, London, Munich, Amsterdam, Denver, Los Angeles, Singapore, Sydney and Dubai. Each office has a unique culture, but beneath the local nuances, we always value curiosity, collaboration, and can-do attitudes!
Senior Python Back End Engineer
Loadsmart  
python django tdd rest aws senior Mar 10

Who we are: Loadsmart aims to move more with less. We combine great people and innovative technology to more efficiently move freight throughout North America. Our focus is on designing and building the best tools for our team and our customers, using machine learning models to connect freight with trucks. We automate with algorithms and scale with integrations to better match supply and demand. In doing this we reduce wasted fuel and lost time, cutting out empty miles for motor carriers and providing cost savings and instant booking for shippers. 

Who you are: You are eager to make an impact on the 700-billion-dollar logistics industry, and you take your impact seriously. You are passionate about building solutions that create sustainable, resilient, and long-lasting value. You are a first-rate software engineer, with experience and a proven ability to think strategically, creatively, and programmatically to achieve product and business results. You are passionate about development, and you value quality and code maintainability. You are pragmatic and don't pick technologies without considering the benefits for the product and business.

The role: We are looking for a Senior Python Back End Engineer to work remotely or in Florianopolis with Loadsmart.  You'll join us in obsessing about transformational technology as part of our backend team. You should have experience and a proven ability to develop new solutions, build new products, and maintain Python services in production.

The team: We are a growing team of passionate and experienced engineers. We build and maintain services written in Python and Go in AWS Lambda and Kubernetes, using Terraform to manage our infrastructure, Kinesis for event streaming, and several other technologies - we strive to use the best tool that suits our needs. We have public facing APIs as well as internal ones consumed by other teams, which get tested and documented. We have a collaborative work environment, where engineers and team members from different departments pool their knowledge together to help each other grow. 

Key Responsibilities:

    • Discuss good practices for software modularization and feel comfortable suggesting contributions to build our set of internal guidelines and shared code
    • Together with other team members, be responsible for the whole lifecycle of an application or service, from conception, specification, development, testing, support, bug fixing and decommission
    • Take part in the decision making of writing clean and simple architecture design systems and tools
    • Create tests, automation scripts, and configure Continuous Integration tools
    • Assist our less experienced engineers with their questions on Python's features and quirks
    • Collaborate with other team members to work on applications and services written in Python
    • Review Pull Requests from the other members of the team

Qualifications:

    • 5+ years of programming experience building and maintaining services, most of them in Python
    • Passion. Be passionate about creating clean, highly maintainable, and structured code, supported by unit and integration tests
    • Building blocks. Experience with architectural design patterns for services (client-server, event sourcing, MVC, etc)
    • Collaboration with Front End. Experience developing REST APIs, so you can collaborate on their definition together with frontend developers
    • Troubleshooting. You're good at identifying the source of bugs, know how to dig into the code that may be causing the outstanding issue, and work to resolve any issues that come up
    • Communication. You’re very comfortable communicating in English (both written and spoken) - you will work in an international team with native and non-native English speakers
    • Curiosity. You're keen on learning new technologies and tools as well as evaluating their pros and cons. You're a pragmatic programmer. You ask questions and are hungry to learn more
    • Extra points if you have written, deployed and maintained Go services in production
Software Engineer, Machine Learning
python javascript machine learning c html linux Feb 28
San Francisco, CA or Remote

Summary

The Scoring Platform team builds and maintains machine learning technologies to empower millions of users – readers, contributors, and donors – who contribute to Wikipedia and its sister projects on a daily basis.  We address process inefficiencies with machine learning technologies, we design and test new technology, we produce empirical insights, and we publish and present research on the intersection of technology and culture. We are strongly committed to principles of transparency, privacy and collaboration.  We use free and open source technology and we collaborate with researchers in the industry and academia. 

As a Software Engineer of the Scoring Platform team, you will help us build and scale our machine prediction service, train new machine learning models, and implement other data-intensive applications. You’ll travel to conferences to interact with researchers and volunteer contributors to Wikimedia Projects. You’ll help translate abstractions from current research and volunteers’ needs into concrete technologies and we’ll learn about the impacts of those technologies together.

You are responsible for:

  • Implementing the Scoring Platform’s data intensive infrastructure and APIs
  • Collaborating with researchers and product managers to bring forward foundational technologies for new products
  • Coordinating and communicating with other members of the Wikimedia engineering teams on relevant projects
  • Engaging with internal documentation efforts and informing other staff about various aspects of Scoring Platform infrastructure and services
  • Working in coordination with volunteer developers, editors, and researchers to understand their needs. 
  • Sharing our values, respecting our code of conduct, adhering to our team norms, and working in accordance with all three

Skills and Experience:

  • Experience with web programming languages (PHP, Javascript, Python, etc.) 
  • Understanding or willingness to learn basic statistics, machine learning, and/or data analysis techniques.
  • Demonstrable experience developing and debugging web applications
  • Strong verbal and written proficiency with the English language
  • BS, MS, or PhD in Computer Science, Mathematics, Information science, or equivalent work experience

Qualities that are important to us:

  • Engineering mindset and strong experience in Linux ecosystems
  • Real world experience writing horizontally scalable web applications

Additionally, we’d love it if you have:

  • Experience participating in open source software projects and communities
  • Familiarity with scientific computing libraries in Python
  • Experience with web UI development (Javascript, HTML, CSS)
  • Experience collaborating in online spaces (chatrooms, web forums, etc.)

The Wikimedia Foundation is... 

...the nonprofit organization that hosts and operates Wikipedia and the other Wikimedia free knowledge projects. Our vision is a world in which every single human can freely share in the sum of all knowledge. We believe that everyone has the potential to contribute something to our shared knowledge, and that everyone should be able to access that knowledge, free of interference. We host the Wikimedia projects, build software experiences for reading, contributing, and sharing Wikimedia content, support the volunteer communities and partners who make Wikimedia possible, and advocate for policies that enable Wikimedia and free knowledge to thrive. The Wikimedia Foundation is a charitable, not-for-profit organization that relies on donations. We receive financial support from millions of individuals around the world, with an average donation of about $15. We also receive donations through institutional grants and gifts. The Wikimedia Foundation is a United States 501(c)(3) tax-exempt organization with offices in San Francisco, California, USA.

The Wikimedia Foundation is an equal opportunity employer, and we encourage people with a diverse range of backgrounds to apply.

U.S. Benefits & Perks*

  • Fully paid medical, dental and vision coverage for employees and their eligible families (yes, fully paid premiums!)
  • The Wellness Program provides reimbursement for mind, body and soul activities such as fitness memberships, baby sitting, continuing education and much more
  • The 401(k) retirement plan offers matched contributions at 4% of annual salary
  • Flexible and generous time off - vacation, sick and volunteer days, plus 19 paid holidays - including the last week of the year.
  • Family friendly! 100% paid new parent leave for seven weeks plus an additional five weeks for pregnancy, flexible options to phase back in after leave, fully equipped lactation room.
  • For those emergency moments - long and short term disability, life insurance (2x salary) and an employee assistance program
  • Pre-tax savings plans for health care, child care, elder care, public transportation and parking expenses
  • Telecommuting and flexible work schedules available
  • Appropriate fuel for thinking and coding (aka, a pantry full of treats) and monthly massages to help staff relax
  • Great colleagues - diverse staff and contractors speaking dozens of languages from around the world, fantastic intellectual discourse, mission-driven and intensely passionate people

*Eligible international workers' benefits are specific to their location and dependent on their employer of record

Machine Learning Engineer or Data Scientist
python machine-learning nlp artificial-intelligence machine learning scala Feb 22

Builders and Fixers Wanted!

Company Description:  

Ephesoft is the leader in Context Driven Productivity solutions, helping organizations maximize productivity and fuel their journey towards the autonomous enterprise through contextual content acquisition, process enrichment and amplifying the value of enterprise data. The Ephesoft Semantik Platform turns flat data into context-rich information to fuel data scientists, business users and customers with meaningful data to automate and amplify their business processes. Thousands of customers worldwide employ Ephesoft’s platform to accelerate nearly any process and drive high value from their content. Ephesoft is headquartered in Irvine, Calif., with regional offices throughout the US, EMEA and Asia Pacific. To learn more, visit ephesoft.com.

Ready to invent the future? Ephesoft is immediately hiring a talented, driven Machine Learning Engineer or Data Scientist to play a key role in developing a high-profile AI platform in use by organizations around the world. The ideal candidate will have experience in developing scalable machine learning products for different contexts such as object detection, information retrieval, image recognition, and/or natural language processing.

In this role you will:

  • Develop and deliver CV and NLP systems to bring structure and understanding to unstructured documents.
  • Innovate by designing novel solutions to emerging and extant problems within the domain of  invoice processing.
  • Be part of a team of Data Scientists, Semantic Architects, and Software Developers responsible for developing AI, ML, and Cognitive Technologies while building a pipeline to continuously deliver new capabilities and value. 
  • Implement creative data-acquisition and labeling solutions that will form the foundations of new supervised ML models.
  • Communicate effectively with stakeholders to convey technical vision for the AI capabilities in our solutions. 

 You will bring to this role:

  • Love for solving problems and working in a small, agile environment.
  • Hunger for learning new skills and sharing your findings with others.
  • Solid understanding of good research principles and experimental design.
  • Passion for developing and improving CV/AI components, not just grabbing something off the shelf.
  • Excitement about developing state-of-the-art, ground-breaking technologies and owning them from imagination to production.

Qualifications:

  • 3+ years of experience developing and building AI/ML driven solutions
  • Development experience in at least one object-oriented programming language  (Java, Scala, C++) with preference given to Python experience
  • Demonstrated skills with ML, CV and NLP libraries/frameworks such as NLTK, spaCy, Scikit-Learn, OpenCV, Scikit-Image
  • Strong experience with deep learning libraries/frameworks like TensorFlow, PyTorch, or Keras
  • Proven background of designing and training machine learning models to solve real-world business problems

EEO Statement:

Ephesoft embraces diversity and equal opportunity. We are committed to building a team that represents a variety of backgrounds, perspectives, and skills. We believe the more inclusive we are, the better our company will be.

Senior Data Engineer
apache machine-learning algorithm senior python scala Feb 19

SemanticBits is looking for a talented Senior Data Engineer who is eager to apply computer science, software engineering, databases, and distributed/parallel processing frameworks to prepare big data for use by data analysts and data scientists. You will mentor junior engineers and deliver data acquisition, transformation, cleansing, conversion, compression, and loading of data into data and analytics models. You will work in partnership with data scientists and analysts to understand use cases, data needs, and outcome objectives. You are a practitioner of advanced data modeling and optimization of data and analytics solutions at scale: an expert in data management, data access (big data, data marts, etc.), programming, and data modeling, and familiar with analytic algorithms and applications (like machine learning).

Requirements

  • Bachelor’s degree in computer science (or related) and eight years of professional experience
  • Strong knowledge of computer science fundamentals: object-oriented design and programming, data structures, algorithms, databases (SQL and relational design), networking
  • Demonstrable experience engineering scalable data processing pipelines.
  • Demonstrable expertise with Python, Spark, and wrangling of various data formats - Parquet, CSV, XML, JSON.
  • Experience with the following technologies is highly desirable: Redshift (w/Spectrum), Hadoop, Apache NiFi, Airflow, Apache Kafka, Apache Superset, Flask, Node.js, Express, AWS EMR, Scala, Tableau, Looker, Dremio
  • Experience with Agile methodology, using test-driven development.
  • Excellent command of written and spoken English
  • Self-driven problem solver
Data Scientist, Healthcare Policy Research
r python machine-learning healthcare data science machine learning Feb 19

We are looking for data scientists with policy research experience to perform data processing and analysis tasks, such as monitoring data quality, applying statistical and data science methods, and creating data visualizations. In this role you will work on multi-disciplinary teams supporting program evaluation and data analytics to inform policy and decision makers.

Responsibilities

  • Answering research questions or building solutions that involve linking health or healthcare data to other administrative data.
  • Designing, planning, and implementing the data science workflow on tasks and projects, involving descriptive statistics, machine learning or statistical analysis, data visualizations, and diagnostics using programming languages such as R or Python
  • Communicating results to collaborative project teams using data visualizations and presentations via tools such as notebooks (e.g. Jupyter) or interactive BI dashboards
  • Developing and maintaining documentation using Atlassian Confluence and Jira
  • Implementing quality assurance practices such as version control and testing
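As a minimal sketch of the descriptive-statistics step mentioned above, the snippet below summarizes a hypothetical list of per-claim costs using only Python's standard-library statistics module (the figures are invented for illustration):

```python
from statistics import mean, median, quantiles

# Hypothetical per-claim costs (illustrative only)
claim_costs = [120, 340, 95, 410, 220, 180, 1500, 260, 310, 150]

summary = {
    "n": len(claim_costs),
    "mean": round(mean(claim_costs), 2),
    "median": median(claim_costs),
    "quartiles": quantiles(claim_costs, n=4),  # Q1, Q2, Q3
}
print(summary)
```

Note the mean (358.5) sits well above the median (240.0) because of the single large claim, exactly the kind of skew a data-quality check would surface before modeling.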

Requirements 

  • Master’s degree in Statistics, Data Science, Math, Computer Science, Social Science, or related field of study
  • Eight (8) years of experience 
  • Demonstrable enthusiasm for applying data science and statistics to social impact projects in academic, extra-curricular, and/or professional settings
  • Demonstrable skills in R or Python to manipulate data, conduct analyses, and create data visualizations
  • Ability to version code using Git
  • Experience with healthcare claims and administrative data
  • Ability and desire to work independently as part of remote, interdisciplinary teams
  • Strong oral and written communication skills
Python Engineer
python cython tensorflow keras pytorch c Feb 17

Description

We are looking for a Python-focused software engineer to build and enhance our existing APIs and integrations with the Scientific Python ecosystem. TileDB’s Python API (https://github.com/TileDB-Inc/TileDB-Py) wraps the TileDB core C API, and integrates closely with NumPy to provide zero-copy data access. You will build and enhance the Python API through interfacing with the core library; build new integrations with data science, scientific, and machine learning libraries; and engage with the community and customers to create value through the use of TileDB.

Location

Our headquarters are in Cambridge, MA, USA and we have a subsidiary in Athens, Greece. However, you will have the flexibility to work remotely as long as your residence is in the USA or Greece. US candidates must be US citizens, whereas Greek candidates must be Greek or EU citizens.

Expectations

In your first 30 days, you will familiarize yourself with TileDB, the TileDB-Py API and the TileDB-Dask integration. After 30 days, you will be fully integrated in our team. You’ll be an active contributor and maintainer of the TileDB-Py project, and ready to start designing and implementing new features, as well as engaging with the Python and Data Science community.

Requirements

  • 5+ years of experience as a software engineer
  • Expertise in Python and experience with NumPy
  • Experience interfacing with the CPython API, and Cython or pybind11
  • Experience with Python packaging, including binary distribution
  • Experience with C, C++, Rust, or a similar systems-level language
  • Distributed computation with Dask, Spark, or similar distributed computation system
  • Experience with a machine learning library (e.g. scikit-learn, TensorFlow, Keras, PyTorch, Theano)
  • Experience with Amazon Web Services or a similar cloud platform
  • Experience with dataframe-focused systems (e.g. Arrow, Pandas, data.frame, Vaex)
  • Experience with technical data formats (e.g. Parquet, HDF5, VCF, DICOM, GeoTIFF)
  • Experience with other technical computing systems (e.g. R, MATLAB, Julia)

Benefits

  • Competitive salary and stock options
  • 100% medical and dental insurance coverage (for you and your dependents!)
  • Paid parental leave
  • Paid time off (vacation, sick & public holidays)
  • Flexible time off & flexible hours
  • Flexibility to work remotely (anywhere in the US or Greece)

TileDB, Inc. is proud to be an Equal Opportunity Employer building a diverse and inclusive team.

Healthcare Data Analyst
sql machine-learning python healthcare Feb 17

SemanticBits is looking for a Data Analyst with experience in Healthcare who is eager to use their domain knowledge and their skills in BI tools, SQL, and programming to rapidly turn data into insights.

Requirements:

  • Minimum of three years working in the healthcare industry (preferably with Medicare and/or Medicaid data) in a data analyst role.
  • Bachelor's degree with quantitative focus in Statistics, Operations Research, Computer Science or a related field and a minimum of six years of relevant experience.
  • Demonstrable expertise in basic statistics, SQL, and Python programming.
  • Strong understanding of relational database and data warehousing concepts (e.g. OLAP, dimensional modeling).
  • Minimum two years experience with BI tools, such as Tableau, MicroStrategy, Looker.
  • Strong technical communication skills; both written and verbal
  • Ability to understand and articulate the “big picture” and simplify complex ideas
  • Strong problem solving and structuring skills
  • Ability to identify and learn applicable new techniques independently as needed
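To make the SQL-to-insight workflow above concrete, here is a small sketch using Python's built-in sqlite3 module with an in-memory database; the claims table and numbers are invented for illustration:

```python
import sqlite3

# Build a tiny in-memory table of hypothetical claims
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE claims (state TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO claims VALUES (?, ?)",
    [("CA", 120.0), ("CA", 300.0), ("NY", 90.0), ("NY", 210.0), ("NY", 60.0)],
)

# Aggregate: average claim amount per state, highest first
rows = conn.execute(
    "SELECT state, AVG(amount) FROM claims GROUP BY state ORDER BY 2 DESC"
).fetchall()
print(rows)  # [('CA', 210.0), ('NY', 120.0)]
```

In practice the same pattern scales up: the warehouse query does the heavy lifting, and Python handles the downstream analysis or visualization.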
Data Infrastructure Engineer
Tesorio  
data science machine learning finance Feb 14
We are at the forefront of creating the latest FinTech category and we are rapidly expanding our team. We’re looking for a Data Infrastructure Engineer to work on our Data Science team.

Company Overview

Tesorio is a high-growth, early-stage startup backed by some of the Bay Area’s most prominent Venture Capital firms (First Round, Floodgate, Y Combinator) and the world’s top finance execs (e.g. the ex-CFO of Oracle, the ex-CFO of Yahoo, and the founder of Adaptive Insights).

We build software that applies proprietary machine learning models to help manage a core problem that all Mid-Market businesses face: managing, predicting, and collecting cash. As we’ve taken this solution to market over the last 18 months, we’ve been able to bring on some great clients like Veeva Systems, Box, WP Engine, Rainforest QA, and many more.

Tesorio’s Cash Flow Performance platform is a sought-after solution for the modern-day CFO’s toughest problems. Companies such as Anaplan have successfully tackled forecasting revenues and expenses; however, no other system has been built from the ground up to help companies understand the complexities around cash flow and take action to optimize the core lifeblood of their business.

What’s in it for you?

  • Remote OK (Western Hemisphere) or work out of an awesome office with all the perks.
  • Almost all of Engineering and Data Science work fully remote and we work hard to make sure remote employees feel a part of the team.
  • This role is for a fast-paced, high-impact project that adds a new stream of revenue and strategic value to the company.
  • Work with some of the best and brightest (but also very humble).
  • Fast growing startup backed by top tier investors - Y Combinator, First Round Capital, Floodgate, Fathom.

Responsibilities

  • You will be responsible for creating and maintaining machine learning infrastructure on Kubernetes
  • Build and own workflow management systems like Airflow, Kubeflow, or Argo
  • Advise data and ML engineers on how to package and deploy their workflows
  • Implement logging, metrics and monitoring services for your infrastructure and container logs
  • Create Helm charts for versioned deployments of the system on client premises
  • Continuously strive to abstract away infrastructure, high availability, identity and access management concerns from Machine Learning and Software Engineers
  • Understand the product requirements and bring your own opinions and document best practices for leveraging Kubernetes

Required Skills

  • 6+ years of experience creating and maintaining data and machine learning platforms in production
  • Expert-level knowledge of Kubernetes: operators, deployments, cert management, security, binding users to cluster and IAM roles, etc.
  • Experience dealing with persistence pitfalls on Kubernetes, and creating and owning a workflow management system (Airflow, Kubeflow, Argo, etc.) on Kubernetes
  • Experience creating Helm charts for versioned deployments on client premises
  • Experience securing systems with proper identity and access management for people and applications
  • Ability to work in a fast-paced, always-changing environment

Nice to Haves

  • Experience spinning up infrastructure using Terraform and Ansible
  • Experience working with data engineers running workflow management tools on your infrastructure
Paid Research Study for Developers with Machine Learning and AI Experience
machine-learning computer-vision artificial-intelligence machine learning Feb 14

User Research International is a research company based out of Redmond, Washington. Working with some of the biggest companies in the industry, we aim to improve your experience via paid research studies. Whether it be the latest video game or productivity tools, we value your feedback and experience. We are currently conducting a research study called the ML/AI Decision Maker Study. We are looking for currently employed full-time developers who are involved in purchasing data for machine learning purposes at their company. This is a one-time remote study conducted via an online meeting. We’re offering $100 for remote and $175 for in-person participation. Sessions are 60 minutes long. These studies provide a platform for our researchers to receive feedback on an existing or upcoming product or software. We have included the survey link for the study below. Taking the survey will help determine if you fit the profile requirements. Completing this survey does not guarantee that you will be selected to participate. If it's a match, we'll reach out with a formal confirmation and any additional details you may need.

I have summarized the study details below. In order to be considered, you must take the survey below. Thank you!

Study: ML/AI Decision Maker Study

Gratuity: $100 for Remote, $175 for In-Person

Session Length: 60 minutes

Location: Remote via Online Meeting or In-Person in Redmond, WA

Dates: Available dates are located within the survey

Survey: ML/AI Decision Maker Study (Qualification Survey)

Share this job:
Senior Software Engineer, Test Infrastructure
senior javascript data science machine learning docker testing Feb 13
About Labelbox
Labelbox is building software infrastructure for industrial data science teams to do data labeling for the training of neural networks. When we build software, we take for granted the existence of collaborative tools to write and debug code. The machine learning workflow has no standard tooling for labeling data, storing it, debugging models and then continually improving model accuracy. Enter Labelbox. Labelbox's vision is to become the default software for data scientists to manage data and train neural networks in the same way that GitHub or text editors are defaults for software engineers.

Current Labelbox customers include American Family Insurance, Lytx, Airbus, Genius Sports, Keeptruckin and more. Labelbox is venture backed by Google, Andreessen Horowitz, Kleiner Perkins and First Round Capital and has been featured in Tech Crunch, Web Summit and Forbes.

As a Senior Software Engineer in Testing Infrastructure you will be responsible for building and maintaining our testing and automation infrastructure, test frameworks, tools, and documentation. At Labelbox engineers are responsible for writing automated tests for their features, and it will be your responsibility to build reliable infrastructure to support their efforts. 

Responsibilities

  • Design, implement and maintain reliable testing infrastructure for unit testing, component testing, integration testing, E2E API and UI testing, and load testing
  • Build and maintain reliable testing environments for our integration, E2E and load testing jobs
  • Integrate our testing infrastructure with our CI/CD pipeline to ensure automated kickoff of tests
  • Guide our engineering team on testing best practices and monitor the reliability and stability of our testing suite
  • When implementing new testing infrastructure and/or adopting new tools, write sample tests and documentation for our engineering team to hit the ground running with the new infrastructure

Requirements

  • 5+ years of experience developing testing infrastructure for web applications in a production environment
  • Experience with web technologies including React, Redux, JavaScript, TypeScript, GraphQL, Node, REST, and SQL
  • Experience with Unit Testing frameworks such as Jest, Mocha, and/or Jasmine
  • Experience with E2E UI test frameworks such as Cypress, Selenium, and/or Puppeteer
  • Experience writing E2E API tests with frameworks such as Cypress and/or Postman/Newman
  • Experience with Load Testing frameworks such as OctoPerf, JMeter, and/or Gatling
  • Experience integrating with CI/CD platforms and tools such as Codefresh, CircleCI, TravisCI, Jenkins, and Bazel
  • Experience integrating tools to measure code coverage across the different types of testing
  • Experience with Docker and Kubernetes
  • Experience with GraphQL and building testing infrastructure around it
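As a language-agnostic illustration of one common piece of test infrastructure (shown here in Python; the `retry` helper below is a hypothetical sketch, not part of any framework listed above), a flaky-test retry wrapper looks like this:

```python
# Sketch of a retry decorator for flaky integration/E2E tests (hypothetical helper).
import functools
import time

def retry(times=3, delay=0.0):
    """Re-run a flaky test up to `times` attempts before reporting failure."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            last_exc = None
            for _attempt in range(times):
                try:
                    return fn(*args, **kwargs)
                except AssertionError as exc:
                    last_exc = exc          # remember the failure, try again
                    time.sleep(delay)
            raise last_exc                  # out of attempts: surface the failure
        return wrapper
    return decorator

calls = []

@retry(times=3)
def flaky_check():
    calls.append(1)
    if len(calls) < 2:                      # simulated transient failure on attempt 1
        raise AssertionError("transient failure")
    return "passed"

result = flaky_check()
print(result, "after", len(calls), "attempts")
```

In practice the infrastructure team owns this policy centrally (E2E frameworks such as Cypress ship built-in test retries) so that individual test authors don't reinvent it.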
We believe that AI has the power to transform every aspect of our lives -- from healthcare to agriculture. The exponential impact of artificial intelligence will mean mammograms can happen quickly and cheaply regardless of the limited number of radiologists in the world, and growers will know the instant that disease hits their farm without even being there.

At Labelbox, we’re building a platform to accelerate the development of this future. Rather than requiring companies to create their own expensive and incomplete homegrown tools, we’ve created a training data platform that acts as a central hub for humans to interface with AI. When humans have better ways to input and manage data, machines have better ways to learn.

Perks & Benefits:
Medical, Dental & Vision coverage
Flexible vacation policy
Dog friendly office
Daily catered lunch & snacks
Great office location in the Mission district, beautiful office & private outdoor patio with grill
Share this job:
Lead Software Engineer
python flask sqlalchemy rest kubernetes machine learning Feb 13

Carbon Relay is a world-class team focused on harnessing the power of machine learning to optimize Kubernetes. Our innovative platform allows organizations to boost application performance while keeping costs down. We recently completed a major fundraising round and are scaling up rapidly to turn our vision into reality. This position is perfect for someone who wants to get in on the ground floor at a startup that moves fast, tackles hard problems, and has fun!

We are looking for a Lead Software Engineer to spearhead the development of our backend applications. You will bridge the gap between the machine learning and Kubernetes teams to ensure that our products delight customers and scale efficiently.

Responsibilities

  • Developing our internal APIs and backend
  • Designing and implementing SaaS-based microservices
  • Collaborating with our infrastructure, machine learning and Kubernetes teams

Required qualifications

  • 10+ years of experience in software engineering
  • Proficiency in Python
  • Experience shipping and maintaining software products

Preferred qualifications

  • Experience with JavaScript
  • Experience with GCP/GKE
  • Familiarity with Kubernetes and containerization

Why join Carbon Relay

  • Competitive salary plus equity
  • Health, dental, vision and life insurance
  • Unlimited vacation policy (and we do really take vacations)
  • Ability to work remotely
  • Snacks, lunches and all the typical benefits you would expect from a well-funded, fun startup!
Share this job:
Software Engineer, Backend
Fathom  
backend machine learning nlp testing healthcare Feb 12
We’re on a mission to understand and structure the world’s medical data, starting by making sense of the terabytes of clinician notes contained within the electronic health records of the world’s largest health systems.

We’re seeking exceptional Backend Engineers to work on the data products that drive the core of our business: backend experts able to unify data and build systems that scale from both an operational and an organizational perspective.

Please note, this position has a minimum requirement of 3+ years of experience. For earlier-career candidates, we encourage you to apply to our SF and/or Toronto locations.

As a Backend Engineer you will:

  • Develop data infrastructure to ingest, sanitize and normalize a broad range of medical data, such as electronic health records, journals, established medical ontologies, crowd-sourced labelling and other human inputs
  • Build performant and expressive interfaces to the data
  • Build infrastructure to help us not only scale up data ingest, but large-scale cloud-based machine learning
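As a rough sketch of the ingest/sanitize/normalize step described above (the field names are invented for illustration, not Fathom's actual schema):

```python
# Minimal sketch of a record sanitize/normalize step (hypothetical field names).

def normalize_record(raw):
    """Lower-case and snake_case keys, trim whitespace, drop empty values."""
    clean = {}
    for key, value in raw.items():
        key = key.strip().lower().replace(" ", "_")
        if isinstance(value, str):
            value = value.strip()
        if value not in ("", None):         # discard empty fields entirely
            clean[key] = value
    return clean

record = normalize_record({" Patient ID ": "A-123",
                           "Note Text": "  stable  ",
                           "Fax": ""})
print(record)
```

Real pipelines layer schema validation and ontology mapping on top of a normalization pass like this, but the shape of the work (messy input in, canonical record out) is the same.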

We’re looking for teammates who bring:

  • 3+ years of development experience in a company/production setting
  • Experience building data pipelines from disparate sources
  • Hands-on experience building and scaling up compute clusters
  • Excitement about learning how to build and support machine learning pipelines that scale not just computationally, but in ways that are flexible, iterative, and geared for collaboration
  • A solid understanding of databases and large-scale data processing frameworks like Hadoop or Spark.  You’ve not only worked with a variety of technologies, but know how to pick the right tool for the job
  • A unique combination of creative and analytic skills, capable of designing a system that pulls together, trains, and tests dozens of data sources under a unified ontology

Bonus points if you have experience with:

  • Developing systems to do or support machine learning, including experience working with NLP toolkits like Stanford CoreNLP, OpenNLP, and/or Python’s NLTK
  • Expertise with wrangling healthcare data and/or HIPAA
  • Experience with managing large-scale data labelling and acquisition through tools such as Amazon Mechanical Turk or DeepDive

Share this job:
Remote React Developer Opportunity
Hays   $130K - $150K
react-js rest api machine learning cloud aws Feb 12
Remote React Developer Opportunity - Perm - Raleigh, NC - $130,000-$150,000

Hays Specialist Recruitment is working in partnership with LifeOmic to manage the recruitment of this position

The end client is unable to sponsor or transfer visas for this position; all parties authorized to work in the US are encouraged to apply.

Looking for a forward-looking company focused on the cloud, machine learning and mobile devices? LifeOmic uses a solid tech stack: React on the frontend and Node on the backend on DynamoDB, working with service-oriented APIs, all in AWS/Lambda. As for personal perks and benefits, the company holds regular retreats and offers the opportunity to work remotely 2 or 3 days per week as well. As for the extra financial perks, you'll have the chance to receive equity in the company and flexible PTO.

Skills & Requirements
* Builder who can implement user interfaces against REST APIs and fit into a team who has embraced continuous delivery.
* Demonstrable experience with building modern web interfaces and incrementally improving the user experience.
* Collaborate in REST API design and convey how those impact the overall experience.
* Able to communicate complex concepts clearly and accurately.
* Able to iterate with new technologies and approaches as their respective open source communities push them forward.
* Bachelor's degree in CS
* 3+ years of demonstrable experience

Share this job:
Data Engineer
NAVIS  
hadoop web-services python sql etl machine learning Feb 11

NAVIS is excited to be hiring a Data Engineer for a remote, US-based position. Candidates based outside of the US are not being considered at this time. This is a NEW position due to growth in this area.

Be a critical element of what sets NAVIS apart from everyone else!  Join the power behind the best-in-class Hospitality CRM software and services that unifies hotel reservations and marketing teams around their guest data to drive more bookings and revenue.

Our Guest Experience Platform team is seeking an experienced Data Engineer to play a lead role in building and running the modern big data and machine learning platform that powers our products and services. In this role, you will be responsible for building the analytical data pipeline, data lake, and real-time data streaming services. You should be passionate about technology and complex big data business challenges.

You can have a huge impact on everything from the functionality we deliver for our clients, to the architecture of our systems, to the technologies that we are adopting. 

You should be highly curious with a passion for building things!

Click here for a peek inside our Engineering Team


DUTIES & RESPONSIBILITIES:

  • Design and develop business-critical data pipelines and related back-end services
  • Identify and participate in simplifying and addressing scalability issues for the enterprise-level data pipeline
  • Design and build big data infrastructure to support our data lake

QUALIFICATIONS:

  • 2+ years of extensive experience with Hadoop (or similar) Ecosystem (MapReduce, Yarn, HDFS, Hive, Spark, Presto, HBase, Parquet)
  • Experience with building, breaking, and fixing production data pipelines
  • Hands-on SQL skills and background in other data stores like SQL Server, Postgres, and MongoDB
  • Experience with continuous delivery and automated deployments (Terraform)
  • ETL experience
  • Able to identify and participate in addressing scalability issues for enterprise level data
  • Python programming experience
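As a toy illustration of the ETL and SQL skills listed above, here is a self-contained extract-transform-load round trip using the stdlib `sqlite3` module as a stand-in for a production data store (the table and feed are invented, not NAVIS systems):

```python
# Tiny ETL sketch: stdlib sqlite3 stands in for a production warehouse.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bookings (hotel TEXT, revenue REAL)")

# Extract: raw feed rows with inconsistent formatting (hypothetical shape).
raw_rows = [("Grand Hotel ", "120.50"), ("Seaside Inn", "80"), ("Grand Hotel ", "99.5")]

# Transform: trim hotel names, cast revenue strings to floats.
clean_rows = [(hotel.strip(), float(revenue)) for hotel, revenue in raw_rows]

# Load, then aggregate the way a downstream report would.
conn.executemany("INSERT INTO bookings VALUES (?, ?)", clean_rows)
totals = dict(conn.execute(
    "SELECT hotel, SUM(revenue) FROM bookings GROUP BY hotel ORDER BY hotel"))
print(totals)
```

A production pipeline swaps the in-memory database for Hive/Presto/Postgres and adds scheduling and monitoring, but the extract-transform-load structure is the same.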

DESIRED, BUT NOT REQUIRED SKILLS:

  • Experience with machine learning libraries like scikit-learn, TensorFlow, etc., or an interest in picking them up
  • Experience with R to mine structured and unstructured data and/or building statistical models
  • Experience with Elasticsearch
  • Experience with AWS services like Glue, S3, SQS, Lambda, Fargate, EC2, Athena, Kinesis, Step Functions, DynamoDB, CloudFormation and CloudWatch will be a huge plus

POSITION LOCATION:

There are 3 options for the location of this position (candidates based outside the US are NOT being considered at this time):

  • You can work remotely in the continental US with occasional travel to Bend, Oregon
  • You can be based at a shared office space in the heart of downtown Portland, Oregon
  • You can be based at our offices in Bend, Oregon (relocation assistance package available)

Check out this video to learn more about the Tech scene in Bend, Oregon


NAVIS OFFERS:

  • An inclusive, fun, values-driven company culture – we’ve won awards for it
  • A growing tech company in Bend, Oregon
  • Work / Life balance - what a concept!
  • Excellent benefits package with a Medical Expense Reimbursement Program that helps keep our medical deductibles LOW for our Team Members
  • 401(k) with generous matching component
  • Generous time off plus a VTO day to use working at your favorite charity
  • Competitive pay + annual bonus program
  • FREE TURKEYS (or pies) for every Team Member for Thanksgiving (hey, it's a tradition around here)
  • Your work makes a difference here, and we make a huge impact to our clients’ profits
  • Transparency – regular All-Team meetings, so you can stay in-the-know with what’s going on in all areas of our business
Share this job:
VP, Data Science & Engineering
machine-learning hadoop data science c machine learning big data Feb 10

The Wikimedia Foundation is seeking an experienced executive to serve as Vice President of Data Science & Engineering for our Technology department. At the Wikimedia Foundation, we operate the world’s largest collaborative project: a top ten website, reaching a billion people globally every month, while incorporating the values of privacy, transparency and community that are so important to our users. 

Reporting to the Chief Technology Officer, the VP of Data Science & Engineering is a key member of the Foundation’s leadership team and an active participant in the strategic decision making framing the work of the technology department, the Wikimedia Foundation and the Wikimedia movement.

This role is responsible for planning and executing an integrated multi-year data science and engineering strategy spanning our work in artificial intelligence, machine learning, search, natural language processing and analytics. This strategy will interlock with and support the larger organization and movement strategy in service of our vision of enabling every human being to share freely in the sum of human knowledge.

Working closely with other Technology and Product teams, as well as our community of contributors and readers, you’ll lead a team of dedicated directors, engineering managers, software engineers, data engineers, and data scientists who are shaping the next generation of data usage, analysis and access across all Wikimedia projects.

Some examples of our teams’ work in the realm of data science and data engineering can be found on our blog, including deeper info on our work improving edit workflows with machine learning, our use of Kafka and Hadoop, and our analysis of people falling into the “Wikipedia rabbit hole”. Of late we have been thinking about how to best identify traffic anomalies that might indicate outages or, possibly, censorship.

You are responsible for:

  • Leading the technical and engineering efforts of a global team of engineers, data scientists and managers focused on our efforts in productionizing artificial intelligence, data science, analytics, machine learning and natural language processing models as well as data operations. These efforts currently encompass three teams: Search Platform, Analytics and Scoring Platform (Machine Learning Engineering)
  • Working closely with our Research, Architecture, Security, Site Reliability and Platform teams to define our next generation of data architecture, search, machine learning and analytics infrastructure
  • Creating scalable engineering management processes and prioritization rubrics
  • Developing the strategy, plan, vision, and the cross-functional teams to create a holistic data strategy for the Wikimedia Foundation, taking into account our fundamental values of transparency, privacy, and collaboration, in partnership with internal and external stakeholders and community members
  • Ensuring data is reliable, consistent, accessible, secure, and available in a timely manner for external and internal stakeholders, in accordance with our privacy policy
  • Negotiating shared goals, roadmaps and dependencies with finance, product, legal and communication departments
  • Contributing to our culture by managing, coaching and developing our engineering and data teams
  • Illustrating your success in making your mark on the world by collaboratively measuring and adapting our data strategy within the technology department and the broader Foundation
  • Managing up to 5 direct reports with a total team size of 20

Skills and Experience:

  • Deep experience leading data science, machine learning, search or data engineering teams, with the ability to separate the hype in the artificial intelligence space from the reality of delivering production-ready data systems
  • 5+ years senior engineering leadership experience
  • Demonstrated ability to balance competing interests in a complex technical and social environment
  • Proven success at all stages of the engineering process and product lifecycle, leading to significant, measurable impact.
  • Previous hands-on experience in production big data and machine learning environments at scale
  • Experience building and supporting diverse, international and distributed teams
  • Outstanding oral and written English language communications

Qualities that are important to us:

  • You take a solutions-focused approach to challenging data and technical problems
  • A passion for people development, team culture and the management of ideas
  • You have a desire to show the world how data can be done while honoring the user’s right to privacy

Additionally, we’d love it if you have:

  • Experience with modern machine learning, search and natural language processing platforms
  • A track record of open source participation
  • Fluency or familiarity with languages in addition to English
  • Time spent living or working outside your country of origin
  • Experience as a member of a volunteer community

The Wikimedia Foundation is... 

...the nonprofit organization that hosts and operates Wikipedia and the other Wikimedia free knowledge projects. Our vision is a world in which every single human can freely share in the sum of all knowledge. We believe that everyone has the potential to contribute something to our shared knowledge, and that everyone should be able to access that knowledge, free of interference. We host the Wikimedia projects, build software experiences for reading, contributing, and sharing Wikimedia content, support the volunteer communities and partners who make Wikimedia possible, and advocate for policies that enable Wikimedia and free knowledge to thrive. The Wikimedia Foundation is a charitable, not-for-profit organization that relies on donations. We receive financial support from millions of individuals around the world, with an average donation of about $15. We also receive donations through institutional grants and gifts. The Wikimedia Foundation is a United States 501(c)(3) tax-exempt organization with offices in San Francisco, California, USA.

The Wikimedia Foundation is an equal opportunity employer, and we encourage people with a diverse range of backgrounds to apply.

U.S. Benefits & Perks*

  • Fully paid medical, dental and vision coverage for employees and their eligible families (yes, fully paid premiums!)
  • The Wellness Program provides reimbursement for mind, body and soul activities such as fitness memberships, baby sitting, continuing education and much more
  • The 401(k) retirement plan offers matched contributions at 4% of annual salary
  • Flexible and generous time off - vacation, sick and volunteer days, plus 19 paid holidays - including the last week of the year.
  • Family friendly! 100% paid new parent leave for seven weeks plus an additional five weeks for pregnancy, flexible options to phase back in after leave, fully equipped lactation room.
  • For those emergency moments - long and short term disability, life insurance (2x salary) and an employee assistance program
  • Pre-tax savings plans for health care, child care, elder care, public transportation and parking expenses
  • Telecommuting and flexible work schedules available
  • Appropriate fuel for thinking and coding (aka, a pantry full of treats) and monthly massages to help staff relax
  • Great colleagues - diverse staff and contractors speaking dozens of languages from around the world, fantastic intellectual discourse, mission-driven and intensely passionate people

*Eligible non-US benefits are specific to location and dependent on employer of record

Share this job:
Don't see your role here?
data science machine learning computer vision healthcare Feb 03
Don't quite see the role you're looking for? Labelbox is growing incredibly fast and we are posting new roles frequently. Send us your resume so we can keep you in the loop as we grow.


About Labelbox

Labelbox is at the heart of the AI-powered computer vision revolution. Almost every decision a human makes is visual and these decisions power every industry, from healthcare to agriculture. With AI, computers can now see like humans and can make decisions in the same way. With this newfound capability, our society will build self-driving cars, accessible healthcare, automated farms that can support our global population, and much more.

The bottleneck to achieving these things with AI is the training data sets. We are building Labelbox to solve this bottleneck for data science and machine learning teams.

Current Labelbox customers include American Family Insurance, Lytx, Airbus, Genius Sports, Keeptruckin and more. Labelbox is venture backed by Gradient Ventures (Google’s AI-focused venture fund), Kleiner Perkins and First Round Capital and has been featured in Tech Crunch, Web Summit and Forbes.
Share this job:
Data Visualization Engineer
data science machine learning big data linux mysql backend Jan 31
We are looking for a dynamic and talented Data Visualization Engineer who has a passion for data and for using cutting-edge tools and data-based insights to turn vision and ability into results and actionable solutions for our clients. The successful candidate will leverage their talents and skills to design, develop and implement graphical representations of information and data, using visual elements like charts, graphs, and maps, and a variety of data visualization tools. You will own, architect, design, and implement a data visualization platform that leverages big data, data warehouses, data visualization suites, and cutting-edge open source technologies, and you will drive the vision of a big data visualization platform that is scalable, interactive, and real-time, supporting our state-of-the-art data processing framework for our geospatial-oriented platform. You must have a proven ability to drive results with your data-based insights, a passion for discovering solutions hidden in large datasets, and a drive to work with stakeholders to improve mission outcomes.

Do you want to take your ideas and concepts into real-life mission-critical solutions? Do you want to work with the latest bleeding-edge technology? Do you want to work with a dynamic, world-class team of engineers, while learning and developing your skills and your career? You can do all those things at Prominent Edge!

We are a small company of 24+ developers and designers who put themselves in the shoes of our customers and make sure we deliver strong solutions. Our projects and the needs of our customers vary greatly; therefore, we always choose the technology stack and approach that best suits the particular problem and the goals of our customers. As a result, we want developers who do high-quality work, stay current, and are up for learning and applying new technologies when appropriate. We want engineers who have an in-depth knowledge of Amazon Web Services and are up for using other infrastructures when needed. We understand that for our team to perform at its best, everyone needs to work on tasks that they enjoy. Most of our projects are web applications, which often have a geospatial aspect to them. We also really take care of our employees, as demonstrated in our exceptional benefits package. Check out our website at https://prominentedge.com for more information.

Required Skills:

  • A successful candidate will have experience in many (if not all) of the following technical competencies including: data visualization, data engineering, data science, statistics and machine learning, coding languages, databases, and reporting technologies.
  • Ability to design, develop and implement graphical representations of information and data, using visual elements like charts, graphs, and maps, and a variety of data visualization tools.
  • At least 5 years of experience in data engineering, data science, and/or data visualization.
  • Design and develop ETL and storage for the new big data platform with open source technologies such as Kafka/RabbitMQ/Redis, Spark, Presto, Splunk.
  • Create insightful visualizations with dashboarding and charting tools such as Kibana / Plotly / Matplotlib / Grafana / Tableau.
  • Strong proficiency with a backend database such as Postgres, MySQL, and/or familiarity with NoSQL databases such as Cassandra, DynamoDB or MongoDB.
  • Strong background in scripting languages.
  • Capable of working in a Linux server environment.
  • Experience or interest in working on multiple projects with multiple product teams.
  • Excellent verbal and written communication skills, the ability to present technical data, and enjoyment working with both technical and non-technical audiences.
  • Bachelor's Degree in Computer Science, Data Science, Machine Learning, AI or related field or equivalent experience.
  • Current U.S. security clearance, or ability to obtain a U.S. security clearance.
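As a small, library-free illustration of the data-preparation side of visualization work (the fixed-width binning you would normally hand to Matplotlib, Kibana, or Grafana; the values below are invented):

```python
# Pure-Python sketch: bin numeric values for a histogram-style chart.
from collections import Counter

def bin_values(values, width):
    """Group numeric values into fixed-width (lo, hi) bins for charting."""
    bins = Counter()
    for v in values:
        lo = (v // width) * width          # left edge of the bin containing v
        bins[(lo, lo + width)] += 1
    return dict(sorted(bins.items()))      # ordered bins, ready to plot

latencies = [12, 18, 25, 31, 33, 47, 52]   # e.g. request latencies in ms
histogram = bin_values(latencies, 10)
print(histogram)
```

The charting tool only draws what this step produces, which is why visualization roles lean so heavily on data engineering.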

Desired skills:

  • Master's Degree or PhD. in Computer Science, Data Science, Machine Learning, AI or related field is a plus.

W2 Benefits:

  • Not only do you get to join our team of awesome, playful ninjas, we also have great benefits:
  • Six weeks paid time off per year (PTO+Holidays).
  • Six percent 401k matching, vested immediately.
  • Free PPO/POS healthcare for the entire family.
  • We pay you for every hour you work. Need something extra? Give yourself a raise by doing more hours when you can.
  • Want to take time off without using vacation time? Shuffle your hours around in any pay period.
  • Want a new MacBook Pro laptop? We'll get you one. If you like your MacBook Pro, we’ll buy you the new version whenever you want.
  • Want some training or to travel to a conference that is relevant to your job? We offer that too!
  • This organization participates in E-Verify.

Share this job:
Senior Python Developer
python-3.x mysql sqlalchemy graphql aws python Jan 30

We are seeking a senior software engineer with proven programming and analytic abilities. You would be a fundamental member of the team, focusing on building a solid foundation for the platform. We seek people who are excited and driven to grow with the experience of working alongside talented engineers.

Our team is remote, with most of our engineers right now either in New York, Argentina, or Colombia, with some folk in other parts of the Americas, as well as Europe.

You will work on developing new features for our apps, which may involve integrating with ecommerce platforms such as Shopify, Amazon, eBay, and Etsy. The integrations are used at scale.


About you:

- You understand that great things are accomplished when teams work together.

- You have lots of experience with Python, SQLAlchemy, Flask, and ideally GraphQL.

- You have some AWS experience.

- You can code review other team members’ work, provide assistance, and appreciate feedback.

- You take pride in your craft.

- You’ve learned from building systems and solutions the reasons to avoid technical debt, and how to approach and implement TDD and CI practices.

- You can craft elegant solutions when solving complex problems.

- You want to build something that is disrupting an entire industry.

- While hands-on experience is not a requirement, you’re interested in learning how to apply machine learning and AI technologies and tools.

- You can handle a fast paced environment.

- You’ve made a lot of mistakes, and most importantly, have learned from them.

- You have 7+ years of experience developing software.

- You have worked remotely before.

About the role:

- Work on a cross-functional team including front end and UX to build solutions that are easy for customers to understand, work consistently and scale well.

- Review features and requirements and guide, design and implement solutions.

- Understand business requirements and think through solutions in terms of not just the coding implementation, but also how the solution fits into the overall system and how it solves a customer need.

- Ability to estimate effort and ship on agreed schedule. Comfortable pushing yourself and your team members when challenges pop up.

- Lead regular code reviews, with the goal of code quality, good design and approach along with pushing engineers to improve and evolve.

- Optimize existing tech stack and solutions, determine path to next step in the evolution.

- Learn, and push those around you to do the same - this is a craft that you’re constantly improving upon.

- Implement solutions that are pragmatic to get the platform built.

- Have the confidence to work with experienced and talented people to just build great things; you’re not a “rockstar”.

- Work with ShipHero leadership to implement practices and principles for the team.

Share this job:
Machine Learning Platform Engineer
Tesorio  
machine learning data science finance Jan 30
We are at the forefront of creating the latest FinTech category and we are rapidly expanding our team. We’re looking for a Machine Learning Platform Engineer to work on our Data Science team.

Company Overview
Tesorio is a high-growth, early-stage startup that has just closed a 10MM round with Madrona Venture Group. We're backed by some of the Bay Area’s most prominent Venture Capital firms (First Round, Floodgate, Y Combinator) and the world’s top finance execs (e.g. the ex-CFO of Oracle, the ex-CFO of Yahoo, and the founder of Adaptive Insights). 

We build software that applies proprietary machine learning models to help manage a core problem that all Mid-Market businesses face: managing, predicting, and collecting cash. As we’ve taken this solution to market over the last 18 months, we’ve been able to bring on some great clients like Veeva Systems, Box, WP Engine, Rainforest QA, and many more.

Tesorio’s Cash Flow Performance platform is a sought-after solution for the modern-day CFO’s toughest problems. Companies such as Anaplan have successfully tackled forecasting revenues and expenses; however, no other system has been built from the ground up to help companies understand the complexities around cash flow and take action to optimize the core lifeblood of their business.

What’s in it for you?

  • Remote OK (Western Hemisphere) or work out of an awesome office with all the perks.
  • Almost all of Engineering and Data Science work fully remote and we work hard to make sure remote employees feel a part of the team.
  • This role is for a fast-paced, high-impact project that adds a new stream of revenue and strategic value to the company.
  • Work with some of the best and brightest (but also very humble).
  • Fast growing startup backed by top tier investors - Y Combinator, First Round Capital, Floodgate, Fathom.

Responsibilities

  • You will be responsible for creating and maintaining machine learning infrastructure on Kubernetes
  • Build and own workflow management systems like Airflow, Kubeflow, or Argo. Advise data and ML engineers on how to package and deploy their workflows
  • Implement logging, metrics and monitoring services for your infrastructure and container logs
  • Create Helm charts for versioned deployments of the system on client premises
  • Continuously strive to abstract away infrastructure, high availability, identity and access management concerns from Machine Learning and Software Engineers
  • Understand product requirements, bring your own opinions, and document best practices for leveraging Kubernetes
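Workflow managers such as Airflow, Kubeflow, and Argo all model a pipeline as a directed acyclic graph of tasks. As a purely illustrative sketch (plain Python with hypothetical task names, not any of those tools' actual APIs), the core idea is just dependency-ordered execution:

```python
# Illustrative only: dependency-ordered task execution, the idea at the
# heart of workflow managers like Airflow, Kubeflow, and Argo.
from graphlib import TopologicalSorter

# Hypothetical ML pipeline: extract -> transform -> {train, validate} -> deploy
dag = {
    "transform": {"extract"},
    "train": {"transform"},
    "validate": {"transform"},
    "deploy": {"train", "validate"},
}

def run_pipeline(dag):
    """Run each task after all of its dependencies have completed."""
    order = list(TopologicalSorter(dag).static_order())
    for task in order:
        print(f"running {task}")
    return order

order = run_pipeline(dag)
```

A real workflow manager adds scheduling, retries, and distributed execution on top of exactly this ordering guarantee.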

Required Skills

  • 6+ years of experience creating and maintaining data and machine learning platforms in production
  • Expert-level knowledge of Kubernetes: operators, deployments, cert management, security, binding users to cluster and IAM roles, etc.
  • Experience dealing with persistence pitfalls on Kubernetes, and creating and owning a workflow management system (Airflow, Kubeflow, Argo, etc.) on Kubernetes
  • Experience creating Helm charts for versioned deployments on client premises
  • Experience securing the system with proper identity and access management for people and applications.
  • Ability to work in a fast paced, always-changing environment

Nice to Haves

  • Experience spinning up infrastructure using Terraform and Ansible
  • Experience working with data engineers running workflow management tools on your infrastructure
Share this job:
Data Engineer
Tesorio  
python data science machine learning finance Jan 30
We are at the forefront of creating the latest FinTech category and we are rapidly expanding our team. We’re looking for a Data Engineer to work on our Data Science team.

Company Overview
Tesorio is a high-growth, early-stage startup that has just closed a 10MM round with Madrona Venture Group. We're backed by some of the Bay Area’s most prominent Venture Capital firms (First Round, Floodgate, Y Combinator) and the world’s top finance execs (e.g. the ex-CFO of Oracle, the ex-CFO of Yahoo, and the founder of Adaptive Insights). 

We build software that applies proprietary machine learning models to help manage a core problem that all Mid-Market businesses face: managing, predicting, and collecting cash. As we’ve taken this solution to market over the last 18 months, we’ve been able to bring on some great clients like Veeva Systems, Box, WP Engine, Rainforest QA, and many more.

Tesorio’s Cash Flow Performance platform is a sought-after solution for the modern-day CFO’s toughest problems. Companies such as Anaplan have successfully tackled forecasting revenues and expenses; however, no other system has been built from the ground up to help companies understand the complexities around cash flow and take action to optimize the core lifeblood of their business.

What’s in it for you?

  • Remote OK (Western Hemisphere) or work out of an awesome office with all the perks.
  • Almost all of Engineering and Data Science work fully remote and we work hard to make sure remote employees feel a part of the team.
  • This role is for a fast-paced, high-impact project that adds a new stream of revenue and strategic value to the company.
  • Work with some of the best and brightest (but also very humble).
  • Fast growing startup backed by top tier investors - Y Combinator, First Round Capital, Floodgate, Fathom.

Responsibilities

  • Extract data from 3rd-party databases and transform it into usable outputs for the Product and Data Science teams
  • Work with Software Engineers and Machine Learning Engineers; call out risks and performance bottlenecks
  • Ensure data pipelines are robust, fast, secure and scalable
  • Use the right tool for the job to make data available, whether that is on the database or in code
  • Own data quality and pipeline uptime. Plan for failure
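As a minimal sketch of the extract-and-transform responsibility above (table and column names are hypothetical, and an in-memory SQLite database stands in for a real 3rd-party store; a production pipeline would run under a workflow manager), an ETL step boils down to:

```python
# Minimal ETL sketch: extract rows from a source store, transform them,
# and load the result for downstream consumers. SQLite and all table /
# column names here are illustrative assumptions, not Tesorio's schema.
import sqlite3

def run_etl(conn):
    cur = conn.cursor()
    # Extract: raw rows from a (hypothetical) 3rd-party invoices table
    rows = cur.execute("SELECT id, amount_cents FROM raw_invoices").fetchall()
    # Transform: normalize cents to dollars, drop non-positive amounts
    cleaned = [(rid, amount / 100.0) for rid, amount in rows if amount > 0]
    # Load: write a clean table for the Product and Data Science teams
    cur.execute("CREATE TABLE IF NOT EXISTS invoices (id INTEGER, amount_usd REAL)")
    cur.executemany("INSERT INTO invoices VALUES (?, ?)", cleaned)
    conn.commit()
    return len(cleaned)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_invoices (id INTEGER, amount_cents INTEGER)")
conn.executemany("INSERT INTO raw_invoices VALUES (?, ?)",
                 [(1, 1250), (2, -300), (3, 9900)])
loaded = run_etl(conn)
```

The robustness, security, and uptime requirements in this listing are about hardening exactly this kind of step: validation, retries, monitoring, and planning for failure.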

Required Skills

  • Experience scaling, securing, and snapshotting relational and document data stores, and optimizing their schemas and performance
  • Experience building ETL pipelines using workflow management tools like Argo, Airflow or Kubeflow on Kubernetes
  • Experience implementing data layer APIs using ORMs such as SQLAlchemy and schema change management using tools like Alembic
  • Fluency in Python and experience containerizing code for deployment
  • Experience following security best practices like encryption at rest and in flight, data governance, and cataloging
  • Understanding of the importance of picking the right data store for the job (columnar, logging, OLAP, OLTP, etc.)

Nice to Have Skills

  • Exposure to machine learning
  • Experience with on-prem deployments
Share this job:
Cloud Software Developer
.net-core cs nosql docker azure cloud Jan 24

We are building a brand-new Development Team. You will be working within this team to build out high-performing APIs, robust microservices, cloud-native databases, backend algorithms, and infrastructure in support of the company's vision for a supremely scalable, extensible, and highly performing cloud-native solution.

What You’ll Need

  • Good analytical and problem-solving skills.
  • A positive and proactive attitude with strong initiative, team-working skills and the ability to learn quickly.
  • Good communication skills, with the ability to communicate in English in all forms.
  • An understanding of the principles behind great software design, allowing you to write code that’s clean, fast and scalable.
  • A good degree in Computer Science, Engineering or other numerate or semi-numerate discipline.
  • Extensive commercial experience of building and working with cloud-native or hybrid cloud solutions under either Azure, AWS or Google Cloud.
  • Strong hands-on experience with Microsoft .NET Core, using C#.
  • Experience of building solutions incorporating NoSQL Databases such as Redis, MongoDB, AWS DynamoDB or Azure Cosmos DB.
  • Well-practiced with Agile Development Methodology, working in short sprint cycles.
  • RESTful API development.
  • Git Source Control, in particular with GitHub or Azure DevOps Services.
  • Unit Testing Frameworks, such as MSTest or NUnit.
  • Experience of building cloud-native solutions with Microsoft Azure; particularly use of Azure Functions, Machine Learning, Table & Blob Storage, App Service, API Gateway, Azure Service Bus and Azure Kubernetes Service.
  • Working familiarity with microservices-based architectures and implementing design patterns such as CQRS.
  • Infrastructure as Code (Terraform).
  • Containerization Technology (Docker, Kubernetes, Nginx).
  • Working knowledge of CI/CD using TeamCity, Azure DevOps Services or similar tooling.

Web development frameworks including React, Node.js, and Express.

In Return You’ll Receive

  • A greenfield opportunity to build a brand new, highly sophisticated cloud-native platform.
  • An opportunity to work with some of the most modern and leading-edge cloud-based technologies available; working closely with top experts in the industry.
  • Great start-up culture in a fun, friendly and hardworking team.
  • Flexible remote working
  • Competitive salary
  • Share options package - a rare opportunity to get in early and have a stake in what could potentially be a unicorn start-up, with a huge financial payback
  • Private healthcare insurance
  • 25 days of holiday + national holidays.
Share this job:
EDA Solutions Architect
Rescale  
machine learning cloud testing Jan 23
Rescale is the leader in enterprise big compute and is one of the fastest growing tech companies in Silicon Valley. Our customers range from disruptive and innovative startups to leading global automotive manufacturers. Our dynamic team is welcoming, collaborative and diverse. Becoming a part of the Rescale team means that you are part of the next generation in big compute and cloud HPC. You will become part of the disruption which is turning traditional HPC on its head.
 
We are looking to add a Solutions Architect with a background in EDA (Electronic Design Automation) to our team in North America! As a Solutions Architect, you are responsible for leading and owning technical engagements. You will work closely with prospects, customers, and internal teams in a consultative technical role to help customers accelerate their HPC workloads in the cloud with the ScaleX platform. You will play a critical role in the success of Rescale and enjoy opportunities for personal and career growth.
 
Responsibilities:
 
●      Lead in coordinating and executing all technical activities throughout the customer pre-sales engagement with ISVs/partners in the semiconductor space (Cadence, Synopsys, Mentor, TSMC, GlobalFoundries), such as customer meetings, presentations, demonstrations, and proofs of concept. Our most successful SAs lead and complete 2-3 POCs for strategic customers a quarter, each in under 45 days.
●      Work independently to analyze technical needs, requirements, and the customer’s current infrastructure, operations, and workflows. Our SAs’ point of contact is usually a literal rocket scientist, a CFD or FEA engineer, or an aerospace engineer; you will learn their technical workflow, help migrate it to the cloud, and show them how Rescale can make it run faster.
●      Gain a deep knowledge of customers’ workflows and HPC environments to provide a tailored solution with a unique value prop. Our customers work on some of the most innovative technologies in the world, from genome sequencing to aeronautical design to crash testing and more.
●      Work with customers to define and execute their digital transformation strategy to migrate workloads from on-prem to the cloud. Our SAs create strategic visions that span 1-5 years.
●      Articulate and present Rescale solutions at conferences, user groups, webinars, etc. Our SAs attend and present at 5-6 regional and national conferences throughout the year.
 
Key Qualifications:
 
●      B.S. in engineering, computer science, math, physics or equivalent, M.S. preferred.
●      Obsessed with providing the best experience and solutions to the customers.
●      Expertise in EDA workflows for chip design such as functional verification, physical design, physical verification, static timing analysis etc.
●      Experience working with enterprise customers in the EDA and semiconductor vertical.
●      Enjoy solving difficult problems and strive to find the best solutions.
●      At least 2 years of software, hardware, or cloud experience in a technical role.
●      A great presenter, able to present highly technical topics in an easy-to-understand manner.
●      Travel required.
 
Preferred Qualifications:
 
●      3 years of enterprise cloud, hardware, or software technical experience.
●      Understanding of the traditional enterprise sales process.
●      General knowledge in at least one of the high performance computing (HPC) disciplines (such as CFD, FEA, Molecular Dynamics, Weather Forecasting, Computational Chemistry, Reservoir (Seismic) Simulation, Media Rendering, Machine Learning, Financial etc).
●      Experience with enterprise customers in one or more of the industry verticals we serve including aerospace, automotive, life sciences, oil & gas, semiconductor, EDA, federal sector.
●      Experience with at least one HPC simulation software package (such as those from ANSYS, Siemens, Dassault Systèmes, COMSOL, AVL, Altair, PTC, Cadence, Synopsys, Autodesk, OpenFOAM, LAMMPS, GROMACS, NAMD, etc.).
●      Ability to manage multiple projects which are complex in nature and coordinate colleagues across diverse teams and locations.
●      Demonstrate understanding of HPC, scheduler, IaaS, scripting languages and how these tools are used and deployed by customers.
●      Flexibility and dedication to delivering value for customers.
Rescale is an Affirmative Action, Equal Opportunity Employer.  As part of our standard hiring process for new employees, employment with Rescale will be contingent upon successful completion of a comprehensive background check.   
Share this job:
Machine Learning Engineer/ Data Scientist
Acast  
machine learning python Jan 21
Acast is the world-leading technology platform for on-demand audio and podcasting, with offices in Stockholm, London, New York, Los Angeles, Sydney, Paris, Oslo, and Berlin. We have over 150M monthly listens today and are growing rapidly. At our core is a love of audio and the fascinating stories our podcasters tell.

We are a flat organization that supports a culture of autonomy and respect, and we find that those with an entrepreneurial spirit and a curious mindset thrive at Acast.

The Role
We are looking for a Senior Machine Learning Engineer / Data Scientist to join a new purpose-driven team that will create data-driven products to help other teams provide smarter solutions to our end customers, as well as to internal business-critical use cases. This team’s ambition is to transform our data into insights. You will contribute to designing, building, evaluating, and refining ML products. The products you build will be used by our mobile apps, by the product suite we offer podcast creators and advertisers, and by other departments within Acast.
 
In this role you will work with data engineers and product owners within a cross-functional agile team.

You

  • have a minimum of two years of relevant experience
  • are comfortable writing Python (R or Scala)
  • are familiar with open-source ML models
  • have experience performing analysis with large datasets
  • are curious and can adapt quickly and enjoy a dynamic and ever-changing environment
  • are a good communicator and you can explain complex solutions to your peers as well as non-technical people

Benefits

  • Monthly wellness allowance
  • 30 days holiday
  • Flexible working
  • Pension scheme
  • Private medical insurance
Our product and tech team is mostly located in central Stockholm, and this role is based in Stockholm, but with a remote first culture you are able to work remotely.

Do you want to be part of our ongoing journey? Apply now!

Share this job: