Remote Machine Learning Jobs

Last Week

Machine Learning Engineer or Data Scientist
python machine-learning nlp artificial-intelligence machine learning scala Feb 22

Builders and Fixers Wanted!

Company Description:  

Ephesoft is the leader in Context Driven Productivity solutions, helping organizations maximize productivity and fuel their journey towards the autonomous enterprise through contextual content acquisition, process enrichment and amplifying the value of enterprise data. The Ephesoft Semantik Platform turns flat data into context-rich information to fuel data scientists, business users and customers with meaningful data to automate and amplify their business processes. Thousands of customers worldwide employ Ephesoft’s platform to accelerate nearly any process and drive high value from their content. Ephesoft is headquartered in Irvine, Calif., with regional offices throughout the US, EMEA and Asia Pacific. To learn more, visit ephesoft.com.

Ready to invent the future? Ephesoft is immediately hiring a talented, driven Machine Learning Engineer or Data Scientist to play a key role in developing a high-profile AI platform in use by organizations around the world. The ideal candidate will have experience in developing scalable machine learning products for different contexts such as object detection, information retrieval, image recognition, and/or natural language processing.

In this role you will:

  • Develop and deliver CV and NLP systems to bring structure and understanding to unstructured documents.
  • Innovate by designing novel solutions to emerging and extant problems within the domain of invoice processing.
  • Be part of a team of Data Scientists, Semantic Architects, and Software Developers responsible for developing AI, ML, and Cognitive Technologies while building a pipeline to continuously deliver new capabilities and value. 
  • Implement creative data-acquisition and labeling solutions that will form the foundations of new supervised ML models.
  • Communicate effectively with stakeholders to convey technical vision for the AI capabilities in our solutions. 

 You will bring to this role:

  • Love for solving problems and working in a small, agile environment.
  • Hunger for learning new skills and sharing your findings with others.
  • Solid understanding of good research principles and experimental design.
  • Passion for developing and improving CV/AI components, not just grabbing something off the shelf.
  • Excitement about developing state-of-the-art, ground-breaking technologies and owning them from imagination to production.

Qualifications:

  • 3+ years of experience developing and building AI/ML driven solutions
  • Development experience in at least one object-oriented programming language  (Java, Scala, C++) with preference given to Python experience
  • Demonstrated skills with ML, CV and NLP libraries/frameworks such as NLTK, spaCy, Scikit-Learn, OpenCV, Scikit-Image
  • Strong experience with deep learning libraries/frameworks like TensorFlow, PyTorch, or Keras
  • Proven background of designing and training machine learning models to solve real-world business problems

EEO Statement:

Ephesoft embraces diversity and equal opportunity. We are committed to building a team that represents a variety of backgrounds, perspectives, and skills. We believe the more inclusive we are, the better our company will be.


This Month

Senior Data Engineer
apache machine-learning algorithm senior python scala Feb 19

SemanticBits is looking for a talented Senior Data Engineer who is eager to apply computer science, software engineering, databases, and distributed/parallel processing frameworks to prepare big data for the use of data analysts and data scientists. You will mentor junior engineers and deliver data acquisition, transformations, cleansing, conversion, compression, and loading of data into data and analytics models. You will work in partnership with data scientists and analysts to understand use cases, data needs, and outcome objectives. You are a practitioner of advanced data modeling and optimization of data and analytics solutions at scale, an expert in data management, data access (big data, data marts, etc.), programming, and data modeling, and familiar with analytic algorithms and applications (like machine learning).

Requirements

  • Bachelor’s degree in computer science (or related) and eight years of professional experience
  • Strong knowledge of computer science fundamentals: object-oriented design and programming, data structures, algorithms, databases (SQL and relational design), networking
  • Demonstrable experience engineering scalable data processing pipelines.
  • Demonstrable expertise with Python, Spark, and wrangling of various data formats - Parquet, CSV, XML, JSON.
  • Experience with the following technologies is highly desirable: Redshift (w/Spectrum), Hadoop, Apache NiFi, Airflow, Apache Kafka, Apache Superset, Flask, Node.js, Express, AWS EMR, Scala, Tableau, Looker, Dremio
  • Experience with Agile methodology, using test-driven development.
  • Excellent command of written and spoken English
  • Self-driven problem solver
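The acquisition, cleansing, conversion, and loading duties described in this listing can be sketched as a minimal pipeline. The field names and cleaning rules below are illustrative assumptions, not anything specified in the posting:

```python
import csv
import io
import json

def clean_row(row):
    """Cleanse and convert one raw CSV record (illustrative rules only)."""
    return {
        "id": int(row["id"]),
        "name": row["name"].strip().title(),
        # Treat empty strings as missing values rather than zeros.
        "amount": float(row["amount"]) if row["amount"] else None,
    }

def run_pipeline(raw_csv):
    """Extract -> transform -> load: parse CSV, clean rows, emit JSON lines."""
    reader = csv.DictReader(io.StringIO(raw_csv))
    return [json.dumps(clean_row(r)) for r in reader]

raw = "id,name,amount\n1,  alice ,10.5\n2,BOB,\n"
print(run_pipeline(raw))
```

Real pipelines at this scale would run the same transform logic inside Spark rather than a single-process loop, but the extract/transform/load shape is the same.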
Data Scientist, Healthcare Policy Research
r python machine-learning healthcare data science machine learning Feb 19

We are looking for data scientists with policy research experience to perform data processing and analysis tasks, such as monitoring data quality, applying statistical and data science methods, and creating data visualizations. In this role you will work on multi-disciplinary teams supporting program evaluation and data analytics to inform policy and decision makers.

Responsibilities

  • Answering research questions or building solutions that involve linking health or healthcare data to other administrative data.
  • Designing, planning, and implementing the data science workflow on tasks and projects, involving descriptive statistics, machine learning or statistical analysis, data visualizations, and diagnostics using programming languages such as R or Python
  • Communicating results to collaborative project teams using data visualizations and presentations via tools such as notebooks (e.g. Jupyter) or interactive BI dashboards
  • Developing and maintaining documentation using Atlassian Confluence and Jira
  • Implementing quality assurance practices such as version control and testing
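The descriptive-statistics step in the workflow above can be sketched with Python's standard library. The per-claim cost figures here are hypothetical; real work in this role would use healthcare claims and administrative data:

```python
import statistics

def describe(values):
    """Basic descriptive statistics for one numeric column (illustrative)."""
    return {
        "n": len(values),
        "mean": statistics.mean(values),
        "median": statistics.median(values),
        "stdev": statistics.stdev(values) if len(values) > 1 else 0.0,
    }

# Hypothetical per-claim costs standing in for a real claims dataset.
costs = [120.0, 80.0, 100.0, 100.0]
print(describe(costs))  # mean and median are both 100.0 here
```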

Requirements 

  • Master’s degree in Statistics, Data Science, Math, Computer Science, Social Science, or related field of study
  • Eight (8) years of experience 
  • Demonstrable enthusiasm for applying data science and statistics to social impact projects in academic, extra-curricular, and/or professional settings
  • Demonstrable skills in R or Python to manipulate data, conduct analyses, and create data visualizations
  • Ability to version code using GitExperience with healthcare claims and administrative data
  • Ability and desire to work independently as part of remote, interdisciplinary teams
  • Strong oral and written communication skills
Python Engineer
python cython tensorflow keras pytorch c Feb 17

Description

We are looking for a Python-focused software engineer to build and enhance our existing APIs and integrations with the Scientific Python ecosystem. TileDB’s Python API (https://github.com/TileDB-Inc/TileDB-Py) wraps the TileDB core C API, and integrates closely with NumPy to provide zero-copy data access. You will build and enhance the Python API through interfacing with the core library; build new integrations with data science, scientific, and machine learning libraries; and engage with the community and customers to create value through the use of TileDB.
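The "zero-copy data access" mentioned above is the key property of the TileDB-Py/NumPy integration. As a language-level illustration of what zero-copy means (this is a conceptual sketch, not the TileDB API), Python's `memoryview` exposes an existing buffer without duplicating its bytes:

```python
# Conceptual illustration of zero-copy access: a memoryview exposes an
# existing buffer without copying it. TileDB-Py achieves the analogous
# property by exposing NumPy arrays over buffers owned by the core C library.
buf = bytearray(b"abcdefgh")
view = memoryview(buf)[2:6]   # a slice of the view; still no copy made

buf[2:6] = b"WXYZ"            # mutate the underlying buffer...
print(bytes(view))            # ...and the view sees the change: b'WXYZ'
```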

Location

Our headquarters are in Cambridge, MA, USA and we have a subsidiary in Athens, Greece. However, you will have the flexibility to work remotely as long as your residence is in the USA or Greece. US candidates must be US citizens, whereas Greek candidates must be Greek or EU citizens.

Expectations

In your first 30 days, you will familiarize yourself with TileDB, the TileDB-Py API and the TileDB-Dask integration. After 30 days, you will be fully integrated in our team. You’ll be an active contributor and maintainer of the TileDB-Py project, and ready to start designing and implementing new features, as well as engaging with the Python and Data Science community.

Requirements

  • 5+ years of experience as a software engineer
  • Expertise in Python and experience with NumPy
  • Experience interfacing with the CPython API, and Cython or pybind11
  • Experience with Python packaging, including binary distribution
  • Experience with C, C++, Rust, or a similar systems-level language
  • Distributed computation with Dask, Spark, or similar distributed computation system
  • Experience with a machine learning library (e.g. scikit-learn, TensorFlow, Keras, PyTorch, Theano)
  • Experience with Amazon Web Services or a similar cloud platform
  • Experience with dataframe-focused systems (e.g. Arrow, Pandas, data.frame, Vaex)
  • Experience with technical data formats (e.g. Parquet, HDF5, VCF, DICOM, GeoTIFF)
  • Experience with other technical computing systems (e.g. R, MATLAB, Julia)

Benefits

  • Competitive salary and stock options
  • 100% medical and dental insurance coverage (for you and your dependents!)
  • Paid parental leave
  • Paid time off (vacation, sick & public holidays)
  • Flexible time off & flexible hours
  • Flexibility to work remotely (anywhere in the US or Greece)

TileDB, Inc. is proud to be an Equal Opportunity Employer building a diverse and inclusive team.

Healthcare Data Analyst
sql machine-learning python healthcare Feb 17

SemanticBits is looking for a Data Analyst with experience in Healthcare who is eager to use their domain knowledge and their skills in BI tools, SQL, and programming to rapidly turn data into insights.

Requirements:

  • Minimum of three years working in the healthcare industry (preferably with Medicare and/or Medicaid data) in a data analyst role.
  • Bachelor's degree with quantitative focus in Statistics, Operations Research, Computer Science or a related field and a minimum of six years of relevant experience.
  • Demonstrable expertise in basic statistics, SQL, and Python programming.
  • Strong understanding of relational database and data warehousing concepts (e.g. OLAP, dimensional modeling).
  • Minimum of two years of experience with BI tools such as Tableau, MicroStrategy, or Looker.
  • Strong technical communication skills; both written and verbal
  • Ability to understand and articulate the “big picture” and simplify complex ideas
  • Strong problem solving and structuring skills
  • Ability to identify and learn applicable new techniques independently as needed
Data Infrastructure Engineer
Tesorio  
data science machine learning finance Feb 14
We are at the forefront of creating the latest FinTech category and we are rapidly expanding our team. We’re looking for a Data Infrastructure Engineer to work on our Data Science team.

Company Overview

Tesorio is a high-growth, early-stage startup backed by some of the Bay Area’s most prominent Venture Capital firms (First Round, Floodgate, Y Combinator) and the world’s top finance execs (e.g. the ex-CFO of Oracle, the ex-CFO of Yahoo, and the founder of Adaptive Insights).

We build software that applies proprietary machine learning models to help manage a core problem that all Mid-Market businesses face: managing, predicting, and collecting cash. As we’ve taken this solution to market over the last 18 months, we’ve been able to bring on some great clients like Veeva Systems, Box, WP Engine, Rainforest QA, and many more.

Tesorio’s Cash Flow Performance platform is a sought after solution for the modern-day CFO’s toughest problems. Companies such as Anaplan have successfully tackled forecasting revenues and expenses, however, no other system has been built from the ground up to help companies understand the complexities around cash flow and take action to optimize the core lifeblood of their business.

What’s in it for you?

  • Remote OK (Western Hemisphere) or work out of an awesome office with all the perks.
  • Almost all of Engineering and Data Science work fully remote and we work hard to make sure remote employees feel a part of the team.
  • This role is for a fast-paced, high-impact project that adds a new stream of revenue and strategic value to the company.
  • Work with some of the best and brightest (but also very humble).
  • Fast growing startup backed by top tier investors - Y Combinator, First Round Capital, Floodgate, Fathom.

Responsibilities

  • You will be responsible for creating and maintaining machine learning infrastructure on Kubernetes
  • Build and own workflow management systems like Airflow, Kubeflow, or Argo.
  • Advise data and ML engineers on how to package and deploy their workflows
  • Implement logging, metrics and monitoring services for your infrastructure and container logs
  • Create Helm charts for versioned deployments of the system on client premises
  • Continuously strive to abstract away infrastructure, high availability, identity and access management concerns from Machine Learning and Software Engineers
  • Understand the product requirements and bring your own opinions and document best practices for leveraging Kubernetes
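The workflow managers named above (Airflow, Kubeflow, Argo) all model a pipeline as a directed acyclic graph of tasks and execute it in dependency order. A toy sketch of that core idea, using only the standard library and hypothetical task names:

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical ML-pipeline tasks mapped to their upstream dependencies.
# Airflow, Kubeflow, and Argo all reduce to running such a DAG in an
# order where every task's predecessors finish first.
dag = {
    "extract": set(),
    "train": {"extract"},
    "evaluate": {"train"},
    "deploy": {"train", "evaluate"},
}

order = list(TopologicalSorter(dag).static_order())
print(order)  # "extract" always comes first, "deploy" always last
```

In production these tasks would be containers scheduled on Kubernetes rather than local function calls, but the scheduling contract is the same.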

Required Skills

  • 6+ years of experience creating and maintaining data and machine learning platforms in production
  • Expert-level knowledge of Kubernetes: operators, deployments, cert management, security, binding users to cluster and IAM roles, etc.
  • Experience dealing with persistence pitfalls on Kubernetes and creating and owning workflow management systems (Airflow, Kubeflow, Argo, etc.) on Kubernetes
  • Experience creating Helm charts for versioned deployments on client premises
  • Experience securing the system with proper identity and access management for people and applications.
  • Ability to work in a fast paced, always-changing environment

Nice to Haves

  • Experience spinning up infrastructure using Terraform and Ansible
  • Experience working with data engineers running workflow management tools on your infrastructure
Paid Research Study for Developers with Machine Learning and AI Experience
machine-learning computer-vision artificial-intelligence machine learning Feb 14

User Research International is a research company based out of Redmond, Washington. Working with some of the biggest companies in the industry, we aim to improve your experience via paid research studies. Whether it be the latest video game or productivity tools, we value your feedback and experience. We are currently conducting a research study called the ML/AI Decision Maker Study. We are looking for currently employed full-time developers who are involved in purchasing data for machine learning purposes in their company. This is a one-time remote study via an online meeting. We’re offering $100 for remote and $175 for in-person participation in this study. Sessions are 60 minutes long. These studies provide a platform for our researchers to receive feedback on existing or upcoming products or software. We have included the survey link for the study below. Taking the survey will help determine if you fit the profile requirements. Completing this survey does not guarantee you will be selected to participate. If it's a match, we'll reach out with a formal confirmation and any additional details you may need.

I have summarized the study details below. In order to be considered, you must take the survey below. Thank you!

Study: ML/AI Decision Maker Study

Gratuity: $100 for Remote, $175 for In-Person

Session Length: 60-minutes

Location: Remote via Online Meeting or In-Person in Redmond, WA

Dates: Available dates are located within the survey

Survey: ML/AI Decision Maker Study (Qualification Survey)

Senior Software Engineer, Test Infrastructure
senior javascript data science machine learning docker testing Feb 13
About Labelbox
Labelbox is building software infrastructure for industrial data science teams to do data labeling for the training of neural networks. When we build software, we take for granted the existence of collaborative tools to write and debug code. The machine learning workflow has no standard tooling for labeling data, storing it, debugging models and then continually improving model accuracy. Enter Labelbox. Labelbox's vision is to become the default software for data scientists to manage data and train neural networks in the same way that GitHub or text editors are defaults for software engineers.

Current Labelbox customers include American Family Insurance, Lytx, Airbus, Genius Sports, Keeptruckin and more. Labelbox is venture backed by Google, Andreessen Horowitz, Kleiner Perkins and First Round Capital and has been featured in Tech Crunch, Web Summit and Forbes.

As a Senior Software Engineer in Testing Infrastructure you will be responsible for building and maintaining our testing and automation infrastructure, test frameworks, tools, and documentation. At Labelbox engineers are responsible for writing automated tests for their features, and it will be your responsibility to build reliable infrastructure to support their efforts. 

Responsibilities

  • Design, implement and maintain reliable testing infrastructure for unit testing, component testing, integration testing, E2E API and UI testing, and load testing
  • Build and maintain reliable testing environments for our integration, E2E and load testing jobs
  • Integrate our testing infrastructure with our CI/CD pipeline to ensure automated kickoff of tests
  • Guide our engineering team on testing best practices and monitor the reliability and stability of our testing suite
  • When implementing new testing infrastructure and/or adopting new tools, write sample tests and documentation for our engineering team to hit the ground running with the new infrastructure

Requirements

  • 5+ years of experience developing testing infrastructure for web applications in a production environment
  • Experience with web technologies including: React, Redux, Javascript, Typescript, GraphQL, Node, REST, SQL
  • Experience with Unit Testing frameworks such as Jest, Mocha, and/or Jasmine
  • Experience with E2E UI test frameworks such as Cypress, Selenium, and/or Puppeteer
  • Experience writing E2E API tests with frameworks such as Cypress and/or Postman/Newman
  • Experience with Load Testing frameworks such as OctoPerf, JMeter, and/or Gatling
  • Experience integrating with CI/CD platforms and tools such as Codefresh, CircleCI, TravisCI, or Jenkins and Bazel
  • Experience integrating tools to measure code coverage across the different types of testing
  • Experience with Docker and Kubernetes
  • Experience with GraphQL and building testing infrastructure around it
We believe that AI has the power to transform every aspect of our lives, from healthcare to agriculture. The exponential impact of artificial intelligence will mean mammograms can happen quickly and cheaply irrespective of the limited number of radiologists in the world, and growers will know the instant that disease hits their farm without even being there.

At Labelbox, we’re building a platform to accelerate the development of this future. Rather than requiring companies to create their own expensive and incomplete homegrown tools, we’ve created a training data platform that acts as a central hub for humans to interface with AI. When humans have better ways to input and manage data, machines have better ways to learn.

Perks & Benefits:
Medical, Dental & Vision coverage
Flexible vacation policy
Dog friendly office
Daily catered lunch & snacks
Great office location in the Mission district, beautiful office & private outdoor patio with grill
Lead Software Engineer
python flask sqlalchemy rest kubernetes machine learning Feb 13

Carbon Relay is a world-class team focused on harnessing the power of machine learning to optimize Kubernetes. Our innovative platform allows organizations to boost application performance while keeping costs down. We recently completed a major fundraising round and are scaling up rapidly to turn our vision into reality. This position is perfect for someone who wants to get in on the ground floor at a startup that moves fast, tackles hard problems, and has fun!

We are looking for a Lead Software Engineer to spearhead the development of our backend applications. You will bridge the gap between the machine learning and Kubernetes teams to ensure that our products delight customers and scale efficiently.

Responsibilities

  • Developing our internal APIs and backend
  • Designing and implementing SaaS-based microservices
  • Collaborating with our infrastructure, machine learning and Kubernetes teams

Required qualifications

  • 10+ years of experience in software engineering
  • Proficiency in Python
  • Experience shipping and maintaining software products

Preferred qualifications

  • Experience with JavaScript
  • Experience with GCP/GKE
  • Familiarity with Kubernetes and containerization

Why join Carbon Relay

  • Competitive salary plus equity
  • Health, dental, vision and life insurance
  • Unlimited vacation policy (and we do really take vacations)
  • Ability to work remotely
  • Snacks, lunches and all the typical benefits you would expect from a well-funded, fun startup!
Software Engineer, Backend
Fathom  
backend machine learning nlp testing healthcare Feb 12
We’re on a mission to understand and structure the world’s medical data, starting by making sense of the terabytes of clinician notes contained within the electronic health records of the world’s largest health systems.

We’re seeking exceptional Backend Engineers to work on the data products that drive the core of our business: backend experts able to unify data and build systems that scale from both an operational and an organizational perspective.

Please note, this position has a minimum requirement of 3+ years of experience. For earlier-career candidates, we encourage you to apply to our SF and/or Toronto locations.

As a Backend Engineer you will:

  • Develop data infrastructure to ingest, sanitize and normalize a broad range of medical data, such as electronic health records, journals, established medical ontologies, crowd-sourced labelling and other human inputs
  • Build performant and expressive interfaces to the data
  • Build infrastructure to help us not only scale up data ingest, but large-scale cloud-based machine learning

We’re looking for teammates who bring:

  • 3+ years of development experience in a company/production setting
  • Experience building data pipelines from disparate sources
  • Hands-on experience building and scaling up compute clusters
  • Excitement about learning how to build and support machine learning pipelines that scale not just computationally, but in ways that are flexible, iterative, and geared for collaboration
  • A solid understanding of databases and large-scale data processing frameworks like Hadoop or Spark.  You’ve not only worked with a variety of technologies, but know how to pick the right tool for the job
  • A unique combination of creative and analytic skills, capable of designing a system that pulls together, trains, and tests dozens of data sources under a unified ontology

Bonus points if you have experience with:

  • Developing systems to do or support machine learning, including experience working with NLP toolkits like Stanford CoreNLP, OpenNLP, and/or Python’s NLTK
  • Expertise with wrangling healthcare data and/or HIPAA
  • Experience with managing large-scale data labelling and acquisition through tools such as Amazon Mechanical Turk or DeepDive

Remote React Developer Opportunity
Hays   $130K - $150K
react-js rest api machine learning cloud aws Feb 12
Remote React Developer Opportunity - Perm - Raleigh, NC - $130,000-$150,000

Hays Specialist Recruitment is working in partnership with LifeOmic to manage the recruitment of this position

The end client is unable to sponsor or transfer visas for this position; all parties authorized to work in the US are encouraged to apply.

Looking for a forward-looking company focused on the Cloud, Machine Learning and Mobile Devices? LifeOmic uses a solid tech stack: React on the frontend and Node on the backend with DynamoDB, working with service-oriented APIs, all in AWS/Lambda. As for personal perks and benefits, they go on company retreats twice weekly and offer the opportunity to work remotely 2 or 3 days per week as well. As for the extra financial perks, you'll have the chance to receive equity in the company and flexible PTO.

Skills & Requirements
* Builder who can implement user interfaces against REST APIs and fit into a team who has embraced continuous delivery.
* Demonstrable experience with building modern web interfaces and incrementally improving the user experience.
* Collaborate in REST API design and convey how those impact the overall experience.
* Able to communicate complex concepts clearly and accurately.
* Able to iterate with new technologies and approaches as their respective open source communities push them forward.
* Bachelor's degree in CS
* 3+ years of demonstrable experience

Data Engineer
NAVIS  
hadoop web-services python sql etl machine learning Feb 11

NAVIS is excited to be hiring a Data Engineer for a remote, US-based position. Candidates based outside of the US are not being considered at this time. This is a NEW position due to growth in this area. 

Be a critical element of what sets NAVIS apart from everyone else!  Join the power behind the best-in-class Hospitality CRM software and services that unifies hotel reservations and marketing teams around their guest data to drive more bookings and revenue.

Our Guest Experience Platform team is seeking an experienced Data Engineer to play a lead role in building and running our modern big data and machine learning platform that powers our products and services. In this role, you will be responsible for building the analytical data pipeline, data lake, and real-time data streaming services. You should be passionate about technology and complex big data business challenges.

You can have a huge impact on everything from the functionality we deliver for our clients, to the architecture of our systems, to the technologies that we are adopting. 

You should be highly curious with a passion for building things!

Click here for a peek inside our Engineering Team


DUTIES & RESPONSIBILITIES:

  • Design and develop business-critical data pipelines and related back-end services
  • Identify and participate in simplifying and addressing scalability issues for the enterprise-level data pipeline
  • Design and build big data infrastructure to support our data lake

QUALIFICATIONS:

  • 2+ years of extensive experience with Hadoop (or similar) Ecosystem (MapReduce, Yarn, HDFS, Hive, Spark, Presto, HBase, Parquet)
  • Experience with building, breaking, and fixing production data pipelines
  • Hands-on SQL skills and background in other data stores like SQL-Server, Postgres, and MongoDB
  • Experience with continuous delivery and automated deployments (Terraform)
  • ETL experience
  • Able to identify and participate in addressing scalability issues for enterprise-level data
  • Python programming experience

DESIRED, BUT NOT REQUIRED SKILLS:

  • Experience with machine learning libraries like scikit-learn, Tensorflow, etc., or an interest in picking it up
  • Experience with R to mine structured and unstructured data and/or building statistical models
  • Experience with Elasticsearch
  • Experience with AWS services like Glue, S3, SQS, Lambda, Fargate, EC2, Athena, Kinesis, Step Functions, DynamoDB, CloudFormation and CloudWatch will be a huge plus

POSITION LOCATION:

There are 3 options for the location of this position (candidates based outside the US are NOT being considered at this time):

  • You can work remotely in the continental US with occasional travel to Bend, Oregon
  • You can be based at a shared office space in the heart of downtown Portland, Oregon
  • You can be based at our offices in Bend, Oregon (relocation assistance package available)

Check out this video to learn more about the Tech scene in Bend, Oregon


NAVIS OFFERS:

  • An inclusive, fun, values-driven company culture – we’ve won awards for it
  • A growing tech company in Bend, Oregon
  • Work / Life balance - what a concept!
  • Excellent benefits package with a Medical Expense Reimbursement Program that helps keep our medical deductibles LOW for our Team Members
  • 401(k) with generous matching component
  • Generous time off plus a VTO day to use working at your favorite charity
  • Competitive pay + annual bonus program
  • FREE TURKEYS (or pies) for every Team Member for Thanksgiving (hey, it's a tradition around here)
  • Your work makes a difference here, and we make a huge impact to our clients’ profits
  • Transparency – regular All-Team meetings, so you can stay in-the-know with what’s going on in all areas of our business
VP, Data Science & Engineering
machine-learning hadoop data science c machine learning big data Feb 10

The Wikimedia Foundation is seeking an experienced executive to serve as Vice President of Data Science & Engineering for our Technology department. At the Wikimedia Foundation, we operate the world’s largest collaborative project: a top ten website, reaching a billion people globally every month, while incorporating the values of privacy, transparency and community that are so important to our users. 

Reporting to the Chief Technology Officer, the VP of Data Science & Engineering is a key member of the Foundation’s leadership team and an active participant in the strategic decision making framing the work of the technology department, the Wikimedia Foundation and the Wikimedia movement.

This role is responsible for planning and executing an integrated multi-year data science and engineering strategy spanning our work in artificial intelligence, machine learning, search, natural language processing and analytics. This strategy will interlock with and support the larger organization and movement strategy in service of our vision of enabling every human being to share freely in the sum of human knowledge.

Working closely with other Technology and Product teams, as well as our community of contributors and readers, you’ll lead a team of dedicated directors, engineering managers, software engineers, data engineers, and data scientists who are shaping the next generation of data usage, analysis and access across all Wikimedia projects.

Some examples of our teams’ work in the realm of data science and data engineering can be found on our blog, including deeper info on our work in improving edit workflows with machine learning, our use of Kafka and Hadoop, and our analysis of people falling into the “Wikipedia rabbit hole”. Of late we have been thinking about how to best identify traffic anomalies that might indicate outages or, possibly, censorship.  

You are responsible for:

  • Leading the technical and engineering efforts of a global team of engineers, data scientists and managers focused on our efforts in productionizing artificial intelligence, data science, analytics, machine learning and natural language processing models as well as data operations. These efforts currently encompass three teams: Search Platform, Analytics and Scoring Platform (Machine Learning Engineering)
  • Working closely with our Research, Architecture, Security, Site Reliability and Platform teams to define our next generation of data architecture, search, machine learning and analytics infrastructure
  • Creating scalable engineering management processes and prioritization rubrics
  • Developing the strategy, vision, plan, and cross-functional teams needed to create a holistic data strategy for the Wikimedia Foundation, in collaboration with internal and external stakeholders and community members, and in keeping with our fundamental values of transparency, privacy, and collaboration
  • Ensuring data is reliable, consistent, accessible, secure, and available in a timely manner for external and internal stakeholders, in accordance with our privacy policy
  • Negotiating shared goals, roadmaps and dependencies with finance, product, legal and communication departments
  • Contributing to our culture by managing, coaching and developing our engineering and data teams
  • Illustrating your success in making your mark on the world by collaboratively measuring and adapting our data strategy within the technology department and the broader Foundation
  • Managing up to 5 direct reports with a total team size of 20

Skills and Experience:

  • Deep experience leading data science, machine learning, search or data engineering teams, with the ability to separate hype in the artificial intelligence space from the reality of delivering production-ready data systems
  • 5+ years senior engineering leadership experience
  • Demonstrated ability to balance competing interests in a complex technical and social environment
  • Proven success at all stages of the engineering process and product lifecycle, leading to significant, measurable impact.
  • Previous hands-on experience in production big data and machine learning environments at scale
  • Experience building and supporting diverse, international and distributed teams
  • Outstanding oral and written English language communications

Qualities that are important to us:

  • You take a solutions-focused approach to challenging data and technical problems
  • A passion for people development, team culture and the management of ideas
  • You have a desire to show the world how data work can be done well while honoring the user’s right to privacy

Additionally, we’d love it if you have:

  • Experience with modern machine learning, search and natural language processing platforms
  • A track record of open source participation
  • Fluency or familiarity with languages in addition to English
  • Time spent living or working outside your country of origin
  • Experience as a member of a volunteer community

The Wikimedia Foundation is... 

...the nonprofit organization that hosts and operates Wikipedia and the other Wikimedia free knowledge projects. Our vision is a world in which every single human can freely share in the sum of all knowledge. We believe that everyone has the potential to contribute something to our shared knowledge, and that everyone should be able to access that knowledge, free of interference. We host the Wikimedia projects, build software experiences for reading, contributing, and sharing Wikimedia content, support the volunteer communities and partners who make Wikimedia possible, and advocate for policies that enable Wikimedia and free knowledge to thrive. The Wikimedia Foundation is a charitable, not-for-profit organization that relies on donations. We receive financial support from millions of individuals around the world, with an average donation of about $15. We also receive donations through institutional grants and gifts. The Wikimedia Foundation is a United States 501(c)(3) tax-exempt organization with offices in San Francisco, California, USA.

The Wikimedia Foundation is an equal opportunity employer, and we encourage people with a diverse range of backgrounds to apply.

U.S. Benefits & Perks*

  • Fully paid medical, dental and vision coverage for employees and their eligible families (yes, fully paid premiums!)
  • The Wellness Program provides reimbursement for mind, body and soul activities such as fitness memberships, babysitting, continuing education and much more
  • The 401(k) retirement plan offers matched contributions at 4% of annual salary
  • Flexible and generous time off - vacation, sick and volunteer days, plus 19 paid holidays - including the last week of the year.
  • Family friendly! 100% paid new parent leave for seven weeks plus an additional five weeks for pregnancy, flexible options to phase back in after leave, fully equipped lactation room.
  • For those emergency moments - long and short term disability, life insurance (2x salary) and an employee assistance program
  • Pre-tax savings plans for health care, child care, elder care, public transportation and parking expenses
  • Telecommuting and flexible work schedules available
  • Appropriate fuel for thinking and coding (aka, a pantry full of treats) and monthly massages to help staff relax
  • Great colleagues - diverse staff and contractors speaking dozens of languages from around the world, fantastic intellectual discourse, mission-driven and intensely passionate people

*Eligible non-US benefits are specific to location and dependent on employer of record

Share this job:
Don't see your role here?
data science machine learning computer vision healthcare Feb 03
Don't quite see the role you're looking for? Labelbox is growing incredibly fast and we are posting new roles frequently. Send us your resume so we can keep you in the loop as we grow.


About Labelbox

Labelbox is at the heart of the AI-powered computer vision revolution. Almost every decision a human makes is visual and these decisions power every industry, from healthcare to agriculture. With AI, computers can now see like humans and can make decisions in the same way. With this newfound capability, our society will build self-driving cars, accessible healthcare, automated farms that can support our global population, and much more.

The bottleneck to achieving these things with AI is the training data sets. We are building Labelbox to solve this bottleneck for data science and machine learning teams.

Current Labelbox customers include American Family Insurance, Lytx, Airbus, Genius Sports, Keeptruckin and more. Labelbox is venture backed by Gradient Ventures (Google’s AI-focused venture fund), Kleiner Perkins and First Round Capital and has been featured in Tech Crunch, Web Summit and Forbes.
Share this job:
Data Visualization Engineer
data science machine learning big data linux mysql backend Jan 31
We are looking for a dynamic and talented Data Visualization Engineer who has a passion for data and for using cutting-edge tools and data-based insights to turn their vision and ability into results and actionable solutions for our clients. The successful candidate will leverage their talents and skills to design, develop and implement graphical representations of information and data, using visual elements like charts, graphs, and maps along with a variety of data visualization tools. You will own, architect, design, and implement a Data Visualization platform that leverages big data, data warehouses, data visualization suites, and cutting-edge open source technologies. You will drive the vision of our Big Data Visualization platform, which must be scalable, interactive, and real-time to support our state-of-the-art data processing framework for our geospatial-oriented platform. You must have a proven ability to drive results with your data-based insights, and a passion for discovering solutions hidden in large datasets and for working with stakeholders to improve mission outcomes. Do you want to take your ideas and concepts into real-life mission-critical solutions? Do you want to work with the latest bleeding-edge technology? Do you want to work with a dynamic, world-class team of engineers, while learning and developing your skills and your career? You can do all those things at Prominent Edge!

We are a small company of 24+ developers and designers who put themselves in the shoes of our customers and make sure we deliver strong solutions. Our projects and the needs of our customers vary greatly; therefore, we always choose the technology stack and approach that best suits the particular problem and the goals of our customers. As a result, we want developers who do high-quality work, stay current, and are up for learning and applying new technologies when appropriate. We want engineers who have an in-depth knowledge of Amazon Web Services and are up for using other infrastructures when needed. We understand that for our team to perform at its best, everyone needs to work on tasks that they enjoy. Most of our projects are web applications, which often have a geospatial aspect to them. We also really take care of our employees, as demonstrated in our exceptional benefits package. Check out our website at https://prominentedge.com for more information.

Required Skills:

  • A successful candidate will have experience in many (if not all) of the following technical competencies: data visualization, data engineering, data science, statistics and machine learning, coding languages, databases, and reporting technologies.
  • Ability to design, develop and implement graphical representations of information and data, using visual elements like charts, graphs, and maps along with a variety of data visualization tools.
  • At least 5 years of experience in data engineering, data science, and/or data visualization.
  • Design and develop ETL and storage for the new big data platform with open source technologies such as Kafka/RabbitMQ/Redis, Spark, Presto, Splunk.
  • Create insightful visualizations with dashboarding and charting tools such as Kibana / Plotly / Matplotlib / Grafana / Tableau.
  • Strong proficiency with a backend database such as Postgres, MySQL, and/or familiarity with NoSQL databases such as Cassandra, DynamoDB or MongoDB.
  • Strong background in scripting languages.
  • Capable of working in a Linux server environment.
  • Experience or interest in working on multiple projects with multiple product teams.
  • Excellent verbal and written communication skills, along with the ability to present technical data and an enjoyment of working with both technical and non-technical audiences.
  • Bachelor's Degree in Computer Science, Data Science, Machine Learning, AI or related field or equivalent experience.
  • Current U.S. security clearance, or ability to obtain a U.S. security clearance.
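Much of this work starts with shaping raw data into the series a dashboard can chart. As a toy illustration, the sketch below buckets raw event timestamps into the hourly counts a tool such as Kibana or Grafana would plot; the function and data are made up for the example:

```python
from collections import defaultdict
from datetime import datetime

def bucket_by_hour(events):
    """Aggregate raw ISO-8601 event timestamps into hourly counts —
    the kind of time series a dashboard panel would chart."""
    counts = defaultdict(int)
    for ts in events:
        hour = datetime.fromisoformat(ts).strftime("%Y-%m-%d %H:00")
        counts[hour] += 1
    return dict(sorted(counts.items()))

events = ["2024-05-01T09:12:00", "2024-05-01T09:48:30", "2024-05-01T10:05:00"]
print(bucket_by_hour(events))
# {'2024-05-01 09:00': 2, '2024-05-01 10:00': 1}
```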

Desired skills:

  • Master's Degree or PhD. in Computer Science, Data Science, Machine Learning, AI or related field is a plus.

W2 Benefits:

  • Not only do you get to join our team of awesome, playful ninjas, we also have great benefits:
  • Six weeks paid time off per year (PTO+Holidays).
  • Six percent 401k matching, vested immediately.
  • Free PPO/POS healthcare for the entire family.
  • We pay you for every hour you work. Need something extra? Give yourself a raise by doing more hours when you can.
  • Want to take time off without using vacation time? Shuffle your hours around in any pay period.
  • Want a new MacBook Pro laptop? We'll get you one. If you like your MacBook Pro, we’ll buy you the new version whenever you want.
  • Want some training or to travel to a conference that is relevant to your job? We offer that too!
  • This organization participates in E-Verify.

Share this job:
Senior Python Developer
python-3.x mysql sqlalchemy graphql aws python Jan 30

We are seeking a senior software engineer with proven programming and analytic abilities. You would be a fundamental member of the team, focusing on building a solid foundation for the platform. We seek people who are excited and driven to grow with the experience of working alongside talented engineers.

Our team is remote, with most of our engineers right now either in New York, Argentina, or Colombia, with some folk in other parts of the Americas, as well as Europe.

You will work on developing new features for our apps, which may involve integrating with ecommerce platforms such as Shopify, Amazon, eBay, and Etsy. The integrations are used at scale.


About you:

- You understand that great things are accomplished when teams work together.

- You have lots of experience with Python, SQLAlchemy, Flask, and ideally GraphQL.

- You have some AWS experience.

- You can code review other team members’ work, provide assistance, and appreciate feedback.

- You take pride in your craft.

- You’ve learned from building systems and solutions the reasons to avoid technical debt, and how to approach and implement TDD and CI practices.

- You can craft elegant solutions when solving complex problems.

- You want to build something that is disrupting an entire industry.

- While hands on experience is not a requirement, you’re interested in learning how to apply machine learning and AI technologies and tools.

- You can handle a fast paced environment.

- You’ve made a lot of mistakes, and most importantly, have learned from them.

- You have 7+ years of experience developing software.

- You have worked remotely before.

About the role:

- Work on a cross-functional team including front end and UX to build solutions that are easy for customers to understand, work consistently and scale well.

- Review features and requirements and guide, design and implement solutions.

- Understand business requirements and think through solutions in terms of not just the coding implementation but also how the solution fits into the overall platform and how it solves a customer need.

- Ability to estimate effort and ship on agreed schedule. Comfortable pushing yourself and your team members when challenges pop up.

- Lead regular code reviews, with the goal of code quality, good design and approach along with pushing engineers to improve and evolve.

- Optimize existing tech stack and solutions, determine path to next step in the evolution.

- Learn, and push those around you to do the same - this is a craft that you’re constantly improving upon.

- Implement solutions that are pragmatic to get the platform built.

- Have the confidence to work with experienced and talented people to just build great things; you’re not a “rockstar”.

- Work with ShipHero leadership to implement practices and principles for the team.

Share this job:
Machine Learning Platform Engineer
Tesorio  
machine learning data science finance Jan 30
We are at the forefront of creating the latest FinTech category and we are rapidly expanding our team. We’re looking for a Machine Learning Platform Engineer to work on our Data Science team.

Company Overview
Tesorio is a high-growth, early-stage startup that has just closed a 10MM round with Madrona Venture Group. We're backed by some of the Bay Area’s most prominent Venture Capital firms (First Round, Floodgate, Y Combinator) and the world’s top finance execs (e.g. the ex-CFO of Oracle, the ex-CFO of Yahoo, and the founder of Adaptive Insights). 

We build software that applies proprietary machine learning models to help manage a core problem that all Mid-Market businesses face: managing, predicting, and collecting cash. As we’ve taken this solution to market over the last 18 months, we’ve been able to bring on some great clients like Veeva Systems, Box, WP Engine, Rainforest QA, and many more.

Tesorio’s Cash Flow Performance platform is a sought-after solution for the modern-day CFO’s toughest problems. Companies such as Anaplan have successfully tackled forecasting revenues and expenses; however, no other system has been built from the ground up to help companies understand the complexities around cash flow and take action to optimize the core lifeblood of their business.

What’s in it for you?

  • Remote OK (Western Hemisphere) or work out of an awesome office with all the perks.
  • Almost all of Engineering and Data Science work fully remote and we work hard to make sure remote employees feel a part of the team.
  • This role is for a fast-paced, high-impact project that adds a new stream of revenue and strategic value to the company.
  • Work with some of the best and brightest (but also very humble).
  • Fast growing startup backed by top tier investors - Y Combinator, First Round Capital, Floodgate, Fathom.

Responsibilities

  • You will be responsible for creating and maintaining machine learning infrastructure on Kubernetes
  • Build and own workflow management systems like Airflow, Kubeflow or Argo, and advise data and ML engineers on how to package and deploy their workflows
  • Implement logging, metrics and monitoring services for your infrastructure and container logs
  • Create Helm charts for versioned deployments of the system on client premises
  • Continuously strive to abstract away infrastructure, high availability, identity and access management concerns from Machine Learning and Software Engineers
  • Understand the product requirements and bring your own opinions and document best practices for leveraging Kubernetes
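For the logging and monitoring responsibility above, one common pattern is emitting one JSON object per log line so a log collector (Fluentd, Loki, and the like) can index container logs without custom parsing. A minimal sketch, with a hypothetical logger name:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each log record as a single JSON object per line,
    ready for a log collector to ship and index."""
    def format(self, record):
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
log = logging.getLogger("ml-platform")   # hypothetical service name
log.addHandler(handler)
log.setLevel(logging.INFO)
log.info("model %s deployed", "invoice-v2")
# {"level": "INFO", "logger": "ml-platform", "message": "model invoice-v2 deployed"}
```

In a Kubernetes deployment this would go to stdout/stderr, where the node-level log agent picks it up.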

Required Skills

  • 6+ years of experience in creating and maintaining data and machine learning platform in production
  • Expert-level knowledge of Kubernetes, including operators, deployments, cert management, security, and binding users to cluster and IAM roles
  • Experience dealing with persistence pitfalls on Kubernetes, and with creating and owning a workflow management system (Airflow, Kubeflow, Argo, etc.) on Kubernetes
  • Experience creating Helm charts for versioned deployments on client premises
  • Experience securing the system with proper identity and access management for people and applications.
  • Ability to work in a fast paced, always-changing environment

Nice to Haves

  • Experience spinning up infrastructure using Terraform and Ansible
  • Experience working with data engineers running workflow management tools on your infrastructure
Share this job:
Data Engineer
Tesorio  
python data science machine learning finance Jan 30
We are at the forefront of creating the latest FinTech category and we are rapidly expanding our team. We’re looking for a Data Engineer to work on our Data Science team.

Company Overview
Tesorio is a high-growth, early-stage startup that has just closed a 10MM round with Madrona Venture Group. We're backed by some of the Bay Area’s most prominent Venture Capital firms (First Round, Floodgate, Y Combinator) and the world’s top finance execs (e.g. the ex-CFO of Oracle, the ex-CFO of Yahoo, and the founder of Adaptive Insights). 

We build software that applies proprietary machine learning models to help manage a core problem that all Mid-Market businesses face: managing, predicting, and collecting cash. As we’ve taken this solution to market over the last 18 months, we’ve been able to bring on some great clients like Veeva Systems, Box, WP Engine, Rainforest QA, and many more.

Tesorio’s Cash Flow Performance platform is a sought-after solution for the modern-day CFO’s toughest problems. Companies such as Anaplan have successfully tackled forecasting revenues and expenses; however, no other system has been built from the ground up to help companies understand the complexities around cash flow and take action to optimize the core lifeblood of their business.

What’s in it for you?

  • Remote OK (Western Hemisphere) or work out of an awesome office with all the perks.
  • Almost all of Engineering and Data Science work fully remote and we work hard to make sure remote employees feel a part of the team.
  • This role is for a fast-paced, high-impact project that adds a new stream of revenue and strategic value to the company.
  • Work with some of the best and brightest (but also very humble).
  • Fast growing startup backed by top tier investors - Y Combinator, First Round Capital, Floodgate, Fathom.

Responsibilities

  • Extract data from 3rd-party databases and transform it into usable outputs for the Product and Data Science teams
  • Work with Software Engineers and Machine Learning Engineers, calling out risks and performance bottlenecks
  • Ensure data pipelines are robust, fast, secure and scalable
  • Use the right tool for the job to make data available, whether that is on the database or in code
  • Own data quality and pipeline uptime. Plan for failure
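A toy sketch of the extract-and-transform step described above, using an in-memory SQLite database as a stand-in for a third-party store; the table and field names are made up for illustration:

```python
import sqlite3

def extract_transform(conn):
    """Pull raw invoice rows from a source store and normalize them into
    the shape downstream (Product / Data Science) consumers expect."""
    rows = conn.execute("SELECT id, amount_cents, paid FROM invoices").fetchall()
    return [
        {
            "invoice_id": r[0],
            "amount": r[1] / 100.0,                  # cents -> dollars
            "status": "paid" if r[2] else "open",    # flag -> readable label
        }
        for r in rows
    ]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoices (id INTEGER, amount_cents INTEGER, paid INTEGER)")
conn.executemany("INSERT INTO invoices VALUES (?, ?, ?)", [(1, 2500, 1), (2, 990, 0)])
print(extract_transform(conn))
# [{'invoice_id': 1, 'amount': 25.0, 'status': 'paid'},
#  {'invoice_id': 2, 'amount': 9.9, 'status': 'open'}]
```

A production pipeline would wrap this in a workflow tool (Airflow, Argo) with retries, incremental loads, and data-quality checks, but the extract/normalize/load shape is the same.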

Required Skills

  • Experience scaling, securing, snapshotting, optimizing schemas and performance tuning relational and document data stores
  • Experience building ETL pipelines using workflow management tools like Argo, Airflow or Kubeflow on Kubernetes
  • Experience implementing data layer APIs using ORMs such as SQLAlchemy and schema change management using tools like Alembic
  • Fluency in Python and experience containerizing code for deployment
  • Experience following security best practices like encryption at rest and in flight, data governance and cataloging
  • Understanding of the importance of picking the right data store for the job (columnar, logging, OLAP, OLTP, etc.)

Nice to Have Skills

  • Exposure to machine learning
  • Experience with on-prem deployments
Share this job:

This Year

Cloud Software Developer
.net-core cs nosql docker azure cloud Jan 24

We are building a brand-new Development Team.  You will be working within this team to build out high-performing APIs, robust microservices, cloud-native databases, backend algorithms and infrastructure in support of the company's vision for a supremely scalable, extensible and high-performing cloud-native solution.

What You’ll Need

  • Good analytical and problem-solving skills.
  • A positive and proactive attitude with strong initiative, team-working skills and the ability to learn quickly.
  • Good communication skills, with the ability to communicate in English in all forms.
  • An understanding of the principles behind great software design, allowing you to write code that’s clean, fast and scalable.
  • A good degree in Computer Science, Engineering or other numerate or semi-numerate discipline.
  • Extensive commercial experience of building and working with cloud-native or hybrid cloud solutions under either Azure, AWS or Google Cloud.
  • Strong hands-on experience with Microsoft .NET Core, using C#.
  • Experience of building solutions incorporating NoSQL Databases such as Redis, MongoDB, AWS DynamoDB or Azure Cosmos DB.
  • Well-practiced with Agile Development Methodology, working in short sprint cycles.
  • RESTful API development.
  • Git Source Control, in particular with GitHub or Azure DevOps Services.
  • Unit Testing Frameworks, such as MSTest or NUnit.
  • Experience of building cloud-native solutions with Microsoft Azure; particularly use of Azure Functions, Machine Learning, Table & Blob Storage, App Service, API Gateway, Azure Service Bus and Azure Kubernetes Service.
  • Working familiarity with microservices-based architectures and implementing design patterns such as CQRS.
  • Infrastructure as Code (Terraform).
  • Containerization Technology (Docker, Kubernetes, Nginx).
  • Working knowledge of CI/CD using TeamCity, Azure DevOps Services or similar tooling.

  • Web development frameworks including React, Node.js and Express.
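Among the patterns mentioned in the list above, CQRS (Command Query Responsibility Segregation) separates the write path from the read path. A minimal, hedged sketch (in Python for brevity; real systems typically project events into a separate read store asynchronously):

```python
# Commands mutate the write side and publish events;
# the read side builds its own view purely from those events.
class StockCommands:
    def __init__(self, publish):
        self._stock, self._publish = {}, publish

    def add_stock(self, sku, qty):
        self._stock[sku] = self._stock.get(sku, 0) + qty
        self._publish({"sku": sku, "level": self._stock[sku]})

class StockQueries:
    def __init__(self):
        self._view = {}

    def apply(self, event):
        # Projection: fold each event into the read-optimized view.
        self._view[event["sku"]] = event["level"]

    def stock_level(self, sku):
        # Query side never mutates domain state.
        return self._view.get(sku, 0)

queries = StockQueries()
commands = StockCommands(publish=queries.apply)
commands.add_stock("widget", 5)
commands.add_stock("widget", 3)
print(queries.stock_level("widget"))  # 8
```

The payoff is that each side can be scaled, stored, and optimized independently, at the cost of eventual consistency between them.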

In Return You’ll Receive

  • A greenfield opportunity to build a brand new, highly sophisticated cloud-native platform.
  • An opportunity to work with some of the most modern and leading-edge cloud-based technologies available; working closely with top experts in the industry.
  • Great start-up culture in a fun, friendly and hardworking team.
  • Flexible remote working
  • Competitive salary
  • Share options package - a rare opportunity to get in early and have a stake in what could potentially be a unicorn start-up, with a huge financial payback
  • Private healthcare insurance
  • 25 days of holiday + national holidays.
Share this job:
EDA Solutions Architect
Rescale  
machine learning cloud testing Jan 23
Rescale is the leader in enterprise big compute and is one of the fastest growing tech companies in Silicon Valley. Our customers range from disruptive and innovative startups to leading global automotive manufacturers. Our dynamic team is welcoming, collaborative and diverse. Becoming a part of the Rescale team means that you are part of the next generation in big compute and cloud HPC. You will become part of the disruption which is turning traditional HPC on its head.
 
We are looking to add a Solutions Architect with a background in EDA (Electronic Design Automation) to our team in North America! As a Solutions Architect, you are responsible for leading and owning technical engagements. You work closely with prospects, customers and internal teams in a consultative technical role to help customers accelerate their HPC workloads in the cloud with the ScaleX platform. You play a critical role in the success of Rescale and enjoy the opportunity for personal and career growth.
 
Responsibilities:
 
●      Lead in coordinating and executing all technical activities throughout the customer pre-sales engagement with ISVs/partners in the semiconductor space (Cadence, Synopsys, Mentor, TSMC, Global Foundries), such as customer meetings, presentations, demonstrations, and proof of concepts. Our most successful SAs lead and complete 2-3 POCs for strategic customers a quarter in under 45 days.
●      Work independently to analyze technical needs, requirements, the customer’s current infrastructure, operations, and workflows. Our SA’s point of contact is usually a literal rocket scientist, CFD or FEA engineer, or aerospace engineer, where you will learn their technical workflow and help migrate their workflows to the cloud and show them how Rescale can make them run faster.
●      Gain a deep knowledge of customer’s workflows and HPC environments to provide a tailored solution with unique value prop. Our customers sit in the most innovative technologies in the world, from genome sequencing to aeronautical design to crash testing and more.
●      Work with customers to define and execute customers’ digital transformation strategy to migrate workloads from on-prem to the cloud. Our SAs create strategic visions that span from 1 - 5 years.
●      Articulate and present Rescale solutions at conferences, user groups, webinars, etc. Our SAs attend and present at 5-6 regional and national conferences throughout the year.
 
Key Qualifications:
 
●      B.S. in engineering, computer science, math, physics or equivalent, M.S. preferred.
●      Obsessed with providing the best experience and solutions to the customers.
●      Expertise in EDA workflows for chip design such as functional verification, physical design, physical verification, static timing analysis etc.
●      Experience working with enterprise customers in EDA and semiconductor vertical.
●      Enjoy solving difficult problems and strive to find the best solutions.
●      At least 2 years of software, hardware, or cloud experience in a technical role.
●      Great presenter, able to present highly technical topics in an easy-to-understand manner.
●      Travel required.
 
Preferred Qualifications:
 
●      3 years of enterprise cloud, hardware, or software technical experience.
●      Understanding of the traditional enterprise sales process.
●      General knowledge in at least one of the high performance computing (HPC) disciplines (such as CFD, FEA, Molecular Dynamics, Weather Forecasting, Computational Chemistry, Reservoir (Seismic) Simulation, Media Rendering, Machine Learning, Financial etc).
●      Experience with enterprise customers in one or more of the industry verticals we serve including aerospace, automotive, life sciences, oil & gas, semiconductor, EDA, federal sector.
●      Experience with at least one of HPC simulation software (such as packages from ANSYS, Siemens, Dassault Systèmes, COMSOL, AVL, Altair, PTC, Cadence, Synopsys, Autodesk, OpenFOAM, LAMMPS. GROMACS, NAMD, etc).
●      Ability to manage multiple projects which are complex in nature and coordinate colleagues across diverse teams and locations.
●      Demonstrate understanding of HPC, scheduler, IaaS, scripting languages and how these tools are used and deployed by customers.
●      Flexibility and dedication to delivering value for customers.
Rescale is an Affirmative Action, Equal Opportunity Employer.  As part of our standard hiring process for new employees, employment with Rescale will be contingent upon successful completion of a comprehensive background check.   
Share this job:
Machine Learning Engineer/ Data Scientist
Acast  
machine learning python Jan 21
Acast is the world-leading technology platform for on-demand audio and podcasting, with offices in Stockholm, London, New York, Los Angeles, Sydney, Paris, Oslo and Berlin. We have over 150M monthly listens today, and are growing rapidly. At our core is a love of audio and the fascinating stories our podcasters tell.

We are a flat organization that supports a culture of autonomy and respect, and find those with an entrepreneurial spirit and curious mindset thrive at Acast. 

The Role
We are looking for a Senior Machine Learning Engineer / Data Scientist to join a new purpose-driven team that will create data-driven products, helping other teams provide smarter solutions for our end customers as well as serving internal business-critical use-cases. This team’s ambition is to transform our data into insights. You will contribute to designing, building, evaluating and refining ML products. The products you build will be used by our mobile apps, by the product suite we have for podcast creators and advertisers, and by other departments within Acast.
 
In this role you will work with other data engineers and product owners within a cross-functional agile team.

You

  • have a minimum of two years of relevant experience
  • are comfortable writing Python (R or Scala)
  • are familiar with open-source ML models
  • have experience performing analysis with large datasets
  • are curious and can adapt quickly and enjoy a dynamic and ever-changing environment
  • are a good communicator and you can explain complex solutions to your peers as well as non-technical people

Benefits

  • Monthly wellness allowance
  • 30 days holiday
  • Flexible working
  • Pension scheme
  • Private medical insurance
Our product and tech team is mostly located in central Stockholm, and this role is based in Stockholm, but with a remote first culture you are able to work remotely.

Do you want to be part of our ongoing journey? Apply now!

Share this job:
Federal Solutions Architect
Rescale  
machine learning cloud testing Jan 20
Rescale is the leader in enterprise big compute and is one of the fastest growing tech companies in Silicon Valley. Our customers range from disruptive and innovative startups to leading global automotive manufacturers. Our dynamic team is welcoming, collaborative and diverse. Becoming a part of the Rescale team means that you are part of the next generation in big compute and cloud HPC. You will become part of the disruption which is turning traditional HPC on its head.
 
We are looking to add a Solutions Architect to our team in North America! As a Solutions Architect, you are responsible for leading and owning technical engagements. You work closely with prospects, customers and internal teams in a consultative technical role to help customers accelerate their HPC workloads in the cloud with the ScaleX platform. You play a critical role in the success of Rescale and enjoy the opportunity for personal and career growth.
 
Responsibilities:
 
●      Lead in coordinating and executing all technical activities throughout the customer pre-sales engagement, such as customer meetings, presentations, demonstrations, and proofs of concept. Our most successful SAs lead and complete 2-3 POCs for strategic customers per quarter, each in under 45 days.
●      Work independently to analyze technical needs, requirements, and the customer’s current infrastructure, operations, and workflows. Our SAs' point of contact is usually a literal rocket scientist, CFD or FEA engineer, or aerospace engineer; you will learn their technical workflow, help migrate it to the cloud, and show them how Rescale can make it run faster.
●      Gain a deep knowledge of customers’ workflows and HPC environments to provide a tailored solution with a unique value proposition. Our customers work on some of the most innovative technologies in the world, from genome sequencing to aeronautical design to crash testing and more.
●      Work with customers to define and execute their digital transformation strategy to migrate workloads from on-prem to the cloud. Our SAs create strategic visions that span 1-5 years.
●      Articulate and present Rescale solutions at conferences, user groups, webinars, etc. Our SAs attend and present at 5-6 regional and national conferences throughout the year.
 

Key Qualifications:
 
●      B.S. in engineering, computer science, math, physics or equivalent, M.S. preferred.
●      Track record of technical engagements with the Department of Defense, all branches of the US Armed Forces, or Federal Systems Integrators (FSIs) such as SAIC, Leidos, GD.
●      Obsessed with providing the best experience and solutions to the customers.
●      Enjoy solving difficult problems and strive to find the best solutions.
●      At least 2 years of software, hardware, or cloud experience in a technical role.
●      Great presenter, able to present highly technical topics in an easy-to-understand manner.
●      Must be a US Citizen.
●      Travel required.
 
Preferred Qualifications:
 
●      3 years of cloud, hardware, or software technical experience, with preference given to experience using, architecting, or deploying any of these for public sector customers.
●      Understanding of the public sector and traditional enterprise sales processes.
●      General knowledge in at least one of the following high performance computing (HPC) disciplines (such as CFD, FEA, Molecular Dynamics, Weather Forecasting, Computational Chemistry, Media Rendering, Machine Learning/AI, etc.).
●      Experience with one or more HPC simulation packages (such as those from ANSYS, Siemens, LAMMPS, WRF, FV3, NWChem, Helios, etc.).
●      Ability to manage multiple projects which are complex in nature and coordinate colleagues across diverse teams and locations.
●      Demonstrated understanding of HPC, schedulers, IaaS, and scripting languages, and how these tools are used and deployed by customers.
Rescale is an Affirmative Action, Equal Opportunity Employer. As part of our standard hiring process for new employees, employment with Rescale will be contingent upon successful completion of a comprehensive background check.
Share this job:
Software Engineer & Architect
cs dot net sql nosql serverless machine learning Jan 20

Looking for a New Challenge?

Want to work for a Growing Business?

Do you have the Experience and Knowledge we are looking for?

Your Role and Responsibilities:

  • Manage deadlines set by the Chief Technology Officer.
  • Monitor systems and alerts to identify significant trends and issues.
  • Bring new ideas and best practice to the design, architecture and software development of the service.
  • Inspire, mentor and encourage colleagues to intelligently apply customised industry best practice.
  • Review designs and software written by other developers for quality, best practice and architectural integrity.
  • Play a large part in the Continuous Integration environment.
  • Plan and build architecture to cater for growth.
  • Help develop the team to refactor legacy code into more modern patterns.
  • Participate in Scrum meetings and lab calls.
  • Write clear and concise system architecture documentation and summary diagrams.
  • Develop, communicate and maintain a road map of architectural developments.
  • Monitor and review product and technology developments in the industry and present new ideas to BigChange.
  • Liaise with other departments, such as Product Development, Testing and Customer-Specific software development teams, to add value to their roles.
  • Comply with company policies and processes, particularly for quality, data protection, information security and secure systems engineering (supporting the introduction of ISO 27001).
  • Provide top-class service to our customers.
  • Provide training support to other members of the company and be a brand ambassador for BigChange.

Competencies:

  • 10+ years' experience in a similar environment (Essential)
  • Experience as a software developer (Essential)
  • Knowledge and experience of C# & .Net (Essential)
  • Knowledge and experience of SQL and NoSql technology (Essential)
  • Knowledge and experience in Machine Learning (Desirable)
  • Knowledge and experience in Serverless (Essential)
  • Experience in SaaS development (Essential)

Your Skills and Interests:

  • Technically strong, with recent hands-on experience in one of our core areas of technical delivery
  • An ability to advise senior stakeholders, work comfortably without definition and apply a progressive technical approach to any problem
  • You’ll show a good understanding of how to put software together
  • You’ll be an inquisitive technologist and naturally encourage others to be the same
  • You’ll convey a sense of credibility and trust within the team and BigChange
  • You will have exceptional people skills and the ability to handle objections where required
  • Ability to work individually as well as in a team, with an adaptable and flexible approach to work
  • You will have hands-on experience in system software development
  • Ability to communicate at all levels, both written and verbal
  • You will have an organised and proactive approach to work

Your Rewards:

  • Up to £80,000 p/a, depending on experience
  • Expenses paid for all overnight stays, subsistence and mileage whilst on business.
  • Over 25 days holidays, plus bank holidays, plus ‘BigChange Birthday’.
  • Pension plan (NEST 3% employer, 3% employee)
  • Gym membership assistance £20 gross pay monthly (for 12 months).
  • Annual eye test reimbursement
  • Free massage in the office
  • “Motivational Mondays” – inspiring talks monthly from extraordinary people.
  • Local fruit delivered weekly to the office.
  • Being part of a supportive team with the ability to learn new skills and grow within the company.
  • Experience cutting edge technology and be part of a company that is shaping the future.

Location of work: Office or home-based

Reporting to: Chief Technology Officer

Share this job:
Senior Data Scientist
python aws tensorflow pytorch scikit-learn senior Jan 17

XOi Technologies is changing the way field service companies capture data, create efficiencies, collaborate with their technicians, and drive additional revenue through the use of the XOi Vision platform. Our cloud-based mobile application is powered by a robust set of machine learning capabilities to drive behaviors and create a seamless experience for our users.

We are a group of talented and passionate engineers and data scientists working together to discover and provide valuable insights for our customers. We leverage state-of-the-art machine learning techniques to provide our users with these unique insights, best practices, and solutions to the challenges they face in their workplace. Problems and solutions typically center around aspects of the Vision platform such as image recognition, natural language processing, and content recommendation.

As a Senior Data Scientist, you will build machine learning products to help automate workflows and provide valuable assistance to our customers. You’ll have access to the right tools for the job, large amounts of quality data, and support from leadership that understands the full data science lifecycle. You’ll build models using technologies such as Python, Tensorflow, and Docker.

Responsibilities:

  • Interpret and understand business needs/market opportunities, and translate those into production analytics.
  • Select appropriate technologies and algorithms for given use cases.
  • Work directly with product managers and engineering teams to tightly integrate new analytic capabilities.
  • Prepare reports, visualizations, and other documentation on the status, operation and maintenance of the analytics you create.
  • Stay current on relevant machine learning and data science practices, and apply those to existing problem sets.

Requirements: 

  • Excellent understanding of machine learning algorithms, processes, tools, and platforms including: CNN, RNN, NLP, Tensorflow, PyTorch, etc.
  • Proficient with the following (or comparable): Linux, Python, scikit-learn, NumPy, pandas, spaCy.
  • Applied experience with machine learning on large datasets/sparse data with structured and unstructured data.
  • Experience with deep learning techniques and their optimizations for efficient implementation.
  • Great communication skills and the ability to explain predictive analytics to non-technical audiences.
  • Bachelor’s in Math, Engineering, or Computer Science (or technical degree with commensurate industry experience).
  • 3+ years of relevant work experience in data science/machine learning.

Nice to Have:

  • AWS services such as Lambda, AppSync, S3, and DynamoDB
  • DevOps experience with continuous integration/continuous deployment.
  • Experience in software engineering best practices, principles, and code design concepts.
  • Speech-to-text or OCR expertise.

You Are Someone Who:  

  • Has a passion for code quality and craftsmanship.
  • Views your profession as your craft and continuously pursues excellence in your work.
  • Thrives in a fast-paced, high-growth startup environment.
  • Collaborates effectively across various teams, coordinating regularly to set and manage expectations.

You’ll experience:  

  • Being a key part of a fast-growing software company where you can make a difference.
  • Comprehensive insurance plans.
  • Monthly wellness allowance.
  • Flexible paid time off & paid volunteer time.
  • Learning & development.
  • Working in the historic and centrally located Marathon Village in Nashville, TN.
  • Participating in team outings, events, and general fun! 
  • Helping to change an industry by serving the men and women that make our world turn.
Share this job:
Senior Back End DevOps Engineer
aws security kubernetes shell python devops Jan 16

As more companies adopt public cloud infrastructure and cyber attacks grow in sophistication and harm, the ability to safeguard companies from these threats has never been more urgent.

Lacework’s novel approach to security fundamentally converts cyber security into a big data problem. They are a startup based in Silicon Valley that applies large-scale data mining and machine learning to public cloud security. Within a cloud environment (AWS, GCP, Azure), their technology captures all communication between processes, users, and external machines, and uses advanced data analytics and machine learning techniques to detect anomalies that indicate potential security threats and vulnerabilities. The company is led by an experienced team who have built large-scale systems at Google, Paraccel (Amazon Redshift), Pure Storage, Oracle, and Juniper Networks. Lacework is well funded by a tier-one VC firm and is based in San Jose, CA.

They are looking for a Senior DevOps engineer with strong AWS and Kubernetes experience who is excited about building an industry leading, next generation Cloud Security System.

You will be a part of the team that architects, designs, and implements highly scalable distributed systems that provide availability, scalability and performance guarantees. This is a unique and rare opportunity to get in on the ground floor and help shape their technologies, products and business.

Roles/Responsibilities

  • Assist in managing Technical Operations, Site Reliability, production operations and engineering environments 
  • Run production operations for their SaaS product
    • Manage the monitoring system
    • Debug live production issues
    • Manage software release roll-outs
  • Use your engineering skills to promote platform scalability, reliability, manageability  and cost efficiency
  • Work with the engineering and QA teams to provide your valuable feedback about how to improve the product
  • Participate in on-call rotations (but there is really not a lot of work since you will automate everything!)

Requirements:

  • 4+ years of relevant experience (Technical Operation, SRE, System Administration)
  • AWS experience 
  • Strong scripting skills in Shell and/or Python
  • Eager to learn new technologies
  • Ability to define and follow procedures
  • Great communication skills
  • Computer Science degree 
Share this job:
Senior Data Scientist / Backend Engineer
komoot  
aws data-science machine-learning kotlin python backend Jan 16

Millions of people experience real-life adventures with our apps. We help people all over the world discover the best hiking and biking routes, empowering our users to explore more of the great outdoors. And we’re good at it: Google and Apple have listed us as one of their Apps of the Year numerous times, and with more than 8.5 million users and 50,000 five-star reviews, komoot is on its way to becoming one of the most popular cycling and hiking apps. Join our fully remote team of 60+ people and change the way people explore!


To help us continue to grow, we are looking for an experienced data scientist dedicated to coding and building production-ready services. With over 8 million active users, komoot possesses a unique dataset of user-generated content, ranging from GPS data from tours, uploaded photos, and tips, to implicit and explicit user feedback. Using this data as well as various open data sources, you will drive product enhancements forward that will directly impact the user experience.

We believe that innovations based on data science will reinforce and extend our leadership in the outdoor market and your role will be decisive for komoot’s success.

What you will do

  • Work closely with our web and mobile developers, designers, copywriters and product managers
  • Discuss product improvements, technical possibilities and road maps
  • Investigate and evaluate data science approaches for product enhancements
  • Write code that is well structured, well tested and documented
  • Enhance existing components and APIs as well as write new services from scratch
  • Deploy and monitor your code in our AWS Cloud (you can count on the support of experienced backend engineers)

Why you will love it

  • You will be challenged in a wide range of data science tasks
  • You deal with a diverse set of data (user-generated content, analytics data and external data sources)
  • You go beyond prototyping and ship your code to production
  • You contribute to a product with a vision to inspire more people to go outdoors
  • You’ll work in a fast-paced startup with strongly motivated and talented co-workers
  • You’ll enjoy the freedom to organize yourself the way you want
  • We let you work from wherever you want, be it a beach, the mountains, your house, a co-working space of your choice, our HQ in Berlin/Potsdam or anywhere else that lies in a time zone between UTC-1 and UTC+3
  • You’ll travel together with our team to amazing outdoor places several times a year to exchange ideas, learnings and go for hikes and rides

You will be successful in this position if you

  • Have a passion for finding pragmatic and smart solutions to complex problems
  • Have 3+ years of industry experience in data science
  • Have 2+ years of experience in professional programming, preferably in Python or Java
  • Have experience with technologies like Pandas, NumPy, Jupyter Notebooks, Seaborn, Scikit-Learn, PyTorch and TensorFlow
  • Know your toolkit: git, ssh, bash and docker
  • Have experience in AWS, infrastructure as code and monitoring (a plus)
  • Have strong communication and team skills
  • Have a hands-on attitude and are highly self-driven

Sounds like you?

Then send us the following:

  • Your CV in English highlighting your most relevant experience
  • A write-up explaining who you are and why you are interested in working at komoot
  • Examples of your work (e.g. GitHub Repositories, PDFs, Slideshare, etc.)
  • Feel free to send us something that shows us a little more about what you’re interested in, be it your Twitter/Instagram account, a blog or something else
Share this job:
Software Engineer
python-3.x flask microservices data science machine learning saas Jan 14

Carbon Relay is a world-class team of software engineers, data scientists and devops experts focused on harnessing the power of machine learning to help organizations achieve the most with their Kubernetes-based applications. With our innovative optimization platform, we help boost application performance while keeping costs down.

We’re looking for a Software Engineer to work on the next generation of K8s optimization products that bridge the gap between data science, engineering and DevOps. You’ll be working closely with our engineering and data science teams, helping bring products from R&D into production and making our products scale efficiently. 

Responsibilities

  • Design and implement features as part of SaaS-based microservices platform
  • Contribute to and enhance internal APIs and infrastructure
  • Work alongside our data science team to integrate machine learning into our products

Required qualifications

  • 1-3 years of software engineering experience
  • Experience with Python
  • Experience shipping and maintaining software products
  • Experience working with Git and GitHub

Preferred qualifications

  • Familiarity with Kubernetes and Containerization 
  • Experience with GCP/GKE
  • Experience developing SaaS applications / microservice architectures

Why join Carbon Relay:

  • Competitive salary
  • Health, dental, vision and life insurance
  • Unlimited vacation policy (and we do really take vacations)
  • Snacks, lunches and all the typical benefits you would expect from a well-funded, fun startup!

Share this job:
Senior Data Scientist
r machine-learning python apache-spark cluster-analysis senior Jan 08

In the Senior Data Scientist role, you will have full ownership over the projects you tackle, contribute to solving a wide range of machine learning applications, and find opportunities where data can improve our platform and company. We are looking for an experienced and creative self-starter who executes well and can exhibit exceptional technical know-how and strong business sense to join our team. 


WHAT YOU'LL DO:

  • Mine and analyze data from company data stores to drive optimization and improvement of product development, marketing techniques and business strategies
  • Assess the effectiveness and accuracy of data sources and data gathering techniques
  • Develop and implement data cleansing and processing to evaluate and optimize data quality
  • Develop custom data models and algorithms to apply to data sets
  • Run complex SQL queries and existing automations to correlate disparate data to identify questions and pull critical information
  • Apply statistical analysis and machine learning to uncover new insights and predictive models for our clients
  • Develop company A/B testing framework and test model quality
  • Collaborate with data engineering and ETL teams to deploy models / algorithms in production environment for operations use
  • Develop processes and tools to monitor and analyze model performance and data accuracy
  • Perform ad-hoc analyses and present results in a clear manner
  • Create visualizations and data-driven stories
  • Communicate statistical analyses and machine learning models to executives and clients
  • Create and manage APIs

WHO YOU ARE:

  • 3-5+ years of relevant work experience
  • Extensive knowledge of Python and R
  • Clear understanding of various analytical functions (median, rank, etc.) and how to use them on data sets
  • Expertise in mathematics, statistics, correlation, data mining and predictive analysis
  • Experience with deep statistical insights and machine learning (Bayesian methods, clustering, etc.)
  • Familiarity with AWS Cloud Computing including: EC2, S3, EMR.
  • Familiarity with Geospatial Analysis/GIS
  • Other experience with programming languages such as Java, Scala and/or C#
  • Proficiency using query languages such as SQL, Hive, and Presto
  • Familiarity with BDE (Spark/pyspark, MapReduce, or Hadoop)
  • Familiarity with software development tools and platforms (Git, Linux, etc.)
  • Proven ability to drive business results with data-based insights
  • Self-initiative and an entrepreneurial mindset
  • Strong communication skills
  • Passion for data

WHAT WE OFFER:

  • Competitive Salary
  • Medical, Dental and Vision
  • 15 Days of PTO (Paid Time Off)
  • Lunch provided 2x a week 
  • Snacks, snacks, snacks!
  • Casual dress code
Share this job:
Data Science Course Mentor
python sql hadoop data science machine learning Jan 08

Apply here


Data Science Course Mentor

  • Mentorship
  • Remote
  • Part time


Who We Are
At Thinkful, we believe that if schools put in even half the amount of effort that students do the outcomes would be better for everyone. People would have a path to a fulfilling future, instead of being buried under debt. Employers would benefit from a workforce trained for today. And education could finally offer students a return on their investment of both money and time. 

We put in outlandish amounts of effort to create an education that offers our students a guaranteed return on their investment. We partner with employers to create a world-class curriculum built for today. We go to the ends of the earth to find mentors who are the best of the best. We invest more in career services than any of our peers. We work hard to be on the ground in the cities our students are in. Simply put, no other school works as hard for its students as we do.

The Position
Students enroll in Thinkful courses to gain the valuable technical and professional skills needed to take them from curious learners to employed technologists. As a Course Mentor, you will support students by acting as an advisor, counselor, and support system as they complete the course and land their first industry job. To achieve this, you will engage with students using the below range of approaches, known as Engagement Formats. Course Mentors are expected to provide support across all formats when needed. 

  • Mentor Sessions: Meet with students 1-on-1 in online video sessions to provide technical and professional support as the student progresses through the curriculum.
  • Group Sessions: Host online video sessions on topics of your expertise (in alignment with curriculum offerings) for groups of students seeking live support between mentor sessions.
  • Grading: Review student checkpoint submissions and deliver written feedback, including analysis of projects and portfolios.
  • Technical Coaching: Provide on-demand support for technical questions and guidance requests that come to the Technical Coaching team through text and video, in a timely manner. This team also provides TA support for immersive programs.
  • Assessments & Mock Interviews: Conduct 1-on-1 mock interviews and assessments via video calls and provide written feedback to students based on assessment rubrics.

In addition to working directly with students, Course Mentors are expected to maintain an environment of feedback with the Educator Experience team, and to stay on top of important updates via meetings, email, and Slack. Ideal candidates for this team are highly coachable, display genuine student advocacy, and are comfortable working in a complex, rapidly changing environment.

Requirements
  • Minimum of 3 years professional experience as a Data Scientist or demonstrated expertise with data visualizations and machine learning at an industry level
  • Proficiency in SQL, Python
  • Professional experience with Hadoop and Spark a plus
  • Excellent written and verbal communication
  • High level of empathy and people management skills
  • Must have a reliable, high-speed Internet connection

Benefits
  • This is a part-time role (10-25 hours a week)
  • Fully remote position, with the option to work evenings and weekends in person in 22 US cities
  • Community of 500+ like-minded Educators looking to impact others and keep their skills sharp
  • Full access to all of Thinkful Courses for your continued learning
  • Grow as an Educator

Apply
If you are interested in this position please provide your resume and a cover letter explaining your interest in the role.

Thinkful can only hire candidates who are eligible to work in the United States.

We stand against any form of workplace harassment based on race, color, religion, sexual orientation, gender identity or expression, national origin, age, disability, or veteran status. Thinkful provides equal employment opportunities to all employees and applicants. If you're talented and driven, please apply.

At this time, we are unable to consider applicants from the following states: Alaska, Delaware, Idaho, New Mexico, North Dakota, South Carolina, South Dakota, West Virginia, and Wyoming

Apply here
Share this job:
Senior Fullstack Software Engineer
senior javascript data science machine learning frontend testing Jan 06
About Labelbox

Labelbox is building software infrastructure for industrial data science teams to do data labeling for the training of neural networks. When we build software, we take for granted the existence of collaborative tools to write and debug code. The machine learning workflow has no standard tooling for labeling data, storing it, debugging models and then continually improving model accuracy. Enter Labelbox. Labelbox's vision is to become the default software for data scientists to manage data and train neural networks in the same way that GitHub or text editors are defaults for software engineers.

Current Labelbox customers include American Family Insurance, Lytx, Airbus, Genius Sports, KeepTruckin and more. Labelbox is venture-backed by Google, Kleiner Perkins and First Round Capital and has been featured in TechCrunch, Web Summit and Forbes.

Responsibilities

  • Strong understanding of Javascript with an interest in using Typescript
  • Experience managing/scaling SQL databases, orchestrating migrations, and disaster recovery
  • Experience working with Redux and architecting large single page applications
  • Experience and interest in frontend testing
  • Optimizing data models and database configurations for both ease-of-use and performant response times
  • Building new features and resolvers in our GraphQL API with Node.JS

Follow-on Responsibilities

  • Experience with SQL databases
  • Experience optimizing web traffic
  • Experience with RabbitMQ (or other message broker) and Redis
  • Experience constructing and monitoring ETL pipelines
  • Experience with Logstash / Elasticsearch
  • Familiarity with Kubernetes and Docker

Requirements

  • 4+ years of experience building data-rich frontend web applications
  • A bachelor’s degree (or equivalent) in computer science or a related field.
We believe that AI has the power to transform every aspect of our lives, from healthcare to agriculture. The exponential impact of artificial intelligence will mean mammograms can happen quickly and cheaply irrespective of the limited number of radiologists in the world, and growers will know the instant that disease hits their farm without even being there.

At Labelbox, we’re building a platform to accelerate the development of this future. Rather than requiring companies to create their own expensive and incomplete homegrown tools, we’ve created a training data platform that acts as a central hub for humans to interface with AI. When humans have better ways to input and manage data, machines have better ways to learn.

Perks & Benefits:
Medical, Dental & Vision coverage
Flexible vacation policy
Dog friendly office
Daily catered lunch & snacks
Great office location in the Mission district, beautiful office & private outdoor patio with grill
Share this job:
Back-End Software Engineer
Nor1  
mongodb python docker mysql security backend Jan 03

We are looking for a back-end Software Engineer to help us build the next generation of our upsell decisions platform. You will join the Nor1 Tech team, who are a collaborative group of engineers, product managers, and data scientists. Rather quickly, we will look to your technical expertise to create reliable, scalable, and high-performance components. 

Primary Responsibilities

  • Own the design, implementation, testing, and maintenance of our backend components:  applications, data, infrastructure, analytics, and deployment. 
  • Establish architectural principles, select design patterns, and lead engineers on their applications. 
  • Work with the team to investigate design approaches, prototype new technology, and evaluate technical feasibility. 
  • Stay current with best practices and emerging technologies to incorporate into our operations and stack. 

Skills & Qualifications

  • 5+ years of experience building high-performance, highly available and scalable distributed systems.
  • BS or MS in Computer Science or a related technical field preferred.
  • High proficiency in Python.
  • Proficiency with Docker and containerized microservices in major clouds (AWS, GCS, Azure…).
  • PHP or NodeJS is a plus.
  • Experience with MySQL and MongoDB data stores.
  • Know-how with secure coding practices, e.g. OWASP guidelines, is preferred.
  • Experience developing and deploying applications on AWS; integration with AWS managed services is a plus.
  • A DevOps mentality: reduce friction with automation.
  • Working understanding of CI/CD and configuration management.
  • Exposure to machine learning pipelines and online inference.
  • A willingness to dive deep, experiment rapidly, and get things done.

Nor1 Technology Stack

  • MongoDB, MySQL, Redshift, Redis
  • NginX, Route53, Apache, ELB
  • Mix of AWS cloud services and IBM Cloud (bare metal servers)
    • Centos, Amazon Linux, Windows Server
  • Python (main), PHP, Javascript, NodeJS
  • Docker, Swarm, K8s
  • OpsGenie, Jira, Confluence, Nagios, Pingdom, ELK stack, Docker, Detectify, Tenable.io
Share this job:
Senior Machine Learning - Series A Funded Startup
machine-learning scala python tensorflow apache-spark machine learning Dec 26 2019
About you:
  • Care deeply about democratizing access to data.  
  • Passionate about big data and excited by seemingly-impossible challenges.
  • At least 80% of people who have worked with you would put you in the top 10% of the people they have worked with.
  • You think life is too short to work with B-players.
  • You are entrepreneurial and want to work in a super fast-paced environment where the solutions aren’t already predefined.
About SafeGraph: 

  • SafeGraph is a B2B data company that sells to data scientists and machine learning engineers. 
  • SafeGraph's goal is to be the place for all information about physical Places.
  • SafeGraph currently has 20+ people and has raised a $20 million Series A.  CEO previously was founder and CEO of LiveRamp (NYSE:RAMP).
  • Company is growing fast, over $10M ARR, and is currently profitable. 
  • Company is based in San Francisco but about 50% of the team is remote (all in the U.S.). We get the entire company together in the same place every month.

About the role:
  • Core software engineer.
  • Reporting to SafeGraph's CTO.
  • Work as an individual contributor.  
  • Opportunities for future leadership.

Requirements:
  • You have at least 6 years of relevant work experience.
  • Deep understanding of machine learning models, data analysis, and both supervised and unsupervised learning methods. 
  • Proficiency writing production-quality code, preferably in Scala, Java, or Python.
  • Experience working with huge data sets. 
  • You are authorized to work in the U.S.
  • Excellent communication skills.
  • You are amazingly entrepreneurial.
  • You want to help build a massive company. 
Nice to haves:
  • Experience using Apache Spark to solve production-scale problems.
  • Experience with AWS.
  • Experience with building ML models from the ground up.
  • Python, Database and Systems Design, Scala, TensorFlow, Apache Spark, Hadoop MapReduce.
Data Engineer: AI/ML
pytorch python machine-learning fast-ai pipeline ruby Dec 26 2019

Roadtrippers Place Lab powers the geo-data for Roadtrippers consumer web and mobile applications and the underlying B2B services.  Roadtrippers Place Lab is looking for a detail-oriented problem solver to join the team as a Data Engineer focusing on all things geo-data. This engineer will share the responsibility of data quality and fidelity with our engineering, data science, and data quality teams by developing better ways to evaluate, audit, augment, and ingest data about places.

Responsibilities

  • Work with the AI/ML research team to develop new models and pipelines that derive insights and improve our data quality
  • Bridge AI/ML research into production by helping build production pipelines and improving the efficiency of the transition from development
  • Own production AI/ML pipelines, including revisions, optimizations, and root-cause analysis of anomalies
  • Assist in planning and implementation of data ingestion, sourcing, and automation projects
  • Communicate with Engineering and Product teams about requirements and opportunities related to new data and schema updates
  • Contribute to application development for data initiatives 
  • Identify, participate in, and implement initiatives for continuous improvement of data ingestion, quality, and processes
  • Manually manipulate data when necessary, while applying what you learn to scale future projects

Qualifications

  • Experience with Data Science/ML/AI
  • Experience working with geospatial data is a huge plus
  • Development experience with Python
  • Knowledge of SQL (ideally Postgres), Elasticsearch and schemaless databases
  • Experience with ETL and implementing Data Pipeline architecture 
  • AWS and SageMaker experience is particularly valuable 
  • Big data experience is ideal 
  • Understanding of web application architecture; Ruby and Ruby on Rails experience is a plus
  • A "do what it takes" attitude and a passion for great user experience
  • Strong communication skills and experience working with highly technical teams
  • Passion for identifying and solving problems
  • Comfort in a fast-paced, highly-dynamic environment with multiple stakeholders
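
The ETL and data-pipeline responsibilities described above can be pictured as a tiny extract-transform-load flow over place records. The sketch below is purely illustrative (the place fields and cleaning rules are invented, not Roadtrippers' actual schema) and runs on the Python standard library alone:

```python
import csv
import io

# Hypothetical raw geo-data feed; the fields are invented for illustration.
RAW = """name,lat,lng
 Grand Canyon ,36.1069,-112.1129
,39.7392,-104.9903
Yellowstone,44.4280,-110.5885
"""

def extract(text):
    # Extract: parse raw CSV rows into dicts.
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    # Transform: trim whitespace, coerce coordinates, drop nameless records.
    cleaned = []
    for row in rows:
        name = row["name"].strip()
        if not name:
            continue  # data-quality audit step: reject records with no place name
        cleaned.append({"name": name, "lat": float(row["lat"]), "lng": float(row["lng"])})
    return cleaned

def load(rows, store):
    # Load: write into the target store (a dict standing in for a database).
    for row in rows:
        store[row["name"]] = (row["lat"], row["lng"])
    return store

store = load(transform(extract(RAW)), {})
print(sorted(store))  # ['Grand Canyon', 'Yellowstone']
```

The nameless row is rejected in the transform step, which is where the data-quality checks the posting mentions would live in a real pipeline.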

We strongly believe in the value of growing a diverse team and encourage people of all backgrounds, genders, ethnicities, abilities, and sexual orientations to apply.

VP of Engineering - Series A Funded Data Startup
scala python machine-learning apache-spark hadoop machine learning Dec 24 2019
About you:
  • High-velocity superstar.
  • You want the challenge of growing and managing remote teams.
  • You love really hard engineering challenges.
  • You love recruiting and managing super sharp people.
  • At least 80% of people who have worked with you put you in the top 10% of the people they have worked with.
  • You think life is too short to work with B-players.
  • You are entrepreneurial and want to work in a super fast-paced environment where the solutions aren’t already predefined.
  • You walk through walls.
  • You want to help build a massive company.
  • You live in the United States or Canada.
About SafeGraph: 

  • SafeGraph is a B2B data company that sells to data scientists and machine learning engineers. 
  • SafeGraph's goal is to be the place for all information about physical Places.
  • SafeGraph currently has 20+ people and has raised a $20 million Series A. The CEO was previously founder and CEO of LiveRamp (NYSE: RAMP).
  • Company is growing fast, over $10M ARR, and is currently profitable. 
  • Company is based in San Francisco, Denver, and New York City but about 50% of the team is remote (all currently in the U.S.). We get the entire company together in the same place every month.


About the role:


  • Core member of the executive team, reporting directly to the CEO.
  • Oversee all engineering and machine learning.

Opportunity to:

  • Be one of the first 40 people in a very fast-growing company.
  • Be one of the core drivers of the company's success.
  • Work with an amazing engineering team.
  • Serve on the executive team.
  • Take on more responsibility as the company grows.
  • Work with only A-players.
Senior Big Data Software Engineer
scala apache-spark python java hadoop big data Dec 23 2019
About you:
  • Care deeply about democratizing access to data.  
  • Passionate about big data and are excited by seemingly-impossible challenges.
  • At least 80% of people who have worked with you put you in the top 10% of the people they have worked with.
  • You think life is too short to work with B-players.
  • You are entrepreneurial and want to work in a super fast-paced environment where the solutions aren’t already predefined.
  • You live in the U.S. or Canada and are comfortable working remotely.
About SafeGraph: 

  • SafeGraph is a B2B data company that sells to data scientists and machine learning engineers. 
  • SafeGraph's goal is to be the place for all information about physical Places.
  • SafeGraph currently has 20+ people and has raised a $20 million Series A. The CEO was previously founder and CEO of LiveRamp (NYSE: RAMP).
  • Company is growing fast, over $10M ARR, and is currently profitable. 
  • Company is based in San Francisco but about 50% of the team is remote (all in the U.S.). We get the entire company together in the same place every month.

About the role:
  • Core software engineer.
  • Reporting to SafeGraph's CTO.
  • Work as an individual contributor.  
  • Opportunities for future leadership.

Requirements:
  • You have at least 6 years of relevant work experience.
  • Proficiency writing production-quality code, preferably in Scala, Java, or Python.
  • Strong familiarity with map/reduce programming models.
  • Deep understanding of all things “database” - schema design, optimization, scalability, etc.
  • You are authorized to work in the U.S.
  • Excellent communication skills.
  • You are amazingly entrepreneurial.
  • You want to help build a massive company. 
Nice to haves:
  • Experience using Apache Spark to solve production-scale problems.
  • Experience with AWS.
  • Experience with building ML models from the ground up.
  • Experience working with huge data sets.
  • Python, Database and Systems Design, Scala, Data Science, Apache Spark, Hadoop MapReduce.
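
As a concrete picture of the map/reduce programming model named in the requirements, here is a minimal pure-Python word-count sketch. The function names and the toy documents are ours, for illustration only; a framework such as Hadoop MapReduce or Spark would distribute the same three phases across a cluster:

```python
from collections import defaultdict
from itertools import chain

def map_phase(document):
    # Map: emit a (word, 1) pair for every word in the document.
    return [(word, 1) for word in document.split()]

def shuffle(pairs):
    # Shuffle: group values by key, as the framework does between map and reduce.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    # Reduce: sum the counts emitted for each word.
    return key, sum(values)

documents = ["big data big plans", "data pipelines"]
mapped = chain.from_iterable(map_phase(doc) for doc in documents)
counts = dict(reduce_phase(k, v) for k, v in shuffle(mapped).items())
print(counts)  # {'big': 2, 'data': 2, 'plans': 1, 'pipelines': 1}
```

Because each map call touches only one document and each reduce call only one key, both phases parallelize trivially over huge data sets, which is the point of the model.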
Senior Dev Ops Engineer
aws docker pulumi kubernetes terraform ops Dec 23 2019

What we are looking for:

The Senior DevOps Engineer is a high-impact role where you will work closely with the software engineering teams to help them deploy critical software to AWS. This position will have significant impact on building out our growing infrastructure, implementing infrastructure-as-code with Pulumi, and leveraging Kubernetes for orchestration of Docker containers.

A passion for data security is key as you will be frequently dealing with patient data that falls under HIPAA regulations. Your desire and ability to collaborate, mentor, and learn is critical to your success at IDx. You must have a demonstrable enthusiasm for good DevOps practices and an irresistible urge to share them with others. You are someone who identifies issues early and brings them to the table along with multiple solutions. You believe that continuous improvement is key to the difference between good and great and can inspire others to follow your example. You love learning and teaching and find satisfaction in multiplying the effectiveness of those around you.

To be successful in this role you must be able to:

  • Apply best practices and help others understand their importance.
  • Effectively document the architecture, design and functionality of implementations.
  • Have awareness of new trends, technologies, and tools and understand when to apply them.
  • Communicate complicated technical concepts to diverse audiences.
  • Make others better through documentation, technical guidance, and mentoring.

Requirements:

  • Strong analytical skills with great verbal and written communication.
  • Experience architecting, designing, implementing, and deploying complex solutions.
  • Experience with both Linux and Windows.
  • Experience with Docker and Kubernetes.
  • At least 5 years’ DevOps experience.
  • 3+ years’ experience building "Infrastructure as Code" with a strong understanding of Amazon Web Services (AWS).
  • Experience making decisions that balance the tradeoffs between technical and business needs.
  • Applicants must not now or in the future need IDx to sponsor an immigration case for employment (for example, H-1B or other employment-based immigration cases).

What will help you in this role:

  • Experience delivering highly secure and efficient solutions within comprehensive compliance regulations.
  • AWS Certified Security Specialty.
  • Experience with infrastructure-as-code tools such as Pulumi or Terraform.
  • Experience working with data in a HIPAA compliant environment.
  • Knowledge of or interest in the medical device software development industry.
  • Experience working in an ISO- or FDA-regulated environment, or another highly regulated environment (FAA, etc.) and working with Quality Management Systems.

IDx is a challenging and rewarding environment that provides amazing opportunities to:

  • Work on unique opportunities that will be hard to find at other companies.
  • Work on the first ever autonomous AI system cleared by the FDA to provide a diagnostic decision.
  • Work with world-renowned doctors who are pushing the limits of machine learning in medicine.
  • Tackle complex problems/projects with the highest levels of quality and execution for audiences that include top technologists, the FDA, and world-leading healthcare providers.
  • Push the accessibility and quality of healthcare to new heights to improve the lives of millions of people.
Full-stack Software Engineer
node-js typescript vue-js rust grpc java Dec 20 2019

Engineering | Zurich, Switzerland | Full-time / contracting

What you tell your family you do:

“I’m building the world’s largest drone ecosystem by designing web tools to help other software developers build better drones. Our software runs on all kinds of drones used for inspection, search and rescue, safety, delivery and more. No, I will not deliver you a pizza with a drone (yet)!“

What you really do:

As a Full-Stack Software Engineer, you are responsible for building developer-facing web tools for the autonomous robot software development cycle. You will build responsive web tools for data visualization which help developers in the ecosystem build better drones.

In detail you will:

  • Build and maintain a software architecture that interacts with large amounts of data on the client-side in a highly responsive UI
  • Work with a dockerized service-oriented application consisting of a Single Page App (Vue.js/TypeScript), a frontend server (Node.js/TypeScript), backend services (Rust/GRPC), and PostgreSQL hosted in AWS
  • Work with data scientists who are building machine learning flight performance analytics in the app (Python/TensorFlow/Kubernetes)
  • Be responsible for holding the highest bar for the entire software life cycle, from strategic planning to tactical activities, to execution and implementing solutions for customers
  • Follow the principles of Test Driven Development and always treat cybersecurity as the first priority
  • Write clear, well documented and easy to maintain code
  • Work with Auterion and community developers in an agile software development workflow and participate in dev calls, forums and meetings
  • Identify and implement new trends in ground control stations, dashboards and apps for drones
  • Train and mentor other members in the team
  • Manage your day-to-day development activity with Git and champion easy-to-read and easy-to-maintain git histories consisting of small, well-commented commits
  • Be an ambassador of the open source community to Auterion customers and other business stakeholders

You bring:

  • Fundamental concepts and algorithms from a Computer Science degree (or a related field) or equivalent years of working experience
  • 4+ years of professional working experience with any statically typed programming language (C, C++, C#, Java, or similar), previous exposure to TypeScript or Rust is advantageous
  • Deep experience architecting and developing complex, greenfield, full-stack web app solutions, ideally in a lean environment
  • Knowledge of state-of-the-art toolkits and libraries to build responsive web UIs, such as Angular, React, or Vue
  • Willingness to learn Rust
  • Familiarity with cybersecurity requirements for client-side execution and data sharing
  • Experience building and debugging complex systems in a team environment
  • Positive attitude, and empathy
  • Self-awareness and a desire to continually improve

How to really knock our socks off:

  • Experience with Continuous Deployment workflows with multiple deployments to production every day
  • Experience with distributed systems
  • Experience with IoT
  • Exposure to TypeScript or Rust
  • Affinity for machine learning

What you will find at Auterion:

  • As a part of Auterion, you’ll indeed be able to have a high impact on the technology of the future: drones & robotics.
  • A headquarters in the heart of Switzerland in Zurich’s bustling city life, while being close to the natural beauty of the countryside and – of course – the Swiss Alps.
  • Plenty of opportunities to fly all possible drones in the field just 2 minutes from our office :)
  • We have all the perks you would expect from a great startup: an amazing industry (Robotics and Drones...need we say more), incredible office space, a smart multidisciplinary team, a flexible workplace, plenty of food in the kitchen, and a pure mission that binds us all together
  • Competitive salary and generous stock option plan.

We only accept online direct applications. Applications via agencies will not be considered.



Marketing Operations Manager
manager data science machine learning computer vision healthcare Dec 18 2019
Labelbox is at the heart of the AI-powered computer vision revolution. Almost every decision a human makes is visual and these decisions power every industry, from healthcare to agriculture. With AI, computers can now see like humans and can make decisions in the same way. With this newfound capability, our society will build self-driving cars, accessible healthcare, automated farms that can support our global population, and much more.

The bottleneck to achieving these things with AI is the training data sets. We are building Labelbox to solve this bottleneck for data science and machine learning teams.

Current Labelbox customers include American Family Insurance, Lytx, Airbus, Genius Sports, Keeptruckin and more. Labelbox is venture backed by Google, Kleiner Perkins and First Round Capital and has been featured in Tech Crunch, Web Summit and Forbes.

Labelbox is hiring a Marketing Operations Manager to join our growing Marketing team. You will be responsible for managing our marketing and sales operations infrastructure.

As an early marketing hire on our team, you will help:

  • Build-out and manage a marketing operations stack through best practices (we use Hubspot as our CRM).
  • Manage our CRM data quality.
  • Work closely with marketing and sales team members to build workflows and processes in our CRM (and peripheral tools) that mirror our lead generation and sales processes.
  • Configure and build dashboards with key metrics from sales and marketing teams.
  • Implement, and continuously improve, attribution and lead-scoring models.
  • Manage the inflow of inbound leads, ensuring leads are properly enriched and populated with sufficient data to empower SDRs to establish outreach with minimal friction.
  • Manage integrations between web, product, ad-platform and sales analytics tools.
  • Conduct routine reporting and provide ad-hoc insights to marketing, sales and management teams.
  • Monitor the health of our marketing and sales funnels and surface insights and suggestions as required.

The ideal candidate will have:

  • An undergraduate degree with an emphasis in Marketing, Business or a related field
  • 3+ Years Experience in a technical marketing or sales function
  • Familiarity with the following products and languages: Hubspot (or Salesforce), Marketing Automation Platforms (MAPs), Google Data Studio (or a similar visualization product), Zapier, Microsoft Excel, SQL, Google Analytics, Google Tag Manager, Facebook Ads Manager or Google Ads, Sales outreach tools such as Apollo.io
  • Exposure to both startup and enterprise marketing stacks
  • Familiarity with the B2B Marketing and Sales process

  • Bonus points if you have experience working with data warehouses, Customer Data Platforms, and DMPs.

Expertise in each and every function is not required; we are looking for candidates who exhibit full-stack marketing knowledge and the aptitude to learn and develop new skills.


Labelbox is an equal opportunity employer.

No sponsorship is available for this position. Valid US Work Authorization is required.


Senior Python Engineer
python aws-lambda graphql rest aws senior Dec 15 2019

The product engineering team is responsible for the creation and quality of the XOi Vision platform. This platform serves thousands of Field Technicians across the country.  We’re looking for a Senior Python Engineer (Analytics) to play a key role in building and maintaining the backend code and services that support our mobile and web applications. 

We are a talented and passionate group of engineers and data scientists working together to discover and provide valuable insights for our customers. We leverage state-of-the-art machine learning techniques to provide our users with these unique insights, best practices, and assistance with the problems they face in their workplace. Problems and solutions typically center around aspects of the XOi platform such as image recognition, natural language processing, and content recommendation.

As a senior-level engineer on the analytics team, you will build applications and data pipelines to curate and organize XOi’s data. Data is our most valued asset, and in this position, you will be a key contributor to the team. You’ll build applications using technologies such as Python (AWS Lambda), Docker, GraphQL and DynamoDB.

Responsibilities:

  • Build effective, well-tested servers and APIs
  • Build data pipelines and web scrapers
  • Build containerized services for machine learning models
  • Assist in gathering and implementing requirements for data applications
  • Take ownership of application components and ensure quality throughout the development process
  • Build and maintain CI/CD pipelines
  • Create reports, dashboards, and documentation on the status, operation, and maintenance of the applications you build
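
A stack like the one described above (Python on AWS Lambda behind an API) often reduces to small handler functions. The sketch below shows the standard Lambda handler shape with an API Gateway proxy-style response; the request fields are hypothetical, not XOi's actual API:

```python
import json

def handler(event, context):
    # Minimal AWS Lambda-style handler: parse the request body, do a
    # trivial transformation, and return an API Gateway-shaped response.
    body = json.loads(event.get("body") or "{}")
    technician = body.get("technician", "unknown")  # hypothetical field
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {technician}"}),
    }

# Local invocation with a fake event; no AWS account needed to unit-test this.
response = handler({"body": json.dumps({"technician": "sam"})}, None)
print(response["statusCode"])  # 200
```

Because the handler is a plain function of `(event, context)`, it can be exercised with fake events in unit tests, which supports the "well-tested servers and APIs" responsibility above.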

Requirements: 

  • Bachelor’s degree in Computer Science or equivalent field (or 6+ years of working experience).
  • 3+ years of demonstrated experience building and deploying applications or services in a cloud infrastructure environment.
  • Expertise with functional or object-oriented program design patterns with a demonstrated ability to choose between and synthesize them.
  • Experience with both statically and dynamically typed programming languages and a solid understanding of the strengths and weaknesses of both paradigms.
  • Good understanding of REST-based services and service-based architecture.
  • Experience in developing best practices, software principles, and code design concepts.
  • Experience in developing and supporting rapid iterations of software in an Agile context.

Nice to Have:

  • Experience with CI/CD development and organizational practices
  • AWS services such as Lambda, AppSync, S3, and DynamoDB
  • Experience deploying machine learning models with Tensorflow or similar deep learning frameworks
  • Experience with web-development frameworks and visualization libraries such as React and D3.js

You Are Someone Who:  

  • Has a passion for code quality and craftsmanship.
  • Views your profession as your craft and continuously pursues excellence in your work.
  • Thrives in a fast-paced, high-growth startup environment.
  • Collaborates effectively across various teams, coordinating regularly to set and manage expectations.

You’ll experience:  

  • Being a key part of a fast-growing software company where you can make a difference.
  • Comprehensive insurance plans.
  • Monthly wellness allowance.
  • Flexible paid time off & paid volunteer time.
  • Learning & development.
  • Working in the historic and centrally located Marathon Village in Nashville, TN.
  • Participating in team outings, events, and general fun! 
  • Helping to change an industry by serving the men and women that make our world turn.
Data Scientist
python machine learning computer vision mongodb healthcare aws Dec 12 2019
We are looking for a talented Data Scientist to join our team at Prominent Edge. We are a small company of 24+ developers and designers who put themselves in the shoes of our customers and make sure we deliver strong solutions. Our projects and the needs of our customers vary greatly; therefore, we always choose the technology stack and approach that best suits the particular problem and the goals of our customers. As a result, we want developers who do high-quality work, stay current, and are up for learning and applying new technologies when appropriate. We want engineers who have an in-depth knowledge of Amazon Web Services and are up for using other infrastructures when needed. We understand that for our team to perform at its best, everyone needs to work on tasks that they enjoy. Most of our projects are web applications, which often have a geospatial aspect to them. We also really take care of our employees as demonstrated in our exceptional benefits package. Check out our website at http://prominentedge.com for more information and apply through http://prominentedge.com/careers.

Ideal candidates are those who can find value out of data.  Such a person proactively fetches information from various sources and analyzes it for a better understanding of the problem, and may even build AI/ML tools to make insights. The ideal candidate is adept at using large datasets to find the right needle in a pile of needles and uses models to test the effectiveness of different courses of action. Candidates must have strong experience using a variety of data mining/data analysis methods, using a variety of data tools, building and implementing models, using/creating algorithms and creating/running simulations. They must have a proven ability to drive results with their data-based insights. They must be comfortable working with a wide range of stakeholders and functional teams. The right candidate will have a passion for discovering solutions hidden in large datasets and working with stakeholders to improve mission outcomes.  A successful candidate will have experience in many (if not all) of the following technical competencies including: statistics and machine learning, coding languages, databases, and reporting technologies.

Required Skills

  • Bachelor's Degree in Computer Science, Information Systems, Engineering or other related scientific or technical discipline.
  • Proficient in data preparation, exploration, and statistical analysis
  • Proficient in a programming language such as Python, R, Julia, or JavaScript
  • Experience with batch scripting and data processing
  • Experience with Machine Learning libraries and frameworks such as TensorFlow/Pytorch or Bayesian Analysis using SAS/R Studio.
  • Experience with databases such as Postgres, Elasticsearch, MongoDB, or Redis

Desired Skills

  • Master's degree in Computer Science or related technical discipline.
  • Experience with natural language processing, computer vision, or deep learning
  • Experience working with geospatial data
  • Experience with statistical techniques
  • Experience as either back-end or front-end/visualization developer
  • Experience with visualization and reporting technologies such as Kibana or Tableau

W2 Benefits

  • Not only do you get to join our team of awesome, playful ninjas, we also have great benefits:
  • Six weeks paid time off per year (PTO+Holidays).
  • Six percent 401k matching, vested immediately.
  • Free PPO/POS healthcare for the entire family.
  • We pay you for every hour you work. Need something extra? Give yourself a raise by doing more hours when you can.
  • Want to take time off without using vacation time? Shuffle your hours around in any pay period.
  • Want a new MacBook Pro laptop? We'll get you one. If you like your MacBook Pro, we’ll buy you the new version whenever you want.
  • Want some training or to travel to a conference that is relevant to your job? We offer that too!
  • This organization participates in E-Verify.
Senior Robotics Developer / Specialist / Researcher
c cpp robotics senior machine learning testing Dec 08 2019

Job Responsibilities

Lead and/or collaborate in the design, development, and testing of robot algorithms, in one or more of these areas:

- Robot motion, especially reactive planning and replanning techniques, and machine learning for skill acquisition.
- Robot vision, SIFT and other approaches to both face and object recognition, vision processing in service of SLAM, and surface modeling in service of manipulation.
- Manipulation and grasping, especially software compliant approaches that are compatible with a variety of grips and objects, e.g. a cupping motion, a pinching action, and a more generic grasp.
- Hardware design, especially of compliant end-effectors. 
- ... and more ...

About you

You are looking for a place to stretch yourself, are able to work within a senior, highly performant product team, and aren't afraid of a challenge. You are a self-starter with the motivation and skills needed to operate effectively on your own time in your own way while being responsive to the needs of your teammates and the team as a whole.

You love working on the metal and/or deep in low-level or scaled algorithmic code. You are effective at quickly understanding and operating on algorithms taken from research in AI, Robotics, and Vision, or you have experience with sensors, motors, and other devices. You are comfortable working alongside experts in these areas, or are an expert yourself. You have a proven track record of delivering ideas into working prototypes at high velocity. You have commercial/agile development teaming experience. You've architected some serious systems and may have even been a team lead.

Skill Set / Experience

We welcome people with a passion for designing robots to join us. Among our many open positions there may be one that matches your dream job, so make sure you check every single one of them!

Flexible Hours & 100% Remote Work

You can work in one of our offices (Taipei, Vienna or Wroclaw), but most of these roles permit 100% remote cooperation.

You will work in a scrum-based agile development cycle. You will be working alongside founders, researchers, and engineers to design and build first-generation robotic solutions for mass consumer adoption.

Whether you prefer contract work or a permanent position, we can accommodate you.

Sr. Data Scientist
python machine learning design nlp Dec 07 2019
THE JOURNEY TO YOUR DREAM JOB COULD BE JUST A CLICK AWAY…
In 2012, Tuft & Needle (tn.com) revolutionized the mattress space by turning the focus to the customer with always-honest pricing, an insistence on high-quality products, and world-class customer experience. We started our journey with two software engineers and a dream, and today we have grown to a team of more than 175 talented people, working each day to bring the world premium sleep products at an honest cost.

As a Data Scientist, you'll be an important part of the company's decision-making process. You will help us understand how things are related to each other, which approaches are working, and which aren't. You'll also help us maintain our data-infrastructure. This includes our reporting and data management, as well as automated statistical and machine learning tools.

Together, we are radically reshaping how we think about sleep, mattresses, and shopping - and we’re just getting started. Want to join us?

*Open to remote opportunity

RESPONSIBILITIES:

    • Write programs to automate analyses and data wrangling
    • Build machine learning models to forecast and understand customer behavior
    • Maintain and improve reporting in Looker, Metabase, and R
    • Explain analyses and discoveries with articles and presentations


REQUIREMENTS:

    • Strong knowledge of statistics and inference
    • 2+ years writing and maintaining code
    • 2+ years working with SQL
    • Experience communicating statistical concepts to a broad audience


PREFERRED EXPERIENCE:

    • Programming in R and/or Python
    • Managing and organizing a large codebase
    • Experience with Bayesian Methods
    • Deep experience in some part of statistics (Ex: time series analysis, experimental design, multivariate analysis, natural language processing, etc.)
    • Interest in functional style programming
    • Interest in causal inference


YOU CAN SLEEP BETTER WHEN YOU WORK AT T&N
Our people – You will be working alongside some of the most talented, supportive, savvy individuals out there… people we are so proud to work with.  Together, we are shaking things up in the mattress industry and delivering an experience for clients that they would never expect.

Our product – Each team member receives a great bundle of products for themselves.  You will too if you join the team!  Your friends and family will also have access to a great product discount.

Our benefits - We offer comprehensive health benefits for you, eligible partners and dependents, paid maternity & paternity leave, 401k with a match, a generous vacation plan, and so much more. 

Tuft & Needle is proud to be an equal opportunity employer. We will not discriminate against any applicant or employee on the basis of age, race, color, creed, religion, sex, sexual orientation, gender, gender identity or expression, medical condition, national origin, ancestry, citizenship, marital status or civil partnership/union status, physical or mental disability, pregnancy, childbirth, genetic information, military and veteran status, or any other basis prohibited by applicable federal, state or local law.

Your experience is important to us. If you have any questions with your application, please contact our Candidate Experience Team at talent@tuftandneedle.com
Share this job:
Senior NodeJS/React Developer
node-js javascript senior html css machine learning Dec 07 2019

*This position can be remote, but US-based candidates only.

Dealer Inspire, a CARS Inc. company, is hiring for our Conversations Team!

Conversations is Dealer Inspire's messaging platform that connects today’s car shoppers with dealerships wherever, whenever, and however they want to shop. Fast, mobile, and fully integrated with text messaging and Facebook Messenger™, Conversations uses A.I. technology and managed chat support to instantly respond to all incoming chats 24/7.

Essential Duties & Responsibilities (including, but not limited to):

  • Development of new features, including adding functionality to our AI chat bot, Ana.
  • Writing high quality, clean code that is paired with automated unit and integration tests.
  • Taking new features through the entire development lifecycle, working in conjunction with our product owner to define the feature, develop it, and test it.
  • Refactoring non-ideal portions of both our Node API and our React apps.
  • Mentoring developers in your area of expertise.

Required Skills & Experience:

  • 3+ years of professional experience working with NodeJS, including the Express framework.
  • 2+ years of professional experience with front-end technologies, including React, Redux, and Webpack.
  • Mastery of JavaScript, HTML, and CSS/SASS/StyledComponents.
  • 5+ years of professional experience working with SQL databases; the ability to write efficient queries and benchmark/profile them.
  • Strong understanding of asynchronous programming.
  • Experience with performance debugging and benchmarking.
  • Experience with testing frameworks, such as karma, mocha, or jest.
  • Experience with Git version control.
  • Understanding of CI/CD.
  • Strong attention to design detail (UI/UX).
  • Strong verbal & written communication skills.
  • Strong documentation skills.
  • Experience working remotely & as part of a distributed engineering team.
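As an aside on the query benchmarking/profiling skill listed above, a hypothetical sketch using Python's stdlib sqlite3 (not the posting's Node stack; the technique is the same): inspect the query plan, add an index, and confirm the plan changes.

```python
# Illustrative query profiling: check whether a WHERE clause is
# served by a full table scan or an index.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE messages (id INTEGER PRIMARY KEY, user_id INT, body TEXT)")
conn.executemany("INSERT INTO messages (user_id, body) VALUES (?, ?)",
                 [(i % 100, "hi") for i in range(1000)])

def plan(sql):
    """Return the plan detail strings for a query."""
    return [row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql)]

query = "SELECT * FROM messages WHERE user_id = 42"
print(plan(query))  # full table scan before indexing

conn.execute("CREATE INDEX idx_messages_user ON messages(user_id)")
print(plan(query))  # the plan now mentions the index
```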

Highly Desired:

  • AWS Cloud Architecture
  • Typescript
  • Understanding of NLP and Machine Learning
  • Mobile-first, responsive web design
  • MySQL
  • Algolia
  • Some experience with PHP

About Dealer Inspire: 

Dealer Inspire (DI) is a leading disruptor in the automotive industry through our innovative culture, legendary service, and kick-ass website, technology, and marketing solutions. Our mission is to future-proof local dealerships by building the essential, mobile-first platform that makes automotive retail faster, easier, and smarter for both shoppers and dealers. Headquartered in Naperville, IL, our team of nearly 600 work friends are spread across the United States and Canada, pushing the boundaries and getting **** done every day, together.

DI offers an inclusive environment that celebrates collaboration and thinking differently to solve the challenges our clients face. Our shared success continues to lead to rapid growth and positive change, which opens up opportunities to advance your career to the next level by working with passionate, creative people across skill sets. If you want to be challenged, learn every day, and work as a team with some of the best in the industry, we want to meet you. Apply today.

Want to learn more about who we are? Check us out here!

Perks:

  • Health Insurance with BCBS, Delta Dental (Orthodontics coverage available), and Eye Med Vision
  • 401k plan with company match
  • Tuition Reimbursement
  • 13 days paid time off, parental leave, and selected paid holidays
  • Life and Disability Insurance
  • Subsidized gym membership
  • Subsidized internet access for your home
  • Peer-to-Peer Bonus program

*Not a complete, detailed list. Benefits have terms and requirements before employees are eligible.

Share this job:
Senior Software Engineer, Fullstack
java python javascript c data science machine learning Dec 03 2019

Engineering for you is more about a clean codebase, paradigms and algorithms than languages, frameworks or platforms. You have chosen your favorite stack in which you have lots of experience, but you’re able to get stuff done in any environment you need to and with every change you leave the codebase better off than before.

You will be one of the first members of our engineering team and will work on many different projects and touch many different systems: from our app backends (REST webservices) to our demand forecasting service and our cash register. Because our team is new, you will get to influence which technologies we will use.

As a Senior Software Engineer, you will become a go-to person to answer technical questions for the rest of the team.

Responsibilities:

  • Create new and work on existing systems across a wide range of projects (e.g. a clean and elegant API layer spanning across all of our legacy systems, backend APIs consumed by our web and mobile apps, production tooling for our machine learning models etc.)
  • Improve and maintain our production environment, for example by adding monitoring and alerting (DevOps)
  • Set up a modern development workflow for our team, including a continuous integration pipeline and automated deployments
  • Work closely together with our frontend engineering and data science teams
  • Support other developers in your team with technical guidance

Requirements:

  • Minimum of 3 years of software development experience in a general purpose programming language
  • BSc degree in Computer Science, similar technical field of study or equivalent practical experience
  • Ability to quickly get up to speed in any programming language or system if needed
  • Ability to tackle problems outside your comfort zone and get things done without supervision
  • Excellent spoken and written communication skills in English

Desirable:

  • Experience in any of the following programming languages: Java, C/C++, C#, Python, JavaScript, Rust or Go
  • Experience working with one or more from the following: web application development, Unix/Linux environments, distributed and parallel systems, service oriented architectures, REST APIs, developing large software systems
  • Experience working in teams following an agile software development methodology
  • Basic knowledge of German

We also have a role for Junior / Mid-Level developers available here.

Share this job:
Software Engineers
cpp python docker machine learning design frontend Dec 03 2019

Overview:

Are you ready to be challenged, right from the interview process?  Are you looking to work with a highly intelligent but humble team? Do you want to work on cutting-edge cyber security problems and have the background to do it? Well then, this role may be for you.

GrammaTech is looking for software engineers at varying levels of experience to perform advanced software development: building new components and extending existing tooling to meet project needs, and implementing both exploratory research prototypes and high-quality products. The role calls for significant experience contributing to large projects and developing software, with a focus on C++ and Python.

REMOTE EMPLOYEES (MUST BE LOCATED IN THE USA) WILL BE CONSIDERED IF SKILLS AND EXPERIENCE MATCH.

Responsibilities:

A research-oriented software engineer is expected to: 

  • Study and implement approaches drawn from academic literature or in-house design
  • Evaluate the resulting prototype implementation to test its value in addressing the research goals
  • Report results to the PI and respond by adapting the prototype to better address research goals
  • Contribute to presentations and written reports to keep research sponsors up to date on project progress
  • Prepare prototypes for demonstrations and evaluations by research sponsors
  • Transition prototypes into deployable products 

Qualifications: Required:

  • BS in Computer Science or equivalent with a minimum of three years of demonstrated experience in software development in C++ and Python. Knowledge of other languages is a plus.
  • Experience in development activities on large code bases with software design, build, and test from scratch
  • Familiarity with common software architectures, design patterns, and software development life cycle practices including effectively using revision control systems (git) and container technology (docker)
  • Knowledge of security and bug finding, capability of finding problems within software code

Preferred:

  • MS or PhD in computer science or equivalent
  • Experience in using Machine Learning Frameworks like scikit-learn, TensorFlow, Keras, etc.
  • Knowledge of machine code, such as ARM, x86, or x86-64
  • Static analysis for binaries and/or source code
  • Experience with fuzzing and sandboxing
  • Compiler design, compiler front-end integration, parsers
  • Dynamic analysis, program instrumentation, and profiling
  • System-administration experience, especially related to security
  • Malware-analysis techniques
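To illustrate the fuzzing bullet above, a toy sketch (hypothetical target function; real work would use coverage-guided tools such as AFL or libFuzzer): throw random inputs at a small parser and record the ones it rejects.

```python
# Illustrative toy fuzzer: this only shows the shape of the
# technique, not a production fuzzing setup.
import random
import string

def parse_version(s):
    """Target under test: parse 'X.Y' into a tuple of ints."""
    major, minor = s.split(".")
    return int(major), int(minor)

random.seed(0)
failures = []
for _ in range(1000):
    candidate = "".join(random.choice(string.printable)
                        for _ in range(random.randint(0, 8)))
    try:
        parse_version(candidate)
    except (ValueError, TypeError):
        failures.append(candidate)  # inputs the parser rejects

print(f"{len(failures)} of 1000 random inputs rejected")
```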

About the Company:

We have offices in Ithaca, NY and Madison, WI — but will consider remote employees when there is a strong match of skills and experience.

Innovation is at the heart of GrammaTech. We are constantly pushing the boundaries of software research and development – from software assurance and software integrity to cyber-security threat mitigation and autonomic computing. 

GrammaTech was founded over 30 years ago, with a firmly-grounded purpose to help organizations develop tomorrow’s software.  Given the ever-increasing dependence of software in today’s connected world, our staff is able to focus on the most challenging software issues through a constant stream of highly innovative research and commercial development programs – focused on the evolving cyber-security landscape, software hardening and intelligent systems.  Within these projects, GrammaTech employees have the opportunity to work with industry, academic, and government experts, significantly advancing their skills in engineering, research, marketing, or sales.

GrammaTech, Inc. is an Equal Opportunity/Affirmative Action employer. 

Members of underrepresented groups are encouraged to apply, please call 607-273-7340 if assistance is needed.

Share this job:
Enveda data scientist
data science machine learning aws testing Dec 03 2019
About Turing Talent Programme
The Turing Talent Tech Careers Programme is a first-of-its-kind career empowerment programme for ambitious individuals in the technology sector. We have partnered with Enveda to offer a data scientist role.

Through our programme, you will capture the added benefits of leadership development training, mentorship, and an international peer network on top of your full-time job with Enveda.

About Enveda
At Enveda, we're re-imagining the roots of medicine with technology. Our inability to model the vast complexity of the human body and the infinite variables of the real world has led to more than 90% of drugs failing in clinical testing - so instead of depending on inbred mice or cells grown on plastic like everyone else, we're hunting for active molecules from plants that have been used by our ancestors for 1000s of years (and continue to be used by hundreds of millions today). We're endlessly optimistic about the resilience of these medicinal systems over millennia and are excited to unearth their potential at the most exciting time for technology in human history (see why here, here, here, and here just for a start). Using AI to prioritize potential drugs from 1000s of clinically used plants and precision AgTech to engineer their production, we're aiming to go from the lab to clinical trials with 3 new drugs in the next 5 years. Long-term, we will deliver multiple FDA approved medicines at a fraction of today's (unsustainable) R&D costs and emerge as the much-awaited pioneers in the "Reverse Translation" of human experience to validated drugs.

More details about Enveda here.

What will you be doing

  • Create a knowledge graph of the world’s information on natural medicines to make it computable
  • Develop new graph-based machine learning algorithms or apply state-of-the-art techniques to mine insight from our biological networks
  • Create predictive models to identify the most interesting hypotheses to pursue in the lab
  • Design statistical models to predict best drug candidates and combinations from a mixture of potentially active phytochemicals
  • Work hand-in-hand with an experimental laboratory team and a bioinformatics team to analyze streams of cutting edge biological datasets to constantly improve our predictive power
  • Get in on the ground floor of a rapidly growing venture-backed US startup backed by top Angels and VCs
  • Be a co-owner of Enveda’s mission and vision, with generous equity compensation
  • Work remotely, with headquarters in SF for when you want company!

Required Skills

  • An aspiring Data Scientist who is, first and foremost, passionate about applying technology to make life-changing drugs
  • Have an advanced degree in Computer Science or a related field
  • Have a background in data science or have worked with a large amount of data
  • Have experience building research prototypes or MVPs in an academic or industry setting
  • Have experience working with a programming language like Python
  • Have some knowledge of modern tools for ML such as TensorFlow, PyTorch, PyTorch Geometric, or PySpark
  • Ability to think big-picture and handle the minutiae simultaneously
  • Demonstrated desire for continuous learning and improvement
  • Strong communication skills

Desired Skills

  • Have some background in biology or chemistry (ideally)
  • Have worked with graph-based data structures
  • Have experience with using and deploying the latest graph algorithms and predictive models (GNNs, link prediction, and so on)
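For a feel of the link-prediction work mentioned above, a minimal common-neighbors baseline on a toy compound-target graph (hypothetical names; production graph ML here would use GNNs or a library such as PyTorch Geometric).

```python
# Illustrative common-neighbors link prediction on a toy graph.
edges = [("aspirin", "cox1"), ("aspirin", "cox2"),
         ("ibuprofen", "cox1"), ("ibuprofen", "cox2"),
         ("curcumin", "cox2")]

# Build an undirected adjacency map.
adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

def common_neighbors(u, v):
    """Score a candidate edge (u, v) by the number of shared neighbors."""
    return len(adj.get(u, set()) & adj.get(v, set()))

# aspirin and ibuprofen share both targets, so the predicted link is strong.
print(common_neighbors("aspirin", "ibuprofen"))  # 2
print(common_neighbors("aspirin", "curcumin"))   # 1
```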

Compensation

  • £48k to £70k 

Start date

  • Immediately

Location

  • Remote
About Turing Talent Programme training:
Turing Talent Programme will kick off with a 2 to 4 week intensive bootcamp training that covers technical skills and soft skills. The technical skills will include those that specifically correspond to this placement with Deloitte, with a focus on software engineering. You will dive deeper into fullstack languages and frameworks, and how to apply this knowledge in your new role with DFA. ElasticSearch, AWS, and JIRA will all be part of the training. 


Turing Talent is an equal opportunity employer. All applicants will be considered for employment without attention to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran or disability status.

Share this job:
Product Owner to Transform the Shipping Industry
machine learning ux ui design Nov 12 2019

Container Shipping: An industry with huge potential to make a difference

90% of all goods globally are transported through a container. The largest container vessels are 400 meters long and can transport 20,000 containers at a time. The container shipping industry is the backbone and enabler of global trade, but it is struggling. At its core the industry is fundamentally inefficient: 50% of all container vessels globally are delayed coming into port, and key planning processes are done manually on a global scale. This leads to high operational costs, lost revenue, and unnecessarily high greenhouse gas emissions. At Portchain, we work with our customers to reduce operational complexity and optimize planning through software and cutting-edge analytics.

The role

Our products use data, mathematical modelling, machine learning, and optimization to help our users get full transparency on their operations and make better decisions. We are looking for a talented Product Owner to join our team and lead one of our two products. 

As the Product Owner you will work closely with the Chief Product Officer, Head of Engineering, developers, data scientists, design, quality assurance, the Head of Customers, and naturally our users to ensure that the right features are built into our products and that they are robust and reliable for use in 24/7 live operations.

Portchain is an exciting fast-growth company where you will work with an incredible team on applications with a truly global impact.

Tasks and Responsibilities

  • Define product vision, road-map and growth opportunities
  • Lead the design, specifications and implementation of end-to-end features, through agile sprints 
  • Engage with users and key stakeholders on an ad-hoc basis, primarily to collect feedback
  • Assess value, develop cases, and prioritize stories, epics and themes to ensure work focuses on those with maximum value that are aligned with product strategy
  • Lead product release planning and set expectations for the delivery of new functionality. Ensure that the team always has an adequate backlog of prepared tasks to work on
  • Ensure that application development is of high quality

Role Requirements

  • 3 years minimum of professional experience as a software product owner
  • Demonstrated excellent knowledge of technology, analytics and UX/UI across several domains
  • Strong analytical and problem-solving skills, paired with the ability to develop simple, creative, and efficient solutions to complex planning tasks
  • Strong collaboration mindset, working with the commercial team, engineering, design and analytics to gather perspectives, weigh solution options, and make a fact based decision swiftly and efficiently
  • Strong client skills, with the ability to work and communicate with everyone from the COO to the daily planners in customer organizations
  • A deep understanding that we always have to navigate between the immediate impact of a feature and the technical debt incurred by its implementation. You are able to carefully weigh the pros and cons and make a choice in favour of the company and the team
  • Comfortable with rapid changes common in early-stage product development
  • Energized by complex problem solving and ability to think out of the box

Bonus skills (not required)

  • In-depth knowledge of Agile process and principles
  • Experience working with user focused B2B quality products
  • Experience with building products that leverage advanced analytics technologies such as machine learning and optimization
Share this job:
Data Science Course Mentor
python data science machine learning Nov 07 2019

Click here to apply

Who We Are
At Thinkful, we believe that if schools put in even half the amount of effort that students do, the outcomes would be better for everyone. People would have a path to a fulfilling future, instead of being buried under debt. Employers would benefit from a workforce trained for today. And education could finally offer students a return on their investment of both money and time. We put in outlandish amounts of effort to create an education that offers our students a guaranteed return on their investment. We partner with employers to create a world-class curriculum built for today. We go to the ends of the earth to find mentors who are the best of the best. We invest more in career services than any of our peers. We work hard to be on the ground in the cities our students are in. Simply put, no other school works as hard for its students as we do. 

The Position Students enroll in Thinkful courses to gain the valuable technical and professional skills needed to take them from curious learners to employed technologists. As a Course Mentor, you will support students by acting as an advisor, counselor, and support system as they complete the course and land their first industry job. To achieve this, you will engage with students using the below range of approaches, known as Engagement Formats. Course Mentors are expected to provide support across all formats when needed. 

  • Mentor Sessions: Meet with students 1-on-1 in online video sessions to provide technical and professional support as the student progresses through the curriculum.
  • Group Sessions: Host online video sessions on topics of your expertise (in alignment with curriculum offerings) for groups of students seeking live support between mentor sessions. 
  • Grading: Review student checkpoint submissions and deliver written feedback, including analysis of projects and portfolios. 
  • Technical Coaching: Provide on-demand support for technical questions and guidance requests that come to the Technical Coaching team through text and video in a timely manner. This team also provides the TA support for immersive programs. 
  • Assessments & Mock Interviews: Conduct 1-on-1 mock interviews and assessments via video calls and provide written feedback to students based on assessment rubrics. 

In addition to working directly with students, Course Mentors are expected to maintain an environment of feedback with the Educator Experience team, and to stay on top of important updates via meetings, email, and Slack. Ideal candidates for this team are highly coachable, display genuine student advocacy, and are comfortable working in a complex, rapidly changing environment. Requirements

  • Minimum of 1 year professional experience as a Data Scientist or demonstrated expertise with data visualizations and machine learning at an industry level
  • Proficiency in SQL, Python
  • Professional experience with Hadoop and Spark a plus
  • Excellent written and verbal communication
  • High level of empathy and people management skills
  • Must have a reliable, high-speed Internet connection

Benefits

  • This is a part-time role (10-25 hours a week)
  • Fully remote position, with the option to work evenings and weekends in person in 22 US cities
  • Community of 500+ like-minded Educators looking to impact others and keep their skills sharp
  • Full access to all of Thinkful Courses for your continued learning
  • Grow as an Educator

Apply
If you are interested in this position please provide your resume and a cover letter explaining your interest in the role. Thinkful can only hire candidates who are eligible to work in the United States. We stand against any form of workplace harassment based on race, color, religion, sexual orientation, gender identity or expression, national origin, age, disability, or veteran status. Thinkful provides equal employment opportunities to all employees and applicants. If you're talented and driven, please apply.

At this time, we are unable to consider applicants from the following states: Alaska, Delaware, Idaho, New Mexico, North Dakota, South Carolina, South Dakota, West Virginia, and Wyoming Click here to apply 

Share this job:
Lead Infrastructure Engineer
devops machine learning linux backend Nov 07 2019

At source{d} we are building the technology stack for the next generation of Machine Learning powered developer tools. We are an open-core company built around our Open Source projects. We have raised over ten million USD so far, and we are currently growing our team.

This position is open to those wishing to work remotely between the San Francisco and Moscow time zones, as well as those who want to work from our Madrid office.

Role

This position is for a Lead Engineer on the Infrastructure team, which currently has three members.

The Infrastructure team manages multiple clusters:

  • Pipeline clusters, built on bare metal servers at a hosting provider. It has more than 1000 threads, more than 6TB of RAM and 500TB of storage, and it is backed by CoreOS and Kubernetes. It has two main goals: on the one hand, it stores all the available public code in a distributed filesystem and on the other hand, it runs intensive computation jobs over the stored data on top of Apache Spark.
  • Machine Learning research cluster, built on bare metal servers at our Madrid office. It is backed by CoreOS and Kubernetes too and it has GPUs available on every server to run deep learning algorithms.
  • Multiple Google Kubernetes Engine clusters for public-facing services. All clusters are managed with Terraform, Kubernetes and Helm.

The team also maintains several services such as databases, queues, continuous integration, monitoring, logging, etc.

At source{d}, we care about Open Source, which is why we, as the infrastructure team, contribute to projects such as Terraform and CoreOS and create our own. We maintain the official Terraform provider for Helm (terraform-provider-helm).

We are looking for someone with a background in Linux, networking, and containers, a passion for automation, experience working at scale, and knowledge of at least one backend/scripting language, who cares about best development practices.

Share this job:
Senior Software Engineer
aws devops java python javascript machine learning Nov 05 2019

DESCRIPTION:
Authority Partners is hiring an experienced, passionate, and self-driven Senior Software Engineer/Data Engineer to join our strong development teams. Make sure you don’t miss this call and the chance to join a team of top-notch players working with the most modern technologies. You will take on complex problems in a big data world and make sense of them through advanced data engineering and rendering tools, undertaking the full software lifecycle of design, implementation, and integration. Further, you will use leading-edge cloud computing technology, leveraging Amazon Web Services to build AI infrastructure and redefine data interaction. If we have sparked your interest and you are up for the challenge, read on and apply!

RESPONSIBILITIES:

  • Design and develop SDK framework to integrate AI product in the flow of work
  • Develop, improve, and maintain API and SDK to support access across any system
  • Produce unit, functional, integration and interoperability tests, including automating tests when possible
  • Collaborate with product team to translate requirements into future product development
  • Work extensively with APIs
  • Leverage machine learning techniques to build systems which process and derive insights from billions of data points every day

REQUIREMENTS:

  • 5+ years of proven work experience in software development
  • Strong knowledge of JavaScript and at least one UI library/framework (e.g. React, Angular)
  • Minimum two years of experience with Amazon Web Services (Lambda, EC2, RDS, Elastic Beanstalk, S3, etc.), DevOps and CI/CD
  • Working knowledge of Python, Java, and/or Scala
  • Understanding of the technology and approaches for knowledge representation and semantic reasoning, e.g., semantic web technologies, graph databases, or deep relational data modeling
  • Knowledge of backend coding, API development, and database technologies
  • Understanding of data flows, data architecture, ETL and processing of structured and unstructured data
  • Experience with distributed software suites such as Apache Hadoop, Spark, Spark Streaming, Kafka, Storm, Zookeeper, Flume, Presto, Pig, Hive, MapReduce
  • Experience with agile (e.g., Scrum) or lean (e.g., Kanban) methodologies and practices
  • Minimum five years of experience building production quality cloud products
  • Proven leadership skills, including mentoring, coaching, and collaboration; able to inspire and mentor junior and senior team members.
  • Ability to design, architect and quickly complete projects with minimal supervision and direction
  • You have a passion for keeping up with the fast-emerging big data analytics technical landscape.
  • Experience developing and managing RESTful API applications with demonstrable production-scale experience
  • Experience developing cross-platform technologies and packaging as an SDK/library
  • Good understanding of system architecture and design and experience with large distributed systems
  • Demonstrated delivery of large-scale, initially-ambiguous projects
  • Expert knowledge in Machine Learning (natural language processing, vision, classification, search)
  • Knowledge of the software architecture and designing of systems at the enterprise level
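As a pointer on the "search" item in the list above, a minimal TF-IDF scorer in pure Python (document names and texts are made up for illustration; real systems use a search engine or learned embeddings).

```python
# Illustrative TF-IDF retrieval: score documents against a query by
# term frequency weighted with inverse document frequency.
import math
from collections import Counter

docs = {
    "d1": "machine learning for search ranking",
    "d2": "natural language processing and classification",
    "d3": "search infrastructure at scale",
}
tokens = {doc_id: text.split() for doc_id, text in docs.items()}
n_docs = len(docs)
doc_freq = Counter(t for toks in tokens.values() for t in set(toks))

def score(query, doc_id):
    """Sum of TF-IDF weights of the query terms in one document."""
    tf = Counter(tokens[doc_id])
    return sum(tf[t] * math.log(n_docs / doc_freq[t])
               for t in query.split() if t in doc_freq)

best = max(docs, key=lambda d: score("search classification", d))
print(best)  # "d2" wins on the rarer term "classification"
```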

EDUCATION AND EXPERIENCE:
Bachelor’s, Master’s or Ph.D. in Computer Science, Engineering, Mathematics or Physics, or equivalent industry experience

Share this job: