Remote Data Science Jobs

Last Week

Project Management Curriculum Writer
project-management agile kanban data science big data cloud Feb 22

Project Management Curriculum Writer

  • Education
  • Remote
  • Contract

Who We Are

Thinkful is a new type of school that brings high-growth tech careers to ambitious people everywhere. We provide 1-on-1 learning through our network of industry experts, hiring partners, and online platform to deliver a structured and flexible education. Thinkful offers programs in web development, data science, and design, with in-person communities in up-and-coming tech hubs around the U.S. To join the Thinkful network visit thinkful.com.

Job Description

Thinkful is launching a new Technical Project Management program which aims to be the best-in-class remote, part-time Technical Project Management program offered today. As part of this effort, we're looking for a Technical Project Management subject matter expert to join us in executing on our content roadmap for this exciting new program. You will be creating the backbone of a new program that propels people from a background in academia and the sciences into an impactful career as a Technical Project Manager. You'll produce written content, lesson plans (including instructor notes and student activity descriptions), presentation decks, assessments, and learning objectives, all to support our students as they learn the core skills of technical project management. Your work product will be extremely impactful, as it forms the core asset around which the daily experience of our students will revolve.

Responsibilities

  • Consistently deliver content that meets spec and is on time to support our program launch roadmap.
  • Create daily lesson plans consisting of:
  • Presentation decks that instructors use to lecture students on a given learning objective.
  • Instructor notes that instructors use alongside the presentation decks.
  • Activity descriptions — notes describing tasks students complete together in order to advance the learning objective of a given lecture.
  • Create curriculum checkpoint content on specific learning objectives. In addition to the in-class experience, our students also spend time reading and completing tasks for a written curriculum hosted on the Thinkful platform.
  • Create code assets where necessary to support lesson plans, student activities, and written curriculum content.
  • Iterate on deliverables based on user feedback.

Requirements

  • 3+ years of hands-on Technical Project Management industry experience 
  • Demonstrated subject matter expertise in Technical Project Management
  • Experience managing projects using Agile, Kanban, and Six Sigma methodologies
  • Ability to work on multiple projects, at all complexity levels, in an environment with changing priorities
  • Change management expertise
  • Web application development experience
  • Experience running large-scale big data and/or AWS cloud-based projects
  • Collaborative. You enjoy partnering with people and have excellent project management skills and follow-through.
  • Excellent writing skills. You've got a gift for writing about complicated concepts in a beginner-friendly way. You can produce high-quality prose as well as high-quality presentations.

Compensation and Benefits

  • Contract position with a collaborative team
  • Ability to work remotely with flexible hours 
  • Access to all available course curriculum for personal use
  • Membership to a global community of over 500 Software Engineers, Developers, and Data Scientists who, like you, want to keep their skills sharp and help learners break into the industry

This Month

Data Scientist, Healthcare Policy Research
r python machine-learning healthcare data science machine learning Feb 19

We are looking for data scientists with policy research experience to perform data processing and analysis tasks, such as monitoring data quality, applying statistical and data science methods, and creating data visualizations. In this role you will work on multi-disciplinary teams supporting program evaluation and data analytics to inform policy and decision makers.

Responsibilities

  • Answering research questions or building solutions that involve linking health or healthcare data to other administrative data.
  • Designing, planning, and implementing the data science workflow on tasks and projects, involving descriptive statistics, machine learning or statistical analysis, data visualizations, and diagnostics, using programming languages such as R or Python (a minimal sketch in this spirit follows this list)
  • Communicating results to collaborative project teams using data visualizations and presentations via tools such as notebooks (e.g. Jupyter) or interactive BI dashboards
  • Developing and maintaining documentation using Atlassian Confluence and Jira
  • Implementing quality assurance practices such as version control and testing
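
Purely as an illustrative sketch of this workflow (the file name, column names, and checks below are hypothetical, not from the posting), a quality-monitoring and descriptive-statistics step in Python might look like:

    import pandas as pd

    # Hypothetical healthcare claims extract; column names are invented.
    claims = pd.read_csv("claims.csv", parse_dates=["service_date"])

    # Data-quality monitoring: missingness and implausible values.
    print(claims.isna().mean())               # share of missing values per column
    print((claims["paid_amount"] < 0).sum())  # count of negative payment amounts

    # Descriptive statistics to support program evaluation.
    summary = claims.groupby("state")["paid_amount"].agg(["mean", "median", "count"])
    print(summary)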

Requirements 

  • Master’s degree in Statistics, Data Science, Math, Computer Science, Social Science, or related field of study
  • Eight (8) years of experience 
  • Demonstrable enthusiasm for applying data science and statistics to social impact projects in academic, extra-curricular, and/or professional settings
  • Demonstrable skills in R or Python to manipulate data, conduct analyses, and create data visualizations
  • Ability to version code using Git
  • Experience with healthcare claims and administrative data
  • Ability and desire to work independently as part of remote, interdisciplinary teams
  • Strong oral and written communication skills
Cloud Architect for Enterprise AI - Remote
Dataiku  
cloud data science big data linux aws azure Feb 18
Dataiku’s mission is big: to enable all people throughout companies around the world to use data by removing friction surrounding data access, cleaning, modeling, deployment, and more. But it’s not just about technology and processes; at Dataiku, we also believe that people (including our people!) are a critical piece of the equation.



Dataiku is looking for an experienced Cloud Architect to join its Field Engineering Team to support the deployment of its Enterprise AI Platform (Dataiku DSS) to an ever-growing customer base.

As a Cloud Architect, you’ll work with customers at every stage of their relationship with Dataiku - from the initial evaluations to enterprise-wide deployments. In this role, you will help customers to design, build and run their Data Science and AI Enterprise Platforms.

This role requires adaptability, inventiveness, and strong communication skills. Sometimes you will work with clients on traditional big data technologies such as SQL data warehouses and on-premises Hadoop data lakes, while at other times you will be helping them discover and implement the most cutting-edge tools: Spark on Kubernetes, cloud-based elastic compute engines, and GPUs. If you are interested in staying at the bleeding edge of big data and AI while maintaining a strong working knowledge of existing enterprise systems, this will be a great fit for you.

The position can be based remotely.

Responsibilities

  • Evangelize the challenges of building Enterprise Data Science Platforms to technical and non-technical audiences
  • Understand customer requirements in terms of scalability, availability and security and provide architecture recommendations
  • Deploy Dataiku DSS in a large variety of technical environments (on-prem/cloud, Hadoop, Kubernetes, Spark, …)
  • Design and build reference architectures, how-tos, scripts, and various helpers to make the deployment and maintenance of Dataiku DSS smooth and easy
  • Automate operation, installation, and monitoring of the data science ecosystem components in our infrastructure stack
  • Provide advanced support for strategic customers on deployment and scalability issues
  • Coordinate with Revenue and Customer teams to deliver a consistent experience to our customers
  • Train our clients and partners in the art and science of administering a bleeding-edge Elastic AI platform

Requirements

  • Strong Linux system administration experience
  • Grit when faced with technical issues. You don’t rest until you understand why it does not work.
  • Comfort and confidence in client-facing interactions
  • Ability to work both pre and post sale
  • Experience with cloud-based services like AWS, Azure, and GCP
  • Hands-on experience with the Hadoop and/or Spark ecosystem for setup, administration, troubleshooting and tuning
  • Hands-on experience with the Kubernetes ecosystem for setup, administration, troubleshooting and tuning
  • Some experience with Python
  • Familiarity with Ansible or other application deployment tools

Bonus points for any of these

  • Experience with authentication and authorization systems like LDAP, Kerberos, AD, and IAM
  • Experience debugging networking issues such as DNS resolutions, proxy settings, and security groups
  • Some knowledge in data science and/or machine learning
  • Some knowledge of Java

Benefits

  • Work on the newest and best big data technologies at a unicorn startup
  • Consult on AI infrastructure for some of the largest companies in the world
  • Equity
  • Opportunity for international exchange to another Dataiku office
  • Attend and present at big data conferences
  • Startup atmosphere: free food and drinks, an international atmosphere, general good times, and friendly people


To fulfill its mission, Dataiku is growing fast! In 2019, we achieved unicorn status, went from 200 to 400 people, and opened new offices across the globe. We now serve our global customer base from our headquarters in New York City as well as offices in Paris, London, Munich, Amsterdam, Denver, Los Angeles, Singapore, Sydney and Dubai. Each of them has a unique culture, but underpinning the local nuances, we always value curiosity, collaboration, and can-do attitudes!
Python Engineer
python cython tensorflow keras pytorch c Feb 17

Description

We are looking for a Python-focused software engineer to build and enhance our existing APIs and integrations with the Scientific Python ecosystem. TileDB’s Python API (https://github.com/TileDB-Inc/TileDB-Py) wraps the TileDB core C API, and integrates closely with NumPy to provide zero-copy data access. You will build and enhance the Python API through interfacing with the core library; build new integrations with data science, scientific, and machine learning libraries; and engage with the community and customers to create value through the use of TileDB.
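
For illustration only, here is a minimal sketch of the zero-copy NumPy round trip described above (the array URI and data are hypothetical, and exact calls may vary by TileDB-Py version):

    import numpy as np
    import tiledb

    uri = "example_dense_array"  # hypothetical local array URI
    data = np.arange(12, dtype=np.int32).reshape(3, 4)

    # Persist the NumPy array as a dense TileDB array on disk.
    tiledb.from_numpy(uri, data)

    # Read it back; slices come out as NumPy arrays.
    with tiledb.open(uri) as A:
        print(A[1:3, 0:2])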

Location

Our headquarters are in Cambridge, MA, USA and we have a subsidiary in Athens, Greece. However, you will have the flexibility to work remotely as long as your residence is in the USA or Greece. US candidates must be US citizens, whereas Greek candidates must be Greek or EU citizens.

Expectations

In your first 30 days, you will familiarize yourself with TileDB, the TileDB-Py API and the TileDB-Dask integration. After 30 days, you will be fully integrated in our team. You’ll be an active contributor and maintainer of the TileDB-Py project, and ready to start designing and implementing new features, as well as engaging with the Python and Data Science community.

Requirements

  • 5+ years of experience as a software engineer
  • Expertise in Python and experience with NumPy
  • Experience interfacing with the CPython API, and Cython or pybind11
  • Experience with Python packaging, including binary distribution
  • Experience with C, C++, Rust, or a similar systems-level language
  • Experience with distributed computation using Dask, Spark, or a similar system
  • Experience with a machine learning library (e.g. scikit-learn, TensorFlow, Keras, PyTorch, Theano)
  • Experience with Amazon Web Services or a similar cloud platform
  • Experience with dataframe-focused systems (e.g. Arrow, Pandas, data.frame, Vaex)
  • Experience with technical data formats (e.g. Parquet, HDF5, VCF, DICOM, GeoTIFF)
  • Experience with other technical computing systems (e.g. R, MATLAB, Julia)

Benefits

  • Competitive salary and stock options
  • 100% medical and dental insurance coverage (for you and your dependents!)
  • Paid parental leave
  • Paid time off (vacation, sick & public holidays)
  • Flexible time off & flexible hours
  • Flexibility to work remotely (anywhere in the US or Greece)

TileDB, Inc. is proud to be an Equal Opportunity Employer building a diverse and inclusive team.

Data Infrastructure Engineer
Tesorio  
data science machine learning finance Feb 14
We are at the forefront of creating the latest FinTech category and we are rapidly expanding our team. We’re looking for a Data Infrastructure Engineer to work on our Data Science team.

Company Overview

Tesorio is a high-growth, early-stage startup backed by some of the Bay Area’s most prominent Venture Capital firms (First Round, Floodgate, Y Combinator) and the world’s top finance execs (e.g. the ex-CFO of Oracle, the ex-CFO of Yahoo, and the founder of Adaptive Insights).

We build software that applies proprietary machine learning models to help manage a core problem that all Mid-Market businesses face: managing, predicting, and collecting cash. As we’ve taken this solution to market over the last 18 months, we’ve been able to bring on some great clients like Veeva Systems, Box, WP Engine, Rainforest QA, and many more.

Tesorio’s Cash Flow Performance platform is a sought after solution for the modern-day CFO’s toughest problems. Companies such as Anaplan have successfully tackled forecasting revenues and expenses, however, no other system has been built from the ground up to help companies understand the complexities around cash flow and take action to optimize the core lifeblood of their business.

What’s in it for you?

  • Remote OK (Western Hemisphere) or work out of an awesome office with all the perks.
  • Almost all of Engineering and Data Science work fully remote and we work hard to make sure remote employees feel a part of the team.
  • This role is for a fast-paced, high-impact project that adds a new stream of revenue and strategic value to the company.
  • Work with some of the best and brightest (but also very humble).
  • Fast growing startup backed by top tier investors - Y Combinator, First Round Capital, Floodgate, Fathom.

Responsibilities

  • You will be responsible for creating and maintaining machine learning infrastructure on Kubernetes
  • Build and own workflow management systems like Airflow, Kubeflow, or Argo
  • Advise data and ML engineers on how to package and deploy their workflows
  • Implement logging, metrics, and monitoring services for your infrastructure and container logs (a small sketch of this kind of tooling follows this list)
  • Create Helm charts for versioned deployments of the system on client premises
  • Continuously strive to abstract away infrastructure, high availability, and identity and access management concerns from Machine Learning and Software Engineers
  • Understand product requirements, bring your own opinions, and document best practices for leveraging Kubernetes
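
As a hedged illustration of the monitoring bullet above (the namespace and failure handling are hypothetical; this assumes the official kubernetes Python client and kubeconfig credentials):

    from kubernetes import client, config

    # Load credentials from ~/.kube/config (in-cluster config also works).
    config.load_kube_config()
    v1 = client.CoreV1Api()

    # Surface logs from failed workflow pods in a hypothetical namespace.
    for pod in v1.list_namespaced_pod("ml-workflows").items:
        if pod.status.phase == "Failed":
            logs = v1.read_namespaced_pod_log(pod.metadata.name, "ml-workflows")
            print(f"{pod.metadata.name} failed; recent log output:\n{logs[-500:]}")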

Required Skills

  • 6+ years of experience creating and maintaining data and machine learning platforms in production
  • Expert-level knowledge of Kubernetes, including operators, deployments, cert management, security, and binding users to cluster and IAM roles
  • Experience dealing with persistence pitfalls on Kubernetes, and creating and owning a workflow management system (Airflow, Kubeflow, Argo, etc.) on Kubernetes
  • Experience creating Helm charts for versioned deployments on client premises
  • Experience securing the system with proper identity and access management for people and applications
  • Ability to work in a fast-paced, always-changing environment

Nice to Haves

  • Experience spinning up infrastructure using Terraform and Ansible
  • Experience working with data engineers running workflow management tools on your infrastructure
Senior Software Engineer, Test Infrastructure
senior javascript data science machine learning docker testing Feb 13
About Labelbox
Labelbox is building software infrastructure for industrial data science teams to do data labeling for the training of neural networks. When we build software, we take for granted the existence of collaborative tools to write and debug code. The machine learning workflow has no standard tooling for labeling data, storing it, debugging models and then continually improving model accuracy. Enter Labelbox. Labelbox's vision is to become the default software for data scientists to manage data and train neural networks in the same way that GitHub or text editors are defaults for software engineers.

Current Labelbox customers include American Family Insurance, Lytx, Airbus, Genius Sports, Keeptruckin and more. Labelbox is venture backed by Google, Andreessen Horowitz, Kleiner Perkins and First Round Capital and has been featured in Tech Crunch, Web Summit and Forbes.

As a Senior Software Engineer in Testing Infrastructure you will be responsible for building and maintaining our testing and automation infrastructure, test frameworks, tools, and documentation. At Labelbox engineers are responsible for writing automated tests for their features, and it will be your responsibility to build reliable infrastructure to support their efforts. 

Responsibilities

  • Design, implement and maintain reliable testing infrastructure for unit testing, component testing, integration testing, E2E API and UI testing, and load testing
  • Build and maintain reliable testing environments for our integration, E2E and load testing jobs
  • Integrate our testing infrastructure with our CI/CD pipeline to ensure automated kickoff of tests
  • Guide our engineering team on testing best practices and monitor the reliability and stability of our testing suite
  • When implementing new testing infrastructure and/or adopting new tools, write sample tests and documentation for our engineering team to hit the ground running with the new infrastructure

Requirements

  • 5+ years of experience developing testing infrastructure for web applications in a production environment
  • Experience with web technologies including React, Redux, JavaScript, TypeScript, GraphQL, Node, REST, and SQL
  • Experience with Unit Testing frameworks such as Jest, Mocha, and/or Jasmine
  • Experience with E2E UI test frameworks such as Cypress, Selenium, and/or Puppeteer
  • Experience writing E2E API tests with frameworks such as Cypress and/or Postman/Newman
  • Experience with Load Testing frameworks such as OctoPerf, JMeter, and/or Gatling
  • Experience integrating with CI/CD platforms and tools such as Codefresh, CircleCI, TravisCI, or Jenkins and Bazel
  • Experience integrating tools to measure code coverage across the different types of testing
  • Experience with Docker and Kubernetes
  • Experience with GraphQL and building testing infrastructure around it
We believe that AI has the power to transform every aspect of our lives -- from healthcare to agriculture. The exponential impact of artificial intelligence will mean mammograms can be read quickly and cheaply regardless of how few radiologists there are in the world, and growers will know the instant that disease hits their farm without even being there.

At Labelbox, we’re building a platform to accelerate the development of this future. Rather than requiring companies to create their own expensive and incomplete homegrown tools, we’ve created a training data platform that acts as a central hub for humans to interface with AI. When humans have better ways to input and manage data, machines have better ways to learn.

Perks & Benefits:
  • Medical, Dental & Vision coverage
  • Flexible vacation policy
  • Dog friendly office
  • Daily catered lunch & snacks
  • Great office location in the Mission district, beautiful office & private outdoor patio with grill
VP, Data Science & Engineering
machine-learning hadoop data science c machine learning big data Feb 10

The Wikimedia Foundation is seeking an experienced executive to serve as Vice President of Data Science & Engineering for our Technology department. At the Wikimedia Foundation, we operate the world’s largest collaborative project: a top ten website, reaching a billion people globally every month, while incorporating the values of privacy, transparency and community that are so important to our users. 

Reporting to the Chief Technology Officer, the VP of Data Science & Engineering is a key member of the Foundation’s leadership team and an active participant in the strategic decision making framing the work of the technology department, the Wikimedia Foundation and the Wikimedia movement.

This role is responsible for planning and executing an integrated multi-year data science and engineering strategy spanning our work in artificial intelligence, machine learning, search, natural language processing and analytics. This strategy will interlock with and support the larger organization and movement strategy in service of our vision of enabling every human being to share freely in the sum of human knowledge.

Working closely with other Technology and Product teams, as well as our community of contributors and readers, you’ll lead a team of dedicated directors, engineering managers, software engineers, data engineers, and data scientists who are shaping the next generation of data usage, analysis and access across all Wikimedia projects.

Some examples of our teams' work in the realm of data science and data engineering can be found on our blog, including deeper info on our work improving edit workflows with machine learning, our use of Kafka and Hadoop, and our analysis of people falling into the “Wikipedia rabbit hole”. Of late, we have been thinking about how best to identify traffic anomalies that might indicate outages or, possibly, censorship.

You are responsible for:

  • Leading the technical and engineering efforts of a global team of engineers, data scientists and managers focused on our efforts in productionizing artificial intelligence, data science, analytics, machine learning and natural language processing models as well as data operations. These efforts currently encompass three teams: Search Platform, Analytics and Scoring Platform (Machine Learning Engineering)
  • Working closely with our Research, Architecture, Security, Site Reliability and Platform teams to define our next generation of data architecture, search, machine learning and analytics infrastructure
  • Creating scalable engineering management processes and prioritization rubrics
  • Developing the strategy, plan, vision, and cross-functional teams needed to create a holistic data strategy for the Wikimedia Foundation, taking into account our fundamental values of transparency, privacy, and collaboration, in partnership with internal and external stakeholders and community members
  • Ensuring data is reliable, consistent, accessible, secure, and available in a timely manner for external and internal stakeholders, in accordance with our privacy policy
  • Negotiating shared goals, roadmaps and dependencies with finance, product, legal and communication departments
  • Contributing to our culture by managing, coaching and developing our engineering and data teams
  • Illustrating your success in making your mark on the world by collaboratively measuring and adapting our data strategy within the technology department and the broader Foundation
  • Managing up to 5 direct reports with a total team size of 20

Skills and Experience:

  • Deep experience leading data science, machine learning, search, or data engineering teams, with the ability to separate the hype in the artificial intelligence space from the reality of delivering production-ready data systems
  • 5+ years of senior engineering leadership experience
  • Demonstrated ability to balance competing interests in a complex technical and social environment
  • Proven success at all stages of the engineering process and product lifecycle, leading to significant, measurable impact.
  • Previous hands-on experience in production big data and machine learning environments at scale
  • Experience building and supporting diverse, international and distributed teams
  • Outstanding oral and written English language communications

Qualities that are important to us:

  • You take a solutions-focused approach to challenging data and technical problems
  • A passion for people development, team culture and the management of ideas
  • You have a desire to show the world how data can be done right while honoring the user’s right to privacy

Additionally, we’d love it if you have:

  • Experience with modern machine learning, search and natural language processing platforms
  • A track record of open source participation
  • Fluency or familiarity with languages in addition to English
  • Time spent living or working outside your country of origin
  • Experience as a member of a volunteer community

The Wikimedia Foundation is... 

...the nonprofit organization that hosts and operates Wikipedia and the other Wikimedia free knowledge projects. Our vision is a world in which every single human can freely share in the sum of all knowledge. We believe that everyone has the potential to contribute something to our shared knowledge, and that everyone should be able to access that knowledge, free of interference. We host the Wikimedia projects, build software experiences for reading, contributing, and sharing Wikimedia content, support the volunteer communities and partners who make Wikimedia possible, and advocate for policies that enable Wikimedia and free knowledge to thrive. The Wikimedia Foundation is a charitable, not-for-profit organization that relies on donations. We receive financial support from millions of individuals around the world, with an average donation of about $15. We also receive donations through institutional grants and gifts. The Wikimedia Foundation is a United States 501(c)(3) tax-exempt organization with offices in San Francisco, California, USA.

The Wikimedia Foundation is an equal opportunity employer, and we encourage people with a diverse range of backgrounds to apply.

U.S. Benefits & Perks*

  • Fully paid medical, dental and vision coverage for employees and their eligible families (yes, fully paid premiums!)
  • The Wellness Program provides reimbursement for mind, body and soul activities such as fitness memberships, babysitting, continuing education and much more
  • The 401(k) retirement plan offers matched contributions at 4% of annual salary
  • Flexible and generous time off - vacation, sick and volunteer days, plus 19 paid holidays - including the last week of the year.
  • Family friendly! 100% paid new parent leave for seven weeks plus an additional five weeks for pregnancy, flexible options to phase back in after leave, fully equipped lactation room.
  • For those emergency moments - long and short term disability, life insurance (2x salary) and an employee assistance program
  • Pre-tax savings plans for health care, child care, elder care, public transportation and parking expenses
  • Telecommuting and flexible work schedules available
  • Appropriate fuel for thinking and coding (aka, a pantry full of treats) and monthly massages to help staff relax
  • Great colleagues - diverse staff and contractors speaking dozens of languages from around the world, fantastic intellectual discourse, mission-driven and intensely passionate people

*Eligible non-US benefits are specific to location and dependent on employer of record

Data Scientist
python data science Feb 05

What is Pathrise?

Pathrise (YC W18) is an online program for tech professionals that provides 1-on-1 mentorship, training, and advice to help you land your next job. On top of that, we're built around aligned incentives: you only pay once you've been hired and have started work at a high-paying job.

Every day we are expanding our team and our services. We are looking for sharp, scrappy, and fun individuals who are ready to jump (head first) into a new role with us. We are a small team and we love working together to improve our fellows' chances of getting the job of their dreams! If this sounds like something you'd be interested in, we want to talk to you.

Our Mission

We seek to uplift job seekers in their careers and help them fulfill their hopes, ambitions and livelihoods. Read more about why we’re driven to do this in our manifesto.

In this role, you will create a framework for how we utilize our own data. If you are comfortable with qualitative data and can see the amazing potential we have to be a forerunner in this new job seekers' market, then this could be the perfect role for you.

In order to be effective in this role, you must have a genuine interest in education and technology. Since you will be involved in all phases of coursework, from research and development to design and feedback, we are looking for someone who is not only passionate but also in love with our Mission of “uplifting undervalued students and tech professionals in their early careers.”

This position is ideal for someone with a passion for data science and education, who is entrepreneurial and wants to join a fast-growing startup that's helping the next generation of data scientists!

Qualifications

  • 0-3 years in data science
  • Excellent communication skills, ability to understand customer needs and provide valuable recommendations
  • Strong Python and SQL skills
  • Able to effectively synthesize, visualize, and communicate your ideas to others
  • Familiar with key data engineering concepts
  • Experience with data visualization

Benefits and perks

  • Great health, dental and vision benefits
  • Free daily catered lunches and snacks
  • Commuting costs covered
  • Flexible PTO
  • Ability to grow in your career and make a difference to individuals and society

We do not discriminate on the basis of race, religion, sex, gender identity, sexual orientation, age, disability, national origin, veteran status or any other basis covered by law. If you need assistance or an accommodation due to a disability, please let us know.

Data Science Engineer
data science java python scala big data cloud Feb 05
Contrast Security is the world’s leading provider of security technology that enables software applications to protect themselves against cyber attacks. Contrast's patented deep security instrumentation is the breakthrough technology that enables highly accurate analysis and always-on protection of an entire application portfolio, without disruptive scanning or expensive security experts. Only Contrast has intelligent agents that work actively inside applications to prevent data breaches, defeat hackers and secure the entire enterprise from development, to operations, to production.

Our Application Security Research (Contrast Labs) team is hyper-focused on continuous vulnerability and threat research affecting the world's software ecosystem. As a Data Science Engineer on the Research team, you will be responsible for expanding and optimizing data from our real-time security intelligence platform, as well as optimizing data flow and collection for cross-functional teams.

The Data Science Engineer will support our research team, software developers, database architects, marketing associates, product team, and other areas of the company on data initiatives, and will ensure optimal data delivery architecture is consistent throughout ongoing projects. You must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products. The right candidate will be excited by the prospect of optimizing or even re-designing our company's data architecture to support our next generation of products and data initiatives. The role also presents an opportunity, as a data scientist, to contribute original research through data correlation.

The Data Science Engineer is responsible for supporting and contributing to Contrast's growing original security research efforts relevant to the development communities associated with the Contrast Assess, Protect, and OSS platforms. Original research will be published in company blogs, papers, and presentations.

If you're amazing but missing some of these, email us your résumé and cover letter anyway. Please include a link to your Github or BitBucket account, as well as any links to some of your projects if available.

Responsibilities

  • Conduct basic and applied research on important and challenging problems in data science as it relates to the problems Contrast is trying to solve.
  • Assemble large, complex data sets that meet functional / non-functional business requirements. 
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and big data technologies (a brief sketch follows this list).
  • Build analytics tools that utilize the data pipeline to provide actionable insights into threats, vulnerabilities, customer usage, operational efficiency and other key business performance metrics.
  • Help define and drive data-driven research projects, either on your own or in collaboration with others on the team.
  • Engage with Contrast’s product teams and customers to promote and seek out new data science research initiatives.
  • Create data tools for analytics and research team members that assist them in building and optimizing our product into an innovative industry leader.
  • Maintain advanced working knowledge of Structured Query Language (SQL) and experience with relational databases, including query authoring and working familiarity with a variety of database systems.
  • Develop and present content associated with the research through conference talks and/or blog posts.
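
Purely as a hedged sketch of the ETL bullet above (the source path, event fields, and bucket names are invented, not Contrast's actual pipeline):

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("security-etl").getOrCreate()

    # Hypothetical raw telemetry source.
    raw = spark.read.json("s3://example-bucket/telemetry/*.json")

    # Keep blocked-attack events and roll them up per day and rule.
    daily = (raw
             .filter(F.col("event_type") == "attack_blocked")
             .withColumn("day", F.to_date("timestamp"))
             .groupBy("day", "rule_id")
             .count())

    daily.write.mode("overwrite").parquet("s3://example-bucket/curated/daily_counts")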

About You

  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  • Strong analytic skills related to working with unstructured datasets. 
  • Experience supporting and working with cross-functional teams in a dynamic environment.
  • Experience using some of the following software/tools:
  • Big data tools: Hadoop, Spark, Kafka, etc.
  • Relational SQL and NoSQL databases, including MongoDB and MySQL.
  • Data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
  • AWS cloud services: EC2, EMR, RDS, Redshift
  • Stream-processing systems: Storm, Spark-Streaming, etc.
  • Object-oriented/functional scripting languages: Python, Java, C++, Scala, etc.
  • 5+ years of experience in a Data Science role
  • Strong project management and organizational skills.
  • Nice to have: an understanding of the OWASP Top 10 and SANS/CWE Top 25.
  • You ask questions, let others know when you need help, and tell others what you need.
  • A graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field.

What We Offer

  • Competitive compensation
  • Daily team lunches (in office)
  • Meaningful stock options
  • Medical, dental, and vision benefits
  • Flexible paid time off 
By submitting your application, you are providing Personally Identifiable Information about yourself (cover letter, resume, references, or other employment-related information) and hereby give your consent for Contrast Security and/or our HR-related Service Providers to use this information for the purpose of processing, evaluating, and responding to your application for current and future career opportunities. Contrast Security is an equal opportunity employer and our team is comprised of individuals from many diverse backgrounds, lifestyles, and locations.

The California Consumer Privacy Act of 2018 (“CCPA”) will go into effect on January 1, 2020. Under CCPA, businesses must be overtly transparent about the personal information they collect, use, and store on California residents. CCPA also gives employees, applicants, independent contractors, emergency contacts and dependents (“CA Employee”) new rights to privacy.

In connection with your role here at Contrast, we collect information that identifies, reasonably relates to, or describes you (“Personal Information”). The categories of Personal Information that we collect, use or store include your name, government-issued identification number(s), email address, mailing address, emergency contact information, employment history, educational history, criminal record, demographic information, and other electronic network activity information by way of mobile device management on your Contrast-issued equipment. We collect and use those categories of Personal Information (the majority of which is provided by you) about you for human resources and other business-driven purposes, including evaluating your performance here at Contrast, evaluating you as a candidate for promotion within Contrast, managing compensation (including payroll and benefits), record keeping in relation to recruiting and hiring, conducting background checks as permitted by law, and ensuring compliance with applicable legal requirements for Contrast. We collect, use and store the minimal amount of information possible

We also collect Personal Information in connection with your application for benefits. In addition to the above, Personal Information also identifies those on behalf of whom you apply for benefits. During your application for benefits, the categories of Personal Information that we collect include name, government-issued identification number(s), email address, mailing address, emergency contact information, and demographic information. We collect and use those categories of Personal Information for administering the benefits for which you are applying and ensuring compliance with applicable legal requirements and Contrast policies.
As a California resident, you are entitled to certain rights under CCPA:

-You have the right to know what personal information we have collected from you as a California employee;
-You have the right to know what personal information is sold or disclosed and to whom. That said, we do not sell your information. We do, however, disclose information to third parties in connection with the management of payroll, employee benefits, etc. to fulfill our obligations to you as an employee of Contrast. Each of those third parties has been served with a Notice to Comply with CCPA or has entered into a CCPA Addendum with Contrast which precludes them from selling your information;
-You have the right to opt out of the sale of your personal information. Again, we do not sell it, but you might want to be aware of that right as a "consumer" in California with respect to other businesses; and
-You have the right to be free from retaliation for exercising any of these rights.

If you have any questions, please let us know!
Don't see your role here?
data science machine learning computer vision healthcare Feb 03
Don't quite see the role you're looking for? Labelbox is growing incredibly fast and we are posting new roles frequently. Send us your resume so we can keep you in the loop as we grow.


About Labelbox

Labelbox is at the heart of the AI-powered computer vision revolution. Almost every decision a human makes is visual and these decisions power every industry, from healthcare to agriculture. With AI, computers can now see like humans and can make decisions in the same way. With this newfound capability, our society will build self-driving cars, accessible healthcare, automated farms that can support our global population, and much more.

The bottleneck to achieving these things with AI is the training data sets. We are building Labelbox to solve this bottleneck for data science and machine learning teams.

Current Labelbox customers include American Family Insurance, Lytx, Airbus, Genius Sports, Keeptruckin and more. Labelbox is venture backed by Gradient Ventures (Google’s AI-focused venture fund), Kleiner Perkins and First Round Capital and has been featured in Tech Crunch, Web Summit and Forbes.
Data Visualization Engineer
data science machine learning big data linux mysql backend Jan 31
We are looking for a dynamic and talented Data Visualization Engineer who has a passion for data and for using cutting-edge tools and data-based insights to turn their vision and ability into results and actionable solutions for our clients. The successful candidate will leverage their talents and skills to design, develop, and implement graphical representations of information and data, using visual elements like charts, graphs, and maps, and a variety of data visualization tools. You will own, architect, design, and implement a Data Visualization platform that leverages big data, data warehouses, data visualization suites, and cutting-edge open source technologies. You will drive the vision of our Big Data Visualization platform, which must be scalable, interactive, and real-time to support our state-of-the-art data processing framework for our geospatial-oriented platform. You must have a proven ability to drive results with your data-based insights. The right candidate will have a passion for discovering solutions hidden in large datasets and for working with stakeholders to improve mission outcomes. Do you want to take your ideas and concepts into real-life, mission-critical solutions? Do you want to work with the latest bleeding-edge technology? Do you want to work with a dynamic, world-class team of engineers, while learning and developing your skills and your career? You can do all those things at Prominent Edge!

We are a small company of 24+ developers and designers who put themselves in the shoes of our customers and make sure we deliver strong solutions. Our projects and the needs of our customers vary greatly; therefore, we always choose the technology stack and approach that best suits the particular problem and the goals of our customers. As a result, we want developers who do high-quality work, stay current, and are up for learning and applying new technologies when appropriate. We want engineers who have an in-depth knowledge of Amazon Web Services and are up for using other infrastructures when needed. We understand that for our team to perform at its best, everyone needs to work on tasks that they enjoy. Most of our projects are web applications, and they often have a geospatial aspect to them. We also really take care of our employees, as demonstrated in our exceptional benefits package. Check out our website at https://prominentedge.com for more information.

Required Skills:

  • A successful candidate will have experience in many (if not all) of the following technical competencies: data visualization, data engineering, data science, statistics and machine learning, coding languages, databases, and reporting technologies.
  • Ability to design, develop, and implement graphical representations of information and data, using visual elements like charts, graphs, and maps, and a variety of data visualization tools.
  • At least 5 years of experience in data engineering, data science, and/or data visualization.
  • Design and develop ETL and storage for the new big data platform with open source technologies such as Kafka/RabbitMQ/Redis, Spark, Presto, Splunk.
  • Create insightful visualizations with dashboarding and charting tools such as Kibana, Plotly, Matplotlib, Grafana, or Tableau (a small example follows this list).
  • Strong proficiency with a backend database such as Postgres, MySQL, and/or familiarity with NoSQL databases such as Cassandra, DynamoDB or MongoDB.
  • Strong background in scripting languages.
  • Capable of working in a linux server environment.
  • Experience or interest in working on multiple projects with multiple product teams.
  • Excellent verbal and written communication skills along with the ability to present technical data and enjoys working with both technical and non-technical audiences.
  • Bachelor's Degree in Computer Science, Data Science, Machine Learning, AI or related field or equivalent experience.
  • Current U.S. security clearance, or ability to obtain a U.S. security clearance.
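
As a minimal, hedged example of the kind of chart-building this role involves (the data below is invented for illustration):

    import matplotlib.pyplot as plt
    import pandas as pd

    # Invented hourly ingest counts, standing in for a real-time feed.
    df = pd.DataFrame({"hour": range(6), "events": [120, 98, 87, 140, 210, 185]})

    fig, ax = plt.subplots()
    ax.plot(df["hour"], df["events"], marker="o")
    ax.set_xlabel("Hour (UTC)")
    ax.set_ylabel("Events ingested")
    ax.set_title("Ingest volume over a rolling window")
    fig.savefig("ingest_volume.png")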

Desired skills:

  • Master's Degree or PhD. in Computer Science, Data Science, Machine Learning, AI or related field is a plus.

W2 Benefits:

  • Not only do you get to join our team of awesome, playful ninjas, we also have great benefits:
  • Six weeks paid time off per year (PTO+Holidays).
  • Six percent 401k matching, vested immediately.
  • Free PPO/POS healthcare for the entire family.
  • We pay you for every hour you work. Need something extra? Give yourself a raise by doing more hours when you can.
  • Want to take time off without using vacation time? Shuffle your hours around in any pay period.
  • Want a new MacBook Pro laptop? We'll get you one. If you like your MacBook Pro, we’ll buy you the new version whenever you want.
  • Want some training or to travel to a conference that is relevant to your job? We offer that too!
  • This organization participates in E-Verify.

Machine Learning Platform Engineer
Tesorio  
machine learning data science finance Jan 30
We are at the forefront of creating the latest FinTech category and we are rapidly expanding our team. We’re looking for a Machine Learning Platform Engineer to work on our Data Science team.

Company Overview
Tesorio is a high-growth, early-stage startup that has just closed a 10MM round with Madrona Venture Group. We're backed by some of the Bay Area’s most prominent Venture Capital firms (First Round, Floodgate, Y Combinator) and the world’s top finance execs (e.g. the ex-CFO of Oracle, the ex-CFO of Yahoo, and the founder of Adaptive Insights). 

We build software that applies proprietary machine learning models to help manage a core problem that all Mid-Market businesses face: managing, predicting, and collecting cash. As we’ve taken this solution to market over the last 18 months, we’ve been able to bring on some great clients like Veeva Systems, Box, WP Engine, Rainforest QA, and many more.

Tesorio’s Cash Flow Performance platform is a sought after solution for the modern-day CFO’s toughest problems. Companies such as Anaplan have successfully tackled forecasting revenues and expenses, however, no other system has been built from the ground up to help companies understand the complexities around cash flow and take action to optimize the core lifeblood of their business.

What’s in it for you?

  • Remote OK (Western Hemisphere) or work out of an awesome office with all the perks.
  • Almost all of Engineering and Data Science work fully remote and we work hard to make sure remote employees feel a part of the team.
  • This role is for a fast-paced, high-impact project that adds a new stream of revenue and strategic value to the company.
  • Work with some of the best and brightest (but also very humble).
  • Fast growing startup backed by top tier investors - Y Combinator, First Round Capital, Floodgate, Fathom.

Responsibilities

  • You will be responsible for creating and maintaining machine learning infrastructure on Kubernetes
  • Build and own workflow management systems like Airflow, Kubeflow, or Argo. Advise data and ML engineers on how to package and deploy their workflows
  • Implement logging, metrics, and monitoring services for your infrastructure and container logs
  • Create Helm charts for versioned deployments of the system on client premises (a brief sketch follows this list)
  • Continuously strive to abstract away infrastructure, high availability, and identity and access management concerns from Machine Learning and Software Engineers
  • Understand product requirements, bring your own opinions, and document best practices for leveraging Kubernetes
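
As an illustrative, hedged sketch of driving a versioned Helm deployment from Python (the release name, chart path, namespace, and tag are all hypothetical):

    import subprocess

    image_tag = "1.4.2"  # hypothetical version being rolled out

    # "helm upgrade --install" creates the release if absent, upgrades it otherwise.
    subprocess.run([
        "helm", "upgrade", "--install",
        "ml-platform",            # hypothetical release name
        "./charts/ml-platform",   # hypothetical chart path
        "--namespace", "ml",
        "--set", f"image.tag={image_tag}",
    ], check=True)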

Required Skills

  • 6+ years of experience creating and maintaining data and machine learning platforms in production
  • Expert-level knowledge of Kubernetes, including operators, deployments, cert management, security, and binding users to cluster and IAM roles
  • Experience dealing with persistence pitfalls on Kubernetes, and creating and owning a workflow management system (Airflow, Kubeflow, Argo, etc.) on Kubernetes
  • Experience creating Helm charts for versioned deployments on client premises
  • Experience securing the system with proper identity and access management for people and applications
  • Ability to work in a fast-paced, always-changing environment

Nice to Haves

  • Experience spinning up infrastructure using Terraform and Ansible
  • Experience working with data engineers running workflow management tools on your infrastructure
Data Engineer
Tesorio  
python data science machine learning finance Jan 30
We are at the forefront of creating the latest FinTech category and we are rapidly expanding our team. We’re looking for a Data Engineer to work on our Data Science team.

Company Overview
Tesorio is a high-growth, early-stage startup that has just closed a 10MM round with Madrona Venture Group. We're backed by some of the Bay Area’s most prominent Venture Capital firms (First Round, Floodgate, Y Combinator) and the world’s top finance execs (e.g. the ex-CFO of Oracle, the ex-CFO of Yahoo, and the founder of Adaptive Insights). 

We build software that applies proprietary machine learning models to help manage a core problem that all Mid-Market businesses face: managing, predicting, and collecting cash. As we’ve taken this solution to market over the last 18 months, we’ve been able to bring on some great clients like Veeva Systems, Box, WP Engine, Rainforest QA, and many more.

Tesorio’s Cash Flow Performance platform is a sought after solution for the modern-day CFO’s toughest problems. Companies such as Anaplan have successfully tackled forecasting revenues and expenses, however, no other system has been built from the ground up to help companies understand the complexities around cash flow and take action to optimize the core lifeblood of their business.

What’s in it for you?

  • Remote OK (Western Hemisphere) or work out of an awesome office with all the perks.
  • Almost all of Engineering and Data Science work fully remote and we work hard to make sure remote employees feel a part of the team.
  • This role is for a fast-paced, high-impact project that adds a new stream of revenue and strategic value to the company.
  • Work with some of the best and brightest (but also very humble).
  • Fast growing startup backed by top tier investors - Y Combinator, First Round Capital, Floodgate, Fathom.

Responsibilities

  • Extract data from 3rd-party databases and transform it into usable outputs for the Product and Data Science teams
  • Work with Software Engineers and Machine Learning Engineers to call out risks and performance bottlenecks
  • Ensure data pipelines are robust, fast, secure and scalable
  • Use the right tool for the job to make data available, whether that is on the database or in code
  • Own data quality and pipeline uptime. Plan for failure.

Required Skills

  • Experience scaling, securing, snapshotting, optimizing schemas for, and performance tuning relational and document data stores
  • Experience building ETL pipelines using workflow management tools like Argo, Airflow, or Kubeflow on Kubernetes (a minimal sketch follows this list)
  • Experience implementing data layer APIs using ORMs such as SQLAlchemy, and schema change management using tools like Alembic
  • Fluency in Python and experience containerizing code for deployment
  • Experience following security best practices like encryption at rest and in flight, data governance, and cataloging
  • Understanding of the importance of picking the right data store for the job (columnar, logging, OLAP, OLTP, etc.)
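
A minimal, hedged sketch of such an ETL pipeline (this assumes Airflow 2.x-style imports; the DAG id, schedule, and task bodies are invented):

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        # Stub: pull rows from a third-party database.
        return [{"invoice_id": 1, "amount": 100.0}]

    def transform(ti):
        # Airflow passes task-instance context; pull the extract step's output.
        rows = ti.xcom_pull(task_ids="extract")
        return [{**r, "amount_cents": int(r["amount"] * 100)} for r in rows]

    with DAG(
        dag_id="invoice_etl",  # hypothetical pipeline name
        start_date=datetime(2020, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        transform_task = PythonOperator(task_id="transform", python_callable=transform)
        extract_task >> transform_task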

Nice to Have Skills

  • Exposure to machine learning
  • Experience with on-prem deployments

This Year

Software Engineer in Test
testing cypress automated-tests circleci javascript html Jan 27

Our homes are our most valuable asset and also the most difficult to buy and sell. Knock is on a mission to make trading in your house as simple and certain as trading in your car. Started by founding team members of Trulia.com (NYSE: TRLA, acquired by Zillow for $3.5B), Knock is an online home trade-in platform that uses data science to price homes accurately, technology to sell them quickly and a dedicated team of professionals to guide you every step of the way. We share the same top-tier investors as iconic brands like Netflix, Tivo, Match, HomeAway and Houzz.


We are seeking a passionate Software Engineer in Test to help us build our QA & automation processes, procedures, and tools. You will be responsible for integration and regression testing our frontend, mobile, and backend applications, and will be an advocate for a modern DevOps-first automation-rich development and release pipeline. We are looking for someone who is passionate about creating great products and making the world amazing for homebuyers.


At Knock, we have fun, we move fast, we support and celebrate our fellow teammates, and we live by our POPSICLE values.

As a Software Engineer in Test you will: 

  • Lead and create robust test documentation including test plans, test cases, and test result analysis.
  • Review functional and design specifications to ensure full understanding of deliverables.
  • Build, run and maintain automated functional, integration and regression tests to help improve software quality.
  • Build and maintain tooling to facilitate testing in the CI/CD pipelines.
  • Design metrics for performance, reliability, stability and compatibility with other systems.
  • Work deeply with our in-house and field operations team to identify, document, and regression test issues as they occur in the wild
  • Collaborate closely and daily with the design, product, engineering teams and other key teams at Knock.

We’re looking for Knockstars who have: 

  • Must be U.S. based.
  • B.S. in Computer Science or equivalent experience.
  • Minimum of 5 years of experience as a software quality assurance engineer.
  • Experience in developing test strategies, test plans, test cases, and analyzing test results.
  • Experience in building automated functional, integration and regression tests.
  • Experience with testing automation frameworks.
  • Experience in building automated UI testing for both web and mobile.
  • Proven ability to translate functional requirements and use cases into working test plans and test cases.
  • A strong customer-first mindset and a data-driven approach to your work
  • Programming proficiency in HTML, JavaScript, and other scripted or interpreted languages.
  • Knowledge of SQL (MySQL or Postgres).
  • Proven success working remotely in prior positions and experience working with a distributed, national team

Bonus points for:

  • Team and/or technical leadership experience.
  • Development and test experience in Node.js and React Native.
  • Experience with native Android and iOS automated test frameworks.
  • Experience with Docker-based ecosystems and container orchestration systems such as Amazon ECS or Kubernetes.

What We Can Offer You:

  • An amazing opportunity to be an integral part of building the next multi-billion dollar consumer brand around the single largest purchase of our lives.
  • Talented, passionate and mission-driven peers disrupting the status quo.
  • Competitive cash, full medical, dental, vision benefits, 401k, flexible work schedule, unlimited vacation (2 weeks mandatory) and sick time.
  • Flexibility to live and work anywhere within the United States. As we are a distributed company and engineering team, we are open to any U.S. location for this role.

We have offices in New York, San Francisco, Atlanta, Charlotte, Raleigh, Dallas-Fort Worth, Phoenix, and Denver with more on the way. In fact, we are proud to be a distributed company with employees in 21 different states. This is an amazing opportunity to be an integral part of building a multi-billion dollar consumer brand in an industry that is long overdue for a new way of doing things. You will be working with a passionate, mission-driven team that is disrupting the status quo.


Knock is an Equal Opportunity Employer.


Please no recruitment firm or agency inquiries, you will not receive a reply from us.

Senior Data Scientist
python aws tensorflow pytorch scikit-learn senior Jan 17

XOi Technologies is changing the way field service companies capture data, create efficiencies, collaborate with their technicians, and drive additional revenue through the use of the XOi Vision platform. Our cloud-based mobile application is powered by a robust set of machine learning capabilities to drive behaviors and create a seamless experience for our users.

We are a group of talented and passionate engineers and data scientists working together to discover and provide valuable insights for our customers. We leverage state-of-the-art machine learning techniques to provide our users with these unique insights, best practices, and solutions to the challenges they face in their workplace. Problems and solutions typically center around aspects of the Vision platform such as image recognition, natural language processing, and content recommendation.

As a Senior Data Scientist, you will build machine learning products to help automate workflows and provide valuable assistance to our customers. You’ll have access to the right tools for the job, large amounts of quality data, and support from leadership that understands the full data science lifecycle. You’ll build models using technologies such as Python, TensorFlow, and Docker.
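
As a rough illustration of that stack, here is a minimal TensorFlow/Keras image classifier of the kind such work might start from; the input shape and class count are assumptions for the sketch, not XOi specifics.

```python
import tensorflow as tf


def build_model(num_classes: int = 10) -> tf.keras.Model:
    # Small convolutional classifier; a production model would be deeper
    # and trained on the platform's real image data.
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(128, 128, 3)),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model


model = build_model()
model.summary()
```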

Responsibilities:

  • Interpret and understand business needs/market opportunities, and translate those into production analytics.
  • Select appropriate technologies and algorithms for given use cases.
  • Work directly with product managers and engineering teams to tightly integrate new analytic capabilities.
  • Prepare reports, visualizations, and other documentation on the status, operation and maintenance of the analytics you create.
  • Stay current on relevant machine learning and data science practices, and apply those to existing problem sets.

Requirements: 

  • Excellent understanding of machine learning algorithms, processes, tools, and platforms including: CNN, RNN, NLP, Tensorflow, PyTorch, etc.
  • Proficient with the following (or comparable): Linux, Python, scikit-learn, NumPy, pandas, spaCy.
  • Applied experience with machine learning on large datasets/sparse data with structured and unstructured data.
  • Experience with deep learning techniques and their optimizations for efficient implementation.
  • Great communication skills and the ability to explain predictive analytics to non-technical audiences.
  • Bachelor’s in Math, Engineering, or Computer Science (or technical degree with commensurate industry experience).
  • 3+ years of relevant work experience in data science/machine learning.

Nice to Have:

  • AWS services such as Lambda, AppSync, S3, and DynamoDB
  • DevOps experience with continuous integration/continuous deployment.
  • Experience in software engineering best practices, principles, and code design concepts.
  • Speech-to-text or OCR expertise.

You Are Someone Who:  

  • Has a passion for code quality and craftsmanship.
  • Views your profession as your craft and continuously pursues excellence in your work.
  • Thrives in a fast-paced, high-growth startup environment.
  • Collaborates effectively across various teams, coordinating regularly to set and manage expectations.

You’ll experience:  

  • Being a key part of a fast-growing software company where you can make a difference.
  • Comprehensive insurance plans.
  • Monthly wellness allowance.
  • Flexible paid time off & paid volunteer time.
  • Learning & development.
  • Working in the historic and centrally located Marathon Village in Nashville, TN.
  • Participating in team outings, events, and general fun! 
  • Helping to change an industry by serving the men and women that make our world turn.
Senior Data Scientist / Backend Engineer
komoot  
aws data-science machine-learning kotlin python backend Jan 16

Millions of people experience real-life adventures with our apps. We help people all over the world discover the best hiking and biking routes, empowering our users to explore more of the great outdoors. And we’re good at it: Google and Apple have listed us as one of their Apps of the Year numerous times, and with more than 8.5 million users and 50,000 five-star reviews, komoot is on its way to becoming one of the most popular cycling and hiking apps. Join our fully remote team of 60+ people and change the way people explore!


To help us continue to grow, we are looking for an experienced data scientist dedicated to coding and building production-ready services. With over 8 million active users, komoot possesses a unique dataset of user-generated content, ranging from GPS data from tours, uploaded photos, and tips, to implicit and explicit user feedback. Using this data as well as various open data sources, you will drive product enhancements forward that will directly impact the user experience.
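
As a purely illustrative sketch (not komoot's actual system), a similar-tour recommender over simple feature vectors might look like this with scikit-learn; the tour features are invented.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.preprocessing import StandardScaler

# Hypothetical tour features: [distance_km, elevation_gain_m, avg_rating]
tours = np.array([
    [12.0, 340.0, 4.5],
    [45.0, 1200.0, 4.8],
    [8.0, 120.0, 4.1],
    [40.0, 1100.0, 4.7],
])

# Scale features so distance and elevation don't dominate the metric.
scaler = StandardScaler().fit(tours)
index = NearestNeighbors(n_neighbors=2).fit(scaler.transform(tours))

# Recommend tours similar to a 42 km ride with 1150 m of climbing.
query = scaler.transform([[42.0, 1150.0, 4.6]])
distances, indices = index.kneighbors(query)
print(indices)  # row indices of the most similar tours
```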

We believe that innovations based on data science will reinforce and extend our leadership in the outdoor market and your role will be decisive for komoot’s success.

What you will do

  • Work closely with our web and mobile developers, designers, copywriters and product managers
  • Discuss product improvements, technical possibilities and road maps
  • Investigate and evaluate data science approaches for product enhancements
  • Write code that is well structured, well tested and documented
  • Enhance existing components and APIs as well as write new services from scratch
  • Deploy and monitor your code in our AWS Cloud (you can count on the support of experienced backend engineers)

Why you will love it

  • You will be challenged in a wide range of data science tasks
  • You deal with a diverse set of data (user-generated content, analytics data and external data sources)
  • You go beyond prototyping and ship your code to production
  • You contribute to a product with a vision to inspire more people to go outdoors
  • You’ll work in a fast-paced startup with strongly motivated and talented co-workers
  • You’ll enjoy the freedom to organize yourself the way you want
  • We let you work from wherever you want, be it a beach, the mountains, your house, a co-working space of your choice, our HQ in Berlin/Potsdam, or anywhere else in a time zone between UTC-1 and UTC+3
  • You’ll travel together with our team to amazing outdoor places several times a year to exchange ideas, learnings and go for hikes and rides

You will be successful in this position if you

  • Have a passion for finding pragmatic and smart solutions to complex problems
  • Have 3+ years of industry experience in data science
  • Have 2+ years of experience in professional programming, preferably in Python or Java
  • Have experience with technologies like pandas, NumPy, Jupyter notebooks, seaborn, scikit-learn, PyTorch, and TensorFlow
  • Know your toolkit: git, ssh, bash, and docker
  • Have experience in AWS, infrastructure as code, and monitoring (a plus)
  • Have strong communication and team skills
  • Have a hands-on attitude and are highly self-driven

Sounds like you?

Then send us the following:

  • Your CV in English highlighting your most relevant experience
  • A write-up explaining who you are and why you are interested in working at komoot
  • Examples of your work (e.g. GitHub Repositories, PDFs, Slideshare, etc.)
  • Feel free to send us something that shows us a little more about what you’re interested in, be it your Twitter/Instagram account, a blog or something else
Software Engineer
python-3.x flask microservices data science machine learning saas Jan 14

Carbon Relay is a world-class team of software engineers, data scientists and devops experts focused on harnessing the power of machine learning to help organizations achieve the most with their Kubernetes-based applications. With our innovative optimization platform, we help boost application performance while keeping costs down.

We’re looking for a Software Engineer to work on the next generation of K8s optimization products that bridge the gap between data science, engineering and DevOps. You’ll be working closely with our engineering and data science teams, helping bring products from R&D into production and making our products scale efficiently. 
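
For illustration only, here is a minimal Flask endpoint in the spirit of such a SaaS microservice; the routes and payload are hypothetical, not Carbon Relay's API.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)


@app.route("/healthz")
def healthz():
    # Liveness probe endpoint, useful when the service runs on Kubernetes.
    return jsonify(status="ok")


@app.route("/recommendations", methods=["POST"])
def recommendations():
    # In a real service this would call into the optimization/ML layer.
    payload = request.get_json(force=True)
    return jsonify(app=payload.get("app"), suggested_replicas=3)


if __name__ == "__main__":
    app.run(port=8080)
```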

Responsibilities

  • Design and implement features as part of our SaaS-based microservices platform
  • Contribute to and enhance internal APIs and infrastructure
  • Work alongside our data science team to integrate machine learning into our products

Required qualifications

  • 1-3 years of software engineering experience
  • Experience with Python
  • Experience shipping and maintaining software products
  • Experience working with Git and GitHub

Preferred qualifications

  • Familiarity with Kubernetes and containerization
  • Experience with GCP/GKE
  • Experience developing SaaS applications / microservice architectures

Why join Carbon Relay:

  • Competitive salary
  • Health, dental, vision and life insurance
  • Unlimited vacation policy (and we do really take vacations)
  • Snacks, lunches and all the typical benefits you would expect from a well-funded, fun startup!

Data Privacy Analyst
Anonos  
project-management sql r data-science linux testing Jan 11

Updated 1/11/20

Data Privacy Analyst

Anonos is a fast-growing start-up in the data privacy software space looking for a Data Privacy Analyst who will report to our Chief Data Strategist. This is a remote/work-from-home position. If you want to be part of an exciting period in the company’s development and growth and think you meet most of the criteria below, we want to hear from you!

Please do not contact us if you are an agency or recruiter. We conduct our own in-house recruiting.

Why Anonos?

Our Co-Founders have been business partners for over 19 years and have an extremely successful track record. They previously built a company that was recognized as one of the fastest growing software companies on the Inc 500® for two years in a row and which was ultimately acquired by Nasdaq OMX. They have built a solid and cohesive team at Anonos which works efficiently, acts quickly, and values energy and focus.

We just received our 7th patent for our foundational technology (with another 60+ pending), and Gartner named us a “Cool Vendor” due to our innovation and uniqueness in the marketplace.

Privacy is one of the hottest technology segments in the market. We are launched and funded, have customers and an established partner channel, and are now ready for fast growth in 2020.

If you thrive working with bright co-workers and the latest technologies, like to contribute and be challenged at the same time, and are comfortable working remotely, we should be a great fit for you.

About Our Product and Solutions

Anonos’ patented BigPrivacy® technology enables compliant data innovation, analytics, use, sharing, combining, and re-linking by technically enforcing automated privacy and security controls in compliance with internal restrictions and external regulatory requirements.
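
As a generic illustration of pseudonymization (not Anonos' patented technique), the sketch below replaces a direct identifier with a keyed, non-reversible token, so records can still be joined on the token without exposing the raw value.

```python
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-managed-secret"  # assumption: fetched from a secrets manager


def pseudonymize(value: str) -> str:
    """Derive a stable, non-reversible token from an identifier (HMAC-SHA256)."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()


record = {"email": "jane@example.com", "purchase_total": 42.50}
safe_record = {**record, "email": pseudonymize(record["email"])}
print(safe_record)  # same shape, but the email is now a keyed token
```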

We Value Team Members Who Are:

  • Smart: You have outstanding intellectual ability, and a proven track record of quickly learning new skills, concepts, and technologies
  • Great Communicators: You are an excellent written and verbal communicator. We are looking for someone who can interact effectively with the C-suite, the newest software engineer, and everyone in between
  • Leaders: You can rally team members around a challenge
  • Tech Savvy: You understand how information technologies work – and work together. You have experience with data use-cases including analytics, data processing, or similar fields
  • Entrepreneurial: You are excited about creating new products at a startup that is re-defining the data privacy landscape
  • Strategic and Analytical: You are excited about the prospect of constantly learning about our customers, our product and the market and thinking about the implications for our business
  • Results-Oriented: You deliver, on-time. You anticipate obstacles and adjust when unexpected challenges arise

Responsibilities

This role will initially be focused on supporting potential customer Proofs of Concept/Proofs of Value/Pilots as part of our Sales Process. Successful candidates will have at least 5 years of professional experience and a math, science, or engineering degree, and will be able to demonstrate a combination of software technical acumen, project management, customer engagement, and data analytics experience.

Sales Demonstrations / Proofs of Concept / Proofs of Value 


  • Support product demonstrations for potential customers
  • Prepare synthetic data sets (schema definition, data generation, data wrangling, QA)
  • Support translation of client use cases into BigPrivacy configurations and data process flows
  • Support 3-hour technical demonstration sessions – step-by-step configuration of all BigPrivacy software to meet customer use-case requirements

Pilot Projects and New Client Implementations

  • Support customer onboarding
  • Scope and define requirements of the client IT environment against software requirements
  • Support client install and first line troubleshooting; coordinate problem resolution with the development team when necessary
  • Support distribution partner “on-site” project managers to ensure project and technical issues are addressed
  • Provide software application training 

Other Activities

  • Develop an extensive working knowledge of all Anonos products
  • Train sales partner project management and technical staff as needed
  • Participate in internal user-acceptance testing prior to new releases
  • Create and edit product and training documentation
  • Modify, update and improve existing demonstrations and create new ones

Desired Skills

Data Analytics/Data Engineering

  • Hands-on experience with data analytics – methods, use cases, challenges
  • Intermediate Excel (pivot tables, functions, data formatting, dates)
  • Data Wrangling, Feature Engineering
  • Pentaho Data Integration or other ETL tools a significant plus
  • Data Science skills a significant plus (R, basic ML models, statistical concepts)


IT/Development Tools

  • Basic familiarity with several of the following (all or most a significant plus):
  • Linux – desktop and basic command line, including vim (text editor), SSH
  • GitHub, ZenHub (or comparable)
  • SQL
  • Docker, Kubernetes
  • Hadoop, MapR, HDFS, Spark or other BigData tools
  • Pseudonymisation, Tokenization, Encryption


General Business

  • Experience with Data Privacy (GDPR, CCPA, HIPAA, etc.) a significant plus
  • Experience with Regulatory/Standards compliance of any kind (financial, ISO, healthcare, environmental, quality, nuclear, etc.) a significant plus
  • Ability to lead meetings and training webinars with customer mid-level technical, professional and management staff
  • (Technical/IT) Project Management experience
  • Ability to relate to customers and staff in a professional and courteous manner
  • Exceptional phone support and software/hardware troubleshooting skills
  • Superior verbal and written communication skills
  • Ability and desire to work 100% remote, but still highly collaborative; self-starter
  • Interest in working for early stage startup – risks, flexibility, adaptability

If this sounds like the right role and the right environment for you, we welcome your application.

Learn more about Anonos at www.anonos.com

Data Science Course Mentor
python sql hadoop data science machine learning Jan 08

Apply here


Data Science Course Mentor

  • Mentorship
  • Remote
  • Part time


Who We Are
At Thinkful, we believe that if schools put in even half the amount of effort that students do, the outcomes would be better for everyone. People would have a path to a fulfilling future, instead of being buried under debt. Employers would benefit from a workforce trained for today. And education could finally offer students a return on their investment of both money and time. 

We put in outlandish amounts of effort to create an education that offers our students a guaranteed return on their investment. We partner with employers to create a world-class curriculum built for today. We go to the ends of the earth to find mentors who are the best of the best. We invest more in career services than any of our peers. We work hard to be on the ground in the cities our students are in. Simply put, no other school works as hard for its students as we do. 

The Position
Students enroll in Thinkful courses to gain the valuable technical and professional skills needed to take them from curious learners to employed technologists. As a Course Mentor, you will support students by acting as an advisor, counselor, and support system as they complete the course and land their first industry job. To achieve this, you will engage with students using the below range of approaches, known as Engagement Formats. Course Mentors are expected to provide support across all formats when needed. 

  • Mentor Sessions: Meet with students 1-on-1 in online video sessions to provide technical and professional support as the student progresses through the curriculum.
  • Group Sessions: Host online video sessions on topics of your expertise (in alignment with curriculum offerings) for groups of students seeking live support between mentor sessions. 
  • Grading: Review student checkpoint submissions and deliver written feedback, including analysis of projects and portfolios. 
  • Technical Coaching: Provide on-demand support for technical questions and guidance requests that come to the Technical Coaching team through text and video in a timely manner. This team also provides the TA support for immersive programs. 
  • Assessments & Mock Interviews: Conduct 1-on-1 mock interviews and assessments via video calls and provide written feedback to students based on assessment rubrics. 

In addition to working directly with students, Course Mentors are expected to maintain an environment of feedback with the Educator Experience team, and to stay on top of important updates via meetings, email, and Slack. Ideal candidates for this team are highly coachable, display genuine student advocacy, and are comfortable working in a complex, rapidly changing environment.

Requirements
  • Minimum of 3 years professional experience as a Data Scientist or demonstrated expertise with data visualizations and machine learning at an industry level
  • Proficiency in SQL, Python
  • Professional experience with Hadoop and Spark a plus
  • Excellent written and verbal communication
  • High level of empathy and people management skills
  • Must have a reliable, high-speed Internet connection

Benefits
  • This is a part-time role (10-25 hours a week)
  • Fully remote position, with the option to work evenings and weekends in person in 22 US cities
  • Community of 500+ like-minded Educators looking to impact others and keep their skills sharp
  • Full access to all of Thinkful Courses for your continued learning
  • Grow as an Educator

Apply
If you are interested in this position please provide your resume and a cover letter explaining your interest in the role.

Thinkful can only hire candidates who are eligible to work in the United States.

We stand against any form of workplace harassment based on race, color, religion, sexual orientation, gender identity or expression, national origin, age, disability, or veteran status. Thinkful provides equal employment opportunities to all employees and applicants. If you're talented and driven, please apply.

At this time, we are unable to consider applicants from the following states: Alaska, Delaware, Idaho, New Mexico, North Dakota, South Carolina, South Dakota, West Virginia, and Wyoming

Apply here
Data Scientist
python sql spacy powerbi github data science Jan 07

Position Overview:

Our tech team is looking for a data scientist with excellent communication skills and demonstrated experience writing idiomatic Python code. You’re comfortable fielding a question from a non-technical stakeholder about our dataset and then putting together a data visualization with the answer. You’re also ready to troubleshoot a bug in one of our existing ETL scripts and make a pull request with a detailed write-up of the fix. We use Google BigQuery, PowerBI, spaCy, pandas, Airflow, and Docker.
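
A sketch of that day-to-day workflow, assuming a hypothetical BigQuery table; it needs the google-cloud-bigquery, pandas, and spaCy packages plus the en_core_web_sm model.

```python
import spacy
from google.cloud import bigquery

client = bigquery.Client()  # uses your default GCP credentials
df = client.query(
    "SELECT comment_text FROM `project.dataset.comments` LIMIT 100"  # hypothetical table
).to_dataframe()

nlp = spacy.load("en_core_web_sm")

# Pull named entities out of free text for a quick stakeholder answer.
df["entities"] = df["comment_text"].apply(
    lambda text: [(ent.text, ent.label_) for ent in nlp(text).ents]
)
print(df.head())
```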

The right candidate has experience with the Python data science stack as well as one or more BI tools such as Tableau or PowerBI, and is able to juggle competing priorities with finesse. We work in a fast-paced, flexible start-up environment, and we welcome your adaptability, curiosity, passion, grit, and creativity as you contribute to our cutting-edge research of this growing, fascinating industry.

Key Responsibilities:

  • Query and transform data with Standard SQL and pandas
  • Build BI reports to answer questions of our data
  • Work with our data engineering team to munge large datasets using our existing data pipelines, feeding our BI reports

Qualifications & Skills:

REQUIRED:

  • 1-3 years of experience working full-time with Python for data science; we use pandas, scikit-learn, and numpy
  • Intermediate-to-expert level SQL experience; we use Standard SQL
  • Experience with one or more natural language processing frameworks; we use spaCy.
  • Excellent communication skills and demonstrated ability to collaborate with non-technical stakeholders to create compelling answers to tough data questions
  • Intermediate-to-expert level skills with one or more interactive business intelligence tools like PowerBI or Tableau

PREFERRED:

  • Experience with CI/CD tools like CircleCI; we use GitHub Actions
  • Experience with Docker
  • Experience with Airflow

BENEFITS:

  • Choose your own laptop
  • Health Insurance
  • 401K
Data Engineer
python sql google-bigquery pandas airflow data science Jan 06

Position Overview:

The ideal candidate is an experienced data engineer. You will help us develop and maintain our data pipelines, built with Python, Standard SQL, pandas, and Airflow within Google Cloud Platform. We are in a transitional phase of refactoring our legacy Python data transformation scripts into iterable Airflow DAGs and developing CI/CD processes around these data transformations. If that sounds exciting to you, you’ll love this job. You will be expected to build scalable data ingress and egress pipelines across data storage products, deploy new ETL pipelines, and diagnose, troubleshoot, and improve existing data architecture. We work in a fast-paced, flexible start-up environment, and we welcome your adaptability, curiosity, passion, grit, and creativity as you contribute to our cutting-edge research of this growing, fascinating industry. 
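
A minimal sketch of the kind of Airflow DAG such a refactor targets; the task and file paths are hypothetical, and the import path assumes Airflow 2.x (1.x used airflow.operators.python_operator).

```python
from datetime import datetime

import pandas as pd
from airflow import DAG
from airflow.operators.python import PythonOperator


def transform():
    # Stand-in for a legacy transformation script, now an idempotent task.
    df = pd.read_csv("/tmp/raw.csv")
    df.dropna().to_csv("/tmp/clean.csv", index=False)


with DAG(
    dag_id="legacy_transform_refactor",
    start_date=datetime(2020, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="transform", python_callable=transform)
```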

Key Responsibilities:

  • Build and maintain ETL processes with our stack: Airflow, Standard SQL, pandas, spaCy, and Google Cloud. 
  • Write efficient, scalable code to munge, clean, and derive intelligence from our data

Qualifications & Skills: 

REQUIRED:

  • 1-3 years experience in a data-oriented Python role, including use of:
    • Google Cloud Platform (GCE, GBQ, Cloud Composer, GKE)
    • Airflow
    • CI/CD tools like GitHub Actions or CircleCI
    • Docker
  • Fluency in the core tenets of the Python data science stack: SQL, pandas, scikit-learn, etc. 
  • Familiarity with modern NLP systems and processes, ideally spaCy

PREFERRED:

  • Demonstrated ability to collaborate effectively with non-technical stakeholders
  • Experience scaling data processes with Kubernetes 
  • Experience with survey and/or social media data
  • Experience preparing data for one or more interactive data visualization tools like PowerBI or Tableau

BENEFITS:

  • Choose your own laptop
  • Health Insurance
  • 401K
Senior Fullstack Software Engineer
senior javascript data science machine learning frontend testing Jan 06
About Labelbox

Labelbox is building software infrastructure for industrial data science teams to do data labeling for the training of neural networks. When we build software, we take for granted the existence of collaborative tools to write and debug code. The machine learning workflow has no standard tooling for labeling data, storing it, debugging models and then continually improving model accuracy. Enter Labelbox. Labelbox's vision is to become the default software for data scientists to manage data and train neural networks in the same way that GitHub or text editors are defaults for software engineers.

Current Labelbox customers include American Family Insurance, Lytx, Airbus, Genius Sports, Keeptruckin and more. Labelbox is venture backed by Google, Kleiner Perkins and First Round Capital and has been featured in Tech Crunch, Web Summit and Forbes.

Responsibilities

  • Strong understanding of Javascript with an interest in using Typescript
  • Experience managing/scaling SQL databases, orchestrating migrations, and disaster recovery
  • Experience working with Redux and architecting large single page applications
  • Experience and interest in frontend testing
  • Optimizing data models and database configurations for both ease-of-use and performant response times
  • Building new features and resolvers in our GraphQL API with Node.JS

Follow-on Responsibilities

  • Experience with SQL databases
  • Experience optimizing web traffic
  • Experience with RabbitMQ (or other message broker) and Redis
  • Experience constructing and monitoring ETL pipelines
  • Experience with Logstash / Elasticsearch
  • Familiarity with Kubernetes and Docker

Requirements

  • 4+ years of experience building data rich frontend web applications
  • A bachelor’s degree (or equivalent) in computer science or a related field.

We believe that AI has the power to transform every aspect of our lives -- from healthcare to agriculture. The exponential impact of artificial intelligence will mean mammograms can happen quickly and cheaply irrespective of the limited number of radiologists there are in the world, and growers will know the instant that disease hits their farm without even being there.

At Labelbox, we’re building a platform to accelerate the development of this future. Rather than requiring companies to create their own expensive and incomplete homegrown tools, we’ve created a training data platform that acts as a central hub for humans to interface with AI. When humans have better ways to input and manage data, machines have better ways to learn.

Perks & Benefits:

  • Medical, Dental & Vision coverage
  • Flexible vacation policy
  • Dog friendly office
  • Daily catered lunch & snacks
  • Great office location in the Mission district, beautiful office & private outdoor patio with grill
R Engineer
r cpp rcpp c data science cloud Jan 04

Description

We are looking for an R developer to build and maintain our R interface to the TileDB array storage engine and hosted cloud service. R is a very popular programming language used by numerous developers in the Bio and Finance communities, among many others. The TileDB core library is built in C++ for supreme performance, and we built an R API so that it can be used by the R community. We are looking for a person to improve our R API and expand it with computational capabilities (e.g., integration with dplyr) and domain specific software (e.g., Bioconductor).

As an R Engineer, you will be responsible for

  • Leading the development of TileDB-R (TileDB R API)
  • Building out features to better integrate TileDB-R with commonly used R data science libraries
  • Troubleshooting and fixing bugs reported by users
  • Building and developing use cases around using TileDB in the R ecosystem

Location

Our headquarters are in Cambridge, MA, USA and we have a subsidiary in Athens, Greece. However, you will have the flexibility to work remotely as long as your residence is in the USA or Greece. US candidates must be US citizens, whereas Greek candidates must be Greek or EU citizens.

Expectations

In your first 30 days, you will familiarize yourself with the core TileDB storage engine and the existing TileDB-R API. For your next 30 days, you will start contributing to TileDB-R, adding missing core TileDB functions and improving the performance of the existing ones. After 60 days, you will be fully integrated in our team. You will start researching R use cases and exploring further integrations with popular R packages.

Requirements

  • Experience developing and maintaining R libraries
  • Experience using a low-level R API for a C library
  • Experience using Rcpp / C++ for R extensions
  • Familiarity with S3 / S4 OO frameworks
  • Familiarity with R packaging, distribution with CRAN
  • Experience extending / building upon data.frame / data.table APIs
  • Domain knowledge in using R within the fields of finance or bioinformatics

Benefits

  • Competitive salary and stock options
  • 100% medical and dental insurance coverage (for you and your dependents!)
  • Paid parental leave
  • Paid time off (vacation, sick & public holidays)
  • Flexible time off & flexible hours
  • Flexibility to work remotely (anywhere in the US or Greece)

TileDB, Inc. is proud to be an Equal Opportunity Employer building a diverse and inclusive team.

Data Engineer: AI/ML
pytorch python machine-learning fast-ai pipeline ruby Dec 26 2019

Roadtrippers Place Lab powers the geo-data for Roadtrippers’ consumer web and mobile applications and the underlying B2B services. Place Lab is looking for a detail-oriented problem solver to join the team as a Data Engineer focusing on all things geo-data. This engineer will share the responsibility for data quality and fidelity with our engineering, data science, and data quality teams by developing better ways to evaluate, audit, augment, and ingest data about places.
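
As an illustrative sketch of the evaluate-and-audit side of this work, here are basic quality checks and deduplication on a toy places dataset with pandas; the data is invented.

```python
import pandas as pd

places = pd.DataFrame({
    "name": ["Diner A", "Diner A", "Lookout B"],
    "lat": [39.10, 39.10, 104.99],   # the third latitude is out of range
    "lng": [-84.51, -84.51, -39.74],
})

# Audit: flag coordinates outside valid ranges before ingestion.
invalid = places[~places["lat"].between(-90, 90) | ~places["lng"].between(-180, 180)]
print("invalid rows:\n", invalid)

# Deduplicate exact name + coordinate matches.
deduped = places.drop_duplicates(subset=["name", "lat", "lng"])
print("deduped:\n", deduped)
```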

Responsibilities

  • Work with the AI/ML research team in developing new models and pipelines to derive insights and improve our data quality
  • Bridge AI/ML research and production by assisting in building production pipelines and improving the efficiency of the transition from development
  • Own production AI/ML pipelines, including revisions, optimizations, and root-cause analysis of anomalies
  • Assist in planning and implementation of data ingestion, sourcing, and automation projects
  • Communicate with Engineering and Product teams about requirements and opportunities as it relates to new data and schema updates
  • Contribute to application development for data initiatives 
  • Identify, participate in, and implement initiatives for continuous improvement of data ingestion, quality, and processes.
  • Manually manipulate data when necessary, while learning and applying these needs to scale future projects

Qualifications

  • Experience with Data Science/ML/AI
  • Experience working with geospatial data is a huge plus
  • Development experience with Python
  • Knowledge of SQL (ideally Postgres), Elasticsearch and schemaless databases
  • Experience with ETL and implementing Data Pipeline architecture 
  • AWS and SageMaker experience is particularly valuable 
  • Big data experience is ideal 
  • Understanding of web application architecture, Ruby and Ruby on Rails experience is a plus
  • A "do what it takes" attitude and a passion for great user experience
  • Strong communication skills and experience working with highly technical teams
  • Passion for identifying and solving problems
  • Comfort in a fast-paced, highly-dynamic environment with multiple stakeholders

We strongly believe in the value of growing a diverse team and encourage people of all backgrounds, genders, ethnicities, abilities, and sexual orientations to apply.

Senior Backend Engineer - Content and Metadata
Scribd  
backend senior cs data science Dec 25 2019
Scribd
/skribbed/ (n).
1. a tech company changing the way the world reads
2. a membership that gives users access to the world’s largest online library of books, audiobooks, sheet music, news, and magazines

We value trying new things, craftsmanship, being an open book, and the people that make our team great.
Join us and build something meaningful.

Our team
The Content Engineering team is broadly responsible for catalog management and content metadata at Scribd. Supplying supplementary data to ebook and audiobook pages? That's us. Ensuring that all user-uploaded documents are useful, accessible, and legally available? That's us. Creating pipelines that build clean and well-structured data for Search, Recommendations, and Data Science to build amazing features from? That's us. Analyzing user reading activity and translating them into publisher payouts? That's us. We're a spoke within Scribd, connecting many engineering, product, and publisher-focused teams through data.

The majority of the team is based in San Francisco but there's a strong and growing remote contingent as well (much like Scribd overall). We use tools that emphasize asynchronous communication (Slack, Gitlab, Jira, Google Docs) and are ready and able to jump on a video call when text doesn't cut it. Regardless of the medium, solid communication skills are a must. We operate with autonomy (developers closest to the code will make the most well-informed decisions) while holding ourselves and each other accountable for using good judgement when faced with each day's unique challenges.

Our technical work is divided between our user-facing Rails application and our offline data warehouse (where much of our processing is done on top of Spark). Many of the systems we're responsible for - document spam detection, document copyright detection, topic extraction and classification, sitemap generation, and translating user activity into publisher payouts, just to name a few - span both environments, so engineers regularly work within both. Though the tech stacks differ between environments, the engineering work in both is the same - create data pipelines to ingest, process, clean, and lay out the metadata coming from publishers and other external sources, as well as create new metadata from our vast content base.
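
As a rough sketch of the shape of such a pipeline, here is a PySpark example; the paths and columns are hypothetical, not Scribd's actual schema.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("metadata_cleanup").getOrCreate()

# Hypothetical publisher feed dropped into object storage.
raw = spark.read.json("s3://example-bucket/publisher_feeds/")

clean = (
    raw
    .filter(F.col("isbn").isNotNull())      # drop records missing the join key
    .withColumn("title", F.trim(F.col("title")))
    .dropDuplicates(["isbn"])
)

clean.write.mode("overwrite").parquet("s3://example-bucket/catalog/books/")
```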

The role
As a Senior Backend Engineer, you've probably seen quite a bit in your career, and we want to leverage all of it. Software development will be your primary function, but we'll expect you to contribute in a number of ways, including advising on technical design, reviewing code, participating in interviews, and mentoring less experienced engineers.

When you are doing software development, you'll be doing more than just coding a ticket handed to you. You'll own the implementation, delivery, and operation of systems, end-to-end. You'll consider testability, upgradeability, scale, and observability throughout the development process. You'll regularly have one or two engineers following your lead, whose output you will be responsible for. On Content Engineering, a Senior Backend Engineer is a leader.

If you've been a senior engineer for a while and have been more focused on architectural concerns, cross-team initiatives, and other strategic endeavors, we have a place for you as well. Just know that this is a code-heavy role.

Office or remote?
We have a wonderful new office in San Francisco, as well as smaller offices in Toronto and New York. If you live close to one of those you'll find great people and a nice work environment.

If you don't live near one of those offices, we'd still love to have you! Scribd is expanding its remote workforce with the goal of finding the best employees regardless of location. Being a remote employee means providing your own productive work environment, and everything else is pretty similar to being an office employee. We expect remote employees to have solid communication skills, good judgement, and demonstrable personal responsibility. We also expect the same from our in-office employees, so you'll be in good company.

Nitpicky requirements
Backend Engineers on Content Engineering typically have:
• 8+ years of experience as a professional software engineer
• Experience or a strong interest in backend systems and data pipelines
• Experience working with systems at Scribd's current scale
• Bachelor’s in CS or equivalent professional experience

We present these in order to detail the picture of what we're looking for. Of course, every engineer brings something unique to the table, and we like nothing more than finding a diamond in the rough.

Required Questions
• What’s your favorite book that you’ve read recently?
• In one sentence, why does this role appeal to you?

Why we work here
• We are located in downtown San Francisco, within walking distance of Caltrain and BART
• Health benefits: 100% employer covered Medical/Dental/Vision for regular, full-time employees
• Generous PTO policy plus we close for the last week in December
• 401k matching
• Paid Parental leave
• Monthly wellness budget and fully paid membership to our onsite fitness facility
• Professional development: generous annual budget for our employees to attend conferences, classes, and other events
• Three meals a day, catered from local restaurants
• Apple laptops and any equipment you want to customize your work station
• Free Scribd membership and a yearly reading stipend!
• Company events that include monthly happy hours and offsites (past events include Santa Cruz, bowling, arcades, geocaching, ropes courses, etc.)

In the meantime, check out our office and meet some of the team at https://www.scribd.com/about

Scribd values diversity, and we make all hiring and employment decisions based on merit, qualifications, competence, talent, and contribution, not who you are by choice or circumstance. We value the people who make Scribd a great place to work and strive to create an environment where your work is supported and personhood respected.
Backend Engineer - Content and Metadata
Scribd  
backend cs data science Dec 25 2019
Scribd
/skribbed/ (n).
1. a tech company changing the way the world reads
2. a membership that gives users access to the world’s largest online library of books, audiobooks, sheet music, news, and magazines

We value trying new things, craftsmanship, being an open book, and the people that make our team great.
Join us and build something meaningful.

Our team
The Content Engineering team is broadly responsible for catalog management and content metadata at Scribd. Supplying supplementary data to ebook and audiobook pages? That's us. Ensuring that all user-uploaded documents are useful, accessible, and legally available? That's us. Creating pipelines that build clean and well-structured data for Search, Recommendations, and Data Science to build amazing features from? That's us. Analyzing user reading activity and translating them into publisher payouts? That's us. We're a spoke within Scribd, connecting many engineering, product, and publisher-focused teams through data.

The majority of the team is based in San Francisco but there's a strong and growing remote contingent as well (much like Scribd overall). We use tools that emphasize asynchronous communication (Slack, Gitlab, Jira, Google Docs) and are ready and able to jump on a video call when text doesn't cut it. Regardless of the medium, solid communication skills are a must. We operate with autonomy (developers closest to the code will make the most well-informed decisions) while holding ourselves and each other accountable for using good judgement when faced with each day's unique challenges.

Our technical work is divided between our user-facing Rails application and our offline data warehouse (where much of our processing is done on top of Spark). Many of the systems we're responsible for - document spam detection, document copyright detection, topic extraction and classification, sitemap generation, and translating user activity into publisher payouts, just to name a few - span both environments, so engineers regularly work within both. Though the tech stacks differ between environments, the engineering work in both is the same - create data pipelines to ingest, process, clean, and lay out the metadata coming from publishers and other external sources, as well as create new metadata from our vast content base.

The role
A Backend Engineer on Content Engineering can take many forms:

You may be a relatively new college or boot camp graduate, looking for your first job where you can learn the ropes from a team of experienced professionals. You have a place here. 

You may have a few years of experience and are looking for your next challenge. You have a place here. 

You may have built out a few systems alongside senior engineers and are ready to take on ownership of feature delivery. You have a place here. 

We look for engineers that aspire to learn and grow, that thrive on constructive feedback, and know they’ll be ready to step up when the opportunity presents itself. 

Office or remote?
We have a wonderful new office in San Francisco, as well as smaller offices in Toronto and New York. If you live close to one of those you'll find great people and a nice work environment.

If you don't live near one of those offices, we'd still love to have you! Scribd is expanding its remote workforce with the goal of finding the best employees regardless of location. Being a remote employee means providing your own productive work environment, and everything else is pretty similar to being an office employee. We expect remote employees to have solid communication skills, good judgement, and demonstrable personal responsibility. We also expect the same from our in-office employees, so you'll be in good company.

Nitpicky requirements
Backend Engineers on Content Engineering typically have:
• 0-6+ years of experience as a professional software engineer
• Experience or a strong interest in backend systems and data pipelines
• Bachelor’s in CS or equivalent professional experience

We present these in order to detail the picture of what we're looking for. Of course, every engineer brings something unique to the table, and we like nothing more than finding a diamond in the rough.

Required Questions
• What’s your favorite book that you’ve read recently?
• In one sentence, why does this role appeal to you?

Why we work here
• We are located in downtown San Francisco, within walking distance of Caltrain and BART
• Health benefits: 100% employer covered Medical/Dental/Vision for regular, full-time employees
• Generous PTO policy plus we close for the last week in December
• 401k matching
• Paid Parental leave
• Monthly wellness budget and fully paid membership to our onsite fitness facility
• Professional development: generous annual budget for our employees to attend conferences, classes, and other events
• Three meals a day, catered from local restaurants
• Apple laptops and any equipment you want to customize your work station
• Free Scribd membership and a yearly reading stipend!
• Company events that include monthly happy hours and offsites (past events include Santa Cruz, bowling, arcades, geocaching, ropes courses, etc.)

In the meantime, check out our office and meet some of the team at https://www.scribd.com/about

Scribd values diversity, and we make all hiring and employment decisions based on merit, qualifications, competence, talent, and contribution, not who you are by choice or circumstance. We value the people who make Scribd a great place to work and strive to create an environment where your work is supported and personhood respected.
Senior Big Data Software Engineer
scala apache-spark python java hadoop big data Dec 23 2019
About you:
  • Care deeply about democratizing access to data.  
  • Passionate about big data and are excited by seemingly-impossible challenges.
  • At least 80% of people who have worked with you put you in the top 10% of the people they have worked with.
  • You think life is too short to work with B-players.
  • You are entrepreneurial and want to work in a super fast-paced environment where the solutions aren’t already predefined.
  • You live in the U.S. or Canada and are comfortable working remotely.
About SafeGraph: 

  • SafeGraph is a B2B data company that sells to data scientists and machine learning engineers. 
  • SafeGraph's goal is to be the place for all information about physical Places.
  • SafeGraph currently has 20+ people and has raised a $20 million Series A. Our CEO was previously the founder and CEO of LiveRamp (NYSE: RAMP).
  • The company is growing fast, with over $10M ARR, and is currently profitable. 
  • The company is based in San Francisco, but about 50% of the team is remote (all in the U.S.). We get the entire company together in the same place every month.

About the role:
  • Core software engineer.
  • Reporting to SafeGraph's CTO.
  • Work as an individual contributor.  
  • Opportunities for future leadership.

Requirements:
  • You have at least 6 years of relevant work experience.
  • Proficiency writing production-quality code, preferably in Scala, Java, or Python.
  • Strong familiarity with map/reduce programming models (see the toy sketch below this list).
  • Deep understanding of all things “database” - schema design, optimization, scalability, etc.
  • You are authorized to work in the U.S.
  • Excellent communication skills.
  • You are amazingly entrepreneurial.
  • You want to help build a massive company. 
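
A toy illustration of the map/reduce model referenced above: word counting with an explicit map phase (per-document partial counts) and a reduce phase (merging the partials), in plain Python.

```python
from collections import Counter
from functools import reduce

documents = ["places data", "big places data", "data"]

# Map: each document becomes a Counter of its own words.
mapped = [Counter(doc.split()) for doc in documents]

# Reduce: merge the partial counts into one global result.
totals = reduce(lambda a, b: a + b, mapped, Counter())
print(totals)  # Counter({'data': 3, 'places': 2, 'big': 1})
```
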
Nice to haves:
  • Experience using Apache Spark to solve production-scale problems.
  • Experience with AWS.
  • Experience with building ML models from the ground up.
  • Experience working with huge data sets.
  • Python, Database and Systems Design, Scala, Data Science, Apache Spark, Hadoop MapReduce.
Marketing Operations Manager
manager data science machine learning computer vision healthcare Dec 18 2019
Labelbox is at the heart of the AI-powered computer vision revolution. Almost every decision a human makes is visual and these decisions power every industry, from healthcare to agriculture. With AI, computers can now see like humans and can make decisions in the same way. With this newfound capability, our society will build self-driving cars, accessible healthcare, automated farms that can support our global population, and much more.

The bottleneck to achieving these things with AI is the training data sets. We are building Labelbox to solve this bottleneck for data science and machine learning teams.

Current Labelbox customers include American Family Insurance, Lytx, Airbus, Genius Sports, Keeptruckin and more. Labelbox is venture backed by Google, Kleiner Perkins and First Round Capital and has been featured in Tech Crunch, Web Summit and Forbes.

Labelbox is hiring a Marketing Operations Manager to join our growing Marketing team. In this role, you will be responsible for managing our marketing and sales operations infrastructure.

As an early marketing hire on our team, you will help:

  • Build-out and manage a marketing operations stack through best practices (we use Hubspot as our CRM).
  • Manage our CRM data quality.
  • Work closely with marketing and sales team members to build workflows and processes in our CRM (and peripheral tools) that mirror our lead generation and sales processes.
  • Configure and build dashboards with key metrics from sales and marketing teams.
  • Implement, and continuously improve, attribution and lead-scoring models.
  • Manage the inflow of inbound leads, ensuring leads are properly enriched and populated with sufficient data to empower SDRs to establish outreach with minimal friction.
  • Manage integrations between web, product, ad-platform and sales analytics tools.
  • Conduct routine reporting and provide ad-hoc insights to marketing, sales and management teams.
  • Monitor the health of our marketing and sales funnels and surface insights and suggestions as required.

The ideal candidate will have:

  • An undergraduate degree with an emphasis in Marketing, Business or a related field
  • 3+ Years Experience in a technical marketing or sales function
  • Familiarity with the following products and languages: Hubspot (or Salesforce), Marketing Automation Platforms (MAPs), Google Data Studio (or a similar visualization product), Zapier, Microsoft Excel, SQL, Google Analytics, Google Tag Manager, Facebook Ads Manager or Google Ads, Sales outreach tools such as Apollo.io
  • Exposure to both startup and enterprise marketing stacks
  • Familiarity with the B2B Marketing and Sales process

  • Bonus points if you have experience working with data warehouses, Customer Data Platforms and DMPs.

Expertise in each and every function is not required; we are looking for candidates who exhibit full-stack marketing knowledge and the aptitude to learn and develop skills.


Labelbox is an equal opportunity employer.

No sponsorship is available for this position. Valid US Work Authorization is required.


Go Senior Software Engineer
golang senior c data science linux cloud Dec 10 2019
Our Senior Software Engineers focus on the design, development and overall lifecycle of our software products. You’ll join a team of high-performing engineers who strive to improve Circonus’ monitoring and analytics platform. As a senior staff member you’ll be expected to operate independently, though your day-to-day will often involve working with a small team to create, support, and deploy production applications.
In particular, we’re seeking someone to help lead an effort to rewrite an existing monolithic web application towards a Go microservices architecture. Prior experience with such a rewrite is strongly preferred - please mention it directly within your cover letter or resume.

Responsibilities

  • Work in the office or remotely, or both (but not at the same time)
  • Design, build, maintain, and document our APIs and services
  • Support our internal shift from a monolithic architecture over to a micro-service oriented model
  • Design and implement software in Go, Perl, C or whatever language is appropriate for the task
  • Complete unit, functional, and performance testing of produced deliverables
  • Work alongside the Product team to ensure high quality deliverables
  • Conduct peer reviews during design, coding and testing
  • Coach and mentor team members

Qualifications

  • 6+ years experience building, testing, and deploying high quality, highly reliable, scalable application servers and APIs in a team environment
  • Strong experience in server-side development with Go and commonly used libraries; this is mandatory, please do not apply if you don't have real world Go experience
  • Passion for quality-oriented software development best practices including unit and functional testing, automation, continuous integration, and low-dependency architectures
  • Comfortable working with git and GitHub for version control, including opening/reviewing pull requests and distributed branching models
  • Experience working with project management software such as JIRA
  • Excellent analytical, problem solving, and debugging skills
  • Excellent written and oral communication skills

Preferred Experience

  • Proficiency in Perl; our existing web application is wholly in Perl
  • Experience with Linux server administration
  • Experience with Docker
  • Experience re-architecting and/or re-implementing a legacy enterprise application
  • Experience and/or senior level technical knowledge of monitoring and analytics solutions
  • Experience working with cloud service providers such as AWS and Azure; it’s helpful to have worked directly on software that relied on these public cloud providers to have a good assessment of monitoring requirements in these environments

Circonus offers a powerful machine data intelligence platform to handle the world's most demanding use cases. From mission-critical IT infrastructure to data-intensive IoT applications, Circonus works with any tech and at any scale. Circonus uses advanced data science and patented technology to ingest and analyze machine data to deliver unmatched clarity, insights, and performance. From real-time alerts and fault detection to ML-based predictive analytics, Circonus helps companies optimize operations and deliver exceptional user experiences with confidence.

We enjoy a global reach, but our customers primarily cluster on the East Coast, California, and to a lesser degree, Europe. Our success stems from our industry-leading offering and our obsession with customer satisfaction.

Culturally, we operate like a startup. Small, agile teams with quick decisions and short, iterative cycle times. We relish our core values of respect, integrity, value and growth, among others.

All of our positions include a discretionary PTO policy, health insurance, gym reimbursement, a generous 401(k), the opportunity for a bonus and more.
Senior Data Engineer - Spark expertise
scala postgresql senior data science docker aws Dec 05 2019

Position Summary

The Senior Data Analytics Engineer will build meaningful analytics that inform companies of security risk.  You will be working closely with our Data Science team, implementing algorithms and managing the analytic pipeline. We have over 1 PB of data, so the ideal candidate will have experience processing and querying large amounts of data.  

This role requires senior-level experience in Spark, SQL, and Scala. Our interview process will include live coding using these technologies!
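
The posting asks for Scala, but the shape of the pipeline work reads much the same sketched in PySpark; paths and columns here are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("risk_signals").getOrCreate()

# Hypothetical raw scan events landed in object storage.
events = spark.read.parquet("s3://example-bucket/scan_events/")

# Aggregate raw events into per-company risk signals.
signals = (
    events
    .groupBy("company_id")
    .agg(
        F.count("*").alias("event_count"),
        F.avg("severity").alias("avg_severity"),
    )
)

signals.write.mode("overwrite").parquet("s3://example-bucket/risk_signals/")
```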

Responsibilities

  • Manage the analytic pipeline using Spark, Hadoop, etc. 
  • Leverage cutting-edge technologies to support new and existing services and processes.
  • Quickly and efficiently design and implement in an agile environment
  • Work with other team members to implement consistent architecture
  • Drive projects through all stages of development
  • Actively share knowledge and responsibility with other team members and teams
  • Improve the effective output of the engineering team by managing quality and identifying inconsistencies. 

Skills and Experience:

  • Bachelor's degree (CS, EE, or Math preferred) or equivalent work experience, as well as interest in a fast-paced, complex environment.
  • 5+ years of experience in a commercial environment, Scala preferred 
  • Expert in Spark, experience with the Hadoop ecosystem and similar frameworks
  • Expert in SQL
  • Familiarity with various tools such as AWS and Docker and an instinct for automation
  • Strong understanding of Software Architecture principles and patterns.
  • Experience working with 3rd party software and libraries, including open source
  • Experience with Postgres

Traits:

  • Quick-thinker who takes ownership and pride in their work
  • A commitment and drive for excellence and continual improvement 
  • A strong sense of adventure, excitement and enthusiasm.
  • Excellent systems analytical, problem solving and interpersonal skills

Interview Process:

  • Initial Conversation with a SecurityScorecard Talent team to learn more about your experience and career objectives
  • Technical Interview with 1-2 data engineers. This will include live coding in SQL, Spark, and Scala.
  • Coding Exercise - take home exercise
  • Final Interview: Meet 1-2 engineering leaders
Student Advisor / Success Manager
data science Dec 04 2019
The Company 
Springboard is redefining professional education for the 21st century through courses in cutting-edge fields like data science and design. Our self-paced, online offerings give anyone, anywhere access to world-class learning resources. What’s more, we offer high-touch, human support throughout the student lifecycle: industry-expert mentors, career coaches, as well as dedicated student success managers. Through this hybrid approach, we’ve helped thousands of learners revamp their careers and, by extension, their lives. 

This position will support our team in San Francisco - while the opportunity is remote, you'll be required to be available during standard working hours, Monday - Friday, PST.

The Role
We’re seeking an empathetic, problem-solving, “people person” to serve as a Student Success Manager. You’ll be responsible for making sure students have an excellent onboarding, are engaged with and understand our learning experience, and stay motivated all the way through to graduation. In everything you do, you’ll directly fulfill our educational mission, and serve as a vital advocate for students, acting as their voice and championing product and process improvements to better serve them within our org.

Responsibilities:

  • Maintain a high level of student engagement and own success metrics like Activation, Retention, and Satisfaction. 
  • Guide students through each stage of their Springboard experience, with a key focus on encouraging good habits that will help students graduate successfully
  • Quickly address student questions, concerns, feedback, and more via email/phone/video calls
  • Develop strong working knowledge of Springboard’s student experience and best practices, acting as a trusted advisor for your students

Your typical day may include:

  • Helping students define and develop realistic and meaningful educational/career plans
  • Matching new students with mentors, based on your understanding of their needs 
  • Collaborating with our Mentor Team to ensure mentors are aware of students’ progress or concerns 
  • Proactively encouraging students who need help staying motivated
  • Acting as the voice of students internally (to product, marketing, course development etc.) by collecting and sharing student feedback, ideas and stories.

This job might be for you if you:

  • Are analytical and comfortable digging through information (feedback, interviews, data) to solve problems and make decisions
  • Are passionate about making our students personally and professionally successful and enjoy coaching others on motivation, goal setting etc.
  • Are resourceful and like to figure stuff out on your own.
  • Are an excellent communicator (both verbal and written); you articulate clearly and with empathy.
  • Love learning — especially about new industries/technologies — and want to work on a team that will invest in your personal/professional development.
  • (Bonus points if you) have 1-2 years of experience in customer-facing roles (e.g. Customer Success, Account Management) or in academic counseling
Share this job:
Senior Software Engineer, Fullstack
java python javascript c data science machine learning Dec 03 2019

Engineering for you is more about a clean codebase, paradigms and algorithms than languages, frameworks or platforms. You have chosen your favorite stack in which you have lots of experience, but you’re able to get stuff done in any environment you need to and with every change you leave the codebase better off than before.

You will be one of the first members of our engineering team and will work on many different projects and touch many different systems: from our app backends (REST webservices) to our demand forecasting service and our cash register. Because our team is new, you will get to influence which technologies we will use.

As a Senior Software Engineer, you will become a go-to person for answering technical questions for the rest of the team.

Responsibilities:

  • Create new and work on existing systems across a wide range of projects (e.g. a clean and elegant API layer spanning across all of our legacy systems, backend APIs consumed by our web and mobile apps, production tooling for our machine learning models etc.)
  • Improve and maintain our production environment, for example by adding monitoring and alerting (DevOps)
  • Set up a modern development workflow for our team, including a continuous integration pipeline and automated deployments
  • Work closely together with our frontend engineering and data science teams
  • Support other developers in your team with technical guidance

Requirements:

  • Minimum of 3 years of software development experience in a general purpose programming language
  • BSc degree in Computer Science, a similar technical field of study, or equivalent practical experience
  • Ability to quickly get up to speed in any programming language or system if needed
  • Ability to tackle problems outside your comfort zone and get things done without supervision
  • Excellent spoken and written communication skills in English

Desirable:

  • Experience in any of the following programming languages: Java, C/C++, C#, Python, JavaScript, Rust or Go
  • Experience working with one or more from the following: web application development, Unix/Linux environments, distributed and parallel systems, service oriented architectures, REST APIs, developing large software systems
  • Experience working in teams following an agile software development methodology
  • Basic knowledge of German

We also have a role for Junior / Mid-Level developers available here.

Share this job:
Enveda data scientist
data science machine learning aws testing Dec 03 2019
About Turing Talent Programme
The Turing Talent Tech Careers Programme is a first-of-its-kind career empowerment programme for ambitious individuals in the technology sector. We have partnered with Enveda to offer a data scientist role.

Through our programme, you capture the added benefits of leadership development training, mentorship, and an international peer network on top of your full-time job with Enveda. 

About Enveda
At Enveda, we're re-imagining the roots of medicine with technology. Our inability to model the vast complexity of the human body and the infinite variables of the real world has led to more than 90% of drugs failing in clinical testing - so instead of depending on inbred mice or cells grown on plastic like everyone else, we're hunting for active molecules from plants that have been used by our ancestors for 1000s of years (and continue to be used by hundreds of millions today). We're endlessly optimistic about the resilience of these medicinal systems over millennia and are excited to unearth their potential at the most exciting time for technology in human history (see why here, here, here, and here just for a start). Using AI to prioritize potential drugs from 1000s of clinically used plants and precision AgTech to engineer their production, we're aiming to go from the lab to clinical trials with 3 new drugs in the next 5 years. Long-term, we will deliver multiple FDA approved medicines at a fraction of today's (unsustainable) R&D costs and emerge as the much-awaited pioneers in the "Reverse Translation" of human experience to validated drugs.

More details about Enveda here.

What will you be doing

  • Create a knowledge graph of the world’s information on natural medicines to make it computable
  • Develop new graph-based machine learning algorithms or apply state-of-the-art techniques to mine insight from our biological networks
  • Create predictive models to identify the most interesting hypotheses to pursue in the lab
  • Design statistical models to predict best drug candidates and combinations from a mixture of potentially active phytochemicals
  • Work hand-in-hand with an experimental laboratory team and a bioinformatics team to analyze streams of cutting edge biological datasets to constantly improve our predictive power
  • Get in on the ground floor of a rapidly growing venture-backed US startup backed by top Angels and VCs
  • Be a co-owner of Enveda’s mission and vision, with generous equity compensation
  • Work remotely, with a headquarters in SF for when you want company!

Required Skills

  • An aspiring Data Scientist who is, first and foremost, passionate about applying technology to make life-changing drugs
  • Have an advanced degree in Computer Science or a related field
  • Have a background in data science or have worked with a large amount of data
  • Have experience building research prototypes or MVPs in an academic or industry setting
  • Have experience working with a programming language like Python
  • Have some knowledge of modern tools for ML such as TensorFlow, PyTorch, PyTorch Geometric, or PySpark
  • Ability to think big-picture and handle the minutiae simultaneously
  • Demonstrated desire for continuous learning and improvement
  • Strong communication skills

Desired Skills

  • Have some background in biology or chemistry (ideally)
  • Have worked with graph-based data structures
  • Have experience using and deploying the latest graph algorithms and predictive models (GNNs, link prediction, and so on); a toy sketch follows this list
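
For illustration only, here is a minimal, hypothetical sketch of classical link prediction on a toy compound-target graph using networkx; the node names are invented, and production work would more likely rely on learned GNN embeddings:

    # Toy link-prediction sketch on a tiny knowledge graph with networkx.
    # All nodes and edges here are invented for illustration.
    import networkx as nx

    G = nx.Graph()
    G.add_edges_from([
        ("curcumin", "TNF"),        # compound -> protein target
        ("curcumin", "COX2"),
        ("resveratrol", "COX2"),
        ("resveratrol", "SIRT1"),
    ])

    # Score a candidate compound pair by the Jaccard similarity of their
    # neighborhoods; a high score suggests they share biological targets.
    for u, v, score in nx.jaccard_coefficient(G, [("curcumin", "resveratrol")]):
        print(f"{u} -- {v}: {score:.2f}")  # 1 shared target of 3 -> 0.33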

Compensation

  • £48k to £70k 

Start date

  • Immediately

Location

  • Remote
About Turing Talent Programme training:
Turing Talent Programme will kick off with a 2 to 4 week intensive bootcamp training that covers technical skills and soft skills. The technical skills will include those that specifically correspond to this placement with Enveda, with a focus on software engineering. You will dive deeper into fullstack languages and frameworks, and how to apply this knowledge in your new role with Enveda. ElasticSearch, AWS, and JIRA will all be part of the training. 


Turing Talent is an equal opportunity employer. All applicants will be considered for employment without attention to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran or disability status.

Share this job:
Senior Rust WebGL Developer
Luna  
senior javascript data science ui design d3-js Nov 11 2019

Senior Graphics Developer

Luna is looking for a senior graphics developer to take charge of the design, development, and evolution of a new WebGL-based GUI for Luna, a project said by Singularity University to have the potential to change the lives of one billion people. If you bring strong technical skills and a passion for performance, this could be the role for you.

As a senior graphics developer you'll be a key part of bringing the vision for Luna 2.0 into reality, with your work being integral to the realisation of the next iteration of Luna. You'll collaborate with a world-class team of skilled engineers, community managers, and business developers (from Bloomberg, PayPal, and GitHub, to name a few), and make your indelible mark on the future of Luna.

What You'll Do

As a senior graphics developer, you'll be responsible for designing and building a high-performance renderer based on web technologies for use in the Luna IDE: Luna Studio. This will involve:

  • Working closely with stakeholders and customers to design the new GUI for Luna Studio.
  • Developing a design for the new renderer that will be used to implement this GUI.
  • Implementing the new renderer in a high-performance manner on top of WebGL and Rust (via Web Assembly).
  • Building a next-generation UI framework using this renderer for use in Luna Studio.
  • Using this UI framework to build the new GUI for Luna Studio itself.
  • Debugging performance issues to ensure that the renderer is capable of achieving high performance even on low-powered hardware.
  • Creating visualisations for data science libraries using the renderer and D3.js.

The Skills We're Looking For

We have a few particular skills that we're looking for in this role:

  • A strong focus on both user experience and aesthetics.
  • 3+ years experience with WebGL (or OpenGL).
  • A deep understanding of graphics abstractions including: VAOs, FBOs, PBOs, buffer types, and asynchronous computation modes.
  • A deep understanding of GPU techniques including: efficient buffer management, efficient GLSL construction, high-performance vector and font rendering, post-processing, 3D scene description (with nested objects), lights, cameras, and animation.
  • 2+ years experience with Rust, including experience writing unsafe code for FFI and performance, and using the macro system for metaprogramming. You should be able to write idiomatic Rust code.
  • Practical experience building high-performance graphical interfaces for end-user-facing applications.

As part of the hiring process for this job posting we're very interested in your previous work in these areas. Please link us to your Rust projects, blog posts and shadertoy shaders if you have them! It's important for us to understand your experience at the start of the hiring process.

It would be a big bonus if you had:

  • Experience with Rust's WASM toolchain, with wasm-bindgen, and experience with WASM itself.
  • Experience with visual programming systems such as Houdini, Max/MSP, Lab VIEW, or Touch Designer.
  • Knowledge of the runtime and memory models used by various JavaScript virtual machines.
  • Knowledge of D3.js, and experience using it to visualise data.

Avoid the confidence gap. You don't have to match all of the skills above to apply!

Who You'll Work With

You'll be joining a distributed, multi-disciplinary team that includes people with skills spanning from compiler development to data science. Though you'll have your own area to work on, our internal culture is one of collaboration and communication, and input is always welcomed.

We firmly believe that only by working together, rather than putting our team members in their own boxes, can we create the best possible version of Luna.

The Details

As part of the Luna team you'd be able to work from anywhere, whether that be at home, or on the go! We have team members distributed across the world, from San Francisco, to London, to Kraków. We welcome remote work and flexible schedules, or you can work from the Kraków office (or our planned SF office) if you'd like. We can provide competitive compensation and holiday, as well as the possibility of equity as time goes on.

How To Apply?

Send us an email at jobs@luna-lang.org, and tell us a little bit about yourself and why you think you'd be a good fit for the role! You can also tell us about:

  • Some of your past work or projects.
  • Why you'd like to work on Luna, and where you imagine Luna being in 5 years.
  • The most important features of a team that you'd like to work in.
  • Whether you take pride in your ability to communicate clearly and efficiently with your team.
Share this job:
Data Science Course Mentor
python data science machine learning Nov 07 2019

Click here to apply

Who We Are
At Thinkful, we believe that if schools put in even half the amount of effort that students do, the outcomes would be better for everyone. People would have a path to a fulfilling future, instead of being buried under debt. Employers would benefit from a workforce trained for today. And education could finally offer students a return on their investment of both money and time. We put in outlandish amounts of effort to create an education that offers our students a guaranteed return on their investment. We partner with employers to create a world-class curriculum built for today. We go to the ends of the earth to find mentors who are the best of the best. We invest more in career services than any of our peers. We work hard to be on the ground in the cities where our students are. Simply put, no other school works as hard for its students as we do. 

The Position
Students enroll in Thinkful courses to gain the valuable technical and professional skills needed to take them from curious learners to employed technologists. As a Course Mentor, you will support students by acting as an advisor, counselor, and support system as they complete the course and land their first industry job. To achieve this, you will engage with students using the range of approaches below, known as Engagement Formats. Course Mentors are expected to provide support across all formats when needed. 

  • Mentor Sessions: Meet with students 1-on-1 in online video sessions to provide technical and professional support as the student progresses through the curriculum.
  • Group Sessions: Host online video sessions on topics of your expertise (in alignment with curriculum offerings) for groups of students seeking live support between mentor sessions. 
  • Grading: Review student checkpoint submissions and deliver written feedback, including analysis of projects and portfolios. 
  • Technical Coaching: Provide on-demand support for technical questions and guidance requests that come to the Technical Coaching team through text and video in a timely manner. This team also provides the TA support for immersive programs. 
  • Assessments & Mock Interviews: Conduct 1-on-1 mock interviews and assessments via video calls and provide written feedback to students based on assessment rubrics. 

In addition to working directly with students, Course Mentors are expected to maintain an environment of feedback with the Educator Experience team, and to stay on top of important updates via meetings, email, and Slack. Ideal candidates for this team are highly coachable, display genuine student advocacy, and are comfortable working in a complex, rapidly changing environment.

Requirements

  • Minimum of 1 year professional experience as a Data Scientist or demonstrated expertise with data visualizations and machine learning at an industry level
  • Proficiency in SQL, Python
  • Professional experience with Hadoop and Spark a plus
  • Excellent written and verbal communication
  • High level of empathy and people management skills
  • Must have a reliable, high-speed Internet connection

Benefits

  • This is a part-time role (10-25 hours a week)
  • Fully remote position, with the option to work evenings and weekends in person in 22 US cities
  • Community of 500+ like-minded Educators looking to impact others and keep their skills sharp
  • Full access to all of Thinkful Courses for your continued learning
  • Grow as an Educator

Apply
If you are interested in this position please provide your resume and a cover letter explaining your interest in the role. Thinkful can only hire candidates who are eligible to work in the United States. We stand against any form of workplace harassment based on race, color, religion, sexual orientation, gender identity or expression, national origin, age, disability, or veteran status. Thinkful provides equal employment opportunities to all employees and applicants. If you're talented and driven, please apply.

At this time, we are unable to consider applicants from the following states: Alaska, Delaware, Idaho, New Mexico, North Dakota, South Carolina, South Dakota, West Virginia, and Wyoming Click here to apply 

Share this job:
Data Scientist
Crisp  
python data science cloud design Nov 07 2019

Data Scientist

Here at Crisp, we value the strength in teamwork, and strongly believe that it’s the key to Crisp’s success. By bringing together bright, motivated creators, wherever they live and work, we are leveraging humanity’s diversity of experience and background in order to understand the challenges facing our food supply, and solve them together. Come join us, and help build the type of business you’d like to be a part of.

We are a socially conscious, distributed team. We give you the opportunity to solve challenges in the global food industry while living where you’re most comfortable and working in areas where you can help foster and grow the community that you are a part of.

As one of the first members of the data science team at Crisp, you will have a unique opportunity to turn previously scattered and inconsistently structured data into immediately actionable food industry insights to reduce waste, increase freshness, and much more.

You have a proven track record of reading data and making solid conclusions. You know both the art and science of analytics - not only do you know how to find answers in the data, you also know which questions should be asked in the first place and what data can help us further bolster our conclusions. You love engaging with customers, learning about their challenges and then diving into the data to see how to solve them!

Signs of a great candidate

  • Collaborative. You know that your team members’ perspectives will make your solutions better. Similarly, you use your strengths to help us grow together.
  • Customer focused. User experience trumps everything. You understand that a product will have little value if customers don't enjoy using it.
  • Disciplined and reliable. We are a distributed company and you enjoy the benefits of working distributed while consistently delivering what you have committed to. When you hit a snag, you communicate and reset expectations early.
  • Appreciative of honest feedback. You know that the best way to learn and grow is through constructive feedback delivered kindly, but without unnecessary ambiguity. You view feedback given to you as an opportunity to get better and strive to do the same for others.
  • Work smarter and harder. You often identify a problem, design a solution and bring it to a state of completion - with others, or even on your own. You are fluent with your toolchain and can deliver well-designed, well-tested production-ready features quickly. You find ways of eliminating or automating stuff that is uninteresting or wasteful, rather than complaining about it.
  • Analytical and practical mind. You strive for simple, precise solutions to complex problems. Complex solutions are only acceptable when absolutely needed. You strive for correct solutions, but know what actually matters and when to make compromises. You know when to ship and when to optimize.

Crisp’s tech stack

  • Modern libraries and frameworks. We use Python with Pandas and Numpy for data science in production, and technologies like Jupyter and R during EDA (a minimal example follows this list).
  • Diverse technologies and algorithms, solving real world problems.  At Crisp you’ll work with a wide range of technologies and algorithms on a daily basis, in a focused effort to solve large scale problems.
  • Continuous deployment. Code is never far from being deployed to production, because if it’s not in production, it’s not solving problems in the real world. Our branch time spans are short, and features under development are hidden behind feature flags.
  • Cloud first. As a services offering in the 21st century, the cloud isn’t the future, it’s the present. We’re fully invested in using the features offered by our cloud provider in order to minimize technical debt and maximize productivity.
  • Micro-services. Not for the sake of the buzz, but when they make sense. By adopting a modern, thoughtful services architecture we’re able to scale organizationally, reduce technical debt, and maintain a high, sustained velocity.
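
As a toy illustration of that stack, here is a minimal, hypothetical Pandas sketch; the file and column names (weekly_shipments.csv, product_id, units) are invented:

    # Minimal Pandas sketch: flag products whose weekly shipped volume
    # deviates sharply from their trailing average - a toy version of a
    # waste/freshness signal. File and column names are hypothetical.
    import pandas as pd

    sales = pd.read_csv("weekly_shipments.csv", parse_dates=["week"])
    sales = sales.sort_values(["product_id", "week"])

    # Trailing 4-week average per product
    sales["trailing_avg"] = (
        sales.groupby("product_id")["units"]
        .transform(lambda s: s.rolling(4, min_periods=1).mean())
    )

    # Flag weeks that deviate from trend by more than 50%
    deviation = (sales["units"] - sales["trailing_avg"]).abs()
    anomalies = sales[deviation > 0.5 * sales["trailing_avg"]]
    print(anomalies[["product_id", "week", "units", "trailing_avg"]])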

We are building a team of developers with a breadth of combined experiences so that we can collaboratively build great products. There are no hard requirements on specific background, experience or geographical location. Instead we’re looking for individuals that are capable, reliable, and hoping to grow along with us. Do you have strengths you can share? If so, we’d love to hear from you!

Share this job:
Senior Data Scientist
komoot  
aws python senior backend data science cloud Nov 06 2019

Millions of people experience real-life adventures with our apps. We help people all over the world discover the best hiking and biking routes, empowering our users to explore more of the great outdoors. And we’re good at it: Google and Apple have listed us as one of their Apps of the Year numerous times—and we are consistently ranked amongst the highest-grossing apps in both Google Play and the App Store.

To help us continue to grow, we are looking for an experienced data scientist dedicated to coding and building production-ready services. With over 8 million active users, komoot possesses a unique dataset of user-generated content, ranging from GPS data from tours, uploaded photos, and tips, to implicit and explicit user feedback. Using this data as well as various open data sources, you will drive product enhancements forward that will directly impact the user experience.

We believe that innovations based on data science will reinforce and extend our leadership in the outdoor market and your role will be decisive for komoot’s success.


What you will do

  • Work closely with our web and mobile developers, designers, copywriters and product managers
  • Discuss product improvements, technical possibilities and road maps
  • Investigate and evaluate data science approaches for product enhancements
  • Write code that is well structured, well tested and documented
  • Enhance existing components and APIs as well as write new services from scratch
  • Deploy and monitor your code in our AWS Cloud (you can count on the support of experienced backend engineers)

Why you will love it

  • You will be challenged in a wide range of data science tasks
  • You deal with a diverse set of data (user-generated content, analytics data and external data sources)
  • You go beyond prototyping and ship your code to production
  • You contribute to a product with a vision to inspire more people to go outdoors
  • You’ll work in a fast-paced startup with strongly motivated and talented co-workers
  • You’ll enjoy the freedom to organize yourself the way you want
  • We let you work from wherever you want, be it a beach, the mountains, your house or anywhere else that lies in any time zone situated between UTC-1 and UTC+3
  • You’ll travel together with our team to amazing outdoor places several times a year to exchange ideas, learnings and go for hikes and rides

You will be successful in this position if you

  • Have a passion for finding pragmatic and smart solutions to complex problems
  • Have 3+ years of industry experience in data science
  • Have 2+ years of experience in professional programming, preferably in Python or Java
  • Have experience with technologies like Pandas, Numpy, Jupyter Notebooks, Seaborn, Scikit-Learn, PyTorch and TensorFlow
  • Know your toolkit: git, ssh, bash and docker.
  • Experience in AWS, infrastructure as code and monitoring is a plus
  • Have strong communication and team skills
  • Have a hands-on attitude and are highly self-driven

Sounds like you?

Then send us the following:

  • Your CV in English
  • A write-up explaining who you are and why you are interested in working at komoot
  • Examples of your work (e.g. GitHub Repositories, PDFs, Slideshare, etc.)
  • Feel free to send us something that shows us a little more about what you’re interested in, be it your Twitter/Instagram account, a blog or something else
Share this job:
Sr. Software Engineer
docker data science machine learning saas design Nov 02 2019

Overview

We’re looking for experienced Go software engineers who also have experience working with and extending Kubernetes (CRDs, Operators, Scheduling, etc.). As a software engineer at Carbon Relay you’ll help build products that bridge the gap between software engineering and data science. Our products help our customers use data-science-enabled applications without needing a data scientist.

As a software engineer at Carbon Relay you’ll build products that are integrated with Kubernetes clusters which enable customers to automatically configure their applications for the ideal balance of performance and cost. You'll also be working on a set of microservices built using a mix of Go and Python. This will include integrating services developed by our data science team as we continue to enhance our offerings.

Responsibilities

  • Participate in design and discussion of customer features
  • Participate in design and discussion of internal and external components
  • Implement features as part of a Kubernetes CRD
  • Implement features as part of SaaS-based microservices (Go, Python)
  • Contribute to and enhance internal Kubernetes CRDs/Operators
  • Work alongside data science teams to bring data science and machine learning into customer facing features

Qualifications

  • 3-5 years of experience as a software engineer
  • Working experience with Kubernetes and Docker
  • Worked on at least one shipped, customer-facing product
  • Some experience with Python
  • Experience with Agile/Scrum teams
  • Works well on small, fast-moving teams
  • Experience working with distributed teams
  • Experience working with Git and GitHub

Preferred

  • Experience with GCP/GKE
  • Experience developing with Go
  • Experience developing SaaS applications
  • Experience with microservice architectures
  • Experience building components to extend Kubernetes (CRDs, controllers, scheduler extensions, etc)
  • Experience working alongside data science teams
Share this job:
Lead Machine Learning Engineer
YouGov  
python machine learning data science design devops Oct 30 2019

We don’t just collect data, we connect data. YouGov is an international data and analytics group. Our value chain is a virtuous circle consisting of a highly engaged online panel, innovative data collection methods, powerful analytics technology, delivery of high-margin syndicated data products, expert insights and an authoritative media presence. Our core offering of opinion data is derived from our highly participative panel of 6 million people worldwide who provide us with live, continuous streams of data. We capture these streams of data via our variety of data collection platforms and collect them together in the YouGov Cube, our unique connected data library.

Working as part of the Data Science team, you will collaborate with and coordinate data-minded people to convert vast troves of raw consumer data into meaningful insight by developing and deploying machine learning models, automated ETL applications, RESTful microservices and browser-based user interfaces. 
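
As one hedged illustration of the model-development side of that work, a scikit-learn pipeline bundles preprocessing and a classifier into a single trainable, deployable object (the data below is synthetic; real features would come from panel data):

    # Minimal scikit-learn sketch: one pipeline object wrapping scaling
    # and a classifier, trained and evaluated on synthetic data.
    import numpy as np
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 5))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = Pipeline([
        ("scale", StandardScaler()),
        ("clf", LogisticRegression()),
    ])
    model.fit(X_train, y_train)
    print("held-out accuracy:", model.score(X_test, y_test))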

What will I be doing day to day?

  • Build new data science products for internal and external clients
  • Collaborate to design automated analytical solutions (e.g. fraud detection, prevention)
  • Optimise applications to increase performance, reliability and test coverage
  • Help to streamline the feature engineering and optimise the pipelines
  • Curate and promote best collaborative processes within the data science domain
  • Train and mentor team members to enable them to be their best

What do I need to bring with me?

  • Proficiency in Python as well as Scikit-Learn, Pandas, NumPy, SciPy, Tensorflow, PyTorch.
  • Knowledge of building substantial ETL pipelines
  • Parallel computing/programming experience
  • Experience with Agile, TDD and DevOps development lifecycle
  • Enjoy solving complex technical problems, and supporting others to do so
  • Familiar with both old and new technologies and aware of the problems they purport to solve, and willing to keep up-to-date
  • Can work within a cross-functional team with both technical and non-technical colleagues

Any additional info:

This role can either be based in our Warsaw tech hub, or be 100% remote.

Share this job:
Machine Learning Engineer
Qntfy  
python docker machine learning data science testing api Oct 29 2019

Qntfy is looking for a talented and highly motivated ML Engineer to join our team. ML Engineers are responsible for building systems at the crossroads of data science and distributed computing. You will do a little bit of everything: from tuning machine learning models, to profiling distributed applications, to writing highly scalable software. We use technologies like Kubernetes, Docker, Kafka, gRPC, and Spark. You aren’t a DevOps engineer, but an understanding of how the nuts and bolts of these systems fit together is helpful; you aren’t a data scientist, but understanding how models work and are applied is just as important.

U.S. Citizenship Required

Responsibilities

  • Collaborate with data scientists to get their models deployed into production systems.
  • Develop and maintain systems for distributed model training and evaluation.
  • Design and implement APIs for model training, inference, and introspection (a minimal sketch follows this list).
  • Build tools for testing, benchmarking, and deploying analytics at scale.
  • Interface with the technical operations team to understand analytic performance and operational behavior.
  • Write and test code for highly available and high volume workloads.
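
For a rough sense of the inference-API side, here is a minimal, hypothetical sketch using FastAPI (the framework choice and the toy model are our assumptions, not Qntfy's stack):

    # Minimal inference-API sketch: a toy scikit-learn model served over
    # HTTP with FastAPI. The model, route, and schema are all invented.
    from typing import List

    import numpy as np
    from fastapi import FastAPI
    from pydantic import BaseModel
    from sklearn.linear_model import LogisticRegression

    # Stand-in for a real model loaded from a registry: trained here on
    # two synthetic features.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 2))
    y = (X.sum(axis=1) > 0).astype(int)
    model = LogisticRegression().fit(X, y)

    app = FastAPI()

    class Features(BaseModel):
        values: List[float]  # this toy model expects exactly two values

    @app.post("/predict")
    def predict(features: Features):
        x = np.array(features.values).reshape(1, -1)
        return {"prediction": int(model.predict(x)[0])}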

Qualifications

  • BS or Master’s degree in Computer Science, related degree, or equivalent experience.
  • 5+ years experience with software engineering, infrastructure design, and/or machine learning.
  • Familiarity with Python and machine learning frameworks, particularly Scikit-learn, Tensorflow, and Pytorch.
  • Experience with distributed machine learning using tools like Dask, Tensorflow, Kubeflow, etc.
  • Ability to write well-structured, maintainable, idiomatic code with good documentation.
  • Strong work-ethic and passion for problem solving.

Preferred Qualifications

  • Machine learning API development competencies.
  • Golang development experience.
  • Container orchestration and optimization knowledge.
  • Proficiency designing, implementing, and operating large-scale distributed systems.
  • Prior experience working in a distributed (fully remote) organization.

Qntfy is committed to fostering and supporting a creative and diverse environment. Qntfy is an equal opportunity employer, and as such will consider all qualified applicants for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status.

Share this job:
Data Engineer
java python aws php data science big data Oct 24 2019

This position can be remote, but US based candidates only.

About Us:

Dealer Inspire (DI) is a leading disruptor in the automotive industry through our innovative culture, legendary service, and kick-ass website, technology, and marketing solutions. Our mission is to future-proof local dealerships by building the essential, mobile-first platform that makes automotive retail faster, easier, and smarter for both shoppers and dealers. Headquartered in Naperville, IL, our team of nearly 600 work friends are spread across the United States and Canada, pushing the boundaries and getting **** done every day, together.

DI offers an inclusive environment that celebrates collaboration and thinking differently to solve the challenges our clients face. Our shared success continues to lead to rapid growth and positive change, which opens up opportunities to advance your career to the next level by working with passionate, creative people across skill sets. If you want to be challenged, learn every day, and work as a team with some of the best in the industry, we want to meet you. Apply today!

Want to learn more about who we are? Check us out here!

Job Description: 
Dealer Inspire is changing the way car dealerships do business through data. We are assembling a team of engineers and data scientists to help build the next generation distributed computing platform to support data driven analytics and predictive modeling.

We are looking for a Data Engineer to join the team and play a critical role in the design and implementation of sophisticated data pipelines and real-time analytics streams that serve as the foundation of our data science platform. Candidates should have the following qualifications:

Required Experience

  • 2-5 years experience as a data engineer in a professional setting
  • Knowledge of the ETL process and patterns of periodic and real time data pipelines
  • Experience with data types and data transfer between platforms
  • Proficiency with Python and related libraries to support the ETL process (see the sketch after this list)
  • Working knowledge of SQL
  • Experience with linux based systems console (bash, etc.)
  • Knowledge of cloud based AWS resources such as EC2, S3, and RDS
  • Able to work closely with data scientists on the demand side
  • Able to work closely with domain experts and data source owners on the supply side
  • An ability to build a data pipeline monitoring system with robust, scalable dashboards and alerts for 24/7 operations.
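
As a hedged illustration of the kind of pipeline work involved, here is a minimal Python ETL sketch; the file, table, and column names are hypothetical, and SQLite stands in for a production store such as RDS:

    # Minimal ETL sketch: extract a raw CSV drop, normalize types, and
    # load the cleaned rows into a SQL table. Names are hypothetical.
    import sqlite3

    import pandas as pd

    # Extract
    raw = pd.read_csv("raw_leads.csv")

    # Transform: coerce timestamps and drop malformed rows
    raw["created_at"] = pd.to_datetime(raw["created_at"], errors="coerce")
    clean = raw.dropna(subset=["created_at", "dealer_id"])

    # Load (SQLite here; RDS/Postgres in production)
    with sqlite3.connect("warehouse.db") as conn:
        clean.to_sql("leads", conn, if_exists="append", index=False)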

Preferred Experience

  • College degree in a technical area (Computer Science, Information Technology, Mathematics or Statistics) 
  • Experience with Apache Kafka, Spark, Ignite and/or other big data tools 
  • Experience with JavaScript, Node.js, PHP and other web technologies.
  • Working knowledge of Java or Scala
  • Familiarity with tools such as Packer, Terraform, and CloudFormation 

What we are looking for in a candidate:

  • Experience with data engineering, Python and SQL
  • Willingness to learn new technologies and a whatever-it-takes attitude towards building the best possible data science platform
  • A person who loves data and all things data related - a.k.a. a self-described data geek
  • Enthusiasm and a “get it done” attitude!

Perks:

  • Health Insurance with BCBS, Delta Dental (Orthodontics coverage available), Eye Med Vision
  • 401k plan with company match
  • Tuition Reimbursement
  • 13 days paid time off, parental leave, and selected paid holidays
  • Life and Disability Insurance
  • Subsidized gym membership
  • Subsidized internet access for your home
  • Peer-to-Peer Bonus program
  • Work from home Fridays
  • Weekly in-office yoga classes
  • Fully stocked kitchen and refrigerator

*Not a complete, detailed list. Benefits have terms and requirements before employees are eligible.

Share this job:
R Developer
YouGov  
python data science machine learning cloud senior frontend Oct 16 2019

Crunch.io, part of the YouGov PLC, is seeking a talented, motivated, and versatile human to help lead the development of our R data science products. Crunch provides a modern platform for survey data analysis, and a central feature of our product is the ability to manipulate and analyze datasets stored in the cloud using R. As a senior R developer, you will have three main responsibilities. First, you will work with the rest of our team to design and implement novel features that deliver real value and change our clients’ workflows for the better. Second, as the primary point of contact between our R user community and the development team, you will serve as their voice in product development. And third, you will often directly help clients manipulate and explore data using Crunch, including helping clients design and implement workflows that incorporate Crunch.

Key responsibilities:

  • Teaching users how to work with the library through documentation and direct conversations.
  • Writing scripts that help clients implement Crunch and make it a part of their workflow, including ETL, data analysis, and outputs.  
  • Developing and maintaining our core R packages, including new feature design, comprehensive testing, and documentation
  • Supporting our community of R users by responding to feature requests and triaging bug reports
  • Evangelizing our product and educating our R user base by contributing to our technical blog and helping enrich our support documentation
  • Translating API speak to R that feels natural and native
  • Engaging with and contributing to the broader open source R ecosystem

Depending on your interests and skills, there are opportunities to get involved in:

  • API design: developing good conventions that enable our platform to scale and make it easy for client applications to consume them
  • JavaScript development, helping our frontend developers implement features you've utilized in R
  • Product management, building on your interactions with our users to shape our product roadmap and feature design
  • Python development, ranging from implementing APIs you need for the R packages, to  statistical modeling, numerical computing, machine learning, and natural language processing

In any given week, you might implement an R interface for a new API our backend has added, write a blog post introducing that new feature, track down a bug report from a user, write a test that reproduces the issue, and assist customers in implementing Crunch via the Crunch R packages. 

Qualifications:

  • Expert-level skills in R, including experience delivering code that others rely on to do their work. Prior experience creating and maintaining R packages is highly valued.
  • Serious commitment to high development standards, including comprehensive testing, in whatever language you're working
  • Demonstrated ability to work with a team of peers, understanding and respecting the responsibilities and expertise developers, designers, QA folks, and others bring to the project
  • Eagerness to take ownership of projects and deliver results on schedule
  • Experience in a "data science" domain, such as social science, market research, or data visualization, is a plus.
Share this job:
Senior Data Engineer
python senior data science big data cloud Oct 16 2019

PowerInbox is looking for a Senior Data Engineer

*This job is fully remote (only in the USA, though) with the option to work from our NYC office. We keep EST work hours*

If you join us, what will you do?

Build and maintain a real-time big data pipeline and reporting system for powerinbox. The data pipeline will feed our AI and analytics platform. The reporting system will automatically distribute reports to recipients on a configurable schedule. As needed, you will provide special reports as requested by sales and operations teams. This role offers opportunities to work with big data, data science, cloud computing, and the latest software technology.
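
As a minimal sketch of the configurable-reporting idea (using the `schedule` library; the report definitions and the send_report function are invented):

    # Toy sketch of reports distributed on a configurable schedule using
    # the `schedule` library. Report names and recipients are invented.
    import time

    import schedule

    def send_report(name, recipients):
        # Stand-in for building the report and emailing it out.
        print(f"sending {name} to {', '.join(recipients)}")

    # In practice this configuration would live in a database or file.
    schedule.every().day.at("09:00").do(
        send_report, "daily-revenue", ["sales@example.com"])
    schedule.every().monday.at("08:00").do(
        send_report, "weekly-ops", ["ops@example.com"])

    while True:
        schedule.run_pending()
        time.sleep(30)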

Specific Goals

  • Build and maintain a data pipeline for powerinbox machine learning.
  • Assist with the development of a data warehouse on which reports are derived.
  • Process 8 billion event transactions each month.
  • Assure data is captured and stored without loss.
  • Write code to provide reports for powerinbox.
  • Write a system that will run reports on a configurable schedule.
  • Respond to ad-hoc requests for information.

In order to be great at your job,

You Are

A fast learner; great analytical skills; relentless persistence in accomplishing goals; enthusiastic, with an infectious personality.

You Work

Efficiently; with flexibility; proactively; with attention to detail; to high standards.

Together We

Emphasize honesty and integrity; require teamwork; have open communication; follow-through on commitments; stay calm under pressure.

You Have

  • Four to six years experience with Python or R
  • Three or more years experience developing and deploying software on Linux
  • Three or more years working with SQL
  • At least two years experience providing data analysis
  • Professional experience applying data science knowledge
  • Working knowledge of BI tools and software

This is extra, but if you have it, it will make us happy

  • Experience working remotely
  • Knowledge of/interest in the digital and AdTech landscape
  • Experience working with big data

About PowerInbox

Why We Are

We believe that digital messaging is not meant to be stationary and static but relevant and hyper-targeted, filled with dynamic content.


Who We Are

We are a digital monetization startup ecosystem that is always open to new talent.


What We Are

We at PowerInbox boost your revenue and brand engagement through real-time advertising and native ad displays. 


If interested please send your resume to hr@powerinbox.com

Share this job:
QA Engineer
qa python testing ruby css php Oct 14 2019

QA is an important function within Scrapinghub. The QA team works to ensure that the quality and usability of the data scraped by our web scrapers meets and exceeds the expectations of our enterprise clients.


Are you passionate about data and data quality and integrity?

Do you enjoy using programming languages and tools to automate testing, analyze data, and speed up manual processes?

Are you highly customer-focused with excellent attention to detail?


Due to growing business and the need for ever more sophisticated QA, we are looking for a talented QA Engineer with both automated and manual test experience to join our team. As a Scrapinghub Engineer, you will take automated, semi-automated, and manual approaches and apply them in the verification and validation of data quality. Although Python is our preferred language for automation, demonstrable experience automating things in other languages (e.g. Groovy, Ruby, PHP) is welcome. And while we are primarily interested in the quality assurance of data, your experience in testing applications, systems, UIs, APIs etc. will be brought to bear on the role.


In addition, while experience in programming languages other than Python is welcome, you must be comfortable with test automation in your language(s) of choice. Please describe this experience clearly in your CV or cover letter, beyond simply listing the programming language as one that was used in the role you held.


JOB RESPONSIBILITIES:

  • Understand customer web scraping and data requirements; translate these into test approaches that include exploratory manual/visual testing and any additional automated tests deemed appropriate.
  • Provide input to our existing test automation frameworks from points of view of test coverage, performance, etc.
  • Ensure that project requirements are testable; work with project managers and/or clients to clarify ambiguities before QA begins.
  • Take ownership of the end-to-end QA process in newly-started projects.
  • Work under minimal supervision and collaborate effectively with Head of QA, Project Managers, and Developers to realize your QA deliverables.
  • Draw conclusions about data quality by producing basic descriptive statistics, summaries, and visualisations (a minimal sketch follows this list).
  • Proactively suggest and take ownership of improvements to QA processes and methodologies by employing other technologies and tools, including but not limited to: browser add-ons, Excel add-ons, UI-based test automation tools etc.
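
As a hedged illustration of that kind of analysis, here is a minimal Pandas sketch of basic data-quality checks; the file and field names (scraped_items.jsonl, url, price) are hypothetical:

    # Minimal data-QA sketch: null rates, duplicate keys, and simple
    # outlier flags over scraped records. Field names are hypothetical.
    import pandas as pd

    items = pd.read_json("scraped_items.jsonl", lines=True)

    # Coverage: share of missing values per field
    print(items.isna().mean().sort_values(ascending=False))

    # Uniqueness: duplicated records by their natural key
    print("duplicate urls:", items["url"].duplicated().sum())

    # Plausibility: flag price outliers with simple descriptive statistics
    mean, std = items["price"].mean(), items["price"].std()
    outliers = items[(items["price"] - mean).abs() > 3 * std]
    print(f"{len(outliers)} price outliers out of {len(items)} records")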


REQUIREMENTS

  • BS degree in Computer Science, Engineering or equivalent.
  • Demonstrable programming knowledge and experience, minimum of 3 years (please provide code samples in your application, via a link to GitHub or other publicly-accessible service).
  • Minimum 3 years in a Software Test, Software QA, or Software Development role in an Agile, fast-paced environment. Solid grasp of web technologies and protocols (HTML, XPath, JSON, HTTP, CSS etc.); experience in developing tests against HTTP/REST APIs.
  • Strong knowledge of software QA methodologies, tools, and processes.
  • Ability to formulate basic to intermediate SQL queries; comfortable with at least one RDBMS and its utilities
  • Excellent level of written and spoken English; confident communicator; able to communicate on both technical and non-technical levels with various stakeholders on all matters of QA

DESIRED SKILLS:

  • Knowledge and experience of Scrapy and other Python-based scraping frameworks a distinct advantage.
  • Prior experience in a Data QA role (where the focus was on verifying data quality, rather than testing application functionality).
  • Interest in and flair for Data Science concepts as they pertain to data analysis and data validation (machine learning, inferential statistics etc.); if you have ideas, mention them in your application.
  • Knowledge of JavaScript.
  • Knowledge of and experience in other technologies that support a modern cloud-based software service (Linux, AWS, Docker, Spark, Kafka etc.)
  • Previous remote working experience.
Share this job: