Remote Data Science Jobs

Today

Director of Data
Loom  
executive data science finance healthcare May 29
About Loom:
Loom is on a mission to empower everyone at work to communicate more effectively, wherever they are. We are already trusted by over 4M users across 90k+ companies. Our customers are global and use Loom at work at world-class companies including HubSpot, Square, Uber, GrubHub, and LinkedIn.

Founded in 2016, Loom has raised $45 million from top-tier investors including Sequoia Capital, Kleiner Perkins, the Slack Fund, and the founders of Instagram, Figma, and Front.

The Role:
Loom is looking for an experienced and empathetic Director to lead, scale, and inspire our Data Team. The Data Team is responsible for data engineering, data science, and data analytics efforts across Loom. Your team will own all of the infrastructure that supports data efforts, and you will work cross-functionally to identify new work and deliver insights to multiple teams across the company. You will work closely with stakeholders to define the data roadmap, make necessary infrastructure and engineering investments, make hiring and scope recommendations, and deliver against them. You have a technical background, though you aren’t expected to write code in the critical path. You are an inclusive leader with a curiosity for technology, and it shows in your engagement with your peers and the team. Given Loom is a remote-first company, you ideally have experience managing remote and/or distributed teams.

You will:

  • Lead and manage our Data Team, working closely with functional stakeholders across the company, including engineering, go-to-market, product, and finance, to deliver on key data needs, accelerate decision making, and inform future investments.
  • Manage the data engineering, data science, and analytics teams, understanding the unique growth and development needs of each area, fostering a collaborative environment, and building a world-class data organization.
  • Manage and advise the Data Team on technical housekeeping initiatives, ensuring we consistently improve the scalability, quality, and performance of our data infrastructure for the long term.
  • Bring business-minded leadership, with proven experience contributing key learnings that help the organization execute effectively and grow to the next level.
  • Recruit, hire, and train your team, building a sense of belonging and emotional safety and growing each member from a place of empathy and good intentions.
  • Pair and brainstorm with engineers, data scientists, and analysts on architectural and technical topics, fostering an environment of learning and growth.

What We're Looking For:

  • Managed a team of 5 or more data scientists, data engineers or analysts
  • 4+ years of professional full-time experience as a data scientist or engineer
  • 2+ years of management experience
  • Led projects with multiple stakeholders and helped drive towards clarity and decision velocity
  • Enterprise or teams-oriented product experience
  • Track record of hiring, retaining, and growing engineering, data science and analyst talent
  • Track record of shipping new products and iterating on them over time
Perks at Loom:
* Competitive compensation and equity package
* 99% company paid medical, dental, and vision coverage for employees and dependents (for US employees)
* Flexible Spending Account (FSA) and Dependent Care Flexible Spending Account (DCFSA)
* Healthcare reimbursement (for International employees) 
* Life insurance
* Long-term disability insurance
* 401(k) with 5% company matching
* Professional development reimbursement
* Mental health and wellness reimbursement
* Gym reimbursement
* Unlimited PTO 
* Paid parental leave
* Remote work opportunities 
* Yearly off-site retreats (this year was in Barbados)
 
SF office perks
* Daily lunch on-site
* Unlimited snacks & drinks
* Remote week every other month

Remote-specific perks
* Home office & technology reimbursement
* Co-working space reimbursement 
* New-Hire on-boarding in San Francisco (optional)

Loom is an Equal Opportunity Employer:
We are actively seeking to create a diverse work environment because teams are stronger with different perspectives and experiences.

We value a diverse workplace and encourage women, people of color, LGBTQIA individuals, people with disabilities, members of ethnic minorities, foreign-born residents, older members of society, and others from minority groups and diverse backgrounds to apply. We do not discriminate on the basis of race, gender, religion, color, national origin, sexual orientation, age, marital status, veteran status, or disability status. All employees and contractors of Loom are responsible for maintaining a work culture free from discrimination and harassment by treating others with kindness and respect.

Last Week

Software Engineer, Behavior Planning
Voyage  
data science machine learning linux cpp May 26
Voyage is delivering on the promise of self-driving cars.

Voyage has built the technology and services to bring autonomous transportation to those who need it most, beginning in retirement communities. Whether residents face mobility restrictions or just want to take a ride, Voyage takes pride in getting all our passengers to their destination safely, efficiently, and affordably. Our journey begins in calmer communities, but we won't stop until anyone, anywhere can summon a Voyage.

The Voyage Behavior Planning Team is responsible for developing algorithms that allow the vehicle to take the best actions. Based on the output of our Motion Prediction module, Behavior Planning’s task is to find the best motion plan that the vehicle should follow in order to make progress, while keeping the trip both safe and comfortable. You will develop models to encode typical vehicle behavior, including models to handle lane changes, intersections, and similar actions. 
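To make the planning task concrete, here is a toy sketch (in Python for readability, though the team works in C++) of choosing the lowest-cost candidate plan from progress, safety, and comfort terms. All names, terms, and weights here are illustrative assumptions, not Voyage's actual planner:

```python
# Toy illustration: pick the best candidate plan by a weighted cost over
# progress, safety, and comfort terms. All terms and weights are made up.
from dataclasses import dataclass

@dataclass
class CandidatePlan:
    progress: float   # distance advanced along the route (m)
    min_gap: float    # closest predicted distance to any road user (m)
    max_jerk: float   # peak jerk along the trajectory (m/s^3)

def plan_cost(plan: CandidatePlan) -> float:
    # Lower is better: reward progress, penalize small gaps and harsh motion.
    safety_penalty = 1e6 if plan.min_gap < 1.0 else 10.0 / plan.min_gap
    comfort_penalty = 0.5 * plan.max_jerk
    return -1.0 * plan.progress + safety_penalty + comfort_penalty

candidates = [
    CandidatePlan(progress=25.0, min_gap=4.0, max_jerk=1.2),
    CandidatePlan(progress=30.0, min_gap=0.8, max_jerk=0.9),  # too close!
    CandidatePlan(progress=22.0, min_gap=6.0, max_jerk=0.4),
]
best = min(candidates, key=plan_cost)
print(best)
```

A production planner would, of course, evaluate far richer costs over the predicted trajectories of all road users; this only illustrates the shape of the problem.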

As part of the broader Autonomy Team, you will also interact on a daily basis with other software engineers to tackle highly advanced AI challenges. All Autonomy Team members will work on a variety of problems across the autonomy space, contributing to the final goal of building the most advanced autonomous driving technology available for communities around the world.

Responsibilities:

  • Design models to handle how other road users interact with our car. Evaluate the performance of such models on real-world and simulated data sets
  • Dive into data to explore, uncover, and understand the behaviors of road users such as cars, bikes, golf carts, and pedestrians, leveraging machine learning and statistics where appropriate
  • Architect and implement decision making algorithms into production-level code
  • Work closely with developers from planning, infrastructure, localization, and perception teams to debug, fine-tune, and deploy production systems

Requirements:

  • 3+ years of industry experience with fluency in C++, including standard scientific computing libraries
  • Experience using modern software engineering tools (e.g., version control, CI, testing)
  • Strong applied math background (linear algebra, statistics, probability)
  • Familiarity with any of: task planning, motion planning, motion prediction, or controls
  • Practical experience in data science, modeling, and analysis of large datasets is a huge plus
  • Experience with software system architecture design
  • Experience in Linux environments is desired
We are an equal opportunity employer and value diversity at our company. Women, people of color, members of the LGBTQ community, individuals with disabilities, and veterans are strongly encouraged to apply. 

This Month

Data Strategist
Babylist
data science May 20
The Data team at Babylist powers data-driven decision making through all aspects of the company’s business; we are not a service team. We are a product function with a strong focus on three core aspects of data -- engineering, analysis, and data science. We are looking for an experienced Data Strategist to join our small and growing team.

You are someone who has the right blend of analytical/data skills, a strategic product mindset, and excellent organization and communication skills. You are not merely a builder of dashboards, but a critical thinker who asks the right questions and seeks answers with data. You also have several years of professional experience in a cross-functional, analytical role.  

As our Data Strategist, you will work directly with stakeholders across Marketing, Product, Engineering, and Operations to help us understand the health of the existing business and, more importantly, to identify new opportunities for growth. This is an excellent opportunity for someone who wants to have a deep business impact while honing their craft as an analytics professional.

What You'll Do

  • Own the roadmap, design and development of core data team products (e.g. dashboards, reports, analyses)
  • Work directly with stakeholders to translate questions and hypotheses into structured analysis
  • Effectively present and communicate analytics findings across the company
  • Work with the data engineer and other team members to define requirements and a roadmap for Babylist’s data platform

What You've Done

  • A Bachelor's or Master's degree with a strong quantitative focus
  • Statistical methodology: experiment design, multivariate analysis
  • Strong SQL fluency: ability to work with large datasets and advanced queries
  • Experience with a scripting language like Python
  • Superb problem synthesis and communication skills
  • Experience with tools of the trade: Tableau, Periscope, Chart.io, Excel, Jupyter Notebook
  • Understanding of data warehousing and relational data modeling fundamentals
  • An impact-oriented mindset; demonstrated ability to ask the right questions, prioritize work, and manage stakeholder expectations
  • Keen attention to detail and data integrity
  • An interest in or experience in data science and ML is highly desired
About Babylist

At Babylist, we help expecting parents get exactly what they need for the arrival of their new baby. We have a large and rapidly growing user base of passionate parents-to-be who are making important purchasing decisions for one of the biggest events in their lives, which is both exciting and overwhelming. Our core product is our universal baby registry. Currently, one in two first-time expecting families in the United States creates a baby registry at Babylist.com. In 2019, over $400 million worth of gifts were purchased from Babylist registries.

Why You Will Love Working at Babylist:

 • We get stuff done
 • We have a real impact on people’s lives
 • We're passionate about our users and we genuinely appreciate them
 • We work at a sustainable pace for long-term success (yes, we’re profitable)
 • We are growing and have meaningful opportunities for career advancement
 • We’re a technological and data-driven business
 • We believe in autonomy and reward taking initiative
 • We have experienced leadership that is always open to new ideas

Benefits:

 • Competitive pay
 • Competitive health benefits including company-funded medical, dental, and vision
 • 401(k), FSA plans, and disability insurance
 • Flexible, paid parental leave policy
 • Flexibility to work from home and prioritize home life as we navigate the impacts of COVID-19
 • Easy access to BART and commuter assistance (when we return to the office)
 • We work at a sustainable pace; in general we don't work late or on weekends, and most employees WFH on Wednesdays

If your experience is close to what we’re looking for, please consider applying. Experience comes in many forms – skills are transferable, and passion goes a long way. We know that diversity makes for the best problem-solving and creative thinking, which is why we’re dedicated to adding new perspectives to the team and encourage everyone to apply.
Senior DevOps Engineer
Underline
aws ansible terraform sdn sysadmin devops May 09

At Underline, we are driven by a singular mission: to ensure the vibrancy of our nation through building intelligent community infrastructure. We work with communities across the United States to design, finance, and construct open access fiber-optic networks. We believe these networks are the essential foundation for ultra-fast access to information, a competitive market for content and services, job formation and reskilling for workers, distributed healthcare services, new wireless solutions, and resilient modern infrastructure including responsible energy creation.  

We’re looking for a full-time Senior DevOps Engineer to join our Engineering team. Our Engineering team responds to the needs of the business, delivering well-designed and strategically aligned technology solutions in network architecture and design, customer acquisition, network service provider connectivity, automated operations, and support systems.

As Senior DevOps Engineer, you’ll work closely with our business and engineering teams to design and maintain our AWS, Software-Defined Networking, and configuration management environments. You will be exposed to all production services and infrastructure and will have a high degree of both responsibility and impact. You will get to work closely with leadership, product, engineering, our vendors, and our customers in order to create a best in class integrated system. Finally, you will provide some IT support for our small but growing team!

Join us as we help support the vitality and resilience of communities. 

What You'll Do

  • Advise in strategic technology decisions for internal systems and community fiber optic networks
  • Design, build, and maintain SD-WAN automation for configuration and operation of community networks
  • Deploy and manage network monitoring systems
  • Assist in the design and build of private cloud infrastructure
  • Manage our VPN systems for internal and external connectivity, and manage user credentials
  • Maintain our AWS environment and apps using Terraform and Ansible
  • Maintain our Kubernetes environment in AWS EKS
  • Assist in the design of core network infrastructure supporting our community networks
  • Manage our PKI tools and certificates
  • Ensure security and compliance for community networks, perimeter, and internal services 
  • Support end-users with workstation maintenance and compliance
  • Procure new machines and install, inspect, and maintain software as needed
  • Troubleshoot and resolve issues on our network as needed
  • Provide IT support on an ad hoc basis for our regional offices

About You

  • You are hungry, humble, and curious
  • You have excellent analytical, quantitative, and problem-solving skills
  • You want to solve extremely hard problems in the realms of data science, data engineering, and artificial intelligence 
  • You are an intrinsic self-starter and enjoy taking on new challenges
  • You want to be part of the early stages of a company where everything you build matters
  • You want a highly collaborative environment, comprised of diverse subject matter experts in broadband networking, data center infrastructure, software development, and design
  • You care about building critical intelligent infrastructure to increase community vitality and improve resilience 

About Us 

  • We offer bold solutions to drive positive, sustainable change 
  • We approach our work with humility, integrity, and intellectual rigor
  • We are agile and relentless in pursuit of our mission 
  • We have a mentality of service: to our mission, and to each other
  • We are dedicated to building and sustaining a culture of flourishing
  • We optimize our efforts to create positive-sum outcomes for community and capital

Requirements

  • B.S. or M.S. in Computer Science or related field, or equivalent experience
  • 5+ years of domain experience in the field of DevOps or IT
  • 3+ years of experience in Network Engineering
  • Strong foundation in DevOps and Infrastructure-as-Code principles
  • Familiarity with Linux administration and tuning, and Linux networking
  • Experience setting up, running, and monitoring public-facing web servers
  • Experience configuring and maintaining VPNs
  • Experience with Docker container technology and Kubernetes container orchestration
  • Experience with one or more of Ansible, Chef, or Puppet
  • Proficiency in shell scripting such as Bash and at least one dynamic programming language such as Python, Ruby, or Perl
  • Familiarity with SDN and SD-WAN automation and orchestration
  • Working knowledge of source control systems, preferably Git
  • Familiarity with IT compliance areas such as change management, incident response, and configuration management
  • Vendor management experience, including initial and annual diligence and contract negotiation
  • Familiarity with access control, centralized systems management, inventory management or directory management such as LDAP/ActiveDirectory and SSO
  • Familiarity with network and connectivity troubleshooting from both a user endpoint perspective as well as server and networking infrastructure perspective
Solutions Architect
Hyperscience
python data science machine learning api healthcare May 08
At Hyperscience, we use modern machine learning to turn documents into machine-readable data. Our customers receive a wide variety of documents, such as life insurance applications, paystubs, utility bills, and insurance claims, which must be processed quickly and accurately to better serve the people at these organizations and their customers. Amazingly, this is all done manually today. We’re on a mission to change that! Our product is already delivering value to large, blue-chip organizations in financial services and insurance, and we see a massive opportunity to expand to more industries and automate more business processes. We are looking for people who are excited to help us build upon this foundation and vision.

While at first this may not seem like a priority for a machine learning company, we see a huge opportunity in great user experience. Machine learning itself is not a product. While it’s a fundamental piece of our technology, it still must live in an ecosystem and be used by people. Human-in-the-loop is a powerful feature that allows us to process anything, provide feedback loops for accuracy reporting, and support on-demand model re-training. Our product works with people, and for people.

We're looking to grow our Engineering Team with the addition of a Solutions Architect.

As a Solutions Architect You Will Be:

  • The technical point of contact for the Customer Experience team when deep technical questions or issues arise from installing or using our product.
  • Creatively troubleshooting and solving problems given details from customers even when the initial answer is unclear.
  • Managing around-the-world engineering resources to diagnose and fix high priority escalations across multiple time zones and locations, facilitating information flow between the Customer Experience team and engineering as necessary.
  • Participating in customer calls which require more technical expertise.
  • Involved in architecture discussions and decisions when implementing new product features.
  • Leveraging your systems knowledge to deliver fast and scalable software, starting from the design of the system through development and extension.
  • Helping improve our code quality by writing unit tests and performing thorough code reviews.
  • Designing easy-to-use programmer interfaces and tools that will be leveraged by other developers, including APIs for our clients' developers.

Desired Experience:

  • Degree in Computer Science or related engineering field, or equivalent practical experience.
  • Experience in building web-scale and/or enterprise-grade systems in different environments.
  • Strong ability to reason about data structures, complexity, and possible engineering approaches to a problem.
  • Experience with Python / Django is preferred, but experience with any mainstream language and framework is required.
  • Experience with distributed systems is a huge plus.
  • Experience with database systems, including SQL and/or NoSQL solutions is required.
  • Strong background in data science and mathematics is a plus.
  • Experience with customer service is a plus.
  • Experience with version control systems, preferably Git.
  • Experience troubleshooting remote systems in a customer-owned environment with limited access is a huge plus.

What You Will Achieve:

  • Within your first 30 days:
      • You will get acquainted with, and eventually be fully comfortable navigating, the full codebase, the technology stack, the development processes, and the org structure within the company.
      • You will learn the product and will make your first significant, user-impacting contributions to one of our products.
      • You will get to know our ML domain, codebase, and practical applications.
      • You will begin to troubleshoot customer escalations as they arise.

  • Within your first quarter and beyond:
      • You will be an integral part of the team: a driven, focused self-starter who can navigate a certain amount of ambiguity and who is not afraid to take a sizable chunk of functionality, analyze it, break it down, implement it, and then assume ownership and responsibility over it.
      • You will take an active role in discussions about possible solutions, different approaches, API designs, and more.
      • You will decide when to bring in additional engineering help to handle escalations and will help facilitate information flow between CX and engineering.
      • You will contribute to shaping the way customer issues and requests are handled within Hyperscience.

Benefits & Perks

  • Top notch healthcare for you and your family
  • 30 days of paid leave annually to help nurture work-life symbiosis
  • A 100% 401(k) match for up to 6% of your annual salary
  • Stock Options
  • Paid gym membership
  • Pre-tax transportation and commuter benefits
  • 6 month parental leave (or double salary to pay for your partner's unpaid leave)
  • Free travel for any person accompanying a breastfeeding mother and her baby on a business trip
  • A child care and education stipend up to $3,000 per month, per child, under the age of 21 for a maximum of $6,000 per month total
  • Daily catered lunch, snacks, and drinks
  • Budget to attend conferences, train, and further your education
  • Relocation assistance
Hyperscience provides equal employment opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, sex, national origin, age, disability or genetics. In addition to federal law requirements, Hyperscience complies with applicable state and local laws governing nondiscrimination in employment in every location in which the company has facilities. This policy applies to all terms and conditions of employment, including recruiting, hiring, placement, promotion, termination, layoff, recall, transfer, leaves of absence, compensation and training.

This Year

Data Scientist
Imperfect Foods
python data science machine learning Apr 21
About Imperfect

Imperfect Foods was founded in 2015 with a mission to reduce food waste and build a better food system for everyone. We offer imperfect (yet delicious) produce, affordable pantry items, and quality meat and dairy. We deliver them conveniently to our customers’ doorsteps and pride ourselves on offering up to a 30% discount compared to grocery store prices. Our customers can get the healthy, seasonal produce they want alongside the grocery staples they rely on, without having to compromise their budget or values. We’re proving that doing the right thing for the planet doesn’t have to cost more, and that shopping for quality ingredients can support the people and resources that it takes to grow our favorite foods.

We're headquartered in San Francisco with operations all over the country. Check our website to see if there is an Imperfect near you!

We're looking for folks who are positive, motivated, and ready to change the world. If that sounds like you, drop us a line!

How we are protecting employees from COVID-19

At Imperfect Foods, employee health and safety is our top priority. We have implemented processes and precautions to prevent the spread of COVID-19 in our facilities. We provide gloves, masks, and hand sanitizer to all essential employees who must report to work. Before entering our warehouse, employees have their temperatures checked. In addition, we take great care to ensure frequently touched surfaces are sanitized throughout the day and all warehouses are fully sanitized weekly.

We have also implemented an Emergency Sick Leave policy providing full-time and part-time employees 2 additional weeks of paid time off and up to 26 weeks paid leave if they have a confirmed case of COVID-19.

About the Role:

Imperfect is looking for an experienced Data Scientist to join the Business Intelligence team. In this role, you will develop and automate algorithms that integrate with various parts of the business to deepen our customer understanding, improve our customer experience, and drive operational efficiency. Our Data Scientist will collaborate with departments across the company, such as Marketing, Operations, and Engineering, for a wide and deep impact. Some example projects include demand forecasting to help with warehouse labor and inventory planning, optimizations to improve warehouse efficiency, personalization and recommendation algorithms to enable customers to discover relevant products, and product subscription cadence optimization. As an early member of the data science team, your role will influence the tech stack and frameworks we develop. This role requires strong analytical horsepower and communication skills to effectively analyze and tell the story behind the data.
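To give a flavor of the demand-forecasting work mentioned above, here is a minimal seasonal-naive baseline in pandas. The data, granularity, and metric are invented for illustration and are not Imperfect's actual models:

```python
# Illustrative seasonal-naive baseline: forecast each day's demand as the
# demand observed on the same weekday one week earlier. Synthetic data only.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
days = pd.date_range("2020-01-01", periods=56, freq="D")
# Fake daily order counts with a weekly pattern plus noise
demand = 500 + 80 * np.sin(2 * np.pi * days.dayofweek / 7) + rng.normal(0, 20, len(days))
series = pd.Series(demand, index=days)

# Seasonal-naive forecast: shift the series by one week
forecast = series.shift(7)

# Score the last two weeks with mean absolute percentage error (MAPE)
actual, pred = series.iloc[-14:], forecast.iloc[-14:]
mape = (actual - pred).abs().div(actual).mean() * 100
print(f"seasonal-naive MAPE: {mape:.1f}%")
```

In practice a baseline like this mostly serves as the yardstick that fancier models have to beat.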

 If you like the idea of swimming in data, fighting food waste, and working with a bunch of pleasant people, come join us!

Responsibilities:

  • Build production-grade models on large-scale datasets by utilizing advanced statistical modeling, machine learning, or data mining techniques
  • Provide data-driven analyses and insights for strategic and business initiatives, while maintaining analytics roadmap to prioritize initiatives, communicate timelines, and ensure successful, timely completion of deliverables
  • Collaborate with the teams across the company to identify impactful business problems and translate them into structured analyses, actionable insights, and reports and dashboards
  • Assist with the development and deployment of analytical tools and develop custom models to track key metrics, uncover insights in the data, and automate analyses
  • Contribute to code reviews and software development best practices
  • Effectively communicate with initiative stakeholders, including technical and non-technical audiences. Tell the story behind the data

Skills and Qualifications:

  • 3+ years of professional experience as a data scientist, including deploying and maintaining code in a production environment
  • Experience with machine learning techniques and advanced analytics (e.g. regression, classification, clustering, time series, econometrics, mathematical optimization)
  • Advanced SQL and Python skills, including running advanced analytics in a scripting language. Bonus: experience in R and other languages
  • A solid grasp of basic statistical applications and methods (experimentation, probabilities)
  • Experience working with large data sets and preparing them for analysis

About You:

  • You're able to clearly and effectively communicate the results of complex analyses and transform data into a story for different audiences and various stakeholders
  • You demonstrate intellectual curiosity, and a passion for translating information into actionable insights, with data big, small, structured, and messy
  • You have the insight to take ambiguous problems and solve them in a structured, hypothesis-driven, data-supported way
  • You're a self-starter with the ability to juggle multiple projects at once
  • You’re passionate about our mission to eliminate food waste and create a better food system for all

Details of the Position:

  • Full-time exempt position reporting to the Director of Business Intelligence
  • Candidate can be remotely located within the US
  • Salary and employee stock options commensurate with experience
  • Competitive benefits package including health care, paid vacation, 401K, paid parental leave, and recurring credit towards your Imperfect account!

Physical Requirements:

  • Sedentary work; involves sitting most of the time
  • Occasional movement around the office may be necessary
  • Regular work with computers, including keyboards, mice, and screens
  • Regular use of mobile devices, including smartphones and tablets
Individuals seeking employment at Imperfect Foods are considered without regard to race, color, religion, national origin, age, gender, marital status, ancestry, physical or mental disability, veteran status, or sexual orientation.

U.S. E-Verify Notice: Imperfect Foods participates in E-Verify in the United States. Imperfect will provide the U.S. Social Security Administration (SSA) and, if necessary, the U.S. Department of Homeland Security (DHS), with information from each new employee's Form I-9 to confirm work authorization.
Data Scientist
Auth0  
python data science machine learning nlp docker aws Apr 17
Auth0 is a pre-IPO unicorn. We are growing rapidly and looking for exceptional new team members who will help take us to the next level. One team, one score. 

We never compromise on identity. You should never compromise yours either. We want you to bring your whole self to Auth0. If you’re passionate, practice radical transparency to build trust and respect, and thrive when you’re collaborating, experimenting and learning – this may be your ideal work environment.  We are looking for team members that want to help us build upon what we have accomplished so far and make it better every day.  N+1 > N.

The Data Scientist will help build, scale, and maintain the entire data science platform. The ideal candidate will have a deep technical understanding and hands-on experience in building Machine Learning models, coming up with valuable insights, and promoting a data-driven culture across the organization. They would not hesitate to wrangle data, understand the business objectives, and have a good understanding of the entire data stack. This position plays a key role in data initiatives, analytics projects, and influencing key stakeholders with critical business insights. You should be passionate about continuous learning, experimenting, applying, and contributing to cutting-edge open source Data Science technologies.

RESPONSIBILITIES

  • Use Python and the vast array of AI/ML libraries to analyze data and build statistical models to solve specific business problems.
  • Improve upon existing methodologies by developing new data sources, testing model enhancements, and fine-tuning model parameters.
  • Collaborate with researchers, software developers, and business leaders to define product requirements and provide analytical support.
  • Directly contribute to the design and development of automated selection systems.
  • Build customer-facing reporting tools to provide insights and metrics which track system performance.
  • Communicate verbally and in writing to business customers and leadership team with various levels of technical knowledge, educating them about our systems, as well as sharing insights and recommendations.

BASIC QUALIFICATIONS

  • Bachelor's degree in Statistics, Applied Math, Operations Research, Engineering, Computer Science, or a related quantitative field.
  • Proficient with data analysis and modeling software such as Spark, R, and Python.
  • Proficient with scripting languages such as Python and data manipulation/analysis libraries such as Scikit-learn and Pandas for analyzing and modeling data.
  • Experienced in using multiple data science methodologies to solve complex business problems.
  • Experienced in handling large data sets using SQL and databases in a business environment.
  • Excellent verbal and written communication.
  • Strong troubleshooting and problem-solving skills.
  • Thrive in a fast-paced, innovative environment.

PREFERRED QUALIFICATIONS

  • Graduated with a Master's degree or PhD in Statistics, Applied Math, Operations Research, Engineering, Computer Science, or a related quantitative field.
  • 2+ years’ experience as a Data Scientist.
  • Fluency in a scripting or computing language (e.g. Python).
  • Superior verbal and written communication skills with the ability to effectively advocate technical solutions to scientists, engineering teams, and business audiences.
  • Experienced in writing academic-styled papers for presenting both the methodologies used and results for data science projects.
  • Demonstrable track record of dealing well with ambiguity, ability to self-motivate, prioritizing needs, and delivering results in a dynamic environment.
  • Combination of deep technical skills and business savvy to interface with all levels and disciplines within our and our customer’s organizations.

SKILLS AND ABILITIES

  • At least 3 years of relevant work experience.
  • Ability to write, analyze, and debug SQL queries.
  • Exceptional problem-solving and analytical skills.
  • Fluent in implementing logistic regression, random forest, XGBoost, Bayesian, and ARIMA models in Python/R (see the sketch after this list).
  • Familiarity or experience with A/B testing and associated frameworks.
  • Familiarity with Sentiment Analysis (NLP) and LSTM AI models.
  • Experience with the full AI/ML life-cycle, from model development and training through deployment, testing, refining, and iterating.
  • Experience with or willingness to learn tools such as Tableau, Apache SuperSet, Looker, or similar BI tools.
  • Knowledge of AWS Redshift, Snowflake, or similar databases.
  • Familiarity with tools such as Airflow and Docker is a plus.
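As an illustration of the kind of modeling listed above, here is a minimal scikit-learn logistic regression sketch on synthetic data. The dataset and metric are stand-ins, not Auth0's actual work:

```python
# A minimal, illustrative sketch: fit and evaluate a logistic regression
# classifier with scikit-learn on a synthetic binary-classification dataset.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a real business dataset
X, y = make_classification(n_samples=5_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

model = LogisticRegression(max_iter=1_000)
model.fit(X_train, y_train)

# Evaluate with AUC, a common metric for binary classifiers
probs = model.predict_proba(X_test)[:, 1]
print(f"test AUC: {roc_auc_score(y_test, probs):.3f}")
```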

PREFERRED LOCATIONS

  • #AR; #US;
Auth0’s mission is to help developers innovate faster. Every company is becoming a software company and developers are at the center of this shift. They need better tools and building blocks so they can stay focused on innovating. One of these building blocks is identity: authentication and authorization. That’s what we do. Our platform handles 2.5B logins per month for thousands of customers around the world. From indie makers to Fortune 500 companies, we can handle any use case.

We like to think that we are helping make the internet safer. We have raised $210M to date and are growing quickly. Our team is spread across more than 35 countries and we are proud to continually be recognized as a great place to work. Culture is critical to us, and we are transparent about our vision and principles.

Join us on this journey to make developers more productive while making the internet safer!
Paid Research Study for Data Professionals
User Research International
data-analysis sql data-science Apr 10

User Research International is a research company based out of Redmond, Washington. Working with some of the biggest companies in the industry, we aim to improve your experience via paid research studies. Whether it be the latest video game or productivity tools, we value your feedback and experience. We are currently conducting a research study called the Data Professional Study. We are looking for currently employed Full-Time Data Professionals who use tools such as SQL, R and Python. This study is a one-time Remote Study via an online meeting. We’re offering $200 for participation in this study. Session lengths are 90 mins. These studies provide a platform for our researchers to receive feedback on existing or upcoming products or software. We have included the survey link for the study below. Taking the survey will help determine if you fit the profile requirements. Completing this survey does not guarantee you will be selected to participate. If it's a match, we'll reach out with a formal confirmation and any additional details you may need.

I have summarized the study details below. In order to be considered, you must take the survey below. Thank you!

Study: Data Professional Study

Gratuity: $200

Session Length: 90 minutes

Location: Remote

Dates: Available dates are located within the survey

Survey: Data Professional Study

Big Data Engineer
CrowdStrike
big data python data science machine learning aws Apr 09

At CrowdStrike we’re on a mission - to stop breaches. Our groundbreaking technology, services delivery, and intelligence gathering together with our innovations in machine learning and behavioral-based detection, allow our customers to not only defend themselves, but do so in a future-proof manner. We’ve earned numerous honors and top rankings for our technology, organization and people – clearly confirming our industry leadership and our special culture driving it. We also offer flexible work arrangements to help our people manage their personal and professional lives in a way that works for them. So if you’re ready to work on unrivaled technology where your desire to be part of a collaborative team is met with a laser-focused mission to stop breaches and protect people globally, let’s talk.

About the Role

We are looking to hire a Big Data Engineer for the Data Engineering team at CrowdStrike. The Data Engineering team operates within the Data Science organization, and provides the necessary infrastructure and automation for users to analyze and act on vast quantities of data effortlessly. The team has one of the most critical roles to play in ensuring our products are best-in-class in the industry. You will interact with product managers and other engineers in building both internal and external facing services.

This position is open to candidates in Bucharest (Office or Romania Remote), Brasov, Cluj, Iasi and Timisoara (Remote)

You will:

  • Write jobs using PySpark to process billions of events per day (see the sketch after this list)
  • Fine-tune existing Hadoop / Spark clusters
  • Rewrite some existing Pig jobs in PySpark
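For flavor, here is a minimal sketch of the kind of PySpark job described above, aggregating events from a hypothetical Parquet store. Paths, columns, and the aggregation are illustrative assumptions, not CrowdStrike's pipelines:

```python
# Hypothetical example: aggregate event counts per sensor from a day of events.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-event-counts").getOrCreate()

# Illustrative input path and columns; a real job would read from the team's
# actual event store and schema.
events = spark.read.parquet("s3://example-bucket/events/dt=2020-04-09/")

counts = (
    events
    .filter(F.col("event_type").isNotNull())
    .groupBy("sensor_id", "event_type")
    .agg(F.count("*").alias("event_count"))
)

counts.write.mode("overwrite").parquet(
    "s3://example-bucket/daily-counts/dt=2020-04-09/"
)
```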

Key Qualifications

You have:

  • BS degree in Computer Science or related field
  • 7+ years of relevant work experience
  • Experience in building data pipelines at scale (Note: We process over 1 Trillion events per week)
  • Good knowledge of Hadoop / Spark / Apache Kafka, Python, AWS, PySpark and other tools in the Big Data ecosystem
  • Good programming skills – Python
  • Operation experience in the tuning of clusters for optimal data processing
  • Experience in building out ETL jobs at scale
  • Good knowledge of distributed system design and associated tradeoffs
  • Good knowledge of CI / CD and associated best practices
  • Familiarity with Docker-based development and orchestration

Bonus points awarded if you have:

  • Created automated / scalable infrastructure and pipelines for teams in the past
  • Contributed to the open source community (GitHub, Stack Overflow, blogging)
  • Prior experience with Spinnaker, Relational DBs, or KV Stores
  • Prior experience in the cybersecurity or intelligence fields

Benefits of Working at CrowdStrike:

  • Market leader in compensation
  • Comprehensive health benefits
  • Working with the latest technologies
  • Training budget (certifications, conferences)
  • Flexible work hours and remote friendly environment
  • Wellness programs
  • Stocked fridges, coffee, soda, and lots of treats
  • Peer recognition
  • Inclusive culture focused on people, customers and innovation
  • Regular team activities, including happy hours, community service events

We are committed to building an inclusive culture of belonging that not only embraces the diversity of our people but also reflects the diversity of the communities in which we work and the customers we serve. We know that the happiest and highest performing teams include people with diverse perspectives and ways of solving problems so we strive to attract and retain talent from all backgrounds and create workplaces where everyone feels empowered to bring their full, authentic selves to work.

CrowdStrike is an Equal Opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex including sexual orientation and gender identity, national origin, disability, protected veteran status, or any other characteristic protected by applicable federal, state, or local law.

Sr. Data Engineer
CrowdStrike
golang python data science machine learning aws Apr 09

At CrowdStrike we’re on a mission - to stop breaches. Our groundbreaking technology, services delivery, and intelligence gathering together with our innovations in machine learning and behavioral-based detection, allow our customers to not only defend themselves, but do so in a future-proof manner. We’ve earned numerous honors and top rankings for our technology, organization and people – clearly confirming our industry leadership and our special culture driving it. We also offer flexible work arrangements to help our people manage their personal and professional lives in a way that works for them. So if you’re ready to work on unrivaled technology where your desire to be part of a collaborative team is met with a laser-focused mission to stop breaches and protect people globally, let’s talk.

About the Role

We are looking to hire a Sr. Data Engineer for the Data Engineering team at CrowdStrike. The Data Engineering team operates within the Data Science organization, and provides the necessary infrastructure and automation for users to analyze and act on vast quantities of data effortlessly. The team has one of the most critical roles to play in ensuring our products are best-in-class in the industry. You will interact with product managers and other engineers in building both internal and external facing services.

This role is open to candidates in Bucharest (Office or Remote), Cluj, Brasov and Iasi (Remote).

What You’ll Need

  • BS degree in Computer Science or related field.
  • 7+ years of relevant work experience.
  • Good knowledge of some (or all) of AWS, Python, Golang, Kafka, Spark, Airflow, ECS, Kubernetes, etc. to build infrastructure that can ingest and analyze billions of events per day (a minimal orchestration sketch follows this list).
  • Good knowledge of distributed system design and associated tradeoffs.
  • Good knowledge of CI / CD and associated best practices.
  • Familiarity with Docker-based development and orchestration.
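To illustrate the orchestration side mentioned above, here is a minimal Airflow DAG sketch (Airflow 2.x style). The DAG id, task, and schedule are invented for illustration, not CrowdStrike's actual pipelines:

```python
# Hypothetical example: a daily DAG with one Python task that would kick off
# an ingest step. Names and schedule are illustrative only.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest_events():
    # Placeholder for real ingest logic (e.g., pulling a day of events from
    # Kafka or S3 into the analytics store).
    print("ingesting events")

with DAG(
    dag_id="daily_event_ingest",
    start_date=datetime(2020, 4, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="ingest_events", python_callable=ingest_events)
```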

Bonus points if you have…

  • Created automated / scalable infrastructure and pipelines for teams in the past.
  • Contributed to the open source community (GitHub, Stack Overflow, blogging).
  • Prior experience with Spinnaker, Relational DBs, or KV Stores.
  • Prior experience in the cybersecurity or intelligence fields.

Benefits of Working at CrowdStrike:

  • Market leader in compensation
  • Comprehensive health benefits + 401k plan (US only)
  • Flexible work hours and remote friendly environment
  • Wellness programs
  • Stocked fridges, coffee, soda, and snacks
  • Peer recognition
  • Inclusive culture focused on people, customers and innovation
  • Regular team activities, including happy hours, community service events

We are committed to building an inclusive culture of belonging that not only embraces the diversity of our people but also reflects the diversity of the communities in which we work and the customers we serve. We know that the happiest and highest performing teams include people with diverse perspectives and ways of solving problems so we strive to attract and retain talent from all backgrounds and create workplaces where everyone feels empowered to bring their full, authentic selves to work.

CrowdStrike is an Equal Opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex including sexual orientation and gender identity, national origin, disability, protected veteran status, or any other characteristic protected by applicable federal, state, or local law.

Senior Data Scientist, Experimentation
Atlassian
senior data science cloud testing Apr 07
Atlassian is continuing to hire for all open roles with all interviewing and on-boarding done virtually due to COVID-19. Everyone new to the team, along with our current staff, will temporarily work from home until it is safe to return to our offices.

We are looking for a Data Scientist, Experimentation to join our Core Data Science team.

The ideal candidate will be the authority on statistical methods and practices for experimentation, with hands-on experience developing and evangelizing correct measurement methodologies across experimentation practitioners. The candidate should also be an excellent communicator, able to partner with various multi-functional teams.

In this role, you'll get to

  • Define technical vision & roadmap for experimentation at Atlassian
  • Leverage existing industry-leading best practices for experimental design and statistical analysis, including but not limited to multi-variate testing, causal inference techniques, bootstrapping, and Bayesian approaches (a minimal bootstrap sketch follows this list)
  • Drive the research direction in developing methods and tools that increase the rigor and efficiency of our experimentation platform
  • Consult product analysts & data scientists on complicated experimental designs and on analyses estimating causal effects
  • Partner with multi-functional teams such as engineering, product management, measurement platform, and various data teams to develop standard practices
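As a tiny illustration of the bootstrapping approach named above, here is a percentile-bootstrap confidence interval for a treatment effect on synthetic data. Everything here is a stand-in, not Atlassian's experimentation platform:

```python
# Illustrative only: a percentile-bootstrap confidence interval for the
# difference in means between a treatment and a control group.
import numpy as np

rng = np.random.default_rng(7)

# Synthetic stand-ins for per-user metric values in each experiment arm
control = rng.normal(loc=10.0, scale=3.0, size=2_000)
treatment = rng.normal(loc=10.4, scale=3.0, size=2_000)

n_boot = 10_000
diffs = np.empty(n_boot)
for i in range(n_boot):
    c = rng.choice(control, size=control.size, replace=True)
    t = rng.choice(treatment, size=treatment.size, replace=True)
    diffs[i] = t.mean() - c.mean()

ci_low, ci_high = np.percentile(diffs, [2.5, 97.5])
print(f"observed lift: {treatment.mean() - control.mean():.3f}")
print(f"95% bootstrap CI: [{ci_low:.3f}, {ci_high:.3f}]")
```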

On the first day, we'll expect you to have

  • M.S. or Ph.D. in Statistics, Mathematics, Physics or related field with related coursework in Statistics
  • 5+ years of industry experience running and analyzing behavioral experiments
  • Statistical intuition and knowledge of various hypothesis testing and regression approaches
  • Background in at least one programming language (e.g., R, Python)
  • Strong verbal and written communication, and presentation skills
  • Able to partner with different functional teams, form relationships, and influence others to get work done
More about our team

The Growth & Core Data Science team is passionate about making our customers successful, starting with an intimate understanding of their product portfolio and user behavior. We are responsible for guiding Atlassian through a multi-year initiative to build and promote growth across our Cloud products to meet the needs of all customers and prospects. Critically, we collaborate with a wide range of teams in Atlassian to ultimately drive action and make customers widely successful with our products. We have a deeply-ingrained experimentation culture: every day our online experimentation system powers experiments that quantitatively inform our product decision making, and our product engineers and designers are well educated in good experimentation practices.



More about our benefits

Whether you work in an office or a distributed team, Atlassian is highly collaborative and yes, fun! To support you at work (and play) we offer some fantastic perks: ample time off to relax and recharge, flexible working options, five paid volunteer days a year for your favourite cause, an annual allowance to support your learning & growth, unique ShipIt days, a company paid trip after five years and lots more.

More about Atlassian

Creating software that empowers everyone from small startups to the who’s who of tech is why we’re here. We build tools like Jira, Confluence, Bitbucket, and Trello to help teams across the world become more nimble, creative, and aligned—collaboration is the heart of every product we dream of at Atlassian. From Amsterdam and Austin, to Sydney and San Francisco, we’re looking for people who want to write the future and who believe that we can accomplish so much more together than apart. At Atlassian, we’re committed to an environment where everyone has the autonomy and freedom to thrive, as well as the support of like-minded colleagues who are motivated by a common goal to: Unleash the potential of every team.

Additional Information

We believe that the unique contributions of all Atlassians are the driver of our success. To make sure that our products and culture continue to incorporate everyone's perspectives and experience, we never discriminate on the basis of race, religion, national origin, gender identity or expression, sexual orientation, age, or marital, veteran, or disability status.

All your information will be kept confidential according to EEO guidelines.
Site Reliability Engineer
RStudio  
python data science cloud azure Apr 02

RStudio creates great software that helps people understand data and make better decisions in real world applications. Our core is an open source data science toolchain and we aim to make it available to everyone, regardless of their economic means.

We are seeking our next Site Reliability Engineer. As a member of our team you will have the opportunity to work on a variety of different projects including our cloud marketplace offerings, hosted applications, and internal infrastructure, which will greatly impact our end-users and employees. You will be given a large amount of autonomy to determine the right tool for the right job. This is a cross functional role and you will partner closely with the engineering, support and solutions teams.

What you may work on

  • Design operational processes and solutions to proactively address issues before they become customer-facing
  • Identify persistent or recurring problems and recommend creative solutions
  • Build tools and solutions for bridging software development teams with system infrastructure

About you

  • 4+ years experience with a high-level scripting language such as Python or Ruby
  • 3+ years advanced-level experience with Linux
  • 2+ years experience with Amazon Web Services
  • You have experience designing and implementing elastic solutions while ensuring no single point of failure
  • Familiarity with system scalability, monitoring, and performance with the ability to troubleshoot systems, network, and storage
  • Proficiency with automation tools, such as Chef or Puppet, in a production environment
  • Solid understanding of the challenges with creating, scaling, and managing distributed applications and services
  • Educate, train, and coach the engineering and solutions engineering teams in best practices
  • Effectively use tools and techniques to maximize impact on scaling services and systems
  • Ability to work independently on a number of projects with employees across teams
  • Experience with Kubernetes, Ceph, Azure, or Windows administration is a plus

About us

  • We welcome all talented engineers and are committed to a culture that represents diversity in all its forms.
  • We prioritize giving engineers “focus time” to get deep work done. We minimize meetings and attempt to operate asynchronously.
  • We are a learning organization and take mentorship and career growth seriously. We hope to learn from you and we anticipate that you will also deepen your skills, influence, and leadership as a result of working at RStudio.
  • We operate under a unique sustainable business model: 50% of engineering we do at RStudio is open source. We are profitable and we plan to be around twenty years from now.

Notable

  • 100% distributed team (or come in to one of our offices in Seattle or Boston) with minimal travel
  • Competitive compensation with great benefits including:
  • medical/dental/vision insurance (100% of premiums covered)
  • 401k matching
  • a home office allowance or reimbursement for a coworking space
  • a profit-sharing system
  • Flexible environment with a generous vacation policy

RStudio is committed to being a diverse and inclusive workplace. We encourage applicants of different backgrounds, cultures, genders, experiences, abilities, and perspectives to apply. All qualified applicants will receive consideration for employment without regard to race, color, national origin, religion, sexual orientation, gender, gender identity, age, physical disability, or length of time spent unemployed.

Product Data Science
Loom  
data science python healthcare Mar 19
About Loom
Loom is a more effective way to communicate in the workplace that's already trusted by more than 1.8M users across more than 50k companies. Our customers are global and use Loom at work at world-class companies, including HubSpot, Square, Uber, GrubHub, and LinkedIn.

Founded in 2016, Loom has raised $45 million from top-tier investors including Kleiner Perkins, Sequoia, and the founders of Instagram, Figma and Front.

The role
You will be collaborating with stakeholders across the company to drive key decisions and initiatives, such as go to market and product strategy. We’re looking for passionate people to drive actionable insights and help build our analytics infrastructure. 

You will:

  • Design and develop core business metrics (OKRs), create insightful automated dashboards and data visualization to track progress
  • Partner with product managers, engineers, marketers, designers, and business operations to translate business insights into decisions and action
  • Design and analyze product experiments; communicate results and learnings to the rest of the team
  • Automate analyses and data pipelines while building scalable data infrastructure

You could be a good fit if you have:

  • 2-4 years of experience in a data science or analytics role
  • Proficiency in SQL - able to write structured and efficient queries on large data sets
  • Strong communication skills with the ability to synthesize insights into compelling stories
  • Good product and business sense
  • Experience with ETL tools such as Airflow or DBT
  • Experience designing, running, and analyzing A/B tests (see the sketch after this list)
  • Working knowledge of the Unix command line and git is a nice-to-have
  • Proficiency in R and/or Python is also a nice-to-have
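For a taste of the A/B test analysis mentioned above, here is a minimal two-proportion z-test using statsmodels. The counts are invented for illustration, not Loom's data:

```python
# Illustrative only: compare conversion rates between control and variant
# with a two-proportion z-test.
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical results: conversions and sample sizes per arm
conversions = [412, 480]   # control, variant
samples = [10_000, 10_000]

z_stat, p_value = proportions_ztest(count=conversions, nobs=samples)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
# A small p-value suggests the lift is unlikely to be chance alone, subject
# to the usual caveats (pre-registered metric, adequate power, no peeking).
```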
Perks at Loom
* Competitive compensation and equity package
* 99% company paid medical, dental, and vision coverage for employees and dependents (for US employees)
* Flexible Spending Account (FSA) and Dependent Care Flexible Spending Account (DCFSA)
* Healthcare reimbursement (for International employees) 
* Life insurance
* Long-term disability insurance
* 401(k) with 5% company matching
* Professional development reimbursement
* Mental health and wellness reimbursement
* Gym reimbursement
* Unlimited PTO 
* Paid parental leave
* Remote work opportunities 
* Yearly off-site retreats (this year was in Barbados)
 
SF office perks
* Daily lunch on-site
* Unlimited snacks & drinks
* Remote week every other month

Remote-specific perks
* Home office & technology reimbursement
* Co-working space reimbursement 
* New-Hire on-boarding in San Francisco (optional)

Loom is an equal opportunity employer.
We are actively seeking to create a diverse work environment because teams are stronger with different perspectives and experiences.

We value a diverse workplace and encourage women, people of color, LGBTQIA individuals, people with disabilities, members of ethnic minorities, foreign-born residents, older members of society, and others from minority groups and diverse backgrounds to apply. We do not discriminate on the basis of race, gender, religion, color, national origin, sexual orientation, age, marital status, veteran status, or disability status. All employees and contractors of Loom are responsible for maintaining a work culture free from discrimination and harassment by treating others with kindness and respect.
Full Stack Engineer
Triplebyte
full stack python ruby data science machine learning frontend Mar 18

About Triplebyte

Triplebyte is transforming the way software engineers are hired. Our mission is to build an open, valuable and skills-based credential for all engineers. This is important because millions of people have skills (and deserve good jobs), but don’t fit the profile that recruiters seek. Another way of saying this is that talent is uniformly distributed, but opportunity is not. Our goal is to broaden the distribution of opportunity.

To do this, we have built a background-blind technical assessment and interview process, and we use it to find engineers and help them get jobs at 450+ top companies. Our rich understanding of candidates’ skills and proprietary machine learning models enable us to find the right match between our candidates and partner companies. This is why companies like Apple, Dropbox and American Express trust Triplebyte’s technical assessment to identify the best engineers for their open roles and reduce the time and effort it takes to hire them.

We just raised a $35 million Series B and our team of 65 is growing quickly! Now is a great time to join as we're on an exciting growth trajectory. You will have lots of opportunities for taking on responsibility and developing new skills quickly.

We're an experienced team: the founders have each built and sold companies before. Ammon and Guillaume founded Socialcam (acquired by Autodesk for $60 million), and Harj was the first partner hired at Y Combinator after its founding.

We are rapidly growing our engineering team and are looking for generalist, full-stack, frontend, backend, machine learning, and DevOps engineers!

Building the best product

The Triplebyte engineering team is still rather small: only 8 people, all of whom went through the Triplebyte process :) We move fast, release new features daily and iterate quickly. Triplebyte is growing very quickly, and the engineering team is fully dedicated to supporting that growth in any way we can. We are a generalist engineering team: we work on anything that helps the company or other teams grow. We cycle through backend, full-stack and frontend work based on the most critical needs, and all of us are encouraged to work on all those parts.

Our frontend is mostly in React/Redux. Our backend is in Ruby on Rails, Postgres and Redis. (We also use Python with TensorFlow for all our data science work.)

It doesn't seem like it, but we have built a LOT of software. We are a truly full-stack company, building a process that needs to be perfect end to end. We have software for engineers, for interviewers, for writers, for companies, for us, etc. If that's any indication of scale, we recently crossed 200 tables in our Postgres database :)

Join us and help us build the best product! We value initiative, productivity, and ownership.

Compensation and Benefits

  • Competitive salary and stock options package
  • Open vacation policy
  • Employer paid health, vision and dental insurance
  • 401(k) plan with matching
  • Pre-tax commuter benefits
  • Daily catered lunches

Our Mission

We believe strongly in building a truly meritocratic, unbiased process for finding great talent. Even the best technology companies today still use where people went to college as a proxy for intelligence and ability. We're building a process that looks only at ability, not credentials, so we can have a future where everyone can focus on just learning and being good at what they do, not how they look on paper.

Every aspect of running a company has been improved over the last decade, except hiring. Most decisions are still made using amorphous terms like "gut feel" or "culture fit". They should be made using crisp data. Only a company specializing in this problem, using data collected from the hiring process at hundreds of companies, can solve it. That's the company we're building. Our mission is to create a scientific method for identifying great talent and intelligently routing it to the best place, starting with software engineers.

The Company is an equal opportunity employer and makes employment decisions on the basis of merit and business needs. The Company does not discriminate against employees or applicants (in any aspect of employment, including, but not limited to recruiting and hiring, job assignment, compensation, opportunities for advancement, promotion, transfers, evaluation, benefits, training, discipline, and termination), on the basis of any characteristic protected under applicable federal, state, or local laws.

Share this job:
Senior Data Scientist / Backend Engineer
komoot  
aws python data-science machine-learning kotlin backend Mar 16

Millions of people experience real-life adventures with our apps. We help people all over the world discover the best hiking and biking routes, empowering our users to explore more of the great outdoors. And we’re good at it: Google and Apple have listed us as one of their Apps of the Year numerous times—and we are consistently ranked amongst the highest-grossing apps in both Google Play and the App Store.

To help us continue to grow, we are looking for an experienced data scientist dedicated to coding and building production-ready services.

With over 9 million active users, komoot possesses a unique dataset of user-generated content, ranging from GPS data from tours, uploaded photos, and tips, to implicit and explicit user feedback. Using this data as well as various open data sources, you will drive product enhancements forward that will directly impact the user experience. We believe that innovations based on data science will reinforce and extend our leadership in the outdoor market and your role will be decisive for komoot’s success.

Your key responsibilities

  • Work closely with our web and mobile developers, designers, copywriters and product managers
  • Discuss product improvements, technical possibilities and road maps
  • Investigate and evaluate data science approaches for product enhancements
  • Write code that is well structured, well tested and documented
  • Enhance existing components and APIs as well as write new services from scratch
  • Deploy and monitor your code in our AWS Cloud (you can count on the support of experienced backend engineers)

Why you will love it

  • You will be challenged in a wide range of data science tasks
  • You deal with a diverse set of data (user-generated content, analytics data and external data sources)
  • You go beyond prototyping and ship your code to production
  • You contribute to a product with a vision to inspire more people to go outdoors
  • You’ll work in a fast-paced startup with strongly motivated and talented co-workers
  • You’ll enjoy the freedom to organize yourself the way you want
  • We let you work from wherever you want, be it a beach, the mountains, your house or anywhere else in a time zone between UTC-1 and UTC+3
  • You’ll travel together with our team to amazing outdoor places several times a year to exchange ideas, learnings and go for hikes and rides

You will be successful in this position if you

  • Have a passion for finding pragmatic and smart solutions to complex problems
  • Have 3+ years of industry experience in data science
  • Have 2+ years of professional programming experience, preferably in Python or Java
  • Have experience with technologies like pandas, numpy, Jupyter Notebooks, seaborn, scikit-learn, PyTorch and TensorFlow (see the sketch after this list)
  • Know your toolkit: git, ssh, bash and docker
  • Have experience with AWS, infrastructure as code and monitoring (a plus)
  • Have strong communication and team skills
  • Have a hands-on attitude and are highly self-driven
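
As a flavor of the tools listed above, here is a small, self-contained sketch of the prototype-to-production path the role describes. The tour features, label, and model choice are invented for illustration; komoot's real data and services are of course richer.

```python
# Illustrative only: predict whether a user will complete a tour from
# simple tour features. Feature names, data, and label are invented.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
tours = pd.DataFrame({
    "distance_km":   rng.uniform(2, 80, 500),
    "elevation_m":   rng.uniform(0, 2500, 500),
    "sport_is_hike": rng.integers(0, 2, 500),
})
completed = (tours["distance_km"] < 40).astype(int)  # toy label

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(tours, completed)

def completion_probability(tour: dict) -> float:
    """Production-style entry point: score a single tour dict."""
    return float(model.predict_proba(pd.DataFrame([tour]))[0, 1])

print(completion_probability(
    {"distance_km": 25.0, "elevation_m": 900.0, "sport_is_hike": 1}))
```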

Sounds like you?

Send us the following:

  • Your CV in English
  • A write-up explaining who you are and why you are interested in working at komoot
  • Examples of your work (e.g. GitHub Repositories, PDFs, Slideshare, etc.)
  • Feel free to send us something that shows us a little more about what you’re interested in, be it your Twitter/Instagram account, a blog or something else
Share this job:
Data Engineer - Lead Subject Matter Expert
data science big data Mar 13
Role:

At Springboard, we are on a mission to bridge the skills gap by delivering high-quality, affordable education for new economy skills. We’ve already launched hundreds of our students into Data Science careers through our top-rated Data Science course that pairs students with an industry mentor and offers them a Job Guarantee.  

Now we’re expanding our Data Science course offerings, and we’re looking for an expert who has a strong background in Data Engineering to help us build a new Data Engineering course in the coming months. This is a unique opportunity to put your expertise into action to educate the next generation of Data Engineers and increase your domain mastery through teaching. 

The course will be an online 6-to-9 month program designed to help students find a job within 6 months of completion. You’ll set the vision to ensure we’re teaching all that is needed to succeed as a Data Engineer. Your work will include creating projects and other materials to define students’ learning experiences.  

This will be a part-time contract role for a duration of 3-4 months (starting immediately), with potential for ongoing consulting work. We estimate a workload of roughly 15-20 hours/week. You can work with us out of our office in San Francisco or remotely. This is a paid engagement.

Responsibilities:

You’ll work with our curriculum development team to create a Data Engineering course.

As part of this role, you will

  • Set the vision for effectively teaching key data engineering concepts and skills
  • Define learning objectives and course structure (units and projects)
  • Collaborate with the instructional designers and other subject matter experts to build the full curriculum. This includes:
      • Designing, writing, and building course projects (and associated resources)
      • Curating high-quality resources (videos, articles) that effectively teach course topics
      • Writing descriptions that summarize and explain the importance of each topic covered in the course
  • Create rubrics for mentors to evaluate student work (especially course projects)

Experience

  • Currently working as a Data Engineer in the U.S., with 3+ years of experience, including data warehousing, ETL, big data systems, data modeling and schema design, and owning data quality
  • 1+ years of experience hiring and/or managing Data Engineers
  • Passion for teaching. Previous teaching experience is a huge bonus.

Skills

  • Understanding of Data Engineering landscape and how the field varies across companies
  • Ability to identify the tools and industry practices students need to learn to successfully become Data Engineers
  • Clear point-of-view on what skills are needed for an entry level Data Engineer role and how to teach them in a structured manner 
  • Proven ability to create projects with clear instructions and documentation
  • Excellent verbal & written communication skills

You are

  • Able to work independently and produce high-quality work without extensive supervision 
  • Diligent about meeting deadlines
  • A collaborator working efficiently with a diverse group of individuals
  • Receptive and responsive to feedback and are willing to iterate on work
  • Passionate about education

Availability

  • 15-20 hours of work/week for 3-4 months, starting immediately
  • Must be available to connect synchronously during PST working hours on weekdays
  • Can be remote or work from our SF office
Share this job:
Manager, Product Design
Auth0  
product design data science Mar 13
Auth0 is a pre-IPO unicorn. We are growing rapidly and looking for exceptional new team members who will help take us to the next level. One team, one score.

We never compromise on identity. You should never compromise yours either. We want you to bring your whole self to Auth0. If you’re passionate, practice radical transparency to build trust and respect, and thrive when you’re collaborating, experimenting and learning – this may be your ideal work environment.  We are looking for team members that want to help us build upon what we have accomplished so far and make it better every day.  N+1 > N.

As a Product Design Manager you will work collaboratively with our design, product and engineering teams to build solutions for Auth0’s Identity and Access Management platform. You’ll join a diverse design team from around the world in a fully remote company to simplify the complex world of building authentication and authorization solutions for web, mobile, and legacy applications so our customers can focus on their core business.

Your Role And Impact
We're looking for a design lead for the team that builds and operates our Identity and Access Management products. As the design lead, you’ll report to the Head of Design and have influence over the customer experience across all of the IAM domain and customer touchpoints. You will lead a team of talented designers, and you will work within the support of the Auth0 product delivery organization while also rethinking the experience that we see today.
Your team will include product designers and subject matter experts, and you will provide direction and guidance for their output, facilitating cross-functional partnerships and the design process. You will promote a world-class design practice, influence product strategy, and ensure product execution. As a Design Lead, you'll provide mentorship and champion the craft and career development of your team, while creating a safe haven so designers can accomplish their best work. At times you will be hands-on with the design work, pitching in on smaller project execution to support your team to do deep, focused work; however, this is primarily a leadership and management role.

Who we’re looking for:

Responsibilities:

  • Manage and inspire a team of product designers providing direction and pushing the quality of the work
  • Influence product strategy and contextualize high-level decisions and direction for your team.
  • Build relationships with cross-functional team members from Engineering, Product, Program Management, and Data Science and speak to product strategy, design goals, and execution.
  • Manage designers who will continue to push the bar of excellence for the team
  • Advocate for user research for the needs of the product and meet with customers to gain first-hand experience
  • Partner with the product manager or equivalent stakeholder, speaking regularly with customers and bringing details of customer insight
  • Work effectively with a junior product/marketing manager or someone new to the company
  • Help other managers see things through a business & impact lens
  • Inspire your team to think about the impact they're making in customers' lives
  • Build a year-long technical vision that's intertwined with the product vision and that the whole team can articulate.
  • Articulate the biggest business drivers in terms of customer segment, return on investment, degree of product maturity, etc.

Requirements

  • 6-10 years of experience in a fast-paced and dynamic cross-functional environment
  • Mastery of product design, user interface and interaction design, visual design, storytelling, and prototyping.
  • Can identify larger scope issues and play a leadership role with cross-functional partners to develop creative solutions for bringing new products to market.
  • Able to facilitate creative workshops that drive solutions for large scope issues.
  • Proven record of acting as a leader across multiple teams through the application of design thinking, craft, customer focus, and product contributions.
  • Experience managing widely distributed teams (geography and time)
  • Able to develop and communicate a cohesive strategy for how all products in your area work together to advance a long term product vision and add customer value.
  • Can consistently demonstrate an insightful understanding of customer needs, the competitive landscape, emerging technology, and market trends and set an example of delivering high-value work for customers.
  • Experience working across multiple product areas and uniting stakeholders
  • Experience running design teams, and collaborating effectively with engineering managers, product leaders and researchers
  • Experience with the creation of design systems and componentization
  • Consistent track record of coaching talented designers to achieve their best performance
  • Experience directing both the visual and interaction design of projects, across both web and mobile platforms
  • A diverse portfolio (including B2B & B2C projects), across web and mobile, that reflects critical thinking and design excellence
  • Strong communication, storytelling and presentation skills, delivered with confidence
  • Comfort in a fast-paced, highly-dynamic environment with multiple stakeholders
  • Clear and precise writing and communication skills
  • Embrace and encourage diverse perspectives to inspire product thinking
  • Business acumen

Preferred Locations:

  • #US
Auth0’s mission is to help developers innovate faster. Every company is becoming a software company and developers are at the center of this shift. They need better tools and building blocks so they can stay focused on innovating. One of these building blocks is identity: authentication and authorization. That’s what we do. Our platform handles 2.5B logins per month for thousands of customers around the world. From indie makers to Fortune 500 companies, we can handle any use case.

We like to think that we are helping make the internet safer. We have raised $210M to date and are growing quickly. Our team is spread across more than 35 countries and we are proud to continually be recognized as a great place to work. Culture is critical to us, and we are transparent about our vision and principles.

Join us on this journey to make developers more productive while making the internet safer!
Share this job:
Backend Engineer, Data Processing Rust
backend java data science machine learning big data linux Mar 13
About Kraken

Our mission is to accelerate the adoption of cryptocurrency so that you and the rest of the world can achieve financial freedom and inclusion.  Founded in 2011 and with over 4 million clients, Kraken is one of the world's largest, most successful bitcoin exchanges and we're growing faster than ever. Our range of successful products are playing an important role in the mainstream adoption of crypto assets.  We attract people who constantly push themselves to think differently and chart exciting new paths in a rapidly growing industry. Kraken is a diverse group of dreamers and doers who see value in being radically transparent. Let's change the way the world thinks about money! Join the revolution!

About the Role

This is a fully remote role; we will consider applicants based in North America, South America and Europe.

Our Engineering team is having a blast while delivering the most sophisticated crypto-trading platform out there. Help us continue to define and lead the industry.

As part of Kraken's Backend Data team, you will work within a world-class team of engineers building Kraken's infrastructure using Rust. As a Backend Engineer in Data Processing, you will help design and build fraud and security detection systems leveraging big data pipelines, machine learning and Rust.

Responsibilities:

  • Design and implement micro-services in Rust
  • Write reusable, testable, and efficient code
  • Implement risk evaluation and anti-fraud systems, or similar scoring and anomaly detection systems
  • Choose and design adequate data processing storage and pipelines
  • Work with our Fraud/Data Science team or provide the data science know-how to support product requirements

Requirements:

  • At least 5 years of experience in software engineering
  • Experience with Rust
  • Experience writing network services or asynchronous code
  • Python, Java or similar work experience
  • Working knowledge of Kafka, Pulsar or similar
  • Experience using a Linux server environment
  • Ability to independently debug problems involving the network and operating system

A strong candidate will also:

  • Be familiar with deployment using Docker
  • Have previous work experience on risk scoring or anomaly detection systems (see the sketch after this list)
  • Have experience with Machine Learning and its ecosystem
  • Have experience with other strongly typed programming languages
  • Have experience using SQL and distributed data solutions like Spark, Hadoop or Druid
  • Be passionate about secure, reliable and fast software
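
A toy illustration of the scoring/anomaly-detection idea mentioned above, sketched in Python for brevity even though this team works in Rust. The data, the MAD-based method, and the cutoff are illustrative assumptions, not Kraken's actual system.

```python
# Toy anomaly scorer: flag transactions far from the median in MAD units.
# All values and the 3.5 cutoff are invented for illustration.
import numpy as np

def robust_z_scores(amounts: np.ndarray) -> np.ndarray:
    """Score each amount by its distance from the median, in MAD units."""
    median = np.median(amounts)
    mad = np.median(np.abs(amounts - median)) or 1e-9  # avoid divide-by-zero
    return 0.6745 * (amounts - median) / mad           # 0.6745 ~ normal consistency

amounts = np.array([12.0, 9.5, 11.2, 10.8, 950.0])     # one suspicious outlier
flags = np.abs(robust_z_scores(amounts)) > 3.5         # common rule-of-thumb cutoff
print(flags)  # -> [False False False False  True]
```
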
We’re powered by people from around the world with their own unique backgrounds and experiences. We value all Krakenites and their talents, contributions, and perspectives.

Check out all our open roles at https://jobs.lever.co/kraken. We’re excited to see what you’re made of.  

Share this job:
Full stack engineer - GH
Dataiku  
full stack java python javascript data science postgresql Mar 13
Dataiku’s mission is big: to enable all people throughout companies around the world to use data by removing friction surrounding data access, cleaning, modeling, deployment, and more. But it’s not just about technology and processes; at Dataiku, we also believe that people (including our people!) are a critical piece of the equation.



As a full stack developer in the Dataiku engineering team, focusing on Governance Hub (GH), you will play a crucial role in helping us bootstrap this new product.

GH is an application that monitors and manages data initiatives across the various departments of a company. It ensures that good practices and governance rules are enforced, and it is fully customizable to fit the policies and processes of the company.

Our backend is mainly written in Java and the storage layer uses PostgreSQL 12. Our frontend is based on the latest version of Angular.

One of the most unique characteristics of GH is its fully-custom model and its strong integration with numerous external tools (project management, visualization, data science, etc.). This makes such software interesting and challenging to design and develop.

This is a full-time position, based in France either in our Paris office or remote.

RESPONSIBILITIES

  • Turn ideas or minimal specifications into full-fledged product features, including unit and end-to-end tests.
  • Tackle complex problems that range from performance and scalability to usability, so that complicated machineries look straightforward and simple to use for our users.
  • Help your coworkers: review code, spread your technical expertise, improve our tool chain
  • Bring your energy to the team!

You are the ideal recruit if:

  • You have mastered a programming language (Java, C#, Python, JavaScript, you name it, ...).
  • You know that low-level Java code and slick web applications in JavaScript are two sides of the same coin and are eager to use both.
  • You are not surprised that “Math.max() < Math.min()” is true in JS (called with no arguments, Math.max() returns -Infinity and Math.min() returns Infinity)
  • You have prior experience (either professional or personal) building a real product.

Hiring process:

  • Initial call with the talent acquisition manager
  • On-site meeting (or video call) with the hiring manager
  • Take-home test to show your skills
  • Final on-site interviews
To fulfill its mission, Dataiku is growing fast! In 2019, we achieved unicorn status, went from 200 to 400 people and opened new offices across the globe. We now serve our global customer base from our headquarters in New York City as well as offices in Paris, London, Munich, Amsterdam, Denver, Los Angeles, Singapore, Sydney and Dubaï. Each of them has a unique culture, but underpinning local nuances, we always value curiosity, collaboration, and can-do attitudes!
Share this job:
Technical Support Engineer
Dataiku  
python data science big data docker cloud azure Mar 10
Dataiku’s mission is big: to enable all people throughout companies around the world to use data by removing friction surrounding data access, cleaning, modeling, deployment, and more. But it’s not just about technology and processes; at Dataiku, we also believe that people (including our people!) are a critical piece of the equation.



Dataiku is looking for an experienced Technical Support engineer to join its rapidly growing international team (with members distributed across the US, EMEA, and APAC regions). The ideal candidate is an autonomous individual who is passionate about getting big data and data science technologies working together to solve business problems, and who will efficiently help customers solve their technical issues with Dataiku DSS. It is a great opportunity to join Dataiku early on and help scale that critical function for the company.

As a Technical Support Engineer, you are a polished communicator and a trusted technical resource. You have worked with sophisticated/demanding customers, and you demonstrate excellent judgment in prioritization and are a multi-tasker. You love learning new cutting-edge technologies and getting your hands dirty to solve challenging technical problems. You are naturally driven to become the expert in the space.

Responsibilities

  • Providing technical solutions and responding to technical requests from customers through our different channels: mail, chat, web conference, and support portal
  • Managing and resolving support issues with a high degree of technical complexity
  • Acting as a liaison between clients and other Dataiku teams (Customer Success, Engineering, Data Science, etc.) to help deliver a fast and efficient resolution to issues or questions raised from various customers
  • Improve efficiencies by documenting and standardizing support processes for our customers, along with capturing/developing best practices
  • Developing tools that will help in diagnosing, resolving or triaging hard-to-get-at problems as efficiently and promptly as possible
  • Documenting knowledge in the form of incident notes, technical articles, and contributions to knowledge base or forums within specific areas of expertise
  • Timely follow-up on customer commitments, effectively prioritizing process / product refinements; relaying lessons learned and feedback internally to our other client-facing and technical teams

Requirements

  • BS in an Engineering or advanced analytics field, or equivalent practical experience
  • A strong competency in technical problem solving, with experience in working with advanced log analysis and various debugging techniques
  • Working proficiency with Unix-based operating systems and general systems administration knowledge (e.g. command line interface, SSH, handling permissions, file limits, networking, resource utilization, etc.)
  • Experience working with a programming language
  • Experience working with at least one type of relational database and SQL
  • Excellent problem solving and analytical skills with an aptitude for learning new technologies
  • Ability to be autonomous, resourceful, and a proactive self-starter, while also remaining process-oriented and a team player
  • Strong communication skills and the ability to interface both with technical and non-technical individuals as needed

Nice to haves...

  • At least 3-5 years of experience in a client-facing engineering or technical role, ideally involving a complex and rapidly evolving software/product
  • Technical understanding of the analytics and big data technologies (Hadoop, Spark, SQL databases and Data Warehouses) is a definite plus
  • Prior experience with and demonstrated interest in staying up to date on the latest data technologies (Python, R, Hadoop, Jupyter notebooks, Spark, H2O, Docker/Kubernetes, etc.)
  • Hands-on experience with Python and/or R
  • Experience working with various APIs
  • Experience with authentication and authorization systems like LDAP, SAML, and Kerberos
  • Working knowledge of various cloud technologies (AWS, Azure, GCP, etc.)
  • Some knowledge in data science and/or machine learning

Benefits

  • Opportunity to join Dataiku early on and help scale the company
  • Competitive compensation package, equity, health benefits, and paid vacation
  • Trips to Paris (our European HQ)
  • Opportunity to work with a smart, passionate and driven team
  • Dataiku has a strong culture based on key values: Ownership, Passion, Autonomy and Friendliness
To fulfill its mission, Dataiku is growing fast! In 2019, we achieved unicorn status, went from 200 to 400 people and opened new offices across the globe. We now serve our global customer base from our headquarters in New York City as well as offices in Paris, London, Munich, Amsterdam, Denver, Los Angeles, Singapore, Sydney and Dubaï. Each of them has a unique culture, but underpinning local nuances, we always value curiosity, collaboration, and can-do attitudes!
Share this job:
Technical Support Engineer
Dataiku  
python data science big data docker cloud azure Mar 10
Dataiku’s mission is big: to enable all people throughout companies around the world to use data by removing friction surrounding data access, cleaning, modeling, deployment, and more. But it’s not just about technology and processes; at Dataiku, we also believe that people (including our people!) are a critical piece of the equation.



Dataiku is looking for an experienced Technical Support engineer to join its rapidly growing international team (with members distributed across the US, EMEA, and APAC regions). The ideal candidate is an autonomous individual who is passionate about getting big data and data science technologies working together to solve business problems, and who will efficiently help customers solve their technical issues with Dataiku DSS. It is a great opportunity to join Dataiku early on and help scale that critical function for the company.

As a Technical Support Engineer, you are a polished communicator and a trusted technical resource. You have worked with sophisticated/demanding customers, and you demonstrate excellent judgment in prioritization and are a multi-tasker. You love learning new cutting-edge technologies and getting your hands dirty to solve challenging technical problems. You are naturally driven to become the expert in the space.

We are looking for someone in the US to help with providing world-class support to our Federal customer base. In particular, this position will require the individual to be either a US citizen or qualified green card holder. Clearance is not necessary but would be a plus.

Responsibilities:

  • Providing technical solutions and responding to technical requests from customers through our different channels: email, chat, web conference, and support portal
  • Managing and resolving support issues with a high degree of technical complexity
  • Acting as a liaison between clients and other Dataiku teams (Customer Success, Engineering, Data Science, etc.) to help deliver a fast and efficient resolution to issues or questions raised from various customers
  • Improve efficiencies by documenting and standardizing support processes for our customers along with capturing/developing best practices 
  • Developing tools that will help in diagnosing, resolving or triaging hard-to-get-at problems as efficiently and promptly as possible
  • Documenting knowledge in the form of incident notes, technical articles, and contributions to knowledge base or forums within specific areas of expertise
  • Timely follow-up on customer commitments, effectively prioritizing process / product refinements; relaying lessons learned and feedback internally to our other client-facing and technical teams
  • Providing support to some of our largest, most challenging Federal and Enterprise accounts

Requirements:

  • BS in an Engineering or advanced analytics field, or equivalent practical experience
  • A strong competency in technical problem solving, with experience in working with advanced log analysis and various debugging techniques
  • Working proficiency with Unix-based operating systems and general systems administration knowledge (e.g. command line interface, SSH, handling permissions, file limits, networking, resource utilization, etc.)
  • Experience working with a programming language
  • Experience working with at least one type of relational database and SQL
  • Excellent problem solving and analytical skills with an aptitude for learning new technologies
  • Ability to be autonomous, resourceful, and a proactive self-starter, while also remaining process-oriented and a team player
  • Strong communication skills and the ability to interface with both technical and non-technical individuals as needed
  • US citizen or green card holder

Bonus Points:

  • At least 3-5 years of experience in a client-facing engineering or technical role, ideally involving a complex and rapidly evolving software/product
  • Technical understanding of the analytics and big data technologies (Hadoop, Spark, SQL databases and Data Warehouses) is a definite plus
  • Prior experience with and demonstrated interest in staying up to date on the latest data technologies (Python, R, Hadoop, Jupyter notebooks, Spark, H2O, Docker/Kubernetes, etc.)
  • Hands-on experience with Python and/or R
  • Experience working with various APIs
  • Experience with authentication and authorization systems like LDAP, SAML, and Kerberos
  • Working knowledge of various cloud technologies (AWS, Azure, GCP, etc.)
  • Some knowledge in data science and/or machine learning
  • Experience or proven track record working with Federal clients

Benefits:

  • Opportunity to join Dataiku at an early stage and help scale the Support organization
  • Competitive compensation package, equity, health benefits, and paid vacation
  • Trips to our different offices (Paris, NYC, etc.)
  • Opportunity to work with a smart, passionate, and driven team
  • Startup atmosphere: free food and drinks, foosball/FIFA/ping pong, company happy hours and team days, and more
  • Strong culture based on key values: Ownership, Passion, Autonomy and Friendliness
To fulfill its mission, Dataiku is growing fast! In 2019, we achieved unicorn status, went from 200 to 400 people and opened new offices across the globe. We now serve our global customer base from our headquarters in New York City as well as offices in Paris, London, Munich, Amsterdam, Denver, Los Angeles, Singapore, Sydney and Dubaï. Each of them has a unique culture, but underpinning local nuances, we always value curiosity, collaboration, and can-do attitudes!
Share this job:
Python Backend Developer
python postgresql docker data-science rest backend Mar 10

Principal Roles & Responsibilities

  • Design and build robust, scalable and secure backend services and APIs
  • Work on state-of-the art data science applications for the biotech industry, using modern software development techniques
  • Implement new functionalities, in close cooperation with our data scientists, frontend developers and UX/UI experts
  • Design and discuss implementation options and considerations together with the rest of the development team
  • Work closely with the technical product owner, Scrum team and other stakeholders to ensure constant team alignment and continuous improvement
  • Draft and communicate architectural decisions together with the team and evaluate new technologies or products
  • Drive innovation and exchange knowledge with other developers and DevOps experts to optimize software development and delivery processes

Background & Qualifications

  • At least 3 years of hands-on experience with any kind of backend development (Python, Django, Docker)
  • A good understanding of client/server communication methodologies (REST, WebSocket) and of database communication and architecture (PostgreSQL, Django ORM); see the sketch after this list
  • Experience with libraries used for data science applications, like numpy, scipy, sklearn or similar
  • Practical experience with software development basics like source code management systems and issue tracking
  • You’re enthusiastic about code quality, simplicity, security and performance
  • You are proficient in professional English to engage in deep technical discussions with international colleagues
  • And most importantly: you are eager to learn new things
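
To ground the client/server bullets above, here is a minimal REST-style endpoint sketch using only the Python standard library. The endpoint path and the sample biotech-flavored records are invented, and the production stack described above would use Django and the Django ORM rather than this hand-rolled server.

```python
# Minimal REST-style GET endpoint, standard library only (illustrative).
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Invented sample data; a real service would query PostgreSQL via the ORM.
SAMPLES = [{"id": 1, "assay": "rna-seq"}, {"id": 2, "assay": "proteomics"}]

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/api/samples":          # hypothetical route
            body = json.dumps(SAMPLES).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), Handler).serve_forever()
```
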
Share this job:
NodeJS Software Engineer
node-js api backend javascript data science saas Mar 07

Our homes are our most valuable asset and also the most difficult to buy and sell. Knock is on a mission to make trading in your house as simple and certain as trading in your car. Started by founding team members of Trulia.com (NYSE: TRLA, acquired by Zillow for $3.5B), Knock is an online home trade-in platform that uses data science to price homes accurately, technology to sell them quickly and a dedicated team of professionals to guide you every step of the way. We share the same top-tier investors as iconic brands like Netflix, Tivo, Match, HomeAway and Houzz.


We are seeking a passionate Backend Software Engineer to help us design and build our customer-facing APIs and backend services. You must be a developer with a keen sense of good system design and application architecture. We are looking for someone who is passionate about creating great products and making the world amazing for homebuyers.


At Knock, we have fun, we move fast, we support and celebrate our fellow teammates, and we live by our POPSICLE values.


As a NodeJS Software Engineer on the backend team you will: 

  • Design, build and maintain APIs and tools that power Knock’s internal and customer-facing applications. Communicate your designs to fellow engineers.
  • Understand the data that power our applications, and be able to propose appropriate data models for new features.
  • Build new services from scratch, as well as maintain existing applications.
  • Provide secure and seamless integration of Knock’s applications to our SaaS partners.
  • Commit to good engineering practices around testing, logging, alerting and deployment processes.

We’re looking for Knockstars who:

  • Are U.S. based
  • Have a BS in Computer Science or equivalent experience
  • Have 3+ years of full lifecycle software development experience in Node.js, including coding, testing, troubleshooting, and deployment
  • Have a strong desire to work at a rapidly growing startup and make it a success, and are comfortable learning new technologies and tools
  • Are expert in JavaScript (Node.js), with experience in various Node.js modules such as fastify, hapi.js, lodash, and async
  • Have experience with the AWS ecosystem, such as Lambda, S3, SNS, SQS, and CloudWatch
  • Have strong SQL knowledge (MySQL or Postgres) and familiarity with techniques to identify and debug slow queries
  • Have a strong customer-first mindset and a data-driven approach to their work
  • Have proven success working remotely in prior positions and are experienced working with a distributed, national team

Bonus points for knowledge of:

  • Docker ecosystem and container orchestration systems such as Amazon ECS or Kubernetes.

What We Can Offer You:

  • An amazing opportunity to be an integral part of building the next multi-billion dollar consumer brand around the single largest purchase of our lives.
  • Talented, passionate and mission-driven peers disrupting the status quo.
  • Competitive cash, full medical, dental, vision benefits, 401k, flexible work schedule, unlimited vacation (2 weeks mandatory) and sick time.
  • Flexibility to live and work anywhere within the United States. As we are a distributed company and engineering team, we are open to any U.S. location for this role.
  • This is a 100% remote, full-time career at Knock.

We have offices in New York, San Francisco, Atlanta, Charlotte, Raleigh, Dallas-Fort Worth, Phoenix, and Denver with more on the way. In fact, we are proud to be a distributed company with employees in 21 different states. This is an amazing opportunity to be an integral part of building a multi-billion dollar consumer brand in an industry that is long overdue for a new way of doing things. You will be working with a passionate, mission-driven team that is disrupting the status quo.


Knock is an Equal Opportunity Employer.


Please no recruitment firm or agency inquiries, you will not receive a reply from us.

Share this job:
Project Management Curriculum Writer
project-management agile kanban data science big data cloud Feb 22

Project Management Curriculum Writer

  • Education
  • Remote
  • Contract

Who We Are
Thinkful is a new type of school that brings high-growth tech careers to ambitious people everywhere. We provide 1-on-1 learning through our network of industry experts, hiring partners, and online platform to deliver a structured and flexible education. Thinkful offers programs in web development, data science, and design, with in-person communities in up-and-coming tech hubs around the U.S. To join the Thinkful network visit thinkful.com.

Job Description
Thinkful is launching a new Technical Project Management program which aims to be the best-in-class remote, part-time Technical Project Management program offered today. As part of this effort, we're looking for a Technical Project Management subject matter expert to join us in executing on our content roadmap for this exciting new program. You will be creating the backbone of a new program that propels people from a background in academia and the sciences into an impactful career as a Technical Project Manager. You'll produce lesson plans (including instructor notes and student activity descriptions), presentation decks, assessments, learning objectives and written content, all to support our students as they learn the core skills of technical project management. Your work product will be extremely impactful, as it forms the core asset around which the daily experience of our students will revolve.

Responsibilities

  • Consistently deliver content that meets spec and is on time to support our program launch roadmap.
  • Create daily lesson plans consisting of:
      • Presentation decks that instructors use to lecture students on a given learning objective
      • Instructor notes that instructors use alongside the presentation decks
      • Activity descriptions: notes describing tasks students complete together in order to advance the learning objective in a given lecture
  • Create curriculum checkpoint content on specific learning objectives. In addition to the in-class experience, our students also spend time reading and completing tasks for a written curriculum hosted on the Thinkful platform.
  • Create code assets where necessary to support lesson plans, student activities, and written curriculum content.
  • Iterate on deliverables based on user feedback

Requirements

  • 3+ years of hands-on Technical Project Management industry experience 
  • Demonstrated subject matter expert in Technical Project Management 
  • Experience managing projects using Agile, Kanban and Six Sigma methodologies
  • Experience working on multiple projects, at all complexity levels, in an environment with changing priorities
  • Change management expertise
  • Web application development experience
  • Experience running large-scale big data and/or AWS cloud-based projects
  • Collaborative. You enjoy partnering with people and have excellent project management skills and follow-through.
  • Excellent writing skills. You've got a gift for writing about complicated concepts in a beginner-friendly way. You can produce high-quality prose as well as high-quality presentations.

Compensation and Benefits

  • Contract position with a collaborative team
  • Ability to work remotely with flexible hours 
  • Access to all available course curriculum for personal use
  • Membership to a global community of over 500 Software Engineers, Developers, and Data Scientists who, like you, want to keep their skills sharp and help learners break into the industry
Share this job:
Data Scientist, Healthcare Policy Research
r python machine-learning healthcare data science machine learning Feb 19

We are looking for data scientists with policy research experience to perform data processing and analysis tasks, such as monitoring data quality, applying statistical and data science methods, and creating data visualizations. In this role you will work on multi-disciplinary teams supporting program evaluation and data analytics to inform policy and decision makers.

Responsibilities

  • Answering research questions or building solutions that involve linking health or healthcare data to other administrative data.
  • Designing, planning, and implementing the data science workflow on tasks and projects, involving descriptive statistics, machine learning or statistical analysis, data visualizations, and diagnostics, using programming languages such as R or Python (see the sketch after this list)
  • Communicating results to collaborative project teams using data visualizations and presentations via tools such as notebooks (e.g. Jupyter) or interactive BI dashboards
  • Developing and maintaining documentation using Atlassian Confluence and Jira
  • Implementing quality assurance practices such as version control and testing
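
As a small illustration of that workflow, here is a sketch using invented claims records; real analyses would run on actual healthcare claims and administrative data, and at far larger scale.

```python
# Sketch of the descriptive-statistics end of the workflow described above.
# The claims records below are invented for illustration.
import pandas as pd

claims = pd.DataFrame({
    "state":    ["MD", "MD", "VA", "VA", "VA"],
    "age":      [67, 71, 64, 80, 75],
    "paid_usd": [1200.0, 340.0, 2210.0, 880.0, 415.0],
})

# Quality check: flag records outside a plausible range before analysis.
suspect = claims[(claims["paid_usd"] <= 0) | (claims["age"] > 110)]
print(f"{len(suspect)} suspect records")

# Descriptive statistics by state, the kind of table that feeds a dashboard.
summary = claims.groupby("state")["paid_usd"].agg(["count", "mean", "median"])
print(summary)
```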

Requirements 

  • Master’s degree in Statistics, Data Science, Math, Computer Science, Social Science, or related field of study
  • Eight (8) years of experience 
  • Demonstrable enthusiasm for applying data science and statistics to social impact projects in academic, extra-curricular, and/or professional settings
  • Demonstrable skills in R or Python to manipulate data, conduct analyses, and create data visualizations
  • Ability to version code using Git
  • Experience with healthcare claims and administrative data
  • Ability and desire to work independently as part of remote, interdisciplinary teams
  • Strong oral and written communication skills
Share this job:
Cloud Architect for Enterprise AI - Remote
Dataiku  
cloud data science big data linux aws azure Feb 18
Dataiku’s mission is big: to enable all people throughout companies around the world to use data by removing friction surrounding data access, cleaning, modeling, deployment, and more. But it’s not just about technology and processes; at Dataiku, we also believe that people (including our people!) are a critical piece of the equation.



Dataiku is looking for an experienced Cloud Architect to join its Field Engineering Team to support the deployment of its Enterprise AI Platform (Dataiku DSS) to an ever-growing customer base.

As a Cloud Architect, you’ll work with customers at every stage of their relationship with Dataiku - from the initial evaluations to enterprise-wide deployments. In this role, you will help customers to design, build and run their Data Science and AI Enterprise Platforms.

This role requires adaptability, inventiveness, and strong communication skills. Sometimes you will work with clients on traditional big data technologies such as SQL data warehouses and on-premise Hadoop data lakes, while at other times you will be helping them to discover and implement the most cutting-edge tools: Spark on Kubernetes, cloud-based elastic compute engines, and GPUs. If you are interested in staying at the bleeding edge of big data and AI while maintaining a strong working knowledge of existing enterprise systems, this will be a great fit for you.

The position can be based remotely.

Responsibilities

  • Evangelize the challenges of building Enterprise Data Science Platforms to technical and non-technical audiences
  • Understand customer requirements in terms of scalability, availability and security and provide architecture recommendations
  • Deploy Dataiku DSS in a large variety of technical environments (on-prem/cloud, Hadoop, Kubernetes, Spark, …)
  • Design and build reference architectures, how-tos, scripts and various helpers to make the deployment and maintenance of Dataiku DSS smooth and easy
  • Automate operation, installation, and monitoring of the data science ecosystem components in our infrastructure stack
  • Provide advanced support for strategic customers on deployment and scalability issues
  • Coordinate with Revenue and Customer teams to deliver a consistent experience to our customers
  • Train our clients and partners in the art and science of administering a bleeding-edge Elastic AI platform

Requirements

  • Strong Linux system administration experience
  • Grit when faced with technical issues. You don’t rest until you understand why it does not work.
  • Comfort and confidence in client-facing interactions
  • Ability to work both pre and post sale
  • Experience with cloud based services like AWS, Azure and GCP
  • Hands-on experience with the Hadoop and/or Spark ecosystem for setup, administration, troubleshooting and tuning
  • Hands-on experience with the Kubernetes ecosystem for setup, administration, troubleshooting and tuning
  • Some experience with Python
  • Familiarity with Ansible or other application deployment tools

Bonus points for any of these

  • Experience with authentication and authorization systems like LDAP, Kerberos, AD, and IAM
  • Experience debugging networking issues such as DNS resolutions, proxy settings, and security groups
  • Some knowledge in data science and/or machine learning
  • Some knowledge of Java

Benefits

  • Work on the newest and best big data technologies for a unicorn startup
  • Consult on AI infrastructure for some of the largest companies in the world
  • Equity
  • Opportunity for international exchange to another Dataiku office
  • Attend and present at big data conferences
  • Startup atmosphere: free food and drinks, international atmosphere, general good times and friendly people


To fulfill its mission, Dataiku is growing fast! In 2019, we achieved unicorn status, went from 200 to 400 people and opened new offices across the globe. We now serve our global customer base from our headquarters in New York City as well as offices in Paris, London, Munich, Amsterdam, Denver, Los Angeles, Singapore, Sydney and Dubaï. Each of them has a unique culture, but underpinning local nuances, we always value curiosity, collaboration, and can-do attitudes!
Share this job:
Python Engineer
python cython tensorflow keras pytorch c Feb 17

Description

We are looking for a Python-focused software engineer to build and enhance our existing APIs and integrations with the Scientific Python ecosystem. TileDB’s Python API (https://github.com/TileDB-Inc/TileDB-Py) wraps the TileDB core C API, and integrates closely with NumPy to provide zero-copy data access. You will build and enhance the Python API through interfacing with the core library; build new integrations with data science, scientific, and machine learning libraries; and engage with the community and customers to create value through the use of TileDB.
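
For a rough sense of what that NumPy integration looks like, here is a sketch based on the TileDB-Py quickstart pattern. The array URI and schema are arbitrary, and the exact API surface should be checked against the current TileDB-Py documentation.

```python
# Rough sketch of TileDB-Py usage with NumPy; URI and shape are arbitrary.
import numpy as np
import tiledb

uri = "my_dense_array"  # hypothetical local array (create once)

# Define a 1-D dense array of 100 float64 cells.
dom = tiledb.Domain(tiledb.Dim(name="d", domain=(0, 99), tile=10, dtype=np.int32))
schema = tiledb.ArraySchema(domain=dom, sparse=False,
                            attrs=[tiledb.Attr(name="a", dtype=np.float64)])
tiledb.Array.create(uri, schema)

# Write a NumPy array, then slice a range back out.
with tiledb.open(uri, mode="w") as A:
    A[:] = np.arange(100, dtype=np.float64)

with tiledb.open(uri, mode="r") as A:
    print(A[10:15]["a"])  # -> [10. 11. 12. 13. 14.]
```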

Location

Our headquarters are in Cambridge, MA, USA and we have a subsidiary in Athens, Greece. However, you will have the flexibility to work remotely as long as your residence is in the USA or Greece. US candidates must be US citizens, whereas Greek candidates must be Greek or EU citizens.

Expectations

In your first 30 days, you will familiarize yourself with TileDB, the TileDB-Py API and the TileDB-Dask integration. After 30 days, you will be fully integrated in our team. You’ll be an active contributor and maintainer of the TileDB-Py project, and ready to start designing and implementing new features, as well as engaging with the Python and Data Science community.

Requirements

  • 5+ years of experience as a software engineer
  • Expertise in Python and experience with NumPy
  • Experience interfacing with the CPython API, and Cython or pybind11
  • Experience with Python packaging, including binary distribution
  • Experience with C, C++, Rust, or a similar systems-level language
  • Experience with distributed computation using Dask, Spark, or a similar system
  • Experience with a machine learning library (e.g. scikit-learn, TensorFlow, Keras, PyTorch, Theano)
  • Experience with Amazon Web Services or a similar cloud platform
  • Experience with dataframe-focused systems (e.g. Arrow, Pandas, data.frame, Vaex)
  • Experience with technical data formats (e.g. Parquet, HDF5, VCF, DICOM, GeoTIFF)
  • Experience with other technical computing systems (e.g. R, MATLAB, Julia)

Benefits

  • Competitive salary and stock options
  • 100% medical and dental insurance coverage (for you and your dependents!)
  • Paid parental leave
  • Paid time off (vacation, sick & public holidays)
  • Flexible time off & flexible hours
  • Flexibility to work remotely (anywhere in the US or Greece)

TileDB, Inc. is proud to be an Equal Opportunity Employer building a diverse and inclusive team.

Share this job:
Data Infrastructure Engineer
Tesorio  
data science machine learning finance Feb 14
We are at the forefront of creating the latest FinTech category and we are rapidly expanding our team. We’re looking for a Data Infrastructure Engineer to work on our Data Science team.

Company Overview

Tesorio is a high-growth, early-stage startup backed by some of the Bay Area’s most prominent Venture Capital firms (First Round, Floodgate, Y Combinator) and the world’s top finance execs (e.g. the ex-CFO of Oracle, the ex-CFO of Yahoo, and the founder of Adaptive Insights).

We build software that applies proprietary machine learning models to help manage a core problem that all Mid-Market businesses face: managing, predicting, and collecting cash. As we’ve taken this solution to market over the last 18 months, we’ve been able to bring on some great clients like Veeva Systems, Box, WP Engine, Rainforest QA, and many more.

Tesorio’s Cash Flow Performance platform is a sought-after solution for the modern-day CFO’s toughest problems. Companies such as Anaplan have successfully tackled forecasting revenues and expenses; however, no other system has been built from the ground up to help companies understand the complexities around cash flow and take action to optimize the core lifeblood of their business.

What’s in it for you?

  • Remote OK (Western Hemisphere) or work out of an awesome office with all the perks.
  • Almost all of Engineering and Data Science work fully remote and we work hard to make sure remote employees feel a part of the team.
  • This role is for a fast-paced, high-impact project that adds a new stream of revenue and strategic value to the company.
  • Work with some of the best and brightest (but also very humble).
  • Fast growing startup backed by top tier investors - Y Combinator, First Round Capital, Floodgate, Fathom.

Responsibilities

  • You will be responsible for creating and maintaining machine learning infrastructure on Kubernetes
  • Build and own workflow management systems like Airflow, Kubeflow or Argo (see the sketch after this list)
  • Advise data and ML engineers on how to package and deploy their workflows
  • Implement logging, metrics and monitoring services for your infrastructure and container logs
  • Create Helm charts for versioned deployments of the system on client premises
  • Continuously strive to abstract away infrastructure, high availability, identity and access management concerns from Machine Learning and Software Engineers
  • Understand the product requirements and bring your own opinions and document best practices for leveraging Kubernetes
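
As a minimal sketch of the workflow-management side, here is a toy Airflow DAG; the DAG id and task body are invented. On Kubernetes, this would typically be deployed and versioned via Helm charts, per the bullets above.

```python
# Minimal Airflow 2.x DAG of the kind this team would own (illustrative).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def retrain_model():
    print("pull features, retrain, push artifact")  # placeholder body

with DAG(
    dag_id="ml_retrain",               # hypothetical pipeline name
    start_date=datetime(2020, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="retrain", python_callable=retrain_model)
```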

Required Skills

  • 6+ years of experience in creating and maintaining data and machine learning platform in production
  • Expert-level knowledge of Kubernetes: various operators, deployments, cert management, security, binding users to cluster and IAM roles, etc.
  • Experience dealing with persistence pitfalls on Kubernetes, and creating and owning a workflow management system (Airflow, Kubeflow, Argo, etc.) on Kubernetes
  • Experience creating Helm charts for versioned deployments on client premises
  • Experience securing the system with proper identity and access management for people and applications
  • Ability to work in a fast paced, always-changing environment

Nice to Haves

  • Experience spinning up infrastructure using Terraform and Ansible
  • Experience working with data engineers running workflow management tools on your infrastructure
Share this job:
Senior Software Engineer, Test Infrastructure
senior javascript data science machine learning docker testing Feb 13
About Labelbox
Labelbox is building software infrastructure for industrial data science teams to do data labeling for the training of neural networks. When we build software, we take for granted the existence of collaborative tools to write and debug code. The machine learning workflow has no standard tooling for labeling data, storing it, debugging models and then continually improving model accuracy. Enter Labelbox. Labelbox's vision is to become the default software for data scientists to manage data and train neural networks in the same way that GitHub or text editors are defaults for software engineers.

Current Labelbox customers include American Family Insurance, Lytx, Airbus, Genius Sports, Keeptruckin and more. Labelbox is venture backed by Google, Andreessen Horowitz, Kleiner Perkins and First Round Capital and has been featured in Tech Crunch, Web Summit and Forbes.

As a Senior Software Engineer in Testing Infrastructure you will be responsible for building and maintaining our testing and automation infrastructure, test frameworks, tools, and documentation. At Labelbox engineers are responsible for writing automated tests for their features, and it will be your responsibility to build reliable infrastructure to support their efforts. 

Responsibilities

  • Design, implement and maintain reliable testing infrastructure for unit testing, component testing, integration testing, E2E API and UI testing, and load testing
  • Build and maintain reliable testing environments for our integration, E2E and load testing jobs
  • Integrate our testing infrastructure with our CI/CD pipeline to ensure automated kickoff of tests
  • Guide our engineering team on testing best practices and monitor the reliability and stability of our testing suite
  • When implementing new testing infrastructure and/or adopting new tools, write sample tests and documentation so our engineering team can hit the ground running with the new infrastructure (see the sample below)
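
A sample of what such a hand-off might look like, written in Python for consistency with the other sketches on this page (the posting's stack below is JavaScript-centric), with an invented staging URL and /health endpoint:

    # test_health_sketch.py -- illustrative E2E API smoke test, run with pytest
    import requests

    BASE_URL = "https://staging.example.com"   # hypothetical test environment

    def test_health_endpoint_returns_ok():
        """Smoke-check that the deployed service is up before heavier suites run."""
        resp = requests.get(f"{BASE_URL}/health", timeout=5)
        assert resp.status_code == 200
        assert resp.json().get("status") == "ok"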

Requirements

  • 5+ years of experience developing testing infrastructure for web applications in a production environment
  • Experience with web technologies including: React, Redux, JavaScript, TypeScript, GraphQL, Node, REST, SQL
  • Experience with Unit Testing frameworks such as Jest, Mocha, and/or Jasmine
  • Experience with E2E UI test frameworks such as Cypress, Selenium, and/or Puppeteer
  • Experience writing E2E API tests with frameworks such as Cypress and/or Postman/Newman
  • Experience with Load Testing frameworks such as OctoPerf, JMeter, and/or Gatling
  • Experience integrating with CI/CD platforms and tools such as Codefresh, CircleCI, TravisCI, or Jenkins, as well as Bazel
  • Experience integrating tools to measure code coverage across the different types of testing
  • Experience with Docker and Kubernetes
  • Experience with GraphQL and building testing infrastructure around it
We believe that AI has the power to transform every aspect of our lives -- from healthcare to agriculture. The exponential impact of artificial intelligence will mean mammograms can be read quickly and cheaply despite the limited number of radiologists in the world, and growers will know the instant disease hits their farm without even being there.

At Labelbox, we’re building a platform to accelerate the development of this future. Rather than requiring companies to create their own expensive and incomplete homegrown tools, we’ve created a training data platform that acts as a central hub for humans to interface with AI. When humans have better ways to input and manage data, machines have better ways to learn.

Perks & Benefits:
Medical, Dental & Vision coverage
Flexible vacation policy
Dog friendly office
Daily catered lunch & snacks
Great office location in the Mission district, beautiful office & private outdoor patio with grill
Share this job:
VP, Data Science & Engineering
machine-learning hadoop data science c machine learning big data Feb 10

The Wikimedia Foundation is seeking an experienced executive to serve as Vice President of Data Science & Engineering for our Technology department. At the Wikimedia Foundation, we operate the world’s largest collaborative project: a top ten website, reaching a billion people globally every month, while incorporating the values of privacy, transparency and community that are so important to our users. 

Reporting to the Chief Technology Officer, the VP of Data Science & Engineering is a key member of the Foundation’s leadership team and an active participant in the strategic decision making framing the work of the technology department, the Wikimedia Foundation and the Wikimedia movement.

This role is responsible for planning and executing an integrated multi-year data science and engineering strategy spanning our work in artificial intelligence, machine learning, search, natural language processing and analytics. This strategy will interlock with and support the larger organization and movement strategy in service of our vision of enabling every human being to share freely in the sum of human knowledge.

Working closely with other Technology and Product teams, as well as our community of contributors and readers, you’ll lead a team of dedicated directors, engineering managers, software engineers, data engineers, and data scientists who are shaping the next generation of data usage, analysis and access across all Wikimedia projects.

Some examples of our teams' work in the realm of data science and data engineering can be found on our blog, including deeper info on our work improving edit workflows with machine learning, our use of Kafka and Hadoop, and our analysis of people falling into the “Wikipedia rabbit hole”. Lately we have been thinking about how best to identify traffic anomalies that might indicate outages or, possibly, censorship.

You are responsible for:

  • Leading the technical and engineering efforts of a global team of engineers, data scientists and managers focused on productionizing artificial intelligence, data science, analytics, machine learning and natural language processing models, as well as data operations. These efforts currently encompass three teams: Search Platform, Analytics and Scoring Platform (Machine Learning Engineering)
  • Working closely with our Research, Architecture, Security, Site Reliability and Platform teams to define our next generation of data architecture, search, machine learning and analytics infrastructure
  • Creating scalable engineering management processes and prioritization rubrics
  • Developing the strategy, vision, plan, and cross-functional teams needed to create a holistic data strategy for the Wikimedia Foundation, taking into account our fundamental values of transparency, privacy, and collaboration, in partnership with internal and external stakeholders and community members
  • Ensuring data is reliable, consistent, accessible, secure, and available in a timely manner for external and internal stakeholders, in accordance with our privacy policy
  • Negotiating shared goals, roadmaps and dependencies with finance, product, legal and communication departments
  • Contributing to our culture by managing, coaching and developing our engineering and data teams
  • Illustrating your success in making your mark on the world by collaboratively measuring and adapting our data strategy within the technology department and the broader Foundation
  • Managing up to 5 direct reports with a total team size of 20

Skills and Experience:

  • Deep experience leading data science, machine learning, search, or data engineering teams, with the ability to separate hype in the artificial intelligence space from the reality of delivering production-ready data systems
  • 5+ years senior engineering leadership experience
  • Demonstrated ability to balance competing interests in a complex technical and social environment
  • Proven success at all stages of the engineering process and product lifecycle, leading to significant, measurable impact.
  • Previous hands-on experience in production big data and machine learning environments at scale
  • Experience building and supporting diverse, international and distributed teams
  • Outstanding oral and written English communication skills

Qualities that are important to us:

  • You take a solutions-focused approach to challenging data and technical problems
  • A passion for people development, team culture and the management of ideas
  • You have a desire to show the world how data science can be done while honoring the user’s right to privacy

Additionally, we’d love it if you have:

  • Experience with modern machine learning, search and natural language processing platforms
  • A track record of open source participation
  • Fluency or familiarity with languages in addition to English
  • Time spent living or working outside your country of origin
  • Experience as a member of a volunteer community

The Wikimedia Foundation is... 

...the nonprofit organization that hosts and operates Wikipedia and the other Wikimedia free knowledge projects. Our vision is a world in which every single human can freely share in the sum of all knowledge. We believe that everyone has the potential to contribute something to our shared knowledge, and that everyone should be able to access that knowledge, free of interference. We host the Wikimedia projects, build software experiences for reading, contributing, and sharing Wikimedia content, support the volunteer communities and partners who make Wikimedia possible, and advocate for policies that enable Wikimedia and free knowledge to thrive. The Wikimedia Foundation is a charitable, not-for-profit organization that relies on donations. We receive financial support from millions of individuals around the world, with an average donation of about $15. We also receive donations through institutional grants and gifts. The Wikimedia Foundation is a United States 501(c)(3) tax-exempt organization with offices in San Francisco, California, USA.

The Wikimedia Foundation is an equal opportunity employer, and we encourage people with a diverse range of backgrounds to apply.

U.S. Benefits & Perks*

  • Fully paid medical, dental and vision coverage for employees and their eligible families (yes, fully paid premiums!)
  • The Wellness Program provides reimbursement for mind, body and soul activities such as fitness memberships, babysitting, continuing education and much more
  • The 401(k) retirement plan offers matched contributions at 4% of annual salary
  • Flexible and generous time off - vacation, sick and volunteer days, plus 19 paid holidays - including the last week of the year.
  • Family friendly! 100% paid new parent leave for seven weeks plus an additional five weeks for pregnancy, flexible options to phase back in after leave, fully equipped lactation room.
  • For those emergency moments - long and short term disability, life insurance (2x salary) and an employee assistance program
  • Pre-tax savings plans for health care, child care, elder care, public transportation and parking expenses
  • Telecommuting and flexible work schedules available
  • Appropriate fuel for thinking and coding (aka, a pantry full of treats) and monthly massages to help staff relax
  • Great colleagues - diverse staff and contractors speaking dozens of languages from around the world, fantastic intellectual discourse, mission-driven and intensely passionate people

*Eligible non-US benefits are specific to location and dependent on employer of record

Share this job:
Data Scientist
python data science Feb 05

What is Pathrise?

Pathrise (YC W18) is an online program for tech professionals that provides 1-on-1 mentorship, training and advice to help you land your next job. On top of that, we're built around aligned incentives. You only pay if you succeed in getting hired and start work at a high-paying job first.

Every day we are expanding our team and our services. We are looking for sharp, scrappy, and fun individuals who are ready to jump (head first) into a new role with us. We are a small team and we love working together to improve our fellows' chances of getting the job of their dreams! If this sounds like something you'd be interested in, we want to talk to you.

Our Mission

We seek to uplift job seekers in their careers and help them fulfill their hopes, ambitions and livelihoods. Read more about why we’re driven to do this in our manifesto.

In this role, you will create a framework for how we utilize our own data. If you are comfortable with qualitative data and can see the amazing potential we have to be a forerunner in this new job seekers' market, then this could be the perfect role for you.

In order to be effective in this role, you must have a genuine interest in education and technology. Since you will be involved in all phases of coursework, from research and development to design and feedback, we are looking for someone who is not only passionate but also in love with our Mission of “uplifting undervalued students and tech professionals in their early careers.”

This position is ideal for someone with a passion for data science and education, who is entrepreneurial and wants to join a fast-growing startup that's helping the next generation of data scientists!

Qualifications

  • 0-3 years in data science
  • Excellent communication skills, ability to understand customer needs and provide valuable recommendations
  • Strong Python and SQL skills
  • Able to effectively synthesize, visualize, and communicate your ideas to others
  • Familiar with key data engineering concepts
  • Experience with data visualization

Benefits and perks

  • Great health, dental and vision benefits
  • Free daily catered lunches and snacks
  • Commuting costs covered
  • Flexible PTO
  • Ability to grow in your career and make a difference to individuals and society

We do not discriminate on the basis of race, religion, sex, gender identity, sexual orientation, age, disability, national origin, veteran status or any other basis covered by law. If you need assistance or an accommodation due to a disability, please let us know.

Share this job:
Data Science Engineer
data science java python scala big data cloud Feb 05
Contrast Security is the world’s leading provider of security technology that enables software applications to protect themselves against cyber attacks. Contrast's patented deep security instrumentation is the breakthrough technology that enables highly accurate analysis and always-on protection of an entire application portfolio, without disruptive scanning or expensive security experts. Only Contrast has intelligent agents that work actively inside applications to prevent data breaches, defeat hackers and secure the entire enterprise from development, to operations, to production.

Our Application Security Research (Contrast Labs) team is hyper-focused on continuous vulnerability and threat research affecting the world's software ecosystem. As a Data Science Engineer on the Research team, you will be responsible for expanding and optimizing data from our real-time security intelligence platform, as well as optimizing data flow and collection for cross-functional teams.

The Data Science Engineer will support our research team, software developers, database architects, marketing associates, product team, and other areas of the company on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems and products. The right candidate will be excited by the prospect of optimizing or even re-designing our company's data architecture to support our next generation of products and data initiatives. The role also presents an opportunity to contribute original research as a data scientist through data correlation.

The Data Science Engineer is responsible for supporting and contributing to Contrast's growing original security research efforts relevant to the development communities associated with the Contrast Assess, Protect, and OSS platforms. Original research will be published in company blogs, papers and presentations.

If you're amazing but missing some of these, email us your résumé and cover letter anyway. Please include a link to your GitHub or Bitbucket account, as well as any links to some of your projects if available.

Responsibilities

  • Conduct basic and applied research on important and challenging problems in data science as it relates to the problems Contrast is trying to solve.
  • Assemble large, complex data sets that meet functional / non-functional business requirements. 
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and big data technologies.
  • Build analytics tools that utilize the data pipeline to provide actionable insights into threats, vulnerabilities, customer usage, operational efficiency and other key business performance metrics (see the sketch after this list).
  • Help define and drive data-driven research projects, either on your own or in collaboration with others on the team.
  • Engage with Contrast’s product teams and customers to promote and seek out new data science research initiatives.
  • Create data tools for analytics and research team members that assist them in building and optimizing our product into an innovative industry leader.
  • Apply advanced working knowledge of Structured Query Language (SQL) and experience with relational databases, query authoring, and a variety of database systems.
  • Develop and present content associated with the research through conference speaking and/or blogging.
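
By way of a hedged sketch of the analytics-tools bullet above, assuming vulnerability events land as JSON in object storage and using PySpark (the paths and field names are invented, not Contrast's actual pipeline):

    # vuln_rollup.py -- illustrative PySpark aggregation
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("vuln-rollup").getOrCreate()

    # Hypothetical schema: one JSON record per detected vulnerability.
    events = spark.read.json("s3://example-bucket/vuln-events/")

    # Roll up vulnerability counts by rule and severity for reporting.
    rollup = (
        events.groupBy("rule_id", "severity")
              .agg(F.count("*").alias("occurrences"))
              .orderBy(F.desc("occurrences"))
    )

    rollup.write.mode("overwrite").parquet("s3://example-bucket/reports/vuln-rollup/")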

About You

  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  • Strong analytic skills related to working with unstructured datasets. 
  • Experience supporting and working with cross-functional teams in a dynamic environment.
  • You should also have experience using some of the following software/tools:
  • Big data tools: Hadoop, Spark, Kafka, etc.
  • Relational SQL and NoSQL databases, including MongoDB and MySQL.
  • Data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
  • AWS cloud services: EC2, EMR, RDS, Redshift
  • Stream-processing systems: Storm, Spark-Streaming, etc.
  • Object-oriented/functional scripting languages: Python, Java, C++, Scala, etc.
  • 5+ years of experience in a Data Science role
  • Strong project management and organizational skills.
  • Nice to have: an understanding of the OWASP Top 10 and SANS/CWE Top 25.
  • You ask questions, let others know when you need help, and tell others what you need.
  • A graduate degree (minimum) in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field.

What We Offer

  • Competitive compensation
  • Daily team lunches (in office)
  • Meaningful stock options
  • Medical, dental, and vision benefits
  • Flexible paid time off 
By submitting your application, you are providing Personally Identifiable Information about yourself (cover letter, resume, references, or other employment-related information) and hereby give your consent for Contrast Security, and/or our HR-related Service Providers, to use this information for the purpose of processing, evaluating and responding to your application for current and future career opportunities. Contrast Security is an equal opportunity employer and our team is comprised of individuals from many diverse backgrounds, lifestyles and locations.

The California Consumer Privacy Act of 2018 (“CCPA”) will go into effect on January 1, 2020. Under CCPA, businesses must be overtly transparent about the personal information they collect, use, and store on California residents. CCPA also gives employees, applicants, independent contractors, emergency contacts and dependents (“CA Employee”) new rights to privacy.

In connection with your role here at Contrast, we collect information that identifies, reasonably relates to, or describes you (“Personal Information”). The categories of Personal Information that we collect, use or store include your name, government-issued identification number(s), email address, mailing address, emergency contact information, employment history, educational history, criminal record, demographic information, and other electronic network activity information by way of mobile device management on your Contrast-issued equipment. We collect and use those categories of Personal Information (the majority of which is provided by you) about you for human resources and other business-driven purposes, including evaluating your performance here at Contrast, evaluating you as a candidate for promotion within Contrast, managing compensation (including payroll and benefits), record keeping in relation to recruiting and hiring, conducting background checks as permitted by law, and ensuring compliance with applicable legal requirements for Contrast. We collect, use and store the minimal amount of information possible.

We also collect Personal Information in connection with your application for benefits. In addition to the above, Personal Information also identifies those on behalf of whom you apply for benefits. During your application for benefits, the categories of Personal Information that we collect include name, government-issued identification number(s), email address, mailing address, emergency contact information, and demographic information. We collect and use those categories of Personal Information for administering the benefits for which you are applying and ensuring compliance with applicable legal requirements and Contrast policies.
As a California resident, you are entitled to certain rights under CCPA:

-You have the right to know what personal information we have collected from you as a California employee;
-You have the right to know what personal information is sold or disclosed and to whom. That said, we do not sell your information. We do, however, disclose information to third parties in connection with the management of payroll, employee benefits, etc., to fulfill our obligations to you as an employee of Contrast. Each of those third parties has been served with a Notice to Comply with CCPA or has entered into a CCPA Addendum with Contrast that precludes them from selling your information;
-You have the right to opt out of the sale of your personal information. Again, we do not sell it, but you might want to be aware of that right as a "consumer" in California with respect to other businesses; and
-You have the right to be free from retaliation for exercising any of these rights.

If you have any questions, please let us know!
Share this job:
Don't see your role here?
data science machine learning computer vision healthcare Feb 03
Don't quite see the role you're looking for? Labelbox is growing incredibly fast and we are posting new roles frequently. Send us your resume so we can keep you in the loop as we grow.


About Labelbox

Labelbox is at the heart of the AI-powered computer vision revolution. Almost every decision a human makes is visual and these decisions power every industry, from healthcare to agriculture. With AI, computers can now see like humans and can make decisions in the same way. With this newfound capability, our society will build self-driving cars, accessible healthcare, automated farms that can support our global population, and much more.

The bottleneck to achieving these things with AI is the training data sets. We are building Labelbox to solve this bottleneck for data science and machine learning teams.

Current Labelbox customers include American Family Insurance, Lytx, Airbus, Genius Sports, Keeptruckin and more. Labelbox is venture backed by Gradient Ventures (Google’s AI-focused venture fund), Kleiner Perkins and First Round Capital and has been featured in Tech Crunch, Web Summit and Forbes.
Share this job:
Data Visualization Engineer
data science machine learning big data linux mysql backend Jan 31
We are looking for a dynamic and talented Data Visualization Engineer who has a passion for data and for using cutting-edge tools and data-based insights to turn their vision and ability into results and actionable solutions for our clients. The successful candidate will leverage their talents and skills to design, develop and implement graphical representations of information and data by using visual elements like charts, graphs, and maps, and a variety of data visualization tools. You will own, architect, design, and implement a Data Visualization platform that leverages big data, Data Warehouses, data visualization suites, and cutting-edge open source technologies. You will drive the vision of our Big Data Visualization platform, which must be scalable, interactive, and real-time to support our state-of-the-art data processing framework for our Geospatial-oriented platform. You must also have a proven ability to drive results with your data-based insights. The right candidate will have a passion for discovering solutions hidden in large datasets and working with stakeholders to improve mission outcomes. Do you want to take your ideas and concepts into real-life Mission-Critical Solutions? Do you want to work with the latest bleeding-edge technology? Do you want to work with a dynamic, world-class team of engineers, while learning and developing your skills and your career? You can do all those things at Prominent Edge!

We are a small company of 24+ developers and designers who put themselves in the shoes of our customers and make sure we deliver strong solutions. Our projects and the needs of our customers vary greatly; therefore, we always choose the technology stack and approach that best suits the particular problem and the goals of our customers. As a result, we want developers who do high-quality work, stay current, and are up for learning and applying new technologies when appropriate. We want engineers who have an in-depth knowledge of Amazon Web Services and are up for using other infrastructures when needed. We understand that for our team to perform at its best, everyone needs to work on tasks that they enjoy. Most of our projects are web applications, and they often have a geospatial aspect to them. We also really take care of our employees as demonstrated in our exceptional benefits package. Check out our website at https://prominentedge.com for more information.

Required Skills:

  • A successful candidate will have experience in many (if not all) of the following technical competencies: data visualization, data engineering, data science, statistics and machine learning, coding languages, databases, and reporting technologies.
  • Ability to design, develop, and implement graphical representations of information and data using visual elements like charts, graphs, and maps, along with a variety of data visualization tools.
  • At least 5 years of experience in data engineering, data science, and/or data visualization.
  • Design and develop ETL and storage for the new big data platform with open source technologies such as Kafka/RabbitMQ/Redis, Spark, Presto, Splunk.
  • Create insightful visualizations with dashboarding and charting tools such as Kibana / Plotly / Matplotlib / Grafana / Tableau (see the sketch after this list).
  • Strong proficiency with a backend database such as Postgres, MySQL, and/or familiarity with NoSQL databases such as Cassandra, DynamoDB or MongoDB.
  • Strong background in scripting languages.
  • Capable of working in a Linux server environment.
  • Experience or interest in working on multiple projects with multiple product teams.
  • Excellent verbal and written communication skills, the ability to present technical data, and enjoyment of working with both technical and non-technical audiences.
  • Bachelor's Degree in Computer Science, Data Science, Machine Learning, AI or related field or equivalent experience.
  • Current U.S. security clearance, or ability to obtain a U.S. security clearance.
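
As a small, hedged illustration of the charting bullet above, using synthetic data and matplotlib only (a production platform would sit behind the dashboarding suites named in the list):

    # viz_sketch.py -- toy example of charting aggregated data with pandas/matplotlib
    import matplotlib.pyplot as plt
    import pandas as pd

    # Synthetic event counts by region -- a stand-in for real geospatial data.
    df = pd.DataFrame({
        "region": ["north", "south", "east", "west"],
        "events": [120, 75, 210, 90],
    })

    ax = df.plot.bar(x="region", y="events", legend=False)
    ax.set_xlabel("Region")
    ax.set_ylabel("Event count")
    ax.set_title("Events by region (synthetic data)")
    plt.tight_layout()
    plt.savefig("events_by_region.png")   # or plt.show() in an interactive session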

Desired skills:

  • Master's Degree or PhD in Computer Science, Data Science, Machine Learning, AI, or a related field is a plus.

W2 Benefits:

  • Not only do you get to join our team of awesome, playful ninjas, we also have great benefits:
  • Six weeks paid time off per year (PTO+Holidays).
  • Six percent 401k matching, vested immediately.
  • Free PPO/POS healthcare for the entire family.
  • We pay you for every hour you work. Need something extra? Give yourself a raise by doing more hours when you can.
  • Want to take time off without using vacation time? Shuffle your hours around in any pay period.
  • Want a new MacBook Pro laptop? We'll get you one. If you like your MacBook Pro, we’ll buy you the new version whenever you want.
  • Want some training or to travel to a conference that is relevant to your job? We offer that too!
  • This organization participates in E-Verify.

Share this job:
Machine Learning Platform Engineer
Tesorio  
machine learning data science finance Jan 30
We are at the forefront of creating the latest FinTech category and we are rapidly expanding our team. We’re looking for a Machine Learning Platform Engineer to work on our Data Science team.

Company Overview
Tesorio is a high-growth, early-stage startup that has just closed a $10MM round with Madrona Venture Group. We're backed by some of the Bay Area’s most prominent Venture Capital firms (First Round, Floodgate, Y Combinator) and the world’s top finance execs (e.g. the ex-CFO of Oracle, the ex-CFO of Yahoo, and the founder of Adaptive Insights).

We build software that applies proprietary machine learning models to help manage a core problem that all Mid-Market businesses face: managing, predicting, and collecting cash. As we’ve taken this solution to market over the last 18 months, we’ve been able to bring on some great clients like Veeva Systems, Box, WP Engine, Rainforest QA, and many more.

Tesorio’s Cash Flow Performance platform is a sought-after solution for the modern-day CFO’s toughest problems. Companies such as Anaplan have successfully tackled forecasting revenues and expenses; however, no other system has been built from the ground up to help companies understand the complexities around cash flow and take action to optimize the core lifeblood of their business.

What’s in it for you?

  • Remote OK (Western Hemisphere) or work out of an awesome office with all the perks.
  • Almost all of Engineering and Data Science work fully remote and we work hard to make sure remote employees feel a part of the team.
  • This role is on a fast-paced, high-impact project that adds a new stream of revenue and strategic value to the company.
  • Work with some of the best and brightest (but also very humble).
  • Fast-growing startup backed by top-tier investors - Y Combinator, First Round Capital, Floodgate, Fathom.

Responsibilities

  • You will be responsible for creating and maintaining machine learning infrastructure on Kubernetes
  • Build and own workflow management systems such as Airflow, Kubeflow, or Argo
  • Advise data and ML engineers on how to package and deploy their workflows
  • Implement logging, metrics, and monitoring services for your infrastructure and container logs (see the sketch after this list)
  • Create Helm charts for versioned deployments of the system on client premises
  • Continuously strive to abstract away infrastructure, high availability, identity and access management concerns from Machine Learning and Software Engineers
  • Understand product requirements, bring your own opinions, and document best practices for leveraging Kubernetes
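
As a hedged illustration of the logging-and-metrics bullet, a Python service can expose counters and histograms for Prometheus to scrape using the prometheus_client library (the metric names and workload are invented):

    # metrics_sketch.py -- illustrative instrumentation with prometheus_client
    import random
    import time

    from prometheus_client import Counter, Histogram, start_http_server

    # Hypothetical metrics for a model-serving workload.
    PREDICTIONS = Counter("predictions_total", "Total predictions served")
    LATENCY = Histogram("prediction_latency_seconds", "Prediction latency")

    def handle_prediction():
        with LATENCY.time():              # record how long the "work" takes
            time.sleep(random.random() / 10)
        PREDICTIONS.inc()                 # count every prediction served

    if __name__ == "__main__":
        start_http_server(8000)   # Prometheus scrapes http://localhost:8000/metrics
        while True:
            handle_prediction()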

Required Skills

  • 6+ years of experience creating and maintaining data and machine learning platforms in production
  • Expert-level knowledge of Kubernetes, including operators, deployments, certificate management, security, and binding users to cluster and IAM roles
  • Experience dealing with persistence pitfalls on Kubernetes and creating and owning a workflow management system (Airflow, Kubeflow, Argo, etc.) on Kubernetes
  • Experience creating Helm charts for versioned deployments on client premises
  • Experience securing systems with proper identity and access management for people and applications
  • Ability to work in a fast-paced, always-changing environment

Nice to Haves

  • Experience spinning up infrastructure using Terraform and Ansible
  • Experience working with data engineers running workflow management tools on your infrastructure
Share this job:
Data Engineer
Tesorio  
python data science machine learning finance Jan 30
We are at the forefront of creating the latest FinTech category and we are rapidly expanding our team. We’re looking for a Data Engineer to work on our Data Science team.

Company Overview
Tesorio is a high-growth, early-stage startup that has just closed a $10MM round with Madrona Venture Group. We're backed by some of the Bay Area’s most prominent Venture Capital firms (First Round, Floodgate, Y Combinator) and the world’s top finance execs (e.g. the ex-CFO of Oracle, the ex-CFO of Yahoo, and the founder of Adaptive Insights).

We build software that applies proprietary machine learning models to help manage a core problem that all Mid-Market businesses face: managing, predicting, and collecting cash. As we’ve taken this solution to market over the last 18 months, we’ve been able to bring on some great clients like Veeva Systems, Box, WP Engine, Rainforest QA, and many more.

Tesorio’s Cash Flow Performance platform is a sought-after solution for the modern-day CFO’s toughest problems. Companies such as Anaplan have successfully tackled forecasting revenues and expenses; however, no other system has been built from the ground up to help companies understand the complexities around cash flow and take action to optimize the core lifeblood of their business.

What’s in it for you?

  • Remote OK (Western Hemisphere) or work out of an awesome office with all the perks.
  • Almost all of Engineering and Data Science work fully remote and we work hard to make sure remote employees feel a part of the team.
  • This role is on a fast-paced, high-impact project that adds a new stream of revenue and strategic value to the company.
  • Work with some of the best and brightest (but also very humble).
  • Fast-growing startup backed by top-tier investors - Y Combinator, First Round Capital, Floodgate, Fathom.

Responsibilities

  • Extract data from third-party databases and transform it into usable outputs for the Product and Data Science teams
  • Work with Software Engineers and Machine Learning Engineers, calling out risks and performance bottlenecks
  • Ensure data pipelines are robust, fast, secure and scalable
  • Use the right tool for the job to make data available, whether that is on the database or in code
  • Own data quality and pipeline uptime. Plan for failure

Required Skills

  • Experience scaling, securing, snapshotting, optimizing schemas and performance tuning relational and document data stores
  • Experience building ETL pipelines using workflow management tools like Argo, Airflow or Kubeflow on Kubernetes
  • Experience implementing data-layer APIs using ORMs such as SQLAlchemy and managing schema changes with tools like Alembic (see the sketch after this list)
  • Fluency in Python and experience containerizing code for deployment
  • Experience following security best practices like encryption at rest and in flight, data governance, and cataloging
  • Understanding of the importance of picking the right data store for the job (columnar, logging, OLAP, OLTP, etc.)
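
A minimal sketch of a data-layer API in the spirit of the SQLAlchemy bullet above; the Invoice model and SQLite URL are illustrative assumptions, not Tesorio's schema (imports are SQLAlchemy 1.4+ style):

    # data_layer_sketch.py -- illustrative SQLAlchemy model and session usage
    from sqlalchemy import Column, Date, Integer, Numeric, String, create_engine
    from sqlalchemy.orm import Session, declarative_base

    Base = declarative_base()

    class Invoice(Base):   # hypothetical table for a cash-flow product
        __tablename__ = "invoices"
        id = Column(Integer, primary_key=True)
        customer = Column(String, nullable=False)
        amount = Column(Numeric(12, 2), nullable=False)
        due_date = Column(Date)

    engine = create_engine("sqlite:///:memory:")   # swap for a real database URL
    Base.metadata.create_all(engine)               # Alembic would manage this in production

    with Session(engine) as session:
        session.add(Invoice(customer="Acme Co", amount=1250))
        session.commit()
        print(session.query(Invoice).count(), "invoice(s) stored")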

Nice to Have Skills

  • Exposure to machine learning
  • Experience with on-prem deployments
Share this job:
Software Engineer in Test
testing cypress automated-tests circleci javascript html Jan 27

Our homes are our most valuable asset and also the most difficult to buy and sell. Knock is on a mission to make trading in your house as simple and certain as trading in your car. Started by founding team members of Trulia.com (NYSE: TRLA, acquired by Zillow for $3.5B), Knock is an online home trade-in platform that uses data science to price homes accurately, technology to sell them quickly and a dedicated team of professionals to guide you every step of the way. We share the same top-tier investors as iconic brands like Netflix, Tivo, Match, HomeAway and Houzz.


We are seeking a passionate Software Engineer in Test to help us build our QA & automation processes, procedures, and tools. You will be responsible for integration and regression testing our frontend, mobile, and backend applications, and will be an advocate for a modern DevOps-first automation-rich development and release pipeline. We are looking for someone who is passionate about creating great products and making the world amazing for homebuyers.


At Knock, we have fun, we move fast, we support and celebrate our fellow teammates, and we live by our POPSICLE values.

As a Software Engineer in Test you will: 

  • Lead and create robust test documentation including test plans, test cases, and test result analysis.
  • Review functional and design specifications to ensure full understanding of deliverables.
  • Build, run and maintain automated functional, integration and regression tests to help improve software quality.
  • Build and maintain tooling to facilitate testing in the CI/CD pipelines.
  • Design metrics for performance, reliability, stability and compatibility with other systems.
  • Work deeply with our in-house and field operations team to identify, document, and regression test issues as they occur in the wild
  • Collaborate closely and daily with the design, product, engineering teams and other key teams at Knock.

We’re looking for Knockstars who have: 

  • Must be U.S.-based.
  • B.S. in Computer Science or equivalent experience.
  • Minimum of 5 years of experience as a software quality assurance engineer.
  • Experience in developing test strategies, test plans, test cases, and analyzing test results.
  • Experience in building automated functional, integration and regression tests.
  • Experience with testing automation frameworks.
  • Experience in building automated UI testing for both web and mobile.
  • Proven ability to translate functional requirements and use cases into working test plans and test cases.
  • A strong customer-first mindset and data-driven approach to their work
  • Programming proficiency in HTML, JavaScript, and other scripted or interpreted languages.
  • Knowledge of SQL (MySQL or Postgres).
  • Proven success working remotely in prior positions & experience working with a distributed, national team

Bonus points for:

  • Team and/or technical leadership experience.
  • Development and test experience in Node.js and React Native.
  • Experience with native Android and iOS automated test frameworks.
  • Experience with Docker-based ecosystems and container orchestration systems such as Amazon ECS or Kubernetes.

What We Can Offer You:

  • An amazing opportunity to be an integral part of building the next multi-billion dollar consumer brand around the single largest purchase of our lives.
  • Talented, passionate and mission-driven peers disrupting the status quo.
  • Competitive cash, full medical, dental, vision benefits, 401k, flexible work schedule, unlimited vacation (2 weeks mandatory) and sick time.
  • Flexibility to live and work anywhere within the United States. As we are a distributed company and engineering team, we are open to any U.S. location for this role.

We have offices in New York, San Francisco, Atlanta, Charlotte, Raleigh, Dallas-Fort Worth, Phoenix, and Denver with more on the way. In fact, we are proud to be a distributed company with employees in 21 different states. This is an amazing opportunity to be an integral part of building a multi-billion dollar consumer brand in an industry that is long overdue for a new way of doing things. You will be working with a passionate, mission-driven team that is disrupting the status quo.


Knock is an Equal Opportunity Employer.


Please no recruitment firm or agency inquiries, you will not receive a reply from us.

Share this job:
Senior Data Scientist
python aws tensorflow pytorch scikit-learn senior Jan 17

XOi Technologies is changing the way field service companies capture data, create efficiencies, collaborate with their technicians, and drive additional revenue through the use of the XOi Vision platform. Our cloud-based mobile application is powered by a robust set of machine learning capabilities to drive behaviors and create a seamless experience for our users.

We are a group of talented and passionate engineers and data scientists working together to discover and provide valuable insights for our customers. We leverage state-of-the-art machine learning techniques to provide our users with these unique insights, best practices, and solutions to the challenges they face in their workplace. Problems and solutions typically center around aspects of the Vision platform such as image recognition, natural language processing, and content recommendation.

As a Senior Data Scientist, you will build machine learning products to help automate workflows and provide valuable assistance to our customers. You’ll have access to the right tools for the job, large amounts of quality data, and support from leadership that understands the full data science lifecycle. You’ll build models using technologies such as Python, TensorFlow, and Docker.

Responsibilities:

  • Interpret and understand business needs/market opportunities, and translate those into production analytics.
  • Select appropriate technologies and algorithms for given use cases.
  • Work directly with product managers and engineering teams to tightly integrate new analytic capabilities.
  • Prepare reports, visualizations, and other documentation on the status, operation and maintenance of the analytics you create.
  • Stay current on relevant machine learning and data science practices, and apply those to existing problem sets.

Requirements: 

  • Excellent understanding of machine learning algorithms, processes, tools, and platforms, including CNNs, RNNs, NLP, TensorFlow, PyTorch, etc.
  • Proficient with the following (or comparable): Linux, Python, scikit-learn, NumPy, pandas, spaCy (see the sketch after this list).
  • Applied experience with machine learning on large datasets/sparse data with structured and unstructured data.
  • Experience with deep learning techniques and their optimizations for efficient implementation.
  • Great communication skills and the ability to explain predictive analytics to non-technical audiences.
  • Bachelor’s in Math, Engineering, or Computer Science (or technical degree with commensurate industry experience).
  • 3+ years of relevant work experience in data science/machine learning.
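
As a small, hedged illustration of the NLP end of this stack, here is a toy scikit-learn text classification pipeline (the labels and technician notes are invented, not XOi's data):

    # text_clf_sketch.py -- toy text classification with scikit-learn
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Invented labeled examples -- stand-ins for field technician notes.
    texts = [
        "compressor failure, refrigerant leak detected",
        "routine filter replacement completed",
        "burned wiring found at condenser unit",
        "seasonal inspection, no issues found",
    ]
    labels = ["repair", "maintenance", "repair", "maintenance"]

    clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
    clf.fit(texts, labels)

    print(clf.predict(["leak at the evaporator coil"]))   # likely "repair" on this toy data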

Nice to Have:

  • AWS services such as Lambda, AppSync, S3, and DynamoDB
  • DevOps experience with continuous integration/continuous deployment.
  • Experience in software engineering best practices, principles, and code design concepts.
  • Speech-to-text or OCR expertise.

You Are Someone Who:  

  • Has a passion for code quality and craftsmanship.
  • Views your profession as your craft and continuously pursues excellence in your work.
  • Thrives in a fast-paced, high-growth startup environment.
  • Collaborates effectively across various teams, coordinating regularly to set and manage expectations.

You’ll experience:  

  • Being a key part of a fast-growing software company where you can make a difference.
  • Comprehensive insurance plans.
  • Monthly wellness allowance.
  • Flexible paid time off & paid volunteer time.
  • Learning & development.
  • Working in the historic and centrally located Marathon Village in Nashville, TN.
  • Participating in team outings, events, and general fun! 
  • Helping to change an industry by serving the men and women that make our world turn.
Share this job:
Senior Data Scientist / Backend Engineer
komoot  
aws data-science machine-learning kotlin python backend Jan 16

Millions of people experience real-life adventures with our apps. We help people all over the world discover the best hiking and biking routes, empowering our users to explore more of the great outdoors. And we’re good at it: Google and Apple have listed us as one of their Apps of the Year numerous times, and with more than 8.5 million users and 50,000 five-star reviews, komoot is on its way to becoming one of the most popular cycling and hiking apps. Join our fully remote team of 60+ people and change the way people explore!


To help us continue to grow, we are looking for an experienced data scientist dedicated to coding and building production-ready services. With over 8 million active users, komoot possesses a unique dataset of user-generated content, ranging from GPS data from tours, uploaded photos, and tips, to implicit and explicit user feedback. Using this data as well as various open data sources, you will drive product enhancements forward that will directly impact the user experience.
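
To make that concrete, here is a small sketch of one such task: summing great-circle distances over a GPS track with NumPy, using the haversine formula (the coordinates are invented):

    # tour_length_sketch.py -- toy haversine distance over a GPS track
    import numpy as np

    EARTH_RADIUS_KM = 6371.0

    def tour_length_km(lat, lon):
        """Sum great-circle distances between consecutive GPS points."""
        phi, lam = np.radians(lat), np.radians(lon)
        a = (np.sin(np.diff(phi) / 2) ** 2
             + np.cos(phi[:-1]) * np.cos(phi[1:]) * np.sin(np.diff(lam) / 2) ** 2)
        return float(EARTH_RADIUS_KM * np.sum(2 * np.arcsin(np.sqrt(a))))

    # Invented track roughly around Potsdam.
    lat = np.array([52.391, 52.395, 52.401, 52.407])
    lon = np.array([13.063, 13.071, 13.080, 13.092])
    print(f"tour length: {tour_length_km(lat, lon):.2f} km")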

We believe that innovations based on data science will reinforce and extend our leadership in the outdoor market and your role will be decisive for komoot’s success.

What you will do

  • Work closely with our web and mobile developers, designers, copywriters and product managers
  • Discuss product improvements, technical possibilities and road maps
  • Investigate and evaluate data science approaches for product enhancements
  • Write code that is well structured, well tested and documented
  • Enhance existing components and APIs as well as write new services from scratch
  • Deploy and monitor your code in our AWS Cloud (you can count on the support of experienced backend engineers)

Why you will love it

  • You will be challenged in a wide range of data science tasks
  • You deal with a diverse set of data (user-generated content, analytics data and external data sources)
  • You go beyond prototyping and ship your code to production
  • You contribute to a product with a vision to inspire more people to go outdoors
  • You’ll work in a fast-paced startup with strongly motivated and talented co-workers
  • You’ll enjoy the freedom to organize yourself the way you want
  • We let you work from wherever you want, be it a beach, the mountains, your house, a co-working space of your choice, our HQ in Berlin/Potsdam, or anywhere else in a time zone between UTC-1 and UTC+3
  • You’ll travel together with our team to amazing outdoor places several times a year to exchange ideas, learnings and go for hikes and rides

You will be successful in this position if you

  • Have a passion for finding pragmatic and smart solutions to complex problems
  • Have 3+ years of industry experience in data science
  • Have 2+ years of experience in professional programming, preferably in Python or Java
  • Have experience with technologies like pandas, NumPy, Jupyter Notebooks, seaborn, scikit-learn, PyTorch and TensorFlow
  • Know your toolkit: git, ssh, bash and docker.
  • Experience in AWS, infrastructure as code and monitoring is a plus
  • Have strong communication and team skills
  • Have a hands-on attitude and are highly self-driven

Sounds like you?

Then send us the following:

  • Your CV in English highlighting your most relevant experience
  • A write-up explaining who you are and why you are interested in working at komoot
  • Examples of your work (e.g. GitHub Repositories, PDFs, Slideshare, etc.)
  • Feel free to send us something that shows us a little more about what you’re interested in, be it your Twitter/Instagram account, a blog or something else
Share this job:
Software Engineer
python-3.x flask microservices data science machine learning saas Jan 14

Carbon Relay is a world-class team of software engineers, data scientists and devops experts focused on harnessing the power of machine learning to help organizations achieve the most with their Kubernetes-based applications. With our innovative optimization platform, we help boost application performance while keeping costs down.

We’re looking for a Software Engineer to work on the next generation of K8s optimization products that bridge the gap between data science, engineering and DevOps. You’ll be working closely with our engineering and data science teams, helping bring products from R&D into production and making our products scale efficiently. 

Responsibilities

  • Design and implement features as part of a SaaS-based microservices platform (see the sketch after this list)
  • Contribute to and enhance internal APIs and infrastructure
  • Work alongside our data science team to integrate machine learning into our products
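
A minimal sketch of a microservice endpoint in that spirit, using Flask (the route and payload are invented, not Carbon Relay's actual API):

    # service_sketch.py -- toy Flask microservice endpoint
    from flask import Flask, jsonify, request

    app = Flask(__name__)

    @app.route("/recommendations", methods=["POST"])
    def recommendations():
        """Return a placeholder tuning recommendation for a named workload."""
        payload = request.get_json(force=True)
        # A real service would call the data science models here.
        return jsonify({
            "workload": payload.get("workload", "unknown"),
            "suggested_replicas": 3,   # placeholder value
        })

    if __name__ == "__main__":
        app.run(port=8080)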

Required qualifications

  • 1-3 years of software engineering experience
  • Experience with Python
  • Experience shipping and maintaining software products
  • Experience working with Git and GitHub

Preferred qualifications

  • Familiarity with Kubernetes and Containerization 
  • Experience with GCP/GKE
  • Experience developing SaaS applications / microservice architectures

Why join Carbon Relay:

  • Competitive salary
  • Health, dental, vision and life insurance
  • Unlimited vacation policy (and we do really take vacations)
  • Snacks, lunches and all the typical benefits you would expect from a well-funded, fun startup!

Share this job:
Data Privacy Analyst
Anonos  
project-management sql r data-science linux testing Jan 11

Updated 1/11/20

Data Privacy Analyst

Anonos is a fast-growing start-up in the data privacy software space looking for a Data Privacy Analyst who will report to our Chief Data Strategist. This is a remote/work-from-home position. If you want to be part of an exciting period in the company’s development and growth and think you meet most of the criteria below, we want to hear from you!

Please do not contact us if you are an agency or recruiter. We conduct our own in-house recruiting.

Why Anonos?

Our Co-Founders have been business partners for over 19 years and have an extremely successful track record. They previously built a company that was recognized as one of the fastest growing software companies on the Inc 500® for two years in a row and which was ultimately acquired by Nasdaq OMX. They have built a solid and cohesive team at Anonos which works efficiently, acts quickly, and values energy and focus.

We just received our 7th patent for our foundational technology (with another 60+ pending), and Gartner named us a “Cool Vendor” due to our innovation and uniqueness in the marketplace.

Privacy is one of the hottest technology segments in the market. We are launched and funded, have customers and an established partner channel, and are now ready for fast growth in 2020.

If you thrive working with bright co-workers and the latest technologies, like to contribute and be challenged at the same time, and are comfortable working remotely, we should be a great fit for you.

About Our Product and Solutions

Anonos’ patented BigPrivacy® technology enables compliant data innovation, analytics, use, sharing, combining, and re-linking by technically enforcing automated privacy and security controls in compliance with internal restrictions and external regulatory requirements.

We Value Team Members Who Are:

  • Smart: You have outstanding intellectual ability, and a proven track record of quickly learning new skills, concepts, and technologies
  • Great Communicators: You are an excellent written and verbal communicator. We are looking for someone who can interact effectively with the C-suite, the newest software engineer, and everyone in between
  • Leaders: You can rally team members around a challenge
  • Tech Savvy: You understand how information technologies work – and work together. You have experience with data use-cases including analytics, data processing, or similar fields
  • Entrepreneurial: You are excited about creating new products at a startup that is re-defining the data privacy landscape
  • Strategic and Analytical: You are excited about the prospect of constantly learning about our customers, our product and the market and thinking about the implications for our business
  • Results-Oriented: You deliver, on-time. You anticipate obstacles and adjust when unexpected challenges arise

Responsibilities

This role will initially be focused on supporting potential customer Proofs of Concept/Proofs of Value/Pilots as part of our Sales Process. Successful candidates will have at least 5 years of professional experience, a math, science or engineering degree and be able to demonstrate a combination of software technical acumen, project management, customer engagement and data analytics experience.

Sales Demonstrations / Proofs of Concept / Proofs of Value 


  • Support product demonstrations for potential customers
  • Prepare synthetic data sets (schema definition, data generation, data wrangling, QA)
  • Support translation of client use cases into BigPrivacy configurations and data process flows
  • Support 3-hour technical demonstration sessions – step-by-step configuration of all BigPrivacy software to meet customer use-case requirements

Pilot Projects and New Client Implementations

  • Support customer onboarding
  • Scoping and defining requirements of client IT environment vs software requirements
  • Support client install and first line troubleshooting; coordinate problem resolution with the development team when necessary
  • Support distribution partner “on-site” project managers to ensure project and technical issues are addressed
  • Provide software application training 

Other Activities

  • Develop an extensive working knowledge of all Anonos products
  • Training of sales partner project management and technical staff as needed
  • Participate with internal user-acceptance testing prior to new releases
  • Create and edit product and training documentation
  • Modify, update and improve existing demonstrations and create new ones

Desired Skills

Data Analytics/Data Engineering

  • Hands-on experience with data analytics – methods, use cases, challenges
  • Intermediate Excel (pivot tables, functions, data formatting, dates)
  • Data Wrangling, Feature Engineering
  • Pentaho Data Integration or other ETL tools a significant plus
  • Data Science skills a significant plus (R, basic ML models, statistical concepts)


IT/Development Tools

  • Basic familiarity with several of the following (all or most a significant plus):
  • Linux – desktop and basic command line, including vim (text editor), SSH
  • GitHub, ZenHub (or comparable)
  • SQL
  • Docker, Kubernetes
  • Hadoop, MapR, HDFS, Spark or other BigData tools
  • Pseudonymisation, Tokenization, Encryption (see the sketch below)
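
For illustration only, a minimal Python sketch of keyed pseudonymization in the spirit of that last bullet; this is a generic HMAC approach, not Anonos' patented BigPrivacy technology:

    # pseudonymize_sketch.py -- generic keyed pseudonymization, illustrative only
    import hashlib
    import hmac

    SECRET_KEY = b"rotate-and-store-me-securely"   # hypothetical key

    def pseudonymize(value):
        """Map an identifier to a stable pseudonym, re-linkable only with the key."""
        return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

    # Same input always yields the same pseudonym, enabling joins without raw identifiers.
    print(pseudonymize("jane.doe@example.com"))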

General Business

  • Experience with Data Privacy (GDPR, CCPA, HIPAA, etc.) a significant plus
  • Experience with Regulatory/Standards compliance of any kind (financial, ISO, healthcare, environmental, quality, nuclear, etc.) a significant plus
  • Ability to lead meetings and training webinars with customer mid-level technical, professional and management staff
  • (Technical/IT) Project Management experience
  • Ability to relate to customers and staff in a professional and courteous manner
  • Exceptional phone support and software/hardware troubleshooting skills
  • Superior verbal and written communication skills
  • Ability and desire to work 100% remote, but still highly collaborative; self-starter
  • Interest in working for early stage startup – risks, flexibility, adaptability

If this sounds like the right role and the right environment for you, we welcome your application.

Learn more about Anonos at www.anonos.com

Share this job:
Data Science Course Mentor
python sql hadoop data science machine learning Jan 08

Apply here


Data Science Course Mentor

  • Mentorship
  • Remote
  • Part time


Who We Are
At Thinkful, we believe that if schools put in even half the amount of effort that students do, the outcomes would be better for everyone. People would have a path to a fulfilling future, instead of being buried under debt. Employers would benefit from a workforce trained for today. And education could finally offer students a return on their investment of both money and time.

We put in outlandish amounts of effort to create an education that offers our students a guaranteed return on their investment. We partner with employers to create a world-class curriculum built for today. We go to the ends of the earth to find mentors who are the best of the best. We invest more in career services than any of our peers. We work hard to be on the ground in the cities where our students are. Simply put, no other school works as hard for its students as we do.

The Position
Students enroll in Thinkful courses to gain the valuable technical and professional skills needed to take them from curious learners to employed technologists. As a Course Mentor, you will support students by acting as an advisor, counselor, and support system as they complete the course and land their first industry job. To achieve this, you will engage with students using the below range of approaches, known as Engagement Formats. Course Mentors are expected to provide support across all formats when needed. 

  • Mentor Sessions: Meet with students 1-on-1 in online video sessions to provide technical and professional support as the student progresses through the curriculum.
  • Group Sessions: Host online video sessions on topics of your expertise (in alignment with curriculum offerings) for groups of students seeking live support between mentor sessions.
  • Grading: Review student checkpoint submissions and deliver written feedback, including analysis of projects and portfolios.
  • Technical Coaching: Provide on-demand support for technical questions and guidance requests that come to the Technical Coaching team through text and video in a timely manner. This team also provides the TA support for immersive programs.
  • Assessments & Mock Interviews: Conduct 1-on-1 mock interviews and assessments via video calls and provide written feedback to students based on assessment rubrics. 

In addition to working directly with students, Course Mentors are expected to maintain an environment of feedback with the Educator Experience team, and to stay on top of important updates via meetings, email, and Slack. Ideal candidates for this team are highly coachable, display genuine student advocacy, and are comfortable working in a complex, rapidly changing environment.

Requirements
  • Minimum of 3 years professional experience as a Data Scientist or demonstrated expertise with data visualizations and machine learning at an industry level
  • Proficiency in SQL, Python
  • Professional experience with Hadoop and Spark a plus
  • Excellent written and verbal communication
  • High level of empathy and people management skills
  • Must have a reliable, high-speed Internet connection

Benefits
  • This is a part-time role (10-25 hours a week)
  • Fully remote position, with the option to work evenings and weekends in person in 22 US cities
  • Community of 500+ like-minded Educators looking to impact others and keep their skills sharp
  • Full access to all Thinkful courses for your continued learning
  • Grow as an Educator

Apply
If you are interested in this position please provide your resume and a cover letter explaining your interest in the role.

Thinkful can only hire candidates who are eligible to work in the United States.

We stand against any form of workplace harassment based on race, color, religion, sexual orientation, gender identity or expression, national origin, age, disability, or veteran status. Thinkful provides equal employment opportunities to all employees and applicants. If you're talented and driven, please apply.

At this time, we are unable to consider applicants from the following states: Alaska, Delaware, Idaho, New Mexico, North Dakota, South Carolina, South Dakota, West Virginia, and Wyoming

Apply here
Share this job:
Data Scientist
python sql spacy powerbi github data science Jan 07

Position Overview:

Our tech team is looking for a data scientist with excellent communication skills and demonstrated experience writing idiomatic Python code. You’re comfortable fielding a question from a non-technical stakeholder about our dataset and then putting together a data visualization with the answer. You’re also ready to troubleshoot a bug in one of our existing ETL scripts and make a pull request with a detailed write-up of the fix. We use Google BigQuery, PowerBI, spaCy, pandas, Airflow, and Docker.

The right candidate has experience with the Python data science stack as well as one or more BI tools such as Tableau or PowerBI, and is able to juggle competing priorities with finesse. We work in a fast-paced, flexible start-up environment, and we welcome the adaptability, curiosity, passion, grit, and creativity you’ll contribute to our cutting-edge research of this growing, fascinating industry.

Key Responsibilities:

  • Query and transform data with Standard SQL and pandas (a minimal sketch follows this list)
  • Build BI reports to answer questions of our data
  • Work with our data engineering team to munge large datasets using our existing data pipelines, feeding our BI reports
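
For illustration, a minimal sketch of that Standard SQL + pandas workflow, assuming configured GCP credentials; the table, columns, and stakeholder question are hypothetical, not part of the role:

    from google.cloud import bigquery  # assumes GCP credentials are configured

    client = bigquery.Client()

    # Hypothetical stakeholder question: weekly active users by region
    query = """
        SELECT region, COUNT(DISTINCT user_id) AS weekly_active_users
        FROM `project.dataset.events`  -- placeholder table name
        WHERE event_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY)
        GROUP BY region
        ORDER BY weekly_active_users DESC
    """
    df = client.query(query).to_dataframe()

    # A quick chart a non-technical stakeholder can read (needs matplotlib)
    df.plot.bar(x="region", y="weekly_active_users", legend=False)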

Qualifications & Skills:

REQUIRED:

  • 1-3 years of experience working full-time with Python for data science; we use pandas, scikit-learn, and numpy
  • Intermediate-to-expert level SQL experience; we use Standard SQL
  • Experience with one or more natural language processing frameworks; we use spaCy
  • Excellent communication skills and demonstrated ability to collaborate with non-technical stakeholders to create compelling answers to tough data questions
  • Intermediate-to-expert level skills with one or more interactive business intelligence tools like PowerBI or Tableau

PREFERRED:

  • Experience with CI/CD tools like CircleCI; we use GitHub Actions
  • Experience with Docker
  • Experience with Airflow

BENEFITS:

  • Choose your own laptop
  • Health Insurance
  • 401K
Share this job:
Data Engineer
python sql google-bigquery pandas airflow data science Jan 06

Position Overview:

The ideal candidate is an experienced data engineer. You will help us develop and maintain our data pipelines, built with Python, Standard SQL, pandas, and Airflow within Google Cloud Platform. We are in a transitional phase of refactoring our legacy Python data transformation scripts into iterable Airflow DAGs and developing CI/CD processes around these data transformations. If that sounds exciting to you, you’ll love this job. You will be expected to build scalable data ingress and egress pipelines across data storage products, deploy new ETL pipelines, and diagnose, troubleshoot, and improve existing data architecture. We work in a fast-paced, flexible start-up environment, and we welcome the adaptability, curiosity, passion, grit, and creativity you’ll contribute to our cutting-edge research of this growing, fascinating industry.
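
As a flavor of that refactoring, a minimal sketch of a legacy pandas script folded into an Airflow DAG; the DAG name, schedule, and file paths are hypothetical (Airflow 1.10-style imports, current as of this posting):

    from datetime import datetime

    import pandas as pd
    from airflow import DAG
    from airflow.operators.python_operator import PythonOperator

    def transform():
        # Placeholder for a legacy transformation script's logic
        df = pd.read_csv("/tmp/raw.csv")  # hypothetical input
        df.dropna().to_csv("/tmp/clean.csv", index=False)

    with DAG(
        dag_id="daily_transform",  # hypothetical DAG name
        start_date=datetime(2020, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        PythonOperator(task_id="transform", python_callable=transform)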

Key Responsibilities:

  • Build and maintain ETL processes with our stack: Airflow, Standard SQL, pandas, spaCy, and Google Cloud. 
  • Write efficient, scalable code to munge, clean, and derive intelligence from our data

Qualifications & Skills: 

REQUIRED:

  • 1-3 years of experience in a data-oriented Python role, including use of:
    • Google Cloud Platform (GCE, GBQ, Cloud Composer, GKE)
    • Airflow
    • CI/CD tools like GitHub Actions or CircleCI
    • Docker
  • Fluency in the core tenets of the Python data science stack: SQL, pandas, scikit-learn, etc.
  • Familiarity with modern NLP systems and processes, ideally spaCy

PREFERRED:

  • Demonstrated ability to collaborate effectively with non-technical stakeholders
  • Experience scaling data processes with Kubernetes 
  • Experience with survey and/or social media data
  • Experience preparing data for one or more interactive data visualization tools like PowerBI or Tableau

BENEFITS:

  • Choose your own laptop
  • Health Insurance
  • 401K
Share this job:
Senior Fullstack Software Engineer
senior javascript data science machine learning frontend testing Jan 06
About Labelbox

Labelbox is building software infrastructure for industrial data science teams to do data labeling for the training of neural networks. When we build software, we take for granted the existence of collaborative tools to write and debug code. The machine learning workflow has no standard tooling for labeling data, storing it, debugging models and then continually improving model accuracy. Enter Labelbox. Labelbox's vision is to become the default software for data scientists to manage data and train neural networks in the same way that GitHub or text editors are defaults for software engineers.

Current Labelbox customers include American Family Insurance, Lytx, Airbus, Genius Sports, Keeptruckin and more. Labelbox is venture backed by Google, Kleiner Perkins and First Round Capital and has been featured in Tech Crunch, Web Summit and Forbes.

Responsibilities

  • Strong understanding of JavaScript with an interest in using TypeScript
  • Experience managing/scaling SQL databases, orchestrating migrations, and disaster recovery
  • Experience working with Redux and architecting large single page applications
  • Experience and interest in frontend testing
  • Optimizing data models and database configurations for both ease-of-use and performant response times
  • Building new features and resolvers in our GraphQL API with Node.js

Follow-on Responsibilities

  • Experience with SQL databases
  • Experience optimizing web traffic
  • Experience with RabbitMQ (or other message broker) and Redis
  • Experience constructing and monitoring ETL pipelines
  • Experience with Logstash / Elasticsearch
  • Familiarity with Kubernetes and Docker

Requirements

  • 4+ years of experience building data rich frontend web applications
  • A bachelor’s degree (or equivalent) in computer science or a related field.
We believe that AI has the power to transform every aspect of our lives -- from healthcare to agriculture. The exponential impact of artificial intelligence will mean mammograms can happen quickly and cheaply irrespective of the limited number of radiologists in the world, and growers will know the instant disease hits their farm without even being there.

At Labelbox, we’re building a platform to accelerate the development of this future. Rather than requiring companies to create their own expensive and incomplete homegrown tools, we’ve created a training data platform that acts as a central hub for humans to interface with AI. When humans have better ways to input and manage data, machines have better ways to learn.

Perks & Benefits:
Medical, Dental & Vision coverage
Flexible vacation policy
Dog friendly office
Daily catered lunch & snacks
Great office location in the Mission district, beautiful office & private outdoor patio with grill
Share this job:
R Engineer
r cpp rcpp c data science cloud Jan 04

Description

We are looking for an R developer to build and maintain our R interface to the TileDB array storage engine and hosted cloud service. R is a very popular programming language used by numerous developers in the Bio and Finance communities, among many others. The TileDB core library is built in C++ for supreme performance, and we built an R API so that it can be used by the R community. We are looking for a person to improve our R API and expand it with computational capabilities (e.g., integration with dplyr) and domain specific software (e.g., Bioconductor).

As an R Engineer, you will be responsible for

  • Leading the development of TileDB-R (TileDB R API)
  • Building out features to better integrate TileDB-R with commonly used R data science libraries
  • Troubleshooting and fixing bugs reported by users
  • Building and developing use cases around using TileDB in the R ecosystem

Location

Our headquarters are in Cambridge, MA, USA and we have a subsidiary in Athens, Greece. However, you will have the flexibility to work remotely as long as your residence is in the USA or Greece. US candidates must be US citizens, whereas Greek candidates must be Greek or EU citizens.

Expectations

In your first 30 days, you will familiarize yourself with the core TileDB storage engine and the existing TileDB-R API. For your next 30 days, you will start contributing to TileDB-R, adding missing core TileDB functions and improving the performance of the existing ones. After 60 days, you will be fully integrated in our team. You will start researching R use cases and exploring further integrations with popular R packages.

Requirements

  • Experience developing and maintaining R libraries
  • Experience using a low-level R API for a C library
  • Experience using Rcpp / C++ for R extensions
  • Familiarity with S3 / S4 OO frameworks
  • Familiarity with R packaging and distribution via CRAN
  • Experience extending / building upon data.frame / data.table APIs
  • Domain knowledge in using R within the fields of finance or bioinformatics

Benefits

  • Competitive salary and stock options
  • 100% medical and dental insurance coverage (for you and your dependents!)
  • Paid parental leave
  • Paid time off (vacation, sick & public holidays)
  • Flexible time off & flexible hours
  • Flexibility to work remotely (anywhere in the US or Greece)

TileDB, Inc. is proud to be an Equal Opportunity Employer building a diverse and inclusive team.

Share this job:
Data Engineer: AI/ML
pytorch python machine-learning fast-ai pipeline ruby Dec 26 2019

Roadtrippers Place Lab powers the geo-data for Roadtrippers consumer web and mobile applications and the underlying B2B services.  Roadtrippers Place Lab is looking for a detail-oriented problem solver to join the team as a Data Engineer focusing on all things geo-data. This engineer will share the responsibility of data quality and fidelity with our engineering, data science, and data quality teams by developing better ways to evaluate, audit, augment, and ingest data about places.

Responsibilities

  • Work with the AI/ML research team in developing new models and pipelines to derive insights and improve our data quality
  • Bridge AI/ML research and production, assisting in building production pipelines and improving the efficiency of the transition from development
  • Own production AI/ML pipelines, including revisions, optimizations, and root-cause analysis of anomalies
  • Assist in planning and implementation of data ingestion, sourcing, and automation projects
  • Communicate with Engineering and Product teams about requirements and opportunities as it relates to new data and schema updates
  • Contribute to application development for data initiatives 
  • Identify, participate in, and implement initiatives for continuous improvement of data ingestion, quality, and processes
  • Manually manipulate data when necessary, while learning and applying these needs to scale future projects

Qualifications

  • Experience with Data Science/ML/AI
  • Experience working with geospatial data is a huge plus
  • Development experience with Python
  • Knowledge of SQL (ideally Postgres), Elasticsearch and schemaless databases
  • Experience with ETL and implementing Data Pipeline architecture 
  • AWS and SageMaker experience is particularly valuable 
  • Big data experience is ideal 
  • Understanding of web application architecture; Ruby and Ruby on Rails experience is a plus
  • A "do what it takes" attitude and a passion for great user experience
  • Strong communication skills and experience working with highly technical teams
  • Passion for identifying and solving problems
  • Comfort in a fast-paced, highly-dynamic environment with multiple stakeholders

We strongly believe in the value of growing a diverse team and encourage people of all backgrounds, genders, ethnicities, abilities, and sexual orientations to apply.

Share this job:
Senior Backend Engineer - Content and Metadata
Scribd  
backend senior cs data science Dec 25 2019
Scribd
/skribbed/ (n).
1. a tech company changing the way the world reads
2. a membership that gives users access to the world’s largest online library of books, audiobooks, sheet music, news, and magazines

We value trying new things, craftsmanship, being an open book, and the people that make our team great.
Join us and build something meaningful.

Our team
The Content Engineering team is broadly responsible for catalog management and content metadata at Scribd. Supplying supplementary data to ebook and audiobook pages? That's us. Ensuring that all user-uploaded documents are useful, accessible, and legally available? That's us. Creating pipelines that build clean and well-structured data for Search, Recommendations, and Data Science to build amazing features from? That's us. Analyzing user reading activity and translating it into publisher payouts? That's us. We're a spoke within Scribd, connecting many engineering, product, and publisher-focused teams through data.

The majority of the team is based in San Francisco but there's a strong and growing remote contingent as well (much like Scribd overall). We use tools that emphasize asynchronous communication (Slack, Gitlab, Jira, Google Docs) and are ready and able to jump on a video call when text doesn't cut it. Regardless of the medium, solid communication skills are a must. We operate with autonomy (developers closest to the code will make the most well-informed decisions) while holding ourselves and each other accountable for using good judgement when faced with each day's unique challenges.

Our technical work is divided between our user-facing Rails application and our offline data warehouse (where much of our processing is done on top of Spark). Many of the systems we're responsible for - document spam detection, document copyright detection, topic extraction and classification, sitemap generation, and translating user activity into publisher payouts, just to name a few - span both environments, so engineers regularly work within both. Though the tech stacks differ between environments, the engineering work in both is the same - create data pipelines to ingest, process, clean, and lay out the metadata coming from publishers and other external sources, as well as create new metadata from our vast content base.
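
As a flavor of the warehouse side of that work, a toy PySpark sketch of a metadata-cleaning pipeline (not Scribd's actual code; the paths and column names are hypothetical):

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("metadata_pipeline").getOrCreate()

    # Hypothetical publisher metadata feed
    raw = spark.read.json("s3://bucket/publisher_feed/")  # placeholder path

    clean = (
        raw.filter(F.col("isbn").isNotNull())              # drop records missing the key
           .withColumn("title", F.trim(F.col("title")))    # normalize whitespace
           .dropDuplicates(["isbn"])
    )

    clean.write.mode("overwrite").parquet("s3://bucket/clean_metadata/")  # placeholder output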

The role
As a Senior Backend Engineer, you've probably seen quite a bit in your career, and we want to leverage all of it. Software development will be your primary function, but we'll expect you to contribute in a number of ways, including advising on technical design, reviewing code, participating in interviews, and mentoring less experienced engineers.

When you are doing software development, you'll be doing more than just coding a ticket handed to you. You'll own the implementation, delivery, and operation of systems, end-to-end. You'll consider testability, upgradeability, scale, and observability throughout the development process. You'll regularly have one or two engineers following your lead, whose output you will be responsible for. On Content Engineering, a Senior Backend Engineer is a leader.

If you've been a senior engineer for a while and have been more focused on architectural concerns, cross-team initiatives, and other strategic endeavors, we have a place for you as well. Just know that this is a code-heavy role.

Office or remote?
We have a wonderful new office in San Francisco, as well as smaller offices in Toronto and New York. If you live close to one of those you'll find great people and a nice work environment.

If you don't live near one of those offices, we'd still love to have you! Scribd is expanding its remote workforce with the goal of finding the best employees regardless of location. Being a remote employee means providing your own productive work environment, and everything else is pretty similar to being an office employee. We expect remote employees to have solid communication skills, good judgement, and demonstrable personal responsibility. We also expect the same from our in-office employees, so you'll be in good company.

Nitpicky requirements
Backend Engineers on Content Engineering typically have:
• 8+ years of experience as a professional software engineer
• Experience or a strong interest in backend systems and data pipelines
• Experience working with systems at Scribd's current scale
• Bachelor’s in CS or equivalent professional experience

We present these in order to detail the picture of what we're looking for. Of course, every engineer brings something unique to the table, and we like nothing more than finding a diamond in the rough.

Required Questions
• What’s your favorite book that you’ve read recently?
• In one sentence, why does this role appeal to you?
Why we work here
• We are located in downtown San Francisco, within walking distance of Caltrain and BART
• Health benefits: 100% employer covered Medical/Dental/Vision for regular, full-time employees
• Generous PTO policy plus we close for the last week in December
• 401k matching
• Paid Parental leave
• Monthly wellness budget and fully paid membership to our onsite fitness facility
• Professional development: generous annual budget for our employees to attend conferences, classes, and other events
• Three meals a day, catered from local restaurants
• Apple laptops and any equipment you want to customize your work station
• Free Scribd membership and a yearly reading stipend!
• Company events that include monthly happy hours and offsites (past events include Santa Cruz, bowling, arcades, geocaching, ropes courses, etc.)

In the meantime, check out our office and meet some of the team at https://www.scribd.com/about

Scribd values diversity, and we make all hiring and employment decisions based on merit, qualifications, competence, talent, and contribution, not who you are by choice or circumstance. We value the people who make Scribd a great place to work and strive to create an environment where your work is supported and personhood respected.
Share this job:
Backend Engineer - Content and Metadata
Scribd  
backend cs data science Dec 25 2019
Scribd
/skribbed/ (n).
1. a tech company changing the way the world reads
2. a membership that gives users access to the world’s largest online library of books, audiobooks, sheet music, news, and magazines

We value trying new things, craftsmanship, being an open book, and the people that make our team great.
Join us and build something meaningful.

Our team
The Content Engineering team is broadly responsible for catalog management and content metadata at Scribd. Supplying supplementary data to ebook and audiobook pages? That's us. Ensuring that all user-uploaded documents are useful, accessible, and legally available? That's us. Creating pipelines that build clean and well-structured data for Search, Recommendations, and Data Science to build amazing features from? That's us. Analyzing user reading activity and translating it into publisher payouts? That's us. We're a spoke within Scribd, connecting many engineering, product, and publisher-focused teams through data.

The majority of the team is based in San Francisco but there's a strong and growing remote contingent as well (much like Scribd overall). We use tools that emphasize asynchronous communication (Slack, Gitlab, Jira, Google Docs) and are ready and able to jump on a video call when text doesn't cut it. Regardless of the medium, solid communication skills are a must. We operate with autonomy (developers closest to the code will make the most well-informed decisions) while holding ourselves and each other accountable for using good judgement when faced with each day's unique challenges.

Our technical work is divided between our user-facing Rails application and our offline data warehouse (where much of our processing is done on top of Spark). Many of the systems we're responsible for - document spam detection, document copyright detection, topic extraction and classification, sitemap generation, and translating user activity into publisher payouts, just to name a few - span both environments, so engineers regularly work within both. Though the tech stacks differ between environments, the engineering work in both is the same - create data pipelines to ingest, process, clean, and lay out the metadata coming from publishers and other external sources, as well as create new metadata from our vast content base.

The role
A Backend Engineer on Content Engineering can take many forms:

You may be a relatively new college or boot camp graduate, looking for your first job where you can learn the ropes from a team of experienced professionals. You have a place here. 

You may have a few years of experience and are looking for your next challenge. You have a place here. 

You may have built out a few systems alongside senior engineers and are ready to take on ownership of feature delivery. You have a place here. 

We look for engineers that aspire to learn and grow, that thrive on constructive feedback, and know they’ll be ready to step up when the opportunity presents itself. 

Office or remote?
We have a wonderful new office in San Francisco, as well as smaller offices in Toronto and New York. If you live close to one of those you'll find great people and a nice work environment.

If you don't live near one of those offices, we'd still love to have you! Scribd is expanding its remote workforce with the goal of finding the best employees regardless of location. Being a remote employee means providing your own productive work environment, and everything else is pretty similar to being an office employee. We expect remote employees to have solid communication skills, good judgement, and demonstrable personal responsibility. We also expect the same from our in-office employees, so you'll be in good company.

Nitpicky requirements
Backend Engineers on Content Engineering typically have:
• 0-6+ years of experience as a professional software engineer
• Experience or a strong interest in backend systems and data pipelines
• Bachelor’s in CS or equivalent professional experience

We present these in order to detail the picture of what we're looking for. Of course, every engineer brings something unique to the table, and we like nothing more than finding a diamond in the rough.

Required Questions
• What’s your favorite book that you’ve read recently?
• In one sentence, why does this role appeal to you?
Why we work here
• We are located in downtown San Francisco, within walking distance of Caltrain and BART
• Health benefits: 100% employer covered Medical/Dental/Vision for regular, full-time employees
• Generous PTO policy plus we close for the last week in December
• 401k matching
• Paid Parental leave
• Monthly wellness budget and fully paid membership to our onsite fitness facility
• Professional development: generous annual budget for our employees to attend conferences, classes, and other events
• Three meals a day, catered from local restaurants
• Apple laptops and any equipment you want to customize your work station
• Free Scribd membership and a yearly reading stipend!
• Company events that include monthly happy hours and offsites (past events include Santa Cruz, bowling, arcades, geocaching, ropes courses, etc.)

In the meantime, check out our office and meet some of the team at https://www.scribd.com/about

Scribd values diversity, and we make all hiring and employment decisions based on merit, qualifications, competence, talent, and contribution, not who you are by choice or circumstance. We value the people who make Scribd a great place to work and strive to create an environment where your work is supported and personhood respected.
Share this job:
Senior Big Data Software Engineer
scala apache-spark python java hadoop big data Dec 23 2019
About you:
  • Care deeply about democratizing access to data.  
  • Passionate about big data and are excited by seemingly-impossible challenges.
  • At least 80% of people who have worked with you put you in the top 10% of the people they have worked with.
  • You think life is too short to work with B-players.
  • You are entrepreneurial and want to work in a super fast-paced environment where the solutions aren’t already predefined.
  • You live in the U.S. or Canada and are comfortable working remotely.
About SafeGraph: 

  • SafeGraph is a B2B data company that sells to data scientists and machine learning engineers. 
  • SafeGraph's goal is to be the place for all information about physical Places.
  • SafeGraph currently has 20+ people and has raised a $20 million Series A.  CEO previously was founder and CEO of LiveRamp (NYSE:RAMP).
  • Company is growing fast, over $10M ARR, and is currently profitable. 
  • Company is based in San Francisco but about 50% of the team is remote (all in the U.S.). We get the entire company together in the same place every month.

About the role:
  • Core software engineer.
  • Reporting to SafeGraph's CTO.
  • Work as an individual contributor.  
  • Opportunities for future leadership.

Requirements:
  • You have at least 6 years of relevant work experience.
  • Proficiency writing production-quality code, preferably in Scala, Java, or Python.
  • Strong familiarity with map/reduce programming models (a toy sketch follows this list).
  • Deep understanding of all things “database” - schema design, optimization, scalability, etc.
  • You are authorized to work in the U.S.
  • Excellent communication skills.
  • You are amazingly entrepreneurial.
  • You want to help build a massive company. 
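
For concreteness, the classic map/reduce example (word count) in PySpark, with placeholder paths; this illustrates the programming model, not SafeGraph's stack:

    from pyspark import SparkContext

    sc = SparkContext(appName="wordcount_sketch")

    # Map each word to (word, 1), then reduce by key to sum the counts
    counts = (
        sc.textFile("s3://bucket/corpus/*.txt")    # placeholder input
          .flatMap(lambda line: line.split())
          .map(lambda word: (word, 1))
          .reduceByKey(lambda a, b: a + b)
    )
    counts.saveAsTextFile("s3://bucket/wordcounts/")  # placeholder output
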
Nice to haves:
  • Experience using Apache Spark to solve production-scale problems.
  • Experience with AWS.
  • Experience with building ML models from the ground up.
  • Experience working with huge data sets.
  • Python, Database and Systems Design, Scala, Data Science, Apache Spark, Hadoop MapReduce.
Share this job: