Remote Natural Language Processing Jobs

Last Week

Machine Learning Engineer or Data Scientist
python machine-learning nlp artificial-intelligence scala Feb 22

Builders and Fixers Wanted!

Company Description:  

Ephesoft is the leader in Context Driven Productivity solutions, helping organizations maximize productivity and fuel their journey towards the autonomous enterprise through contextual content acquisition, process enrichment and amplifying the value of enterprise data. The Ephesoft Semantik Platform turns flat data into context-rich information to fuel data scientists, business users and customers with meaningful data to automate and amplify their business processes. Thousands of customers worldwide employ Ephesoft’s platform to accelerate nearly any process and drive high value from their content. Ephesoft is headquartered in Irvine, Calif., with regional offices throughout the US, EMEA and Asia Pacific. To learn more, visit ephesoft.com.

Ready to invent the future? Ephesoft is immediately hiring a talented, driven Machine Learning Engineer or Data Scientist to play a key role in developing a high-profile AI platform in use by organizations around the world. The ideal candidate will have experience in developing scalable machine learning products for different contexts such as object detection, information retrieval, image recognition, and/or natural language processing.

In this role you will:

  • Develop and deliver CV and NLP systems to bring structure and understanding to unstructured documents.
  • Innovate by designing novel solutions to emerging and extant problems within the domain of invoice processing.
  • Be part of a team of Data Scientists, Semantic Architects, and Software Developers responsible for developing AI, ML, and Cognitive Technologies while building a pipeline to continuously deliver new capabilities and value. 
  • Implement creative data-acquisition and labeling solutions that will form the foundations of new supervised ML models.
  • Communicate effectively with stakeholders to convey technical vision for the AI capabilities in our solutions. 

 You will bring to this role:

  • Love for solving problems and working in a small, agile environment.
  • Hunger for learning new skills and sharing your findings with others.
  • Solid understanding of good research principles and experimental design.
  • Passion for developing and improving CV/AI components--not just grabbing something off the shelf.
  • Excitement about developing state-of-the-art, ground-breaking technologies and owning them from imagination to production.

Qualifications:

  • 3+ years of experience developing and building AI/ML driven solutions
  • Development experience in at least one object-oriented programming language  (Java, Scala, C++) with preference given to Python experience
  • Demonstrated skills with ML, CV and NLP libraries/frameworks such as NLTK, spaCy, Scikit-Learn, OpenCV, Scikit-Image
  • Strong experience with deep learning libraries/frameworks like TensorFlow, PyTorch, or Keras
  • Proven background of designing and training machine learning models to solve real-world business problems

EEO Statement:

Ephesoft embraces diversity and equal opportunity. We are committed to building a team that represents a variety of backgrounds, perspectives, and skills. We believe the more inclusive we are, the better our company will be.


This Month

Software Engineer, Backend
Fathom  
backend machine learning nlp testing healthcare Feb 12
We’re on a mission to understand and structure the world’s medical data, starting by making sense of the terabytes of clinician notes contained within the electronic health records of the world’s largest health systems.

We’re seeking exceptional Backend Engineers to work on data products that drive the core of our business--a backend expert able to unify data, and build systems that scale from both an operational and an organizational perspective.

Please note, this position has a minimum requirement of 3+ years of experience. For earlier-career candidates, we encourage you to apply to our SF and/or Toronto locations.

As a Backend Engineer you will:

  • Develop data infrastructure to ingest, sanitize and normalize a broad range of medical data, such as electronic health records, journals, established medical ontologies, crowd-sourced labelling and other human inputs
  • Build performant and expressive interfaces to the data
  • Build infrastructure to help us not only scale up data ingest, but large-scale cloud-based machine learning
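As a rough illustration of the "sanitize and normalize" step this role describes, the sketch below shows the kind of field-level cleanup an ingest pipeline might apply to raw records. All names and data here are hypothetical, not Fathom's actual code.

```python
# Illustrative sketch: a sanitize/normalize step of the kind a data-ingest
# pipeline might apply to raw record fields before downstream processing.
import re
import unicodedata

def normalize_field(value: str) -> str:
    """Normalize a raw text field: fix unicode, collapse whitespace, lowercase."""
    value = unicodedata.normalize("NFKC", value)  # e.g. non-breaking space -> space
    value = re.sub(r"\s+", " ", value).strip()
    return value.lower()

def sanitize_record(raw: dict) -> dict:
    """Drop empty string fields and normalize the rest."""
    return {
        key: normalize_field(val)
        for key, val in raw.items()
        if isinstance(val, str) and val.strip()
    }

record = sanitize_record({"note": "  Pt.\u00a0seen  today ", "source": "", "dept": "Cardiology"})
```

In a real system each source (EHR exports, journals, ontologies) would get its own normalizer behind a common interface, so new sources can be added without touching downstream consumers.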

We’re looking for teammates who bring:

  • 3+ years of development experience in a company/production setting
  • Experience building data pipelines from disparate sources
  • Hands-on experience building and scaling up compute clusters
  • Excitement about learning how to build and support machine learning pipelines that scale not just computationally, but in ways that are flexible, iterative, and geared for collaboration
  • A solid understanding of databases and large-scale data processing frameworks like Hadoop or Spark.  You’ve not only worked with a variety of technologies, but know how to pick the right tool for the job
  • A unique combination of creative and analytic skills, capable of designing a system that pulls together, trains, and tests dozens of data sources under a unified ontology

Bonus points if you have experience with:

  • Developing systems to do or support machine learning, including experience working with NLP toolkits like Stanford CoreNLP, OpenNLP, and/or Python’s NLTK
  • Expertise with wrangling healthcare data and/or HIPAA
  • Experience with managing large-scale data labelling and acquisition, through tools such as Amazon Mechanical Turk or DeepDive

VP, Data Science & Engineering
machine-learning hadoop data science c big data Feb 10

The Wikimedia Foundation is seeking an experienced executive to serve as Vice President of Data Science & Engineering for our Technology department. At the Wikimedia Foundation, we operate the world’s largest collaborative project: a top ten website, reaching a billion people globally every month, while incorporating the values of privacy, transparency and community that are so important to our users. 

Reporting to the Chief Technology Officer, the VP of Data Science & Engineering is a key member of the Foundation’s leadership team and an active participant in the strategic decision making framing the work of the technology department, the Wikimedia Foundation and the Wikimedia movement.

This role is responsible for planning and executing an integrated multi-year data science and engineering strategy spanning our work in artificial intelligence, machine learning, search, natural language processing and analytics. This strategy will interlock with and support the larger organization and movement strategy in service of our vision of enabling every human being to share freely in the sum of human knowledge.

Working closely with other Technology and Product teams, as well as our community of contributors and readers, you’ll lead a team of dedicated directors, engineering managers, software engineers, data engineers, and data scientists who are shaping the next generation of data usage, analysis and access across all Wikimedia projects.

Some examples of our teams’ work in the realm of data science and data engineering can be found on our blog, including deeper info on our work in improving edit workflows with machine learning, our use of Kafka and Hadoop, or our analysis of people falling into the “Wikipedia rabbit hole”. Lately we have been thinking about how to best identify traffic anomalies that might indicate outages or, possibly, censorship.

You are responsible for:

  • Leading the technical and engineering efforts of a global team of engineers, data scientists and managers focused on our efforts in productionizing artificial intelligence, data science, analytics, machine learning and natural language processing models as well as data operations. These efforts currently encompass three teams: Search Platform, Analytics and Scoring Platform (Machine Learning Engineering)
  • Working closely with our Research, Architecture, Security, Site Reliability and Platform teams to define our next generation of data architecture, search, machine learning and analytics infrastructure
  • Creating scalable engineering management processes and prioritization rubrics
  • Developing the strategy, plan, vision, and cross-functional teams to create a holistic data strategy for the Wikimedia Foundation, taking into account our fundamental values of transparency, privacy, and collaboration, in partnership with internal and external stakeholders and community members
  • Ensuring data is reliable, consistent, secure, and available in a timely manner for external and internal stakeholders, in accordance with our privacy policy
  • Negotiating shared goals, roadmaps and dependencies with finance, product, legal and communication departments
  • Contributing to our culture by managing, coaching and developing our engineering and data teams
  • Illustrating your success in making your mark on the world by collaboratively measuring and adapting our data strategy within the technology department and the broader Foundation
  • Managing up to 5 direct reports with a total team size of 20

Skills and Experience:

  • Deep experience leading data science, machine learning, search or data engineering teams, with the ability to separate the hype in the artificial intelligence space from the reality of delivering production-ready data systems
  • 5+ years of senior engineering leadership experience
  • Demonstrated ability to balance competing interests in a complex technical and social environment
  • Proven success at all stages of the engineering process and product lifecycle, leading to significant, measurable impact.
  • Previous hands-on experience in production big data and machine learning environments at scale
  • Experience building and supporting diverse, international and distributed teams
  • Outstanding oral and written English communication skills

Qualities that are important to us:

  • You take a solutions-focused approach to challenging data and technical problems
  • A passion for people development, team culture and the management of ideas
  • You have a desire to show the world how data can be done while honoring the user’s right to privacy

Additionally, we’d love it if you have:

  • Experience with modern machine learning, search and natural language processing platforms
  • A track record of open source participation
  • Fluency or familiarity with languages in addition to English
  • Time spent living or working outside your country of origin
  • Experience as a member of a volunteer community

The Wikimedia Foundation is... 

...the nonprofit organization that hosts and operates Wikipedia and the other Wikimedia free knowledge projects. Our vision is a world in which every single human can freely share in the sum of all knowledge. We believe that everyone has the potential to contribute something to our shared knowledge, and that everyone should be able to access that knowledge, free of interference. We host the Wikimedia projects, build software experiences for reading, contributing, and sharing Wikimedia content, support the volunteer communities and partners who make Wikimedia possible, and advocate for policies that enable Wikimedia and free knowledge to thrive. The Wikimedia Foundation is a charitable, not-for-profit organization that relies on donations. We receive financial support from millions of individuals around the world, with an average donation of about $15. We also receive donations through institutional grants and gifts. The Wikimedia Foundation is a United States 501(c)(3) tax-exempt organization with offices in San Francisco, California, USA.

The Wikimedia Foundation is an equal opportunity employer, and we encourage people with a diverse range of backgrounds to apply.

U.S. Benefits & Perks*

  • Fully paid medical, dental and vision coverage for employees and their eligible families (yes, fully paid premiums!)
  • The Wellness Program provides reimbursement for mind, body and soul activities such as fitness memberships, baby sitting, continuing education and much more
  • The 401(k) retirement plan offers matched contributions at 4% of annual salary
  • Flexible and generous time off - vacation, sick and volunteer days, plus 19 paid holidays - including the last week of the year.
  • Family friendly! 100% paid new parent leave for seven weeks plus an additional five weeks for pregnancy, flexible options to phase back in after leave, fully equipped lactation room.
  • For those emergency moments - long and short term disability, life insurance (2x salary) and an employee assistance program
  • Pre-tax savings plans for health care, child care, elder care, public transportation and parking expenses
  • Telecommuting and flexible work schedules available
  • Appropriate fuel for thinking and coding (aka, a pantry full of treats) and monthly massages to help staff relax
  • Great colleagues - diverse staff and contractors speaking dozens of languages from around the world, fantastic intellectual discourse, mission-driven and intensely passionate people

*Eligible non-US benefits are specific to location and dependent on employer of record


This Year

Senior Data Scientist
python aws tensorflow pytorch scikit-learn senior Jan 17

XOi Technologies is changing the way field service companies capture data, create efficiencies, collaborate with their technicians, and drive additional revenue through the use of the XOi Vision platform. Our cloud-based mobile application is powered by a robust set of machine learning capabilities to drive behaviors and create a seamless experience for our users.

We are a group of talented and passionate engineers and data scientists working together to discover and provide valuable insights for our customers. We leverage state-of-the-art machine learning techniques to provide our users with these unique insights, best practices, and solutions to the challenges they face in their workplace. Problems and solutions typically center around aspects of the Vision platform such as image recognition, natural language processing, and content recommendation.

As a Senior Data Scientist, you will build machine learning products to help automate workflows and provide valuable assistance to our customers. You’ll have access to the right tools for the job, large amounts of quality data, and support from leadership that understands the full data science lifecycle. You’ll build models using technologies such as Python, Tensorflow, and Docker.

Responsibilities:

  • Interpret and understand business needs/market opportunities, and translate those into production analytics.
  • Select appropriate technologies and algorithms for given use cases.
  • Work directly with product managers and engineering teams to tightly integrate new analytic capabilities.
  • Prepare reports, visualizations, and other documentation on the status, operation and maintenance of the analytics you create.
  • Stay current on relevant machine learning and data science practices, and apply those to existing problem sets.

Requirements: 

  • Excellent understanding of machine learning algorithms, processes, tools, and platforms including: CNN, RNN, NLP, Tensorflow, PyTorch, etc.
  • Proficient with the following (or comparable): Linux, Python, scikit-learn, NumPy, pandas, spaCy.
  • Applied experience with machine learning on large datasets/sparse data with structured and unstructured data.
  • Experience with deep learning techniques and their optimizations for efficient implementation.
  • Great communication skills, with the ability to explain predictive analytics to non-technical audiences.
  • Bachelor’s in Math, Engineering, or Computer Science (or technical degree with commensurate industry experience).
  • 3+ years of relevant work experience in data science/machine learning.

Nice to Have:

  • AWS services such as Lambda, AppSync, S3, and DynamoDB
  • DevOps experience with continuous integration/continuous deployment.
  • Experience in software engineering best practices, principles, and code design concepts.
  • Speech-to-text or OCR expertise.

You Are Someone Who:  

  • Has a passion for code quality and craftsmanship.
  • Views your profession as your craft and continuously pursues excellence in your work.
  • Thrives in a fast-paced, high-growth startup environment.
  • Collaborates effectively across various teams, coordinating regularly to set and manage expectations.

You’ll experience:  

  • Being a key part of a fast-growing software company where you can make a difference.
  • Comprehensive insurance plans.
  • Monthly wellness allowance.
  • Flexible paid time off & paid volunteer time.
  • Learning & development.
  • Working in the historic and centrally located Marathon Village in Nashville, TN.
  • Participating in team outings, events, and general fun! 
  • Helping to change an industry by serving the men and women that make our world turn.
Data Scientist
python sql spacy powerbi github data science Jan 07

Position Overview:

Our tech team is looking for a data scientist with excellent communication skills and demonstrated experience writing idiomatic Python code. You’re comfortable fielding a question from a non-technical stakeholder about our dataset and then putting together a data visualization with the answer. You’re also ready to troubleshoot a bug in one of our existing ETL scripts and make a pull request with a detailed write-up of the fix. We use Google BigQuery, PowerBI, spaCy, pandas, Airflow, and Docker.

The right candidate has experience with the Python data science stack as well as one or more BI tools such as Tableau or PowerBI, and is able to juggle competing priorities with finesse. Working in a fast-paced, flexible start-up environment, we welcome your adaptability, curiosity, passion, grit, and creativity to contribute to our cutting-edge research of this growing, fascinating industry.

Key Responsibilities:

  • Query and transform data with Standard SQL and pandas
  • Build BI reports to answer questions of our data
  • Work with our data engineering team to munge large datasets using our existing data pipelines for our existing BI reports
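The first responsibility above, query in SQL then transform in Python, can be sketched roughly as follows. This uses the standard library’s sqlite3 as a stand-in for Google BigQuery, and the table and column names are hypothetical.

```python
# Illustrative sketch of the query-then-transform pattern: aggregate in
# SQL, then reshape the result in Python for a downstream BI report.
# sqlite3 stands in here for a BigQuery client; names are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE responses (topic TEXT, score REAL)")
conn.executemany(
    "INSERT INTO responses VALUES (?, ?)",
    [("pricing", 4.0), ("pricing", 2.0), ("support", 5.0)],
)

# Push the aggregation into SQL; keep only the reshaping in Python.
rows = conn.execute(
    "SELECT topic, AVG(score) FROM responses GROUP BY topic ORDER BY topic"
).fetchall()
report = {topic: round(avg, 2) for topic, avg in rows}
```

With BigQuery the same shape applies: the heavy lifting stays in Standard SQL, and pandas (here a plain dict comprehension) handles the final reshaping for the report.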

Qualifications & Skills:

REQUIRED:

  • 1-3 years of experience working full-time with Python for data science; we use pandas, scikit-learn, and numpy
  • Intermediate-to-expert level SQL experience; we use Standard SQL
  • Experience with one or more natural language processing frameworks; we use spaCy.
  • Excellent communication skills and demonstrated ability to collaborate with non-technical stakeholders to create compelling answers to tough data questions
  • Intermediate-to-expert level skills with one or more interactive business intelligence tools like PowerBI or Tableau

PREFERRED:

  • Experience with CI/CD tools like CircleCI; we use GitHub Actions
  • Experience with Docker
  • Experience with Airflow

BENEFITS:

  • Choose your own laptop
  • Health Insurance
  • 401K
Data Engineer
python sql google-bigquery pandas airflow data science Jan 06

Position Overview:

The ideal candidate is an experienced data engineer. You will help us develop and maintain our data pipelines, built with Python, Standard SQL, pandas, and Airflow within Google Cloud Platform. We are in a transitional phase of refactoring our legacy Python data transformation scripts into iterable Airflow DAGs and developing CI/CD processes around these data transformations. If that sounds exciting to you, you’ll love this job. You will be expected to build scalable data ingress and egress pipelines across data storage products, deploy new ETL pipelines, and diagnose, troubleshoot and improve existing data architecture. Working in a fast-paced, flexible start-up environment, we welcome your adaptability, curiosity, passion, grit, and creativity to contribute to our cutting-edge research of this growing, fascinating industry.

Key Responsibilities:

  • Build and maintain ETL processes with our stack: Airflow, Standard SQL, pandas, spaCy, and Google Cloud. 
  • Write efficient, scalable code to munge, clean, and derive intelligence from our data
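As a loose sketch of the second responsibility, the function below shows a munge-and-clean step written as a plain callable, the shape of function an Airflow PythonOperator task would typically wrap. Field names and data are hypothetical, not this company’s pipeline.

```python
# Illustrative sketch: a cleaning step factored as a plain callable so it
# can be unit-tested on its own and then wrapped as an Airflow task.
def clean_rows(rows: list[dict]) -> list[dict]:
    """Munge raw rows: strip stray whitespace, drop rows missing an id."""
    cleaned = []
    for row in rows:
        if not row.get("id"):
            continue  # unusable without a key; skip rather than guess
        cleaned.append(
            {k: v.strip() if isinstance(v, str) else v for k, v in row.items()}
        )
    return cleaned

raw = [{"id": "a1", "text": " hello "}, {"id": "", "text": "orphan"}]
result = clean_rows(raw)
```

Keeping the transform logic free of Airflow imports is a common pattern: the DAG file only wires callables like this into operators, which makes the refactor from legacy scripts incremental.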

Qualifications & Skills: 

REQUIRED:

  • 1-3 years experience in a data-oriented Python role, including use of:
    • Google Cloud Platform (GCE, GBQ, Cloud Composer, GKE)
    • Airflow
    • CI/CD like: GitHub Actions or CircleCI 
    • Docker
  • Fluency in the core tenets of the Python data science stack: SQL, pandas, scikit-learn, etc.
  • Familiarity with modern NLP systems and processes, ideally spaCy

PREFERRED:

  • Demonstrated ability to collaborate effectively with non-technical stakeholders
  • Experience scaling data processes with Kubernetes 
  • Experience with survey and/or social media data
  • Experience preparing data for one or more interactive data visualization tools like PowerBI or Tableau

BENEFITS:

  • Choose your own laptop
  • Health Insurance
  • 401K
Senior Python Engineer
python aws-lambda graphql rest aws senior Dec 15 2019

The product engineering team is responsible for the creation and quality of the XOi Vision platform. This platform serves thousands of Field Technicians across the country.  We’re looking for a Senior Python Engineer (Analytics) to play a key role in building and maintaining the backend code and services that support our mobile and web applications. 

We are a talented and passionate group of engineers and data scientists working together to discover and provide valuable insights for our customers. We leverage state-of-the-art machine learning techniques to provide our users with these unique insights, best practices, and assistance with the problems they face in their workplace. Problems and solutions typically center around aspects of the XOi platform such as image recognition, natural language processing, and content recommendation.

As a senior-level engineer on the analytics team, you will build applications and data pipelines to curate and organize XOi’s data. Data is our most valued asset, and in this position, you will be a key contributor to the team. You’ll build applications using technologies such as Python (AWS Lambda), Docker, GraphQL and DynamoDB.

Responsibilities:

  • Build effective, well-tested services and APIs
  • Build data pipelines and web scrapers
  • Build containerized services for machine learning models
  • Assist in gathering and implementing requirements for data applications
  • Take ownership for application components and ensure quality throughout the development process
  • Build and maintain CI / CD pipelines.
  • Create reports, dashboards and documentation on the status, operation, and maintenance of the applications you build
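To make the first responsibility above concrete, here is a minimal sketch of a Python (AWS Lambda) handler of the kind that backs an API endpoint. The payload shape and greeting logic are hypothetical; the point is the handler signature and the locally testable structure.

```python
# Illustrative sketch: a minimal AWS Lambda handler behind an API endpoint.
# The handler is a plain function, so it can be invoked directly in tests
# without deploying to AWS.
import json

def handler(event, context):
    """Echo a greeting for the name supplied in the JSON request body."""
    body = json.loads(event.get("body") or "{}")
    name = body.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Local invocation, as a unit test would do:
response = handler({"body": json.dumps({"name": "tech"})}, None)
```

Keeping the business logic inside an ordinary function like this (rather than entangled with AWS-specific plumbing) is what makes the "well-tested" part of the bullet achievable in CI.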

Requirements: 

  • Bachelor’s degree in Computer Science or equivalent field (or 6+ years of working experience).
  • 3+ years of demonstrated experience building and deploying applications or services in a cloud infrastructure environment.
  • Expertise with functional or object-oriented program design patterns with a demonstrated ability to choose between and synthesize them.
  • Experience with both statically and dynamically typed programming languages and a solid understanding of the strengths and weaknesses of both paradigms.
  • Good understanding of REST-based services and service-based architecture.
  • Experience in developing best practices, software principles, and code design concepts.
  • Experience in developing and supporting rapid iterations of software in an Agile context.

Nice to Have:

  • Experience with CI/CD development and organizational practices
  • AWS services such as Lambda, AppSync, S3, and DynamoDB
  • Experience deploying machine learning models with Tensorflow or similar deep learning frameworks
  • Experience with web-development frameworks and visualization libraries such as React and D3.js

You Are Someone Who:  

  • Has a passion for code quality and craftsmanship.
  • Views your profession as your craft and continuously pursues excellence in your work.
  • Thrives in a fast-paced, high-growth startup environment.
  • Collaborates effectively across various teams, coordinating regularly to set and manage expectations.

You’ll experience:  

  • Being a key part of a fast-growing software company where you can make a difference.
  • Comprehensive insurance plans.
  • Monthly wellness allowance.
  • Flexible paid time off & paid volunteer time.
  • Learning & development.
  • Working in the historic and centrally located Marathon Village in Nashville, TN.
  • Participating in team outings, events, and general fun! 
  • Helping to change an industry by serving the men and women that make our world turn.
Data Scientist
python machine learning computer vision mongodb healthcare aws Dec 12 2019
We are looking for a talented Data Scientist to join our team at Prominent Edge. We are a small company of 24+ developers and designers who put themselves in the shoes of our customers and make sure we deliver strong solutions. Our projects and the needs of our customers vary greatly; therefore, we always choose the technology stack and approach that best suits the particular problem and the goals of our customers. As a result, we want developers who do high-quality work, stay current, and are up for learning and applying new technologies when appropriate. We want engineers who have an in-depth knowledge of Amazon Web Services and are up for using other infrastructures when needed. We understand that for our team to perform at its best, everyone needs to work on tasks that they enjoy. Most of our projects are web applications, which often have a geospatial aspect to them. We also really take care of our employees as demonstrated in our exceptional benefits package. Check out our website at http://prominentedge.com for more information and apply through http://prominentedge.com/careers.

Ideal candidates are those who can find value in data. Such a person proactively fetches information from various sources and analyzes it for a better understanding of the problem, and may even build AI/ML tools to generate insights. The ideal candidate is adept at using large datasets to find the right needle in a pile of needles and uses models to test the effectiveness of different courses of action. Candidates must have strong experience using a variety of data mining/data analysis methods, using a variety of data tools, building and implementing models, using/creating algorithms and creating/running simulations. They must have a proven ability to drive results with their data-based insights. They must be comfortable working with a wide range of stakeholders and functional teams. The right candidate will have a passion for discovering solutions hidden in large datasets and working with stakeholders to improve mission outcomes. A successful candidate will have experience in many (if not all) of the following technical competencies: statistics and machine learning, coding languages, databases, and reporting technologies.

Required Skills

  • Bachelor's Degree in Computer Science, Information Systems, Engineering or other related scientific or technical discipline.
  • Proficient in data preparation, exploration, and statistical analysis
  • Proficient in a programming language such as Python, R, Julia, or JavaScript
  • Experience with batch scripting and data processing
  • Experience with Machine Learning libraries and frameworks such as TensorFlow/Pytorch or Bayesian Analysis using SAS/R Studio.
  • Experience with databases such as Postgres, Elasticsearch, MongoDB, or Redis

Desired Skills

  • Master's degree in Computer Science or related technical discipline.
  • Experience with natural language processing, computer vision, or deep learning
  • Experience working with geospatial data
  • Experience with statistical techniques
  • Experience as either back-end or front-end/visualization developer
  • Experience with visualization and reporting technologies such as Kibana or Tableau

W2 Benefits

  • Not only do you get to join our team of awesome playful ninjas, we also have great benefits:
  • Six weeks paid time off per year (PTO+Holidays).
  • Six percent 401k matching, vested immediately.
  • Free PPO/POS healthcare for the entire family.
  • We pay you for every hour you work. Need something extra? Give yourself a raise by doing more hours when you can.
  • Want to take time off without using vacation time? Shuffle your hours around in any pay period.
  • Want a new MacBook Pro laptop? We'll get you one. If you like your MacBook Pro, we’ll buy you the new version whenever you want.
  • Want some training or to travel to a conference that is relevant to your job? We offer that too!
  • This organization participates in E-Verify.
Sr. Data Scientist
python machine learning design nlp Dec 07 2019
THE JOURNEY TO YOUR DREAM JOB COULD BE JUST A CLICK AWAY…
In 2012, Tuft & Needle (tn.com) revolutionized the mattress space by turning the focus to the customer with always-honest pricing, an insistence on high-quality products, and world-class customer experience. We started our journey with two software engineers and a dream and today we have grown to a team of more than 175 talented people, working each day to bring the world premium sleep products at an honest cost.

As a Data Scientist, you'll be an important part of the company's decision-making process. You will help us understand how things are related to each other, which approaches are working, and which aren't. You'll also help us maintain our data infrastructure. This includes our reporting and data management, as well as automated statistical and machine learning tools.

Together, we are radically reshaping how we think about sleep, mattresses, and shopping - and we’re just getting started. Want to join us?

*Open to remote opportunity

RESPONSIBILITIES:

    • Write programs to automate analyses and data wrangling
    • Build machine learning models to forecast and understand customer behavior
    • Maintain and improve reporting in Looker, Metabase, and R
    • Explain analyses and discoveries with articles and presentations
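As a rough illustration of the forecasting responsibility above, the sketch below shows single-pass exponential smoothing, about the simplest baseline a demand or behavior forecast might start from. The series values and smoothing factor are made up for illustration.

```python
# Illustrative sketch: exponential smoothing as a one-step-ahead forecast
# baseline. Each observation is folded into a running "level" that more
# recent data influences most; alpha controls how fast old data decays.
def exp_smooth(series: list[float], alpha: float = 0.5) -> float:
    """Return the smoothed level after folding in each observation."""
    level = series[0]
    for obs in series[1:]:
        level = alpha * obs + (1 - alpha) * level
    return level

forecast = exp_smooth([10.0, 12.0, 11.0], alpha=0.5)
```

In practice a baseline like this is the benchmark that any fancier model (in R or Python) has to beat before it earns a place in production reporting.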


REQUIREMENTS:

    • Strong knowledge of statistics and inference
    • 2+ years writing and maintaining code
    • 2+ years working with SQL
    • Experience communicating statistical concepts to a broad audience


PREFERRED EXPERIENCE:

    • Programming in R and/or Python
    • Managing and organizing a large codebase
    • Experience with Bayesian Methods
    • Deep experience in some part of statistics (Ex: time series analysis, experimental design, multivariate analysis, natural language processing, etc.)
    • Interest in functional style programming
    • Interest in causal inference


YOU CAN SLEEP BETTER WHEN YOU WORK AT T&N
Our people – You will be working alongside some of the most talented, supportive, savvy individuals out there… people we are so proud to work with.  Together, we are shaking things up in the mattress industry and delivering an experience for clients that they would never expect.

Our product – Each team member receives a great bundle of products for themselves.  You will too if you join the team!  Your friends and family will also have access to a great product discount.

Our benefits - We offer comprehensive health benefits for you, eligible partners and dependents, paid maternity & paternity leave, 401k with a match, a generous vacation plan, and so much more. 

Tuft & Needle is proud to be an equal opportunity employer. We will not discriminate against any applicant or employee on the basis of age, race, color, creed, religion, sex, sexual orientation, gender, gender identity or expression, medical condition, national origin, ancestry, citizenship, marital status or civil partnership/union status, physical or mental disability, pregnancy, childbirth, genetic information, military and veteran status, or any other basis prohibited by applicable federal, state or local law.

Your experience is important to us. If you have any questions with your application, please contact our Candidate Experience Team at talent@tuftandneedle.com
Senior NodeJS/React Developer
node-js javascript senior html css machine learning Dec 07 2019

*This position can be remote, but US based candidates only.

Dealer Inspire, a CARS Inc. company, is hiring for our Conversations Team!

Conversations is Dealer Inspire's messaging platform that connects today’s car shoppers with dealerships wherever, whenever, and however they want to shop. Fast, mobile, and fully integrated with text messaging and Facebook Messenger™, Conversations uses A.I. technology and managed chat support to instantly respond to all incoming chats 24/7.

Essential Duties & Responsibilities (including, but not limited to):

  • Development of new features, including adding functionality to our AI chat bot, Ana.
  • Writing high quality, clean code that is paired with automated unit and integration tests.
  • Taking new features through the entire development lifecycle, working in conjunction with our product owner to define the feature, develop it, and test it.
  • Refactoring non-ideal portions of both our Node API and our React apps.
  • Mentoring developers in your area of expertise.

Required Skills & Experience:

  • 3+ years of professional experience working with NodeJS, including the Express framework.
  • 2+ years of professional experience with front-end technologies, including React, Redux, and Webpack.
  • Mastery of JavaScript, HTML, and CSS/SASS/StyledComponents.
  • 5+ years of professional experience working with SQL databases; the ability to write efficient queries and benchmark/profile them.
  • Strong understanding of asynchronous programming.
  • Experience with performance debugging and benchmarking.
  • Experience with testing frameworks such as Karma, Mocha, or Jest.
  • Experience with Git version control.
  • Understanding of CI/CD.
  • Strong attention to design detail (UI/UX).
  • Strong verbal & written communication skills.
  • Strong documentation skills.
  • Experience working remotely & as part of a distributed engineering team.

Highly Desired:

  • AWS Cloud Architecture
  • Typescript
  • Understanding of NLP and Machine Learning
  • Mobile-first, responsive web design
  • MySQL
  • Algolia
  • Some experience with PHP

About Dealer Inspire: 

Dealer Inspire (DI) is a leading disruptor in the automotive industry through our innovative culture, legendary service, and kick-ass website, technology, and marketing solutions. Our mission is to future-proof local dealerships by building the essential, mobile-first platform that makes automotive retail faster, easier, and smarter for both shoppers and dealers. Headquartered in Naperville, IL, our team of nearly 600 work friends are spread across the United States and Canada, pushing the boundaries and getting **** done every day, together.

DI offers an inclusive environment that celebrates collaboration and thinking differently to solve the challenges our clients face. Our shared success continues to lead to rapid growth and positive change, which opens up opportunities to advance your career to the next level by working with passionate, creative people across skill sets. If you want to be challenged, learn every day, and work as a team with some of the best in the industry, we want to meet you. Apply today.

Want to learn more about who we are? Check us out here!

Perks:

  • Health Insurance with BCBS, Delta Dental (Orthodontics coverage available), and Eye Med Vision
  • 401k plan with company match
  • Tuition Reimbursement
  • 13 days paid time off, parental leave, and selected paid holidays
  • Life and Disability Insurance
  • Subsidized gym membership
  • Subsidized internet access for your home
  • Peer-to-Peer Bonus program

*Not a complete, detailed list. Benefits have terms and requirements before employees are eligible.

Solutions Engineer
Rasa  
docker python linux nlp machine learning backend Nov 02 2019

This role is US Remote/San Francisco based.

ABOUT US

At Rasa, we're building the standard infrastructure for conversational AI. With over half a million downloads since launch, our open source tools are loved by developers worldwide, and Rasa runs in production everywhere from startups to Fortune 500s. Our friendly community is growing fast, with developers from all over the world learning from each other and working together to make text- and voice-based AI assistants better.

Rasa's machine learning-based dialogue tools allow developers to automate contextual conversations. What are contextual conversations? Real back-and-forth dialogue that is handled seamlessly. Taking AI assistants beyond fixed question/answer pairs creates exciting new use cases for people and businesses. The tip of the iceberg includes automation of sales & marketing, internal processes, and advanced customer service that integrates into existing backend systems. With Rasa, companies control their own destiny, investing in AI that they own and ship instead of relying on third parties.

Rasa has raised $14 million in total funding from Accel, Basis Set Ventures and open source founders such as Ross Mason (MuleSoft), Mitchell Hashimoto (Hashicorp) and Florian Leibert (Mesosphere). The company is headquartered in San Francisco, CA, with R&D offices in Berlin, Germany and was founded in 2016.

Rasa is an equal opportunity employer. We are still a small team and are committed to growing in an inclusive manner. We want to augment our team with talented, compassionate people irrespective of race, color, religion, national origin, sex, physical or mental disability, or age.

SUMMARY 

We are looking for enthusiastic solutions engineers to help our customers with the use of our product, from debugging machine learning models to troubleshooting their Docker setup. Doing this well is core to the success of the company.

ABOUT THIS ROLE

Thousands of developers worldwide build voice and chat systems with Rasa. As a Solutions Engineer, you will work directly with developers and product managers at companies using Rasa to build conversational assistants. You'll support them in development, building models, testing, and resolving issues.

You will collaborate closely with Rasa’s product engineers to improve our product, including API design, docs, and usability. 

Please keep in mind that we are describing the background we imagine would best fit the role. Even if you don’t meet all the requirements, yet you are confident that you are up for the task, we absolutely want to get to know you!

ABOUT YOU 

You are excited about conversational software and letting people interact with machines through text and speech. You have experience programming in a couple of languages and a good understanding of machine learning basics. You're good at finding the root cause of a bug, and can find a solution or workaround when the obvious fixes haven't worked.

You want to gain more experience with natural language processing, applied machine learning, and putting AI systems into production.

Requirements:

  • Degree in computer science or a related field, or at least 2 years of experience developing software.
  • Familiarity with machine learning concepts
  • Experience teaching & communicating technical material
  • Practical experience applying machine learning
  • Experience supporting customers in a technical role
  • Comfortable with most of the following: Linux, Python, Docker, Kubernetes

Nice to have:

  • Experience applying NLP
  • Experience shipping chatbots or voice apps

THINGS YOU WILL DO 

We’re a startup, so you’ll have to be comfortable rolling up your sleeves and doing whatever is required to support our mission. However, you can definitely expect to:

  • Help our customers' engineers build ML-based bots and assistants with Rasa
  • Help them debug their installations of our product
  • Be the voice of our customers in product discussions, using what you’ve learned from helping them succeed to make our products more usable and valuable
  • Report back when a customer encounters shortcomings in our products and discuss how to improve them with our product and applied research teams
  • Run a workshop on best practices with the Rasa stack
  • Write a blog post explaining some aspect of Rasa’s code in detail
  • Collaborate with product teams to make our open source libraries easier to use

Senior Software Engineer
java senior machine learning nlp saas qa Oct 25 2019

Headquartered in the Boston area, Interactions, LLC is the world’s largest independent AI company. We operate at the intersection of customer experience and AI – two of today’s most innovative and dynamic industries. Leading global brands in a variety of industries rely on Interactions’ conversational AI technology to communicate with their customers every day. 

At Interactions we are committed to transforming customer experience and passionate about the professional and personal development of our talented and enthusiastic team. We endeavor to create opportunities that advance the skills, interests, careers and lives of our employees.  Come join our growing team!

Position Overview:

The Sr. Software Engineer plays a key role in designing and implementing components and features of the Core platform. The Sr. Software Engineer works in close collaboration with QA, DevOps, Product Owners, Professional Services, and in some cases third party software vendors.

Essential Job Functions:

  • Contribute to the creation of a massively scalable, highly available SaaS platform.
  • Design and develop high-volume, high-performance, high-availability, concurrent Java applications using proven Java frameworks and technologies.
  • Troubleshoot and fix issues.
  • Communicate effectively within and outside the team.
  • Bring new technical ideas, in keeping with the latest industry trends, to fruition by prototyping them and then incorporating them into the platform.
  • Participate in daily Scrum activities to closely monitor work against schedules and deliverables, providing progress updates and reporting any issues or technical difficulties.

Preparation, Knowledge, Skills and Abilities:

Required:

  • Bachelor’s Degree in Computer Science or a similar field.
  • Six to nine years of relevant experience, including hands-on coding in Java and other JVM-based languages.
  • Experience with highly concurrent and multi-threaded systems.
  • Minimum of three years working on mission-critical, 24x7, high-performance, scalable systems.
  • Minimum of three years working on JEE, asynchronous messaging-based technologies, and distributed systems.
  • Minimum of three years working with SQL, databases, and other persistence technologies using Java.
  • Experience with all phases of the Software Development Lifecycle, ranging from architecture and design to implementation and testing.
  • Experience writing design documentation, coding, and writing unit and integration tests.
  • Prior experience with architecture and design of components and features that are part of large enterprise architectures.
  • Experience profiling and troubleshooting large-scale, concurrent, multi-threaded JVM-based applications (Java).
  • Strong testing, debugging, and problem-solving skills.

Pluses:

  • Master’s Degree in Computer Science or a similar field.
  • Prior experience with NLP/voice technologies.
  • Prior experience with AI and machine learning technologies.
  • Enjoyment of coding and solving challenging, complex technical problems.

Data Engineer
python aws machine learning nlp cloud devops Oct 24 2019

This position is contract to hire.

Blue Orange Digital is looking for a Machine Learning Engineer to join our awesome multi-disciplinary team. We build data analytics platforms for our clients that incorporate machine learning to solve business problems. Blue Orange Digital works across multiple industries, so this role provides an exciting set of experiences across a wide range of domains.

Your primary focus will be architecting and developing systems that include data ingestion, data processing, algorithm development, and ML model development & deployment. Major technologies involved include AWS, Python 3, Spark, Pandas, and TensorFlow. The ideal candidate for this position has a mixture of experience in Machine Learning model development, Cloud Engineering, and Data Engineering.

Core Responsibilities & Skills

  • Architecting, building and maintaining modern, scalable data architectures on AWS
  • Solving problems using Machine Learning and delivering ML solutions all the way to production
  • Building resilient ETL pipelines using workflow orchestration tools such as Airflow, Prefect, Luigi
  • Data exploration, analysis, and reporting with an eye towards developing a narrative using Notebooks.
  • Demonstrable experience in one or more of the following specializations: NLP, pattern detection, anomaly detection, predictive modeling, and optimization

Qualifications

  • BA/BS degree in Computer Science or a related technical field, or equivalent practical experience.
  • Advanced experience in Python with an excellent understanding of computer science fundamentals, data structures, and algorithms
  • Experience in Amazon AWS, DevOps and Automation
  • Experience with distributed machine learning using tools like Dask, Tensorflow, Kubeflow
  • Enjoys collaborating with other engineers on architecture and sharing designs with the team 
  • Interacts with others using sound judgment, good humor, and consistent fairness in a fast-paced environment
Data Scientist
python aws machine learning nlp cloud devops Oct 19 2019

This role is contract to hire. 

Blue Orange Digital is looking for a Machine Learning Engineer to join our awesome multi-disciplinary team. We build data analytics platforms for our clients that incorporate machine learning to solve business problems. Blue Orange Digital works across multiple industries, so this role provides an exciting set of experiences across a wide range of domains.

Your primary focus will be architecting and developing systems that include data ingestion, data processing, algorithm development, and ML model development & deployment. Major technologies involved include AWS, Python 3, Spark, Pandas, and TensorFlow. The ideal candidate for this position has a mixture of experience in Machine Learning model development, Cloud Engineering, and Data Engineering.

Core Responsibilities & Skills

  • Architecting, building and maintaining modern, scalable data architectures on AWS
  • Solving problems using Machine Learning and delivering ML solutions all the way to production
  • Building resilient ETL pipelines using workflow orchestration tools such as Airflow, Prefect, Luigi
  • Data exploration, analysis, and reporting with an eye towards developing a narrative using Notebooks.
  • Demonstrable experience in one or more of the following specializations: NLP, pattern detection, anomaly detection, predictive modeling, and optimization

Qualifications

  • BA/BS degree in Computer Science or a related technical field, or equivalent practical experience.
  • Advanced experience in Python with an excellent understanding of computer science fundamentals, data structures, and algorithms
  • Experience in Amazon AWS, DevOps and Automation
  • Experience with distributed machine learning using tools like Dask, Tensorflow, Kubeflow
  • Enjoys collaborating with other engineers on architecture and sharing designs with the team 
  • Interacts with others using sound judgment, good humor, and consistent fairness in a fast-paced environment
Engineering Manager, Machine Learning
python php java machine learning javascript c Oct 16 2019

Summary

The Wikimedia Foundation is growing its machine learning efforts. This is an opportunity to be part of the team that builds and maintains machine learning technologies to empower millions of users – readers, contributors, and donors – who contribute to Wikipedia and its sister projects on a daily basis. We address process inefficiencies with machine learning technologies, we design and test new technology, we produce empirical insights, and we publish and present research on the intersection of technology and culture. We are strongly committed to principles of transparency, privacy, and collaboration. We use free and open source technology, and we collaborate with external researchers and our volunteer community.

We are looking for an experienced Engineering Manager to help build features that enable our communities to achieve our Vision: a world in which every single human being can freely share in the sum of all knowledge. As an Engineering Manager, you will support engineers building features, products, and services used by hundreds of millions of people around the world. This is an opportunity to do good while improving one of the best known sites in the world.

We’d like you to do these things:   

  • Partner closely with other teams and departments across the Wikimedia Foundation to define and experiment with machine learning products. These could be brand new feature offerings in Wikipedia or augmentation of existing workflows.
  • Review and advise on code changes made by the team.
  • Represent team members within the organization and Wikimedia community.
  • Support and coach your team members in the development of their career paths.
  • Recruit and hire new team members.

We’d like you to have these skills:

  • Multiple years of experience in leading software engineering teams and managing complex projects.
  • Practical experience with machine learning, natural language processing or information retrieval in products that have been launched to production.
  • Excellent analytical and problem solving skills. Familiarity with statistics.
  • Significant experience working with data infrastructure and distributed systems at scale.
  • Experience with both scripting and compiled languages in a Linux/Unix server environment, including some of: Python, PHP, Java, JavaScript, C, Scala
  • Excellent verbal and written communication skills
  • BS in Computer Science or other relevant technical field or the equivalent in related work experience.

And it would be even more awesome if you have this:

  • Previous experience working on a large, mature, open source project
  • Experience working with a geographically distributed software engineering team
  • Experience with open source software development
  • Contributing to the Wikipedia or Wikimedia project communities

Show us your stuff! If you have any existing open source software that you or teams you have led have developed (this could be your own software or patches to other packages), please share the URLs for the source. Links to GitHub, GitLab, BitBucket, Prezi, YouTube, Medium, etc. are especially useful.



U.S. Benefits & Perks*

  • Fully paid medical, dental and vision coverage for employees and their eligible families (yes, fully paid premiums!)
  • The Wellness Program provides reimbursement for mind, body and soul activities such as fitness memberships, baby sitting, continuing education and much more
  • The 401(k) retirement plan offers matched contributions at 4% of annual salary
  • Flexible and generous time off - vacation, sick and volunteer days, plus 19 paid holidays - including the last week of the year.
  • Family friendly! 100% paid new parent leave for seven weeks plus an additional five weeks for pregnancy, flexible options to phase back in after leave, fully equipped lactation room.
  • For those emergency moments - long and short term disability, life insurance (2x salary) and an employee assistance program
  • Pre-tax savings plans for health care, child care, elder care, public transportation and parking expenses
  • Telecommuting and flexible work schedules available
  • Appropriate fuel for thinking and coding (aka, a pantry full of treats) and monthly massages to help staff relax
  • Great colleagues - diverse staff and contractors speaking dozens of languages from around the world, fantastic intellectual discourse, mission-driven and intensely passionate people

*Eligible international workers' benefits are specific to their location and dependent on their employer of record

More information

WMF
Blog
Wikimedia 2030
Wikimedia Medium Term Plan
Diversity and inclusion information for Wikimedia workers, by the numbers
Wikimania 2019
Annual Report - 2017
