Remote Scala Jobs

Last Week

Machine Learning Engineer or Data Scientist
python machine-learning nlp artificial-intelligence scala Feb 22

Builders and Fixers Wanted!

Company Description:  

Ephesoft is the leader in Context Driven Productivity solutions, helping organizations maximize productivity and fuel their journey towards the autonomous enterprise through contextual content acquisition, process enrichment and amplifying the value of enterprise data. The Ephesoft Semantik Platform turns flat data into context-rich information to fuel data scientists, business users and customers with meaningful data to automate and amplify their business processes. Thousands of customers worldwide employ Ephesoft’s platform to accelerate nearly any process and drive high value from their content. Ephesoft is headquartered in Irvine, Calif., with regional offices throughout the US, EMEA and Asia Pacific. To learn more, visit ephesoft.com.

Ready to invent the future? Ephesoft is immediately hiring a talented, driven Machine Learning Engineer or Data Scientist to play a key role in developing a high-profile AI platform in use by organizations around the world. The ideal candidate will have experience in developing scalable machine learning products for different contexts such as object detection, information retrieval, image recognition, and/or natural language processing.

In this role you will:

  • Develop and deliver CV and NLP systems to bring structure and understanding to unstructured documents.
  • Innovate by designing novel solutions to emerging and extant problems within the domain of invoice processing.
  • Be part of a team of Data Scientists, Semantic Architects, and Software Developers responsible for developing AI, ML, and Cognitive Technologies while building a pipeline to continuously deliver new capabilities and value. 
  • Implement creative data-acquisition and labeling solutions that will form the foundations of new supervised ML models.
  • Communicate effectively with stakeholders to convey technical vision for the AI capabilities in our solutions. 

 You will bring to this role:

  • Love for solving problems and working in a small, agile environment.
  • Hunger for learning new skills and sharing your findings with others.
  • Solid understanding of good research principles and experimental design.
  • Passion for developing and improving CV/AI components, not just grabbing something off the shelf.
  • Excitement about developing state-of-the-art, ground-breaking technologies and owning them from imagination to production.

Qualifications:

  • 3+ years of experience developing and building AI/ML-driven solutions
  • Development experience in at least one object-oriented programming language (Java, Scala, C++), with preference given to Python experience
  • Demonstrated skills with ML, CV and NLP libraries/frameworks such as NLTK, spaCy, Scikit-Learn, OpenCV, Scikit-Image
  • Strong experience with deep learning libraries/frameworks like TensorFlow, PyTorch, or Keras
  • Proven background of designing and training machine learning models to solve real-world business problems

EEO Statement:

Ephesoft embraces diversity and equal opportunity. We are committed to building a team that represents a variety of backgrounds, perspectives, and skills. We believe the more inclusive we are, the better our company will be.


This Month

Senior Data Engineer
apache machine-learning algorithm senior python scala Feb 19

SemanticBits is looking for a talented Senior Data Engineer who is eager to apply computer science, software engineering, databases, and distributed/parallel processing frameworks to prepare big data for use by data analysts and data scientists. You will mentor junior engineers and deliver data acquisition, transformation, cleansing, conversion, compression, and loading of data into data and analytics models. You will work in partnership with data scientists and analysts to understand use cases, data needs, and outcome objectives. You are a practitioner of advanced data modeling and optimization of data and analytics solutions at scale: an expert in data management, data access (big data, data marts, etc.), programming, and data modeling, and familiar with analytic algorithms and applications (like machine learning).

Requirements

  • Bachelor’s degree in computer science (or related) and eight years of professional experience
  • Strong knowledge of computer science fundamentals: object-oriented design and programming, data structures, algorithms, databases (SQL and relational design), networking
  • Demonstrable experience engineering scalable data processing pipelines.
  • Demonstrable expertise with Python, Spark, and wrangling of various data formats - Parquet, CSV, XML, JSON.
  • Experience with the following technologies is highly desirable: Redshift (w/Spectrum), Hadoop, Apache NiFi, Airflow, Apache Kafka, Apache Superset, Flask, Node.js, Express, AWS EMR, Scala, Tableau, Looker, Dremio
  • Experience with Agile methodology, using test-driven development.
  • Excellent command of written and spoken English
  • Self-driven problem solver
Senior Data Engineer
Acast  
senior java scala big data docker cloud Feb 10
Acast is the world-leading technology platform for on-demand audio and podcasting, with offices in Stockholm, London, New York, Los Angeles, Sydney, Paris, Oslo and Berlin. We have over 150M monthly listens today, and are growing rapidly. At our core is a love of audio and the fascinating stories our podcasters tell.

We are a flat organization that supports a culture of autonomy and respect, and find those with an entrepreneurial spirit and curious mindset thrive at Acast. 

We are looking for a Senior Data Engineer to join a new purpose-driven team that will create data-driven products to help other teams provide smarter solutions to our end customers, as well as core datasets for business-critical use cases such as payouts to our podcasters. This team’s ambition is to transform our data into insights. The products you build will be used by our mobile apps, by the product suite we have for podcast creators and advertisers, and by other departments within Acast.

In this role you will work with other engineers and product owners within a cross-functional agile team.

You

  • Have 3+ years of experience building robust big data ETL pipelines within the Hadoop ecosystem: Spark, Hive, Presto, etc.
  • Are proficient in Java or Scala, and in Python
  • Have experience with the AWS cloud environment: EMR, Glue, Kinesis, Athena, DynamoDB, Lambda, Redshift, etc.
  • Have strong knowledge of SQL and NoSQL database design and modelling, and know the differences between modern big data systems and traditional data warehousing
  • Have DevOps and infrastructure-as-code experience (a plus), and are familiar with tools like Jenkins, Ansible, Docker, Kubernetes, CloudFormation, Terraform, etc.
  • Advocate agile software development practices and balance trade-offs in time, scope and quality
  • Are curious and a fast learner who can adapt quickly and enjoy a dynamic and ever-changing environment

Benefits

  • Monthly wellness allowance
  • 30 days holiday
  • Flexible working
  • Pension scheme
  • Private medical insurance
Our engineering team is mostly located in central Stockholm, but with a remote first culture we’re able to bring on people who prefer full time remote work from Sweden, Norway, UK, France and Germany.

Do you want to be part of our ongoing journey? Apply now!

Solutions Architect - Pacific Northwest
java python scala big data linux cloud Feb 07
Dubbed an "open-source unicorn" by Forbes, Confluent is the fastest-growing enterprise subscription company our investors have ever seen. And how are we growing so fast? By pioneering a new technology category with an event streaming platform, which enables companies to leverage their data as a continually updating stream of events, not as static snapshots. This innovation has led Sequoia Capital, Benchmark, and Index Ventures to recently invest a combined $125 million in our Series D financing. Our product has been adopted by Fortune 100 customers across all industries, and we’re being led by the best in the space—our founders were the original creators of Apache Kafka®. We’re looking for talented and amazing team players who want to accelerate our growth, while doing some of the best work of their careers. Join us as we build the next transformative technology platform!

We are looking for a Solutions Architect to join our Customer Success team. As a Solutions Architect (SA), you will help customers leverage streaming architectures and applications to achieve their business results. In this role, you will interact directly with customers to provide software architecture, design, and operations expertise that leverages your deep knowledge of and experience in Apache Kafka, the Confluent platform, and complementary systems such as Hadoop, Spark, Storm, relational and NoSQL databases. You will develop and advocate best practices, gather and validate critical product feedback, and help customers overcome their operational challenges.

Throughout all these interactions, you will build a strong relationship with your customer in a very short space of time, ensuring exemplary delivery standards. You will also have the opportunity to help customers build state-of-the-art streaming data infrastructure, in partnership with colleagues who are widely recognized as industry leaders, as well as to optimize and debug customers' existing deployments.

Location:
You will be based anywhere in the Pacific Northwest, with 60-70% travel expected.

Responsibilities

  • Helping a customer determine their platform and/or application strategy for moving to a more real-time, event-based business. Such engagements often involve remote preparation; presenting an onsite or remote workshop for the customer’s architects, developers, and operations teams; investigating (with Engineering and other coworkers) solutions to difficult challenges; and writing a recommendations summary doc.
  • Providing feedback to the Confluent Product and Engineering groups
  • Building tooling for another team or the wider company to help us push our technical boundaries and improve our ability to deliver consistently with high quality
  • Testing performance and functionality of new components developed by Engineering
  • Writing or editing documentation and knowledge base articles, including reference architecture materials and design patterns based on customer experiences
  • Honing your skills, building applications, or trying out new product features
  • Participating in community and industry events

Requirements

  • Deep experience designing, building, and operating in-production Big Data, stream processing, and/or enterprise data integration solutions, ideally using Apache Kafka
  • Demonstrated experience successfully managing multiple B2B infrastructure software development projects, including driving expansion, customer satisfaction, feature adoption, and retention
  • Experience operating Linux (configure, tune, and troubleshoot both RedHat and Debian-based distributions)
  • Experience using cloud providers (Amazon Web Services, Google Cloud, Microsoft Azure) for running high-throughput systems
  • Experience with Java Virtual Machine (JVM) tuning and troubleshooting
  • Experience with distributed systems (Kafka, Hadoop, Cassandra, etc.)
  • Proficiency in Java
  • Strong desire to tackle hard technical problems, and proven ability to do so with little or no direct daily supervision
  • Excellent communication skills, with an ability to clearly and concisely explain tricky issues and complex solutions
  • Ability to quickly learn new technologies
  • Ability and willingness to travel up to 50% of the time to meet with customers

Bonus Points

  • Experience helping customers build Apache Kafka solutions alongside Hadoop technologies, relational and NoSQL databases, message queues, and related products
  • Experience with Scala, Python, or Go
  • Experience working with a commercial team and demonstrated business acumen
  • Experience working in a fast-paced technology start-up
  • Experience managing projects, using any known methodology to scope, manage, and deliver on plan no matter the complexity
  • Bachelor-level degree in computer science, engineering, mathematics, or another quantitative field


Come As You Are

At Confluent, equality is a core tenet of our culture. We are committed to building an inclusive global team that represents a variety of backgrounds, perspectives, beliefs, and experiences. The more diverse we are, the richer our community and the broader our impact.
Senior Software Engineer at Jack Henry & Associates, Inc.
scala fs2 http4s microservices distributed-system senior Feb 05

At Banno, we believe that the world is a better place when community banks and credit unions exist to serve their communities. Our mission is to build the technology that gives community financial institutions the tools they need to compete against the big banks. Banno is redefining the relationship between forward-thinking financial institutions and their customers.


About You

You are infinitely curious and thrive in an environment where you are constantly learning and growing. You want to be somewhere that you are trusted and set up for success.  You want to be surrounded by other great engineers that drive you to be better every day.

Although you work in a team, you are self-motivated and able to work independently. You want to own the deliverable from start to finish by working with the product manager, defining the scope and seeing the work all the way through to deployment in production. You care deeply about your work, your team, and the end user.

Banno values trust and those with a bias towards action.  We are confident you will love it here.


What you and your team are working on

As a Senior Scala Engineer, you work with your team to provide APIs and back end services for a suite of digital banking products, including native mobile and web applications. Our APIs are first-class citizens and are consumed by both our internal teams as well as teams outside of Banno.

You are keeping our services up-to-date with the newest development and deployment practices. You are responsible for maintaining our services in a microservices environment and for implementing the tools necessary for observability and monitoring of those services.

This position can be worked 100% REMOTE from any US location.


Minimum Qualifications

  • Minimum 6 years of experience with server-side programming languages in production.

Preferred Qualifications

  • Knowledge of or experience with microservice architecture.
  • Experience with functional programming languages. 
  • Experience with the Scala libraries cats, http4s, and doobie.
  • Experience with event driven architecture using Kafka.
  • Experience with Observability and Monitoring.
Data Science Engineer
data science java python scala big data cloud Feb 05
Contrast Security is the world’s leading provider of security technology that enables software applications to protect themselves against cyber attacks. Contrast's patented deep security instrumentation is the breakthrough technology that enables highly accurate analysis and always-on protection of an entire application portfolio, without disruptive scanning or expensive security experts. Only Contrast has intelligent agents that work actively inside applications to prevent data breaches, defeat hackers and secure the entire enterprise from development, to operations, to production.

Our Application Security Research (Contrast Labs) team is hyper-focused on continuous vulnerability and threat research affecting the world's software ecosystem. As a Data Science Engineer on the Research team, you will be responsible for expanding and optimizing data from our real-time security intelligence platform, as well as for optimizing data flow and collection for cross-functional teams.

The Data Science Engineer will support our research team, software developers, database architects, marketing associates, product team, and other areas of the company on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems and products. The right candidate will be excited by the prospect of optimizing or even re-designing our company's data architecture to support our next generation of products and data initiatives. It will present an opportunity as a data scientist to also contribute original research through data correlation.

The Data Science Engineer is responsible for supporting and contributing to Contrast’s growing original security research efforts relevant to the development communities associated with the Contrast Assess, Protect, and OSS platforms. Original research will be published in company blogs, papers, and presentations.

If you're amazing but missing some of these, email us your résumé and cover letter anyway. Please include a link to your Github or BitBucket account, as well as any links to some of your projects if available.

Responsibilities

  • Conduct basic and applied research on important and challenging problems in data science as it relates to the problems Contrast is trying to solve.
  • Assemble large, complex data sets that meet functional / non-functional business requirements. 
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and big data technologies.
  • Build analytics tools that utilize the data pipeline to provide actionable insights into threats, vulnerabilities, customer usage, operational efficiency and other key business performance metrics.
  • Help define and drive data-driven research projects, either on your own or in collaboration with others on the team.
  • Engage with Contrast’s product teams and customers to promote and seek out new data science research initiatives.
  • Create data tools for analytics and research team members that assist them in building and optimizing our product into an innovative industry leader.
  • Apply advanced working knowledge of SQL and experience with relational databases, including query authoring and familiarity with a variety of database systems.
  • Develop and present content associated with the research through conference speaking and/or blogging.

About You

  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  • Strong analytic skills related to working with unstructured datasets. 
  • Experience supporting and working with cross-functional teams in a dynamic environment.
  • Experience using some of the following software/tools:
  • Big data tools: Hadoop, Spark, Kafka, etc.
  • Relational SQL and NoSQL databases, including MongoDB and MySQL.
  • Data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
  • AWS cloud services: EC2, EMR, RDS, Redshift
  • Stream-processing systems: Storm, Spark-Streaming, etc.
  • Object-oriented/functional scripting languages: Python, Java, C++, Scala, etc.
  • 5+ years of experience in a Data Science role
  • Strong project management and organizational skills.
  • Nice to have: an understanding of the OWASP Top 10 and SANS/CWE Top 25.
  • You ask questions, let others know when you need help, and tell others what you need.
  • A graduate degree (minimum) in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field.

What We Offer

  • Competitive compensation
  • Daily team lunches (in office)
  • Meaningful stock options
  • Medical, dental, and vision benefits
  • Flexible paid time off 
By submitting your application, you are providing Personally Identifiable Information about yourself (cover letter, resume, references, or other employment-related information) and hereby give your consent for Contrast Security and/or our HR-related Service Providers to use this information for the purpose of processing, evaluating, and responding to your application for current and future career opportunities. Contrast Security is an equal opportunity employer, and our team is comprised of individuals from many diverse backgrounds, lifestyles, and locations.

The California Consumer Privacy Act of 2018 (“CCPA”) will go into effect on January 1, 2020. Under CCPA, businesses must be overtly transparent about the personal information they collect, use, and store on California residents. CCPA also gives employees, applicants, independent contractors, emergency contacts and dependents (“CA Employee”) new rights to privacy.

In connection with your role here at Contrast, we collect information that identifies, reasonably relates to, or describes you (“Personal Information”). The categories of Personal Information that we collect, use or store include your name, government-issued identification number(s), email address, mailing address, emergency contact information, employment history, educational history, criminal record, demographic information, and other electronic network activity information by way of mobile device management on your Contrast-issued equipment. We collect and use those categories of Personal Information (the majority of which is provided by you) about you for human resources and other business-driven purposes, including evaluating your performance here at Contrast, evaluating you as a candidate for promotion within Contrast, managing compensation (including payroll and benefits), record keeping in relation to recruiting and hiring, conducting background checks as permitted by law, and ensuring compliance with applicable legal requirements for Contrast. We collect, use and store the minimal amount of information possible.

We also collect Personal Information in connection with your application for benefits. In addition to the above, Personal Information also identifies those on behalf of whom you apply for benefits. During your application for benefits, the categories of Personal Information that we collect include name, government-issued identification number(s), email address, mailing address, emergency contact information, and demographic information. We collect and use those categories of Personal Information for administering the benefits for which you are applying and ensuring compliance with applicable legal requirements and Contrast policies.
As a California resident, you are entitled to certain rights under CCPA:

-You have the right to know what personal information we have collected from you as a California employee;
-You have the right to know what personal information is sold or disclosed, and to whom. That said, we do not sell your information. We do, however, disclose information to third parties in connection with the management of payroll, employee benefits, etc., to fulfill our obligations to you as an employee of Contrast. Each of those third parties has been served with a Notice to Comply with CCPA or has entered into a CCPA Addendum with Contrast that precludes them from selling your information;
-You have the right to opt out of the sale of your personal information. Again, we do not sell it, but you may want to be aware of this right as a "consumer" in California with respect to other businesses; and
-You have the right to be free from retaliation for exercising any of these rights.

If you have any questions, please let us know!
Senior Data Engineer
Medium  
senior java python scala aws frontend Jan 29
At Medium, words matter. We are building the best place for reading and writing on the internet—a place where today’s smartest writers, thinkers, experts, and storytellers can share big, interesting ideas; a place where ideas are judged on the value they provide to readers, not the fleeting attention they can attract for advertisers.

We are looking for a Senior Data Engineer that will help build, maintain, and scale our business critical Data Platform. In this role, you will help define a long-term vision for the Data Platform architecture and implement new technologies to help us scale our platform over time. You'll also lead development of both transactional and data warehouse designs, mentoring our team of cross functional engineers and Data Scientists.

At Medium, we are proud of our product, our team, and our culture. Medium’s website and mobile apps are accessed by millions of users every day. Our mission is to move thinking forward by providing a place where individuals, along with publishers, can share stories and their perspectives. Behind this beautifully-crafted platform is our engineering team, which works seamlessly together. From frontend to API, from data collection to product science, Medium engineers work multi-functionally with open communication and feedback.

What Will You Do!

  • Work on high impact projects that improve data availability and quality, and provide reliable access to data for the rest of the business.
  • Drive the evolution of Medium's data platform to support near real-time data processing and new event sources, and to scale with our fast-growing business.
  • Help define the team strategy and technical direction, advocate for best practices, investigate new technologies, and mentor other engineers.
  • Design, architect, and support new and existing ETL pipelines, and recommend improvements and modifications.
  • Be responsible for ingesting data into our data warehouse and providing frameworks and services for operating on that data including the use of Spark.
  • Analyze, debug and maintain critical data pipelines.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL, Spark and AWS technologies.

Who You Are!

  • You have 7+ years of software engineering experience.
  • You have 3+ years of experience writing and optimizing complex SQL and ETL processes, preferably in connection with Hadoop or Spark.
  • You have outstanding coding and design skills, particularly in Java/Scala and Python.
  • You have helped define the architecture, tooling, and strategy for a large-scale data processing system.
  • You have hands-on experience with AWS and services like EC2, SQS, SNS, RDS, Cache, etc., or equivalent technologies.
  • You have a BS in Computer Science / Software Engineering or equivalent experience.
  • You have knowledge of Apache Spark, Spark streaming, Kafka, Scala, Python, and similar technology stacks.
  • You have a strong understanding and practical command of algorithms and data structures.

Nice To Have!

  • Snowflake knowledge and experience
  • Looker knowledge and experience
  • Dimensional modeling skills
At Medium, we foster an inclusive, supportive, fun yet challenging team environment. We value having a team that is made up of a diverse set of backgrounds and respect the healthy expression of diverse opinions. We embrace experimentation and the examination of all kinds of ideas through reasoning and testing. Come join us as we continue to change the world of digital media. Medium is an equal opportunity employer.

Interested? We'd love to hear from you.

This Year

Consulting Engineer
java python scala big data linux azure Jan 17
Dubbed an "open-source unicorn" by Forbes, Confluent is the fastest-growing enterprise subscription company our investors have ever seen. And how are we growing so fast? By pioneering a new technology category with an event streaming platform, which enables companies to leverage their data as a continually updating stream of events, not as static snapshots. This innovation has led Sequoia Capital, Benchmark, and Index Ventures to recently invest a combined $125 million in our Series D financing. Our product has been adopted by Fortune 100 customers across all industries, and we’re being led by the best in the space—our founders were the original creators of Apache Kafka®. We’re looking for talented and amazing team players who want to accelerate our growth, while doing some of the best work of their careers. Join us as we build the next transformative technology platform!

Consulting Engineers drive customer success by helping them realize business value from the burgeoning flow of real-time data streams in their organizations. In this role you’ll interact directly with our customers to provide software, development and operations expertise, leveraging deep knowledge of best practices in the use of Apache Kafka, the broader Confluent Platform, and complementary systems like Hadoop, Spark, Storm, relational databases, and various NoSQL databases.  

Throughout all of these interactions, you’ll build strong relationships with customers, ensure exemplary delivery standards, and have a lot of fun building state-of-the-art streaming data infrastructure alongside colleagues who are widely recognized as leaders in this space.

Promoting Confluent and our amazing team to the community and wider public audience is something we invite all our employees to take part in.  This can be in the form of writing blog posts, speaking at meetups and well known industry events about use cases and best practices, or as simple as releasing code.

While Confluent is headquartered in Palo Alto, you can work remotely from any location on the East Coast of the United States, as long as you are able to travel to client engagements as needed.

A typical week at Confluent in this role may involve:

  • Preparing for an upcoming engagement, discussing the goals and expectations with the customer and preparing an agenda
  • Researching best practices or components required for the engagement
  • Delivering an engagement on-site, working with the customer’s architects and developers in a workshop environment
  • Producing and delivering the post-engagement report to the customer
  • Developing applications on Confluent Kafka Platform
  • Deploying, augmenting, and upgrading Kafka clusters
  • Building tooling for another team and the wider company
  • Testing performance and functionality of new components developed by Engineering
  • Writing or editing documentation and knowledge base articles
  • Honing your skills, building applications, or trying out new product features

Required Skills:

  • Deep experience building and operating in-production Big Data, stream processing, and/or enterprise data integration solutions using Apache Kafka
  • Experience operating Linux (configure, tune, and troubleshoot both RedHat and Debian-based distributions)
  • Experience with Java Virtual Machine (JVM) tuning and troubleshooting
  • Experience with distributed systems (Kafka, Hadoop, Cassandra, etc.)
  • Proficiency in Java
  • Excellent communication skills, with an ability to clearly and concisely explain tricky issues and complex solutions
  • Ability and willingness to travel up to 50% of the time to meet with customers
  • Bachelor-level degree in computer science, engineering, mathematics, or another quantitative field
  • Ability to travel up to 60-75% of your time to client engagements

Nice to have:

  • Experience using Amazon Web Services, Azure, and/or GCP for running high-throughput systems
  • Experience helping customers build Apache Kafka solutions alongside Hadoop technologies, relational and NoSQL databases, message queues, and related products
  • Experience with Python, Scala, or Go
  • Experience with configuration and management tools such as Ansible, Terraform, Puppet, Chef
  • Experience writing to network-based APIs (preferably REST/JSON or XML/SOAP)
  • Knowledge of enterprise security practices and solutions, such as LDAP and/or Kerberos
  • Experience working with a commercial team and demonstrated business acumen
  • Experience working in a fast-paced technology start-up
  • Experience managing projects, using any known methodology to scope, manage, and deliver on plan no matter the complexity
Come As You Are

At Confluent, equality is a core tenet of our culture. We are committed to building an inclusive global team that represents a variety of backgrounds, perspectives, beliefs, and experiences. The more diverse we are, the richer our community and the broader our impact.
Share this job:
Senior Data Scientist
r machine-learning python apache-spark cluster-analysis senior Jan 08

In the Senior Data Scientist role, you will have full ownership over the projects you tackle, contribute to solving a wide range of machine learning applications, and find opportunities where data can improve our platform and company. We are looking for an experienced and creative self-starter who executes well and can exhibit exceptional technical know-how and strong business sense to join our team. 


WHAT YOU'LL DO:

  • Mine and analyze data from company data stores to drive optimization and improvement of product development, marketing techniques and business strategies
  • Assess the effectiveness and accuracy of data sources and data gathering techniques
  • Develop and implement data cleansing and processing to evaluate and optimize data quality
  • Develop custom data models and algorithms to apply to data sets
  • Run complex SQL queries and existing automations to correlate disparate data, surface questions, and pull critical information
  • Apply statistical analysis and machine learning to uncover new insights and predictive models for our clients
  • Develop company A/B testing framework and test model quality
  • Collaborate with data engineering and ETL teams to deploy models / algorithms in production environment for operations use
  • Develop processes and tools to monitor and analyze model performance and data accuracy
  • Perform ad-hoc analyses and present results clearly
  • Create visualizations and tell the story behind the data
  • Communicate statistical analyses and machine learning models to executives and clients
  • Create and manage APIs
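One of the bullets above mentions developing a company A/B testing framework. As a rough sketch of the statistics involved, a two-proportion z-test can be written in a few lines of Scala; all names and numbers here are illustrative, not part of any actual company framework:

```scala
// Minimal two-proportion z-test for an A/B experiment (illustrative only).
object ABTest {
  // Returns the z statistic for conversions/visitors in variants A and B.
  def zScore(convA: Int, nA: Int, convB: Int, nB: Int): Double = {
    val pA = convA.toDouble / nA
    val pB = convB.toDouble / nB
    val pPool = (convA + convB).toDouble / (nA + nB)       // pooled rate
    val se = math.sqrt(pPool * (1 - pPool) * (1.0 / nA + 1.0 / nB))
    (pB - pA) / se
  }

  def main(args: Array[String]): Unit = {
    val z = zScore(convA = 200, nA = 1000, convB = 260, nB = 1000)
    // |z| > 1.96 is roughly significant at the 5% level (two-sided test)
    println(f"z = $z%.3f, significant = ${math.abs(z) > 1.96}")
  }
}
```

A production framework would add experiment assignment, logging, and multiple-testing corrections on top of a core like this.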

WHO YOU ARE:

  • 3-5+ years of relevant work experience
  • Extensive knowledge of Python and R
  • Clear understanding of various analytical functions (median, rank, etc.) and how to use them on data sets
  • Expertise in mathematics, statistics, correlation, data mining and predictive analysis
  • Experience with deep statistical insights and machine learning (Bayesian methods, clustering, etc.)
  • Familiarity with AWS cloud computing, including EC2, S3, and EMR
  • Familiarity with Geospatial Analysis/GIS
  • Other experience with programming languages such as Java, Scala and/or C#
  • Proficiency using query languages such as SQL, Hive, and Presto
  • Familiarity with big data ecosystems (Spark/PySpark, MapReduce, or Hadoop)
  • Familiarity with software development tools and platforms (Git, Linux, etc.)
  • Proven ability to drive business results with data-based insights
  • Self-initiative and an entrepreneurial mindset
  • Strong communication skills
  • Passion for data
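The analytical-functions bullet above (median, rank, etc.) can be illustrated with plain Scala standard-library code; this is a hedged sketch, and `AnalyticalFns` is an illustrative name, not anything from the posting:

```scala
// Illustrative implementations of two analytical functions the posting
// mentions (median and rank), using only the Scala standard library.
object AnalyticalFns {
  // Median of a sequence: middle value, or mean of the two middle values.
  def median(xs: Seq[Double]): Double = {
    val sorted = xs.sorted
    val n = sorted.length
    if (n % 2 == 1) sorted(n / 2)
    else (sorted(n / 2 - 1) + sorted(n / 2)) / 2.0
  }

  // Dense rank: ties share a rank and no ranks are skipped (1, 2, 2, 3, ...).
  def denseRank(xs: Seq[Double]): Map[Double, Int] =
    xs.distinct.sorted.zipWithIndex.map { case (x, i) => x -> (i + 1) }.toMap
}
```

In practice these would usually be SQL window functions (`PERCENTILE_CONT`, `DENSE_RANK`) rather than hand-rolled code; the point is understanding what they compute.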

WHAT WE OFFER:

  • Competitive Salary
  • Medical, Dental and Vision
  • 15 Days of PTO (Paid Time Off)
  • Lunch provided 2x a week 
  • Snacks, snacks, snacks!
  • Casual dress code
Share this job:
Senior Software Engineer, Data Pipeline
java scala go elasticsearch apache-spark senior Dec 31 2019

About the Opportunity

The SecurityScorecard ratings platform helps enterprises across the globe manage the cyber security posture of their vendors. Our SaaS products have created a new category of enterprise software and our culture has helped us be recognized as one of the 10 hottest SaaS startups in NY for two years in a row. Our investors include both Sequoia and Google Ventures. We are scaling quickly but are ever mindful of our people and products as we grow.

As a Senior Software Engineer on the Data Pipeline Platform team, you will help us scale, support, and build the next-generation platform for our data pipelines. The team’s mission is to empower data scientists, software engineers, data engineers, and threat intelligence engineers to accelerate the ingestion of new data sources and present the data in a meaningful way to our clients.

What you will do:

  • Design and implement systems for ingesting, transforming, connecting, storing, and delivering data from a wide range of sources with various levels of complexity and scale.
  • Enable other engineers to deliver value rapidly with minimal duplication of effort.
  • Automate the data pipeline’s supporting infrastructure as code and improve CI/CD pipelines for deployments.
  • Monitor, troubleshoot, and improve the data platform to maintain stability and optimal performance.

Who you are:

  • Bachelor's degree or higher in a quantitative/technical field such as Computer Science, Engineering, Math
  • 6+ years of software development experience
  • Exceptional skills in at least one high-level programming language (Java, Scala, Go, Python or equivalent)
  • Strong understanding of big data technologies such as Kafka, Spark, Storm, Cassandra, Elasticsearch
  • Experience with AWS services including S3, Redshift, EMR and RDS
  • Excellent communication skills to collaborate with cross functional partners and independently drive projects and decisions

What to Expect in Our Hiring Process:

  • Phone conversation with Talent Acquisition to learn more about your experience and career objectives
  • Technical phone interview with hiring manager
  • Video or in person interviews with 1-3 engineers
  • At-home technical assessment
  • Video or in person interview with engineering leadership
Share this job:
Software Engineering Manager
scala functional-programming http4s fs2 scala-cats manager Dec 30 2019

As an Engineering Manager on a services team for the Banno Platform at Jack Henry, you’ll get the chance to make a positive impact on people’s lives. We believe that the world is a better place with community banks and credit unions. Our mission is to build the technology that gives community banks and credit unions the tools they need to compete against the big banks.

Service teams create highly scalable public APIs used by millions of customers to normalize access to multiple banking systems for use in our mobile and online banking clients. You’ll work on a team deploying and monitoring their own services. Our platform is primarily functional Scala, followed by a few services written in Haskell, Node.js and Rust.

Ideal candidates are self-motivated, technically competent servant leaders with experience building, mentoring and growing their team. The first six months will be spent as an individual contributor engineer on the team, learning the domain and building trust with team members.

We are committed to creativity, thoughtfulness, and openness. Our team is highly distributed, meaning you will work with kind, talented engineers from across the United States. Occasional travel may be required for professional development conferences or company meetings.

This is a remote position with the ability to collocate at several JHA locations nationwide if desired.

Minimum Qualifications

  • Minimum 7 years of experience with server-side programming languages.
  • Minimum 1 year of team lead, supervisory or management experience.
  • Minimum 1 year developing, maintaining, and supporting public-facing APIs in production.
  • Knowledge of or experience with microservice architecture in a production environment.

Preferred Qualifications

  • Experience with Scala or Haskell in a production environment.
  • Understanding of the functional programming paradigm.
  • Experience with the cats, fs2, http4s, and doobie libraries.
  • Experience with tools like Kafka, Kinesis, AWS Lambda, Azure Functions.
  • Experience with Kubernetes.

Essential Functions

  • Oversees the daily operation of one or more engineering teams.
  • Assists team in the development and implementation of policies, procedures and programs.
  • Mentors, coaches and assists in the career development of team members and participates in frequent one-on-ones.
  • Completes product technical design and prototyping, software development, bug verification and resolution.
  • Performs system analysis and programming activities which may require research.
  • Provides technical/engineering support for new and existing applications from code delivery until the retirement of the application.
  • Provides reasonable task and project effort estimates.
  • Ensures timely, effective, and quality delivery of software into production.
  • Develops and tests applications based on business requirements and industry best practices.
  • Creates required technical documentation.
  • Periodically troubleshoots during off hours for system failures.
  • Participates in an on-call rotation supporting team owned services.
  • Collaboratively works across teams to ensure timely delivery of high-quality products.
  • Collaboratively works with customer support team to resolve or diagnose defects.
Share this job:
Senior Machine Learning - Series A Funded Startup
machine-learning scala python tensorflow apache-spark machine learning Dec 26 2019
About you:
  • Care deeply about democratizing access to data.  
  • Passionate about big data and are excited by seemingly-impossible challenges.
  • At least 80% of people who have worked with you put you in the top 10% of the people they have worked with.
  • You think life is too short to work with B-players.
  • You are entrepreneurial and want to work in a super fast-paced environment where the solutions aren’t already predefined.
About SafeGraph: 

  • SafeGraph is a B2B data company that sells to data scientists and machine learning engineers. 
  • SafeGraph's goal is to be the place for all information about physical Places
  • SafeGraph currently has 20+ people and has raised a $20 million Series A.  CEO previously was founder and CEO of LiveRamp (NYSE:RAMP).
  • Company is growing fast, over $10M ARR, and is currently profitable. 
  • Company is based in San Francisco but about 50% of the team is remote (all in the U.S.). We get the entire company together in the same place every month.

About the role:
  • Core software engineer.
  • Reporting to SafeGraph's CTO.
  • Work as an individual contributor.  
  • Opportunities for future leadership.

Requirements:
  • You have at least 6 years of relevant work experience.
  • Deep understanding of machine learning models, data analysis, and both supervised and unsupervised learning methods. 
  • Proficiency writing production-quality code, preferably in Scala, Java, or Python.
  • Experience working with huge data sets. 
  • You are authorized to work in the U.S.
  • Excellent communication skills.
  • You are amazingly entrepreneurial.
  • You want to help build a massive company. 
Nice to haves:
  • Experience using Apache Spark to solve production-scale problems.
  • Experience with AWS.
  • Experience with building ML models from the ground up.
  • Python, Database and Systems Design, Scala, TensorFlow, Apache Spark, Hadoop MapReduce.
Share this job:
Data Engineer
python pyspark sql aws scala Dec 25 2019
  • Solid programming background in Python
  • Experience extracting and loading data to relational databases and optimizing SQL queries
  • Familiarity with the Hadoop ecosystem, mainly HDFS, Hive, and Spark: we do PySpark, but Scala would also be considered
  • Experience with these AWS services: Glue, Athena, Lambda, EMR
  • Knowledge of orchestration tools such as Airflow, Oozie, AWS Step Functions

Nice to have:

  • Experience with Kafka and Kinesis
  • Proficiency in English and Spanish
Share this job:
VP of Engineering - Series A Funded Data Startup
scala python machine-learning apache-spark hadoop machine learning Dec 24 2019
About you:
  • High velocity superstar.
  • You want to challenge of growing and managing remote teams
  • You love really hard engineering challenges
  • You love recruiting and managing super sharp people
  • At least 80% of people who have worked with you put you in the top 10% of the people they have worked with.
  • You think life is too short to work with B-players.
  • You are entrepreneurial and want to work in a super fact-paced environment where the solutions aren’t already predefined.
  • you walk through walls 
  • you want to help build a massive company
  • you live in the United States or Canada
About SafeGraph: 

  • SafeGraph is a B2B data company that sells to data scientists and machine learning engineers. 
  • SafeGraph's goal is to be the place for all information about physical Places
  • SafeGraph currently has 20+ people and has raised a $20 million Series A.  CEO previously was founder and CEO of LiveRamp (NYSE:RAMP).
  • Company is growing fast, over $10M ARR, and is currently profitable. 
  • Company is based in San Francisco, Denver, and New York City but about 50% of the team is remote (all currently in the U.S.). We get the entire company together in the same place every month.


About the role:


  • Member of the executive team and reporting directly to the CEO.
  • Oversee all engineering and machine learning
  • Core member of the executive team 

Opportunity to be:

  • one of the first 40 people in a very fast growing company 
  • be one of the core drivers of company's success 
  • work with an amazing engineering team 
  • be on the executive team 
  • potential to take on more responsibility as company grows 
  • work with only A-Players
Share this job:
Senior Big Data Software Engineer
scala apache-spark python java hadoop big data Dec 23 2019
About you:
  • Care deeply about democratizing access to data.  
  • Passionate about big data and are excited by seemingly-impossible challenges.
  • At least 80% of people who have worked with you put you in the top 10% of the people they have worked with.
  • You think life is too short to work with B-players.
  • You are entrepreneurial and want to work in a super fast-paced environment where the solutions aren’t already predefined.
  • You live in the U.S. or Canada and are comfortable working remotely.
About SafeGraph: 

  • SafeGraph is a B2B data company that sells to data scientists and machine learning engineers. 
  • SafeGraph's goal is to be the place for all information about physical Places
  • SafeGraph currently has 20+ people and has raised a $20 million Series A.  CEO previously was founder and CEO of LiveRamp (NYSE:RAMP).
  • Company is growing fast, over $10M ARR, and is currently profitable. 
  • Company is based in San Francisco but about 50% of the team is remote (all in the U.S.). We get the entire company together in the same place every month.

About the role:
  • Core software engineer.
  • Reporting to SafeGraph's CTO.
  • Work as an individual contributor.  
  • Opportunities for future leadership.

Requirements:
  • You have at least 6 years of relevant work experience.
  • Proficiency writing production-quality code, preferably in Scala, Java, or Python.
  • Strong familiarity with map/reduce programming models.
  • Deep understanding of all things “database” - schema design, optimization, scalability, etc.
  • You are authorized to work in the U.S.
  • Excellent communication skills.
  • You are amazingly entrepreneurial.
  • You want to help build a massive company. 
Nice to haves:
  • Experience using Apache Spark to solve production-scale problems.
  • Experience with AWS.
  • Experience with building ML models from the ground up.
  • Experience working with huge data sets.
  • Python, Database and Systems Design, Scala, Data Science, Apache Spark, Hadoop MapReduce.
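The map/reduce familiarity asked for above can be illustrated with the classic word count, expressed here on local Scala collections purely as a sketch of the programming model (not any specific framework's API):

```scala
// Word count expressed in the map/reduce style on local Scala collections.
object WordCount {
  def count(lines: Seq[String]): Map[String, Int] =
    lines
      .flatMap(_.toLowerCase.split("\\s+"))       // map: line -> words
      .filter(_.nonEmpty)
      .groupBy(identity)                          // shuffle: group by key
      .map { case (word, ws) => word -> ws.size } // reduce: count per key
}
```

The same shape translates directly to Spark (`flatMap`/`reduceByKey`) or Hadoop MapReduce, with the group-by step becoming a distributed shuffle.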
Share this job:
Senior iOS Developer
ios swift objective-c rx-swift senior scala Dec 15 2019

Get to know us

We create open-source software that puts users in control over their online browsing experience. Our desktop and mobile products, such as Adblock Plus (ABP), Adblock Browser and Flattr, help sustain and grow a fair, open web because they give users control while providing user-friendly monetization. Our most popular product, ABP, is currently used on over 100 million devices.

Here’s the big picture

Work on ABP iOS and macOS development, focusing on lower-level tasks. You will be working on complex issues, both on mobile and on browser development. Not to brag or anything, but look at how many projects you can work on, and everything is open source:

  • ABP for Safari on iOS
  • Adblock Browser for iOS
  • ABP for Safari on macOS
  • ABPKit (framework), the backbone of our products and the foundation for our partner products

After your morning coffee, you’ll be expected to do...

  • iOS (80% focus) and some macOS development using Objective-C, Swift, RxSwift
  • Core development of libraries, backend, server-side software
  • Development of iOS and macOS apps
  • Development of new products

and the rest...

  • Consulting with partners
  • Maintaining existing products
  • Strengthening the underlying technology and backend of our mobile core products
  • Working on core content blocking functionality
  • Finding innovative solutions in a very limited content blocking environment

We trust you to work from home if you have...

  • Multiple years of iOS, Swift, and Objective-C development
  • Advanced programming experience equivalent to building significant application services with RxSwift
  • Knowledge of algorithms and data structures (at computer science 4-year level)
  • Debugging skills (multithreading, concurrency, memory lifetimes, parallelization)
  • Expertise with HTTP protocols, database operations (SQL/NoSQL), and functional programming (e.g. Haskell, Scala, F#, Rust, Swift, JavaScript)
  • Experience in interoperability with Swift and Objective-C
  • Ability to write accurate, concise, and complete technical documentation

You can do this job in your sleep if you also have experience in...

  • Browser development
  • Content blocking
  • Working in agile teams
  • Open source development

A little bit about the team you’ll work with

The iOS/macOS team is a globally distributed team that works on multiple projects. Depending on priorities, we decide how we want to work on each level. We have bi-weekly video meetings, but most of the communication happens over IRC, email, and our issue tracking system.

Share this job:
Remote Senior Data Engineer
Hays  
scala senior python docker aws testing Dec 08 2019
Hays Specialist Recruitment is working in partnership with Security Scorecard to manage the recruitment of this position

The end client is unable to sponsor or transfer visas for this position; all parties authorized to work in the US without sponsorship are encouraged to apply.

This position is NOT eligible for subcontractors or those that require sponsorship.

Hays is conducting an exclusive search for a Senior Data Engineer for a cybersecurity company based in NYC. SecurityScorecard builds a unique product that rates the cybersecurity posture of corporate entities through scored analysis of cyber threat intelligence signals, for the purposes of third-party management and IT risk management. They have a very modern technology stack and work in a dynamic, agile environment.

The position is 100% remote, and you'll be responsible for managing the analytics pipeline using Spark, Hadoop, etc. You'll leverage cutting-edge technologies to support new and existing services and processes, drive projects through all stages of development, and improve the effective output of the engineering team by managing quality and identifying inconsistencies. Your experience should include 5+ years with Scala or another functional language (commercial environment preferred), 3+ years with Spark and the Hadoop ecosystem (or similar frameworks), familiarity with tools like AWS and Docker, experience working with 3rd-party software, and expert SQL skills.

Remote Senior Data Engineer - Perm - New York, NY

Remote Senior Data Engineer Skills & Requirements

Responsibilities
* Manage the analytic pipeline using Spark, Hadoop, etc
* Leverage cutting-edge technologies to support new and existing services and processes.
* Quickly and efficiently design and implement in an agile environment
* Work with other team members to implement consistent architecture
* Drive projects through all stages of development
* Actively share knowledge and responsibility with other team members and teams
* Improve the effective output of the engineering team by managing quality and identifying inconsistencies.

Requirements:
3+ years of experience with:
* Scala or Python, both preferred
* Distributed systems (e.g. Spark, Hadoop)
* Database systems (e.g. Postgres, MySQL)
Experience with the following is preferred:
* IP (v4/v6) allocation and addressing conventions
* DNS conventions and best practices
* Anti-abuse investigations
* Bachelor's degree (CS, CE/EE, Math, or Statistics preferred)

Why Hays?

You will be working with a professional recruiter who has intimate knowledge of the Information Technology industry and market trends. Your Hays recruiter will lead you through a thorough screening process in order to understand your skills, experience, needs, and drivers. You will also get support on resume writing, interview tips, and career planning, so when there's a position you really want, you're fully prepared to get it. Additionally, if the position is a consulting role, Hays offers you the opportunity to enroll in full medical, dental or vision benefits.

* Medical
* Dental
* Vision
* 401K
* Life Insurance ($20,000 benefit)

Nervous about an upcoming interview? Unsure how to write a new resume?

Visit the Hays Career Advice section to learn top tips to help you stand out from the crowd when job hunting.

Hays is an Equal Opportunity Employer.

Drug testing may be required; please contact a recruiter for more information.

Share this job:
Senior Data Engineer - Spark expertise
scala postgresql senior data science docker aws Dec 05 2019

Position Summary

The Senior Data Analytics Engineer will build meaningful analytics that inform companies of security risk.  You will be working closely with our Data Science team, implementing algorithms and managing the analytic pipeline. We have over 1 PB of data, so the ideal candidate will have experience processing and querying large amounts of data.  

This role requires senior level experience in Spark, SQL and Scala. Our interview process will include live coding using these technologies!

Responsibilities

  • Manage the analytic pipeline using Spark, Hadoop, etc 
  • Leverage cutting-edge technologies to support new and existing services and processes.
  • Quickly and efficiently design and implement in an agile environment
  • Work with other team members to implement consistent architecture
  • Drive projects through all stages of development
  • Actively share knowledge and responsibility with other team members and teams
  • Improve the effective output of the engineering team by managing quality and identifying inconsistencies.

Skills and Experience:

  • Bachelor's degree (CS, EE or Math preferred) or equivalent work experience as well as interest in a fast paced, complex environment.
  • 5+ years of experience with Scala, preferably in a commercial environment
  • Expert in Spark, experience with the Hadoop ecosystem and similar frameworks
  • Expert in SQL
  • Familiarity with various tools such as AWS and Docker and an instinct for automation
  • Strong understanding of Software Architecture principles and patterns.
  • Experience working with 3rd party software and libraries, including open source
  • Experience with Postgres

Traits:

  • Quick-thinker who takes ownership and pride in their work
  • A commitment and drive for excellence and continual improvement 
  • A strong sense of adventure, excitement and enthusiasm.
  • Excellent systems analytical, problem solving and interpersonal skills

Interview Process:

  • Initial Conversation with a SecurityScorecard Talent team to learn more about your experience and career objectives
  • Technical interview with 1-2 data engineers. This will include live coding in SQL, Spark, and Scala.
  • Coding Exercise - take home exercise
  • Final Interview: Meet 1-2 engineering leaders
Share this job:
Software Engineer - .NET Platform Developer
Percona  
dot net java python scala php big data Dec 02 2019
If you like working with the developer community for an Engagement Database and being on the front lines of integrating our product into various technology stacks, this is for you. This is your chance to disrupt a multi-billion-dollar industry, change how the world accesses information, and reinvent the way businesses deliver amazing customer experiences. As a Software Engineer on the SDK and Connector engineering team, you'll work on the developer interface to Couchbase Server for JVM platform languages, including the Java SDK and future platforms like Scala and Kotlin, and contribute to connectors and frameworks such as Apache Spark and Spring Data. In your daily work, you will help the developer community innovate on top of our Engagement Database. You will have one of those rare positions of working with a market-leading product and an open source community of users and contributors. The skill set and expectations are…

Responsibilities

  • Take on key projects related to the development, enhancement and maintenance of Couchbase’s products built on the JVM platform core-io, including the Java SDK and new platforms we add. Contribute to other JVM-related projects such as the Kotlin client, the Spring Data connector and others.
  • Contribute to the creation, enhancement and maintenance of documentation and samples that demonstrate how Java based languages and platforms work with Couchbase.
  • Create, enhance and maintain various documentation artifacts designed to make it easy for developers and system architects to quickly become productive with Couchbase.
  • Maintain, nurture and enhance community contributions to the Couchbase community and forums from the overall Couchbase community.
  • Work with the growing community of developers who will want to know how to develop Java, Kotlin, Spring, .NET, Node.js, PHP, Python and higher level frameworks with applications built on Couchbase.

Qualifications

  • The right person for this role will be a self-motivated, independent, and highly productive individual, with ability to learn new technologies and become quickly proficient.
  • Must have a minimum of 5 years of software development experience in a professional software development organization.  Ideally, this would be working on platform level software.
  • Should be familiar with modern, reactive, asynchronous software development paradigms such as Reactor and Reactive Streams.
  • Should have experience with binary streaming wire protocols, such as those in Couchbase.  Experience with streaming protocols based on Apache Avro and data formats such as those in Apache Kafka would be good.
  • Should have familiarity with web application development beyond Spring Framework, such as in Play Framework or others.  The ideal candidate would have familiarity with web application or mobile integration development in at least one other platform such as .NET or Java.
  • Must be familiar with consuming and producing RESTful interfaces.  May be familiar with GraphQL interfaces as well.
  • Would ideally be able to demonstrate experience in large scale, distributed systems and understand the techniques involved in making these systems scale and perform.
  • Has the ability to work in a fast paced environment and to be an outstanding team player.
  • Familiarity with distributed networked server systems that run cross-platform on Linux and Windows is highly desired.
  • Experience with git SCM, and tools such as Atlassian, JIRA and Jenkins CI are also strongly desired.
About Couchbase

Couchbase's mission is to be the platform that accelerates application innovation. To make this possible, Couchbase created an enterprise-class, multi-cloud NoSQL database architected on top of an open source foundation. Couchbase is the only database that combines the best of NoSQL with the power and familiarity of SQL, all in a single, elegant platform spanning from any cloud to the edge.  
 
Couchbase has become pervasive in our everyday lives; our customers include industry leaders Amadeus, AT&T, BD (Becton, Dickinson and Company), Carrefour, Comcast, Disney, DreamWorks Animation, eBay, Marriott, Neiman Marcus, Tesco, Tommy Hilfiger, United, Verizon, Wells Fargo, as well as hundreds of other household names.

Couchbase’s HQ is conveniently located in Santa Clara, CA with additional offices throughout the globe. We’re committed to a work environment where you can be happy and thrive, in and out of the office.

At Couchbase, you’ll get:
* A fantastic culture
* A focused, energetic team with aligned goals
* True collaboration with everyone playing their positions
* Great market opportunity and growth potential
* Time off when you need it.
* Regular team lunches and fully-stocked kitchens.
* Open, collaborative spaces.
* Competitive benefits and pre-tax commuter perks

Whether you’re a new grad or a proven expert, you’ll have the opportunity to learn new skills, grow your career, and work with the smartest, most passionate people in the industry.

Revolutionizing an industry requires a top-notch team. Become a part of ours today. Bring your big ideas and we'll take on the next great challenge together.

Check out some recent industry recognition:

Want to learn more? Check out our blog: https://blog.couchbase.com/

Couchbase is proud to be an equal opportunity workplace. Individuals seeking employment at Couchbase are considered without regards to age, ancestry, color, gender (including pregnancy, childbirth, or related medical conditions), gender identity or expression, genetic information, marital status, medical condition, mental or physical disability, national origin, protected family care or medical leave status, race, religion (including beliefs and practices or the absence thereof), sexual orientation, military or veteran status, or any other characteristic protected by federal, state, or local laws.
Share this job:
Messaging Systems Architect
Ockam  
scala design Nov 19 2019

We are seeking an Elixir/Erlang Systems Architect with expertise designing and building high-throughput, concurrent, real-time messaging and streaming systems. You should have deep experience with Erlang, Elixir, Scala or similar actor-model-based languages/tools for building fault-tolerant distributed systems. Experience with the core internal design of systems like Kafka, RabbitMQ, Spark Streaming, Phoenix Channels, Akka or Riak is also required.

Responsibilities

    • Collaborate with the team with well communicated and documented processes
    • Develop high-quality software design and architecture
    • Identify, prioritize and execute tasks in the software development lifecycle
    • Develop tools and applications by producing clean, efficient code
    • Automate tasks through appropriate tools and scripting
    • Review and debug code
    • Perform validation and verification testing
    • Document development phases and monitor systems
    • Ensure software is up-to-date with the latest technologies

Requirements

    • 10+ years of engineering experience across multiple systems.
    • Comfort in switching between multiple programming languages.

Remote candidates are encouraged to apply. Ockam is a distributed, remote-first team headquartered in San Francisco, California.

Code Challenge Reviewer
Geektastic  
java python javascript ruby css scala Nov 15 2019

Fancy earning extra cash reviewing code challenge submissions from any location?

We pay you £25 for each code challenge you review (roughly 30 minutes of review time). You can do as many or as few as you want per week.

We are looking for highly talented Java, JavaScript, PHP, Python, C#, Ruby, Scala, iOS, and Android developers.

Please read some comments made by our reviewers on Quora.

We pay you via Transferwise, Revolut or Payoneer at the end of the month (unless you are in the UK, in which case we pay via bank transfer).

To become part of the team you just need to register with us at Geektastic and take some code challenges. These are reviewed by our expert team (we need to know how great you are :))

Once you are part of the distributed team you will then be notified on our Slack channel when a new challenge is ready to be reviewed.

Feel free to email hello@geektastic.com if you have any questions

Senior Product Engineer: Back end
x.ai  
scala python senior javascript aws api Nov 13 2019

We are building some really exciting sh*t at x.ai

At x.ai, we're building artificial intelligence super-powered productivity software. The software schedules meetings for our customers automatically, without subjecting them to the typical back and forth over email negotiating when and where to meet someone. We're looking for a self-motivated and enthusiastic individual to join us on the journey in building this new frontier. You’ll get to work side by side with a group of focused and passionate individuals in a fully distributed setting.


Responsibilities

  • Work with product team to identify and define features that solve customer pain in a manner that’s easy to understand and explain
  • Leverage your Scala expertise to drive product design implementation and improvements
  • Iterate on ideas quickly from proof of concept to the final version
  • Become deeply familiar with the challenges we’re solving for customers and the technical approaches we’ve taken
  • Test the software you build, define edge cases and monitor system health in production
  • Identify and build metrics or tools to help us understand customer behavior
  • Define and champion best practices of software development
  • Take ownership of our technical stack: help improve documentation or find ways to make it easier to work on our system
  • Lead and collaborate in technical decision-making
  • Be able to manage your own time while making sure to communicate the status of your projects

Qualifications

  • 5+ years of relevant experience
  • Expert in Scala
  • Expert in API integrations
  • Experience in Typescript a plus
  • Experience in Javascript a plus
  • Experience in Python a plus
  • MongoDB, AWS, Mesos experience a plus
  • Customer obsessed
  • Thrives in a fully remote setting
Senior Scala Engineer
scala senior docker aws Nov 11 2019

This is an opportunity to work as part of a distributed technology team along with our product team to help define and deliver solutions for our clients.

Here are some of the qualities we’re looking for in a successful team member:

  • You strive to make everything around you better.
  • You are equally excited by experimenting with new technologies as you are about delivering value through maintainable, scalable, and reliable services.
  • You view software engineering less as writing code and more as delivering high-value, innovative solutions to real-world problems.
  • Some knowledge of corporate bonds is desired, but not mandatory for delivering the majority of our features.
  • You are skilled in concurrency and distributed message-based systems, and you have a deep affinity for building reliable, high-throughput, low-latency solutions.
  • You can clearly communicate your ideas and give and accept direct feedback.
  • You are passionate about honing your craft inside and outside of work.
  • You can convey why you are attracted to working in a functional paradigm.

Our stack:

  • Scala with Akka Streams for efficient stream processing
  • Kafka for scalable messaging
  • Linux, Docker, Ansible, and AWS for dynamic environments
  • Google Apps, Slack, and Zoom for open communication
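The stack above composes stream stages end to end; in Akka Streams this is written as Source ~> Flow ~> Sink, with backpressure handled for you. The same shape can be sketched with plain standard-library iterators (the data and stage names below are illustrative, not part of any real pipeline):

```scala
// "Source": stand-in for records consumed from a Kafka topic.
val source: Iterator[String] = Iterator("12.5", "oops", "7.25", "3.0")

// "Flow": a transformation stage that parses records and drops malformed ones.
val parse: Iterator[String] => Iterator[Double] = _.flatMap(_.toDoubleOption)

// "Sink": a terminal stage that folds the stream into a single result.
val sink: Iterator[Double] => Double = _.sum

val total = sink(parse(source))
println(total) // 22.75 -- "oops" was dropped by the flow stage
```

What Akka Streams adds over this sketch is asynchronous, backpressured execution: a slow sink automatically throttles the Kafka source instead of buffering unboundedly.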
Senior Product Engineer: Front end
x.ai  
javascript node-js scala python senior aws Nov 08 2019

We are building some really exciting sh*t at x.ai

At x.ai, we're building artificial intelligence super-powered productivity software. The software schedules meetings for our customers automatically, without subjecting them to the typical back and forth over email negotiating when and where to meet someone. We're looking for a self-motivated and enthusiastic individual to join us on the journey in building this new frontier. You’ll get to work side by side with a group of focused and passionate individuals in a fully distributed setting.

Responsibilities

  • Work with product team to identify and define features that solve customer pain in a manner that’s easy to understand and explain
  • Iterate on ideas quickly from proof of concept to the final version
  • Become deeply familiar with the challenges we’re solving for customers and the technical approaches we’ve taken
  • Leverage your Javascript expertise to drive product design implementation and improvements
  • Test the software you build, define edge cases and monitor system health in production
  • Identify and build metrics or tools to help us understand customer behavior
  • Define and champion best practices of software development
  • Take ownership of our technical stack: help improve documentation or find ways to make it easier to work on our system
  • Lead and collaborate in technical decision-making
  • Be able to manage your own time while making sure to communicate the status of your projects

Qualifications

  • 5+ years of relevant experience
  • Expert in Node.js
  • Expert in building web apps
  • Expert in API integrations
  • Experience in Typescript a plus
  • Experience in Scala a plus
  • Experience in Python a plus
  • MongoDB, AWS, Mesos experience a plus
  • Customer obsessed
  • Thrives in a fully remote setting
Data Engineer-Remote
python scala big data aws design healthcare Nov 08 2019

Description

SemanticBits is looking for a talented Data Engineer who is eager to apply computer science, software engineering, databases, and distributed/parallel processing frameworks to prepare big data for use by data analysts and data scientists. You will deliver data acquisition, transformation, cleansing, conversion, compression, and loading of data into data and analytics models. You will work in partnership with data scientists and analysts to understand use cases, data needs, and outcome objectives. You are a practitioner of advanced data modeling and optimization of data and analytics solutions at scale; an expert in data management, data access (big data, data marts, etc.), programming, and data modeling; and familiar with analytic algorithms and applications (like machine learning).

SemanticBits is a leading company specializing in the design and development of digital health services, and the work we do is just as unique as the culture we’ve created. We develop cutting-edge solutions to complex problems for commercial, academic, and government organizations. The systems we develop are used in finding cures for deadly diseases, improving the quality of healthcare delivered to millions of people, and revolutionizing the healthcare industry on a nationwide scale. There is a meaningful connection between our work and the real people who benefit from it; and, as such, we create an environment in which new ideas and innovative strategies are encouraged. We are an established company with the mindset of a startup and we feel confident that we offer an employment experience unlike any other and that we set our employees up for professional success every day.

Requirements

  • Bachelor’s degree in computer science (or related) and two to four years of professional experience
  • Strong knowledge of computer science fundamentals: object-oriented design and programming, data structures, algorithms, databases (SQL and relational design), networking
  • Demonstrable experience engineering scalable data processing pipelines.
  • Demonstrable expertise with Python, Scala, Spark, and wrangling of various data formats - Parquet, CSV, XML, JSON.
  • Experience with the following technologies is highly desirable: Redshift (w/Spectrum), Hadoop, Apache NiFi, Airflow, Apache Kafka, Apache Superset, Flask, Node.js, Express, AWS EMR, Tableau, Looker, Dremio
  • Experience with Agile methodology, using test-driven development.
  • Excellent command of written and spoken English
  • Self-driven problem solver

Benefits

  • Generous base salary
  • Three weeks of PTO
  • Excellent health benefits program (Medical, dental and vision)
  • Education and conference reimbursement
  • 401k retirement plan. We contribute 3% of base salary irrespective of employee's contribution
  • 100% paid short-term and long-term disability
  • 100% paid life insurance
  • Flexible Spending Account (FSA)
  • Casual working environment
  • Flexible working hours

SemanticBits, LLC is an equal opportunity, affirmative action employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability, or any other characteristic protected by law. We are also a veteran-friendly employer.

Solutions Architect
phData  
scala java big data cloud aws testing Nov 05 2019

If you're inspired by innovation, hard work and a passion for data, this may be the ideal opportunity to leverage your background in Big Data and Software Engineering, Data Engineering or Data Analytics experience to design, develop and innovate big data solutions for a diverse set of global and enterprise clients.  

At phData, our proven success has skyrocketed the demand for our services, resulting in quality growth at our company headquarters conveniently located in Downtown Minneapolis and expanding throughout the US. Notably, we've also been voted Best Company to Work For in Minneapolis for three (3) consecutive years.

As the world’s largest pure-play Big Data services firm, our team includes Apache committers, Spark experts and the most knowledgeable Scala development team in the industry. phData has earned the trust of customers by demonstrating our mastery of Hadoop services and our commitment to excellence.

In addition to a phenomenal growth and learning opportunity, we offer competitive compensation and excellent perks including base salary, annual bonus, extensive training, paid Cloudera certifications - in addition to generous PTO and a long term incentive plan for employees. 

As a Solution Architect on our Big Data Consulting Team, your responsibilities will include:

  • Design, develop, and deliver innovative Hadoop solutions; partner with our internal Infrastructure Architects and Data Engineers to build creative solutions to tough big data problems.

  • Determine the technical project road map, select the best tools, assign tasks and priorities, and assume general project management oversight for performance, data integration, ecosystem integration, and security of big data solutions. Mentor and coach Developers and Data Engineers, providing guidance on project creation, application structure, automation, code style, testing, and code reviews.

  • Work across a broad range of technologies, from infrastructure to applications, to ensure the ideal Hadoop solution is implemented and optimized.

  • Integrate data from a variety of data sources (data warehouse, data marts) utilizing on-prem or cloud-based data structures (AWS); identify new and existing data sources.

  • Design and implement streaming, data lake, and analytics big data solutions.

  • Create and direct testing strategies including unit, integration, and full end-to-end tests of data pipelines.

  • Select the right storage solution for a project, comparing Kudu, HBase, HDFS, and relational databases based on their strengths.

  • Utilize ETL processes to build data repositories; integrate data into the Hadoop data lake using Sqoop (batch ingest), Kafka (streaming), and Spark, Hive, or Impala (transformation).

  • Partner with our Managed Services team to design and install on-prem or cloud-based infrastructure including networking, virtual machines, containers, and software.

  • Determine and select the best tools to ensure optimized data performance; perform data analysis utilizing Spark, Hive, and Impala.

  • Local candidates work between the client site and our Minneapolis office. Remote US candidates must be willing to travel 20% for training and project kick-offs.

Technical Leadership Qualifications

  • 5+ years previous experience as a Software Engineer, Data Engineer, or Data Analyst

  • Expertise in core Hadoop technologies including HDFS, Hive, and YARN

  • Deep experience in one or more ecosystem products/languages such as HBase, Spark, Impala, Solr, Kudu, etc.

  • Expert programming experience in Java, Scala, or another statically typed programming language

  • Ability to learn new technologies in a quickly changing field

  • Strong working knowledge of SQL and the ability to write, debug, and optimize distributed SQL queries

  • Excellent communication skills including proven experience working with key stakeholders and customers

Leadership

  • Ability to translate “big picture” business requirements and use cases into a Hadoop solution, including ingestion of many data sources, ETL processing, data access and consumption, and custom analytics

  • Experience scoping activities on large-scale, complex technology infrastructure projects

  • Customer relationship management, including project escalations and participating in executive steering meetings

  • Coaching and mentoring data or software engineers
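The ingest-transform-serve flow described above can be seen in miniature with plain Scala collections; in a real engagement the same shape is expressed with Spark (or Hive/Impala) against the Hadoop data lake. All record and field names below are illustrative:

```scala
// Miniature ETL: raw ingested lines -> parsed, cleansed rows -> aggregated result.
case class Event(user: String, bytes: Long)

val raw = List("alice,200", "bob,50", "alice,100", "carol,oops") // stand-in for batch-ingested files
val parsed: List[Event] = raw.flatMap { line =>                  // transform: parse and drop bad rows
  line.split(",") match {
    case Array(u, b) => b.toLongOption.map(Event(u, _))
    case _           => None
  }
}
// Load/serve: aggregate into a queryable structure (a DataFrame groupBy in Spark).
val bytesPerUser: Map[String, Long] =
  parsed.groupMapReduce(_.user)(_.bytes)(_ + _)
```

Here `bytesPerUser` contains alice -> 300 and bob -> 50: the malformed "carol" row was cleansed out during the transform step, which is the essence of the pipelines this role designs at data-lake scale.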
Senior Type-System Engineer
Luna  
java scala senior ux design Nov 03 2019

Senior Type-System Engineer
Luna is looking for a senior type-system engineer to help build the next-generation interpreter and runtime for Luna, a project said by Singularity University to have the potential to change the lives of one billion people. If you have strong technical skills and a passion for all things compiler, then this role could be the one for you.

As a type-system engineer you'll work as part of the compiler team to design and implement Luna's new type system, including its underlying theory, type-checker, and inference engine. This work is _intrinsic_ to Luna's evolution, and will provide you with the opportunity to collaborate with a world-class team of engineers, community managers, and business developers (with experience at Bloomberg, GitHub, and PayPal, to name a few), making your mark on Luna's future.

What You'll Do
As a senior type-system engineer, you'll be working on the design and development of Luna's new type-system, in conjunction with the rest of the compiler team, to help support the language's evolution. This will involve:

  • Determining and formalising the theoretical underpinnings of the new type system in such a way as to ensure its soundness.
  • Producing both theoretical and practical treatments of the theory behind Luna's type system.
  • Working with the broader compiler team to implement the type-checking and type-inference engines as part of the greater interpreter.
  • Using the type-system's information to improve the interpreter's functionality and performance, as well as how it interacts with the users.

The Skills We're Looking For
We have a few particular skills that we're looking for in this role:

  • Practical and rich experience writing code in a functional programming language such as Haskell or Scala, including experience with type-level programming techniques (3+ years).
  • Experience working with the theory behind powerful type systems, including row types, type-checking and type-inference algorithms, and dependently-typed systems.
  • Practical experience building real-world type-systems, including facilities for both type-checking and inference.
  • An awareness of the UX impacts of type-systems, and a willingness to minimise their often-intrusive nature.
  • Practical experience in building large and complex software systems.
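If you are curious what the "type-level programming techniques" mentioned above look like in the Scala/Haskell sense, here is a self-contained miniature (all names invented for illustration): a phantom type parameter tracks a protocol state, so invalid transitions are rejected by the type-checker rather than failing at run time.

```scala
// Phantom-typed state machine: the S parameter exists only at compile time.
sealed trait State
sealed trait Open extends State
sealed trait Closed extends State

final class Door[S <: State] private () {
  // Evidence parameters let each transition compile only from the right state.
  def open(implicit ev: S =:= Closed): Door[Open]  = new Door[Open]
  def close(implicit ev: S =:= Open): Door[Closed] = new Door[Closed]
}
object Door { def closed: Door[Closed] = new Door[Closed] }

val d = Door.closed.open.close // compiles: Closed -> Open -> Closed
// Door.closed.close           // rejected: no evidence that Closed =:= Open
```

A type-checker and inference engine for Luna generalizes exactly this kind of reasoning: tracking what the types can prove and rejecting programs for which no evidence exists.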

It would be a big bonus if you had:

  • Experience writing Java and Scala code, as these will be used to implement the type-system.
  • Experience in writing comprehensive regression tests for both type-inference and type-checking systems.

Avoid the confidence gap. You don't have to match all of the skills above to apply!

Who You'll Work With
You'll be joining a distributed, multi-disciplinary team that includes people with skills spanning from compiler development to data-science. Though you'll have your area to work on, our internal culture is one of collaboration and communication, and input is always welcomed.

We firmly believe that only by working together, rather than putting our team members in their own boxes, can we create the best version of Luna there can be.

The Details
As part of the Luna team you'd be able to work from anywhere, whether that be at home, or on the go! We have team members distributed across the world, from San Francisco, to London, to Kraków. We welcome remote work and flexible schedules, or you can work from the Kraków office (or our planned SF office) if you'd like. We can provide competitive compensation and holiday, as well as the possibility of equity as time goes on.

How To Apply?
Send us an email at jobs@luna-lang.org, and tell us a little bit about yourself and why you think you'd be a good fit for the role! You can also tell us about:

  • Some of your past work or projects.
  • Why you'd like to work on Luna, and where you imagine Luna being in 5 years.
  • The most important features of a team that you'd like to work in.
  • Whether you take pride in your ability to communicate clearly and efficiently with your team.
Senior Software Engineer
python mysql scala senior postgresql frontend Oct 31 2019

Invitae is a healthcare technology company that leverages genetic information to empower doctors and patients to make informed medical decisions. Our software engineers work on a variety of projects, ranging from innovations in healthcare systems to taming the chaos of biology. We're constantly improving our tools and technologies to deliver the highest-quality actionable information for patient health. If you want to apply your knowledge and skills to improve the lives of millions of people, join our team.

About our team:

Invitae needs experienced engineers with diverse backgrounds to help us achieve our mission - provide genetic information to billions of people.  We are a cross-functional team of scientific domain experts and dedicated, curious engineers. We build systems that take massive amounts of genomic data, combine it with the world's scientific literature, add to it years of rigorously curated results, and package it all neatly for our scientists to consume. It's a lot of information. As the data gets bigger, our systems need to get better and faster. That's where you come in.

What you will do:

  • Help define and build new features or applications based on technology and business needs.
  • Write structured, tested, readable and maintainable code.
  • Participate in code reviews to ensure code quality and distribute knowledge.
  • Lead technical efforts for internal or external customer needs.
  • Support your teammates by continuing to learn and grow.

What you bring:

  • A minimum of 5 years of experience and a Bachelor's degree in Engineering.
  • Industry experience with full stack architecture and distributed systems.
  • Multiple years of industry experience with backend or frontend frameworks such as:
    • Python/Django
    • JavaScript/React
    • Scala/Play
    • Other common industry standards
  • Hands-on experience with databases (MySQL, PostgreSQL, NoSQL, etc.).  Tuning and query optimization a plus.
  • Top-notch communication skills.  Experience with distributed teams is a plus.
  • A mission-oriented mindset and a desire to never stop learning.

At Invitae, we value diversity and provide equal employment opportunities (EEO) to all employees and applicants without regard to race, color, religion, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status. We will consider for employment qualified applicants with criminal histories in a manner consistent with the requirements of the San Francisco Fair Chance Ordinance.

REMOTE Sr. Scala Engineer- Redis and Postgres REQUIRED, Sorry, No Visas
Surge  
scala redis postgresql big data cloud aws Oct 31 2019

Surge Forward is looking for smart, self-motivated, experienced, senior-level consultants who enjoy the freedom of telecommuting and flexible schedules, to work as long-term, consistent (40 hrs/week) independent contractors on a variety of software development projects.

TECHNICAL REQUIREMENTS:

EST Hours Required, Must live in the US or Canada to be considered. Sorry, NO Visas.

• Proficiency with Scala
• Proficiency with PostgreSQL, Redis
• Experience with data-processing formats like JSON and Parquet
• Familiar with SBT and Docker
• Knowledge of or experience with RESTful APIs
• Proficiency in source code management, software development, Unix tools and terminal
• Foundational knowledge of algorithms, networking, concurrency, file systems

NICE TO HAVE:
• Experience with software design patterns
• Understanding of Kubernetes
• Experience with business intelligence and digital analytics

QUALIFICATIONS:
• Experience and capability to design, specify and build a data API
• Solid understanding of large system design, streaming big data and performance trade-offs
• Deep experience with SQL and multiple Relational Database Management Systems with non-trivial databases.
• Demonstrated expertise in AWS cloud computing, and understanding of data management best practices
• Strong communication, verbal and written skills
• Keen to take leadership within your domains of expertise
• Solution-oriented, able to implement prototypes with new tools quickly
• 3-5+ years of experience in software engineering and data analytics
• Bachelor’s/Master’s degree in computer science, or a related field

RESPONSIBILITIES:
• Develop and maintain software components to enhance, transform and model raw data
• Develop and maintain a highly available and scalable real-time API
• Investigate and resolve performance bottlenecks, data quality issues and automation failures
• Implement and manage continuous delivery systems and methodologies
• Implement tests and validation processes to maintain the code and data quality


Only candidates located in the immediate area can be considered at this time. Sorry, No Visas.

Resume must include the tech stack under each job directly on the resume in order to be considered.

For immediate consideration, email resume and include your cell phone number and start date: jobs@surgeforward.com

Senior Application Security Engineer
Redox  
node-js aws javascript senior java python Oct 26 2019

Are you a Senior Application Security Engineer who is passionate about empowering engineering teams to build secure software? Redox is searching for an exceptionally talented Senior Application Security Engineer to join our Security Team. In this role, you will set the direction for our application security processes, tools, and capabilities. Redox is an engineering-first company, building the future of healthcare information exchange, the platform to help power healthcare companies and applications to work together!

Responsibilities:

    • Be an active voice in our small, focused security team as the primary engineer responsible for Application and Product Security.
    • Empower Redox to reduce avoidable vulnerabilities introduced into code, reduce the time to detect vulnerabilities that do exist, and mitigate vulnerabilities detected as quickly as possible.
    • Approach securing our company pragmatically, empathizing with engineers, developers and security champions to understand their needs.
    • Perform risk assessments, threat models and code reviews for our application.
    • Communicate issues and progress on complex problems in terms easily understood by stakeholders.
    • Coordinate and manage our penetration testing and bug bounty programs.
    • Support and build valuable training activities that uplift developer awareness of secure coding practices.
    • Build and maintain tools that detect potential security issues within our development pipeline.
    • Maximize security impact and reduce risk while minimizing the negative impact on our businesses and developer velocity.
    • Mentor and guide engineering teams on best practices for keeping our applications secure.

Background and Experience Requirement:

    • Knowledge of current application security vulnerabilities, how to detect them, how to prevent them and how to create awareness of them.
    • Proficiency and hands-on experience using tools that can detect security vulnerabilities, both statically and dynamically.
    • Experience securing Javascript, NodeJS and Typescript applications.
    • Experience with containerized and application mesh architectures.
    • Ability to communicate complex security threats and risks into simple terms for non-security (and even non-technical) stakeholders.
    • Development experience in at least two high-level languages such as NodeJS, Python, Ruby, C#, Scala, Java, etc.
    • Experience running threat modeling sessions with engineering teams.

Bonus Points:

    • Securing applications based on AWS Technologies
    • Offensive security (OSCP) certifications
    • Docker/K8s hardening experience
Software Engineer- All Levels
mysql java ruby c scala linux Oct 26 2019

Software Engineers (Database Internals, Systems, Storage, Networking) - All Levels - (Senior/Lead/Principal) (Multiple Locations) Note: By applying to this Software Engineer posting, recruiters and hiring managers across the organization who are hiring Software Engineers will review your resume. Our goal is for you to apply once and have your resume reviewed by multiple hiring teams.

Locations - Burlington MA, Indianapolis IN, San Francisco CA, Bellevue WA, Herndon VA, Vancouver BC Canada

About Salesforce Technology, Marketing & Product Engineering

Our Technology, Marketing & Product Engineering team is responsible for innovating and maintaining a massive distributed systems engineering platform that ships hundreds of features to production for tens of millions of users across all industries every day. Our users count on our platform to be highly reliable, lightning fast, supremely secure, and to preserve all of their customizations and integrations every time we ship. Our platform is deeply customizable to meet the differing demands of our vast user base, creating an exciting environment filled with complex challenges for our hundreds of agile engineering teams every day.

Check out our "We are Salesforce Engineering" video.

Are you a database expert, passionate about building technology that supports staggering growth and innovation? Join the teams that build the critical services that keep our databases and applications running smoothly. You will help deliver game-changing technology that will enable capacity management through scalable, intelligent data migration. We are looking for exceptional developers at all levels to take on big challenges and innovate on our database technology. Your Impact:

  • Lead design and development of the core database system functionalities
  • Come up with innovative ideas to improve performance and scalability in a large-scale platform
  • Implement comprehensive functional and system-test for your area of responsibility and for overall database quality
  • Debug, conduct root cause analysis, diagnose defects
  • Actively participate in the release and deployment process

Requirements:

  • Expertise in object-oriented programming in any of the following languages: Java, C++, C, C#, Ruby, Go, Scala, Python
  • Experience building a high-performance large-scale platform
  • Strong understanding of data structures, design patterns, concurrency, and scalability
  • Experience in a UNIX/Linux data center environment with fluency in command line interfaces and shell scripting
  • Strong understanding of schema design and SQL development
  • Experience with relational database internals and systems development
  • Experience with scaling MySQL at significant levels including sharding and master/master replication
  • Experience developing scalable, resilient, and fault-tolerant transactional and distributed systems
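Since the requirements call out scaling MySQL through sharding, here is the routing contract at its smallest (shard names are illustrative): each key maps deterministically to one shard, so every reader and writer agrees on where a row lives.

```scala
// Deterministic shard routing: same key -> same shard, always.
val shards = Vector("db-0", "db-1", "db-2", "db-3")

def shardFor(key: String): String =
  shards(Math.floorMod(key.hashCode, shards.length)) // floorMod keeps negative hashes in range

val home = shardFor("user:42")
// Every lookup for "user:42" routes to the same shard `home`.
```

Production sharding layers consistent hashing or directory lookups on top so that adding a shard does not remap most keys, but the deterministic routing shown here is the invariant everything else preserves.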

Preferred Requirements:

  • Experience developing test automation frameworks for complex systems
  • Experience with database catalog, upgrade, query execution, query optimization
  • PostgreSQL software development experience and community involvement as a contributor
  • Experience with highly concurrent multi-threaded/multi-process data structures and design
  • Performance measurement, analysis, and optimization

Education:

  • MS in Computer Science or related field, or
  • BS in Computer Science plus relevant job-related experience

Are you an upcoming or recent graduate (within the past 2.5 years)? Please check out our FutureForce program at www.salesforce.com/futureforce. We appreciate your interest but we are seeking industry experienced engineers.

Salesforce, the Customer Success Platform and world's #1 CRM, empowers companies to connect with their customers in a whole new way. The company was founded on three disruptive ideas: a new technology model in cloud computing, a pay-as-you-go business model, and a new integrated corporate philanthropy model. These founding principles have taken our company to great heights, including being named one of Forbes’s “World’s Most Innovative Company” five years in a row and one of Fortune’s “100 Best Companies to Work For” eight years in a row. We are the fastest growing of the top 10 enterprise software companies, and this level of growth equals incredible opportunities to grow a career at Salesforce. Together, with our whole Ohana (Hawaiian for "family") made up of our employees, customers, partners and communities, we are working to improve the state of the world.


Salesforce information

We are Salesforce Engineering

Salesforce FY18 Year in Review 

Salesforce Ohana Culture

Salesforce Engineering behind the cloud
https://medium.com/salesforce-engineering

Posting Statement

Salesforce.com and Salesforce.org are Equal Employment Opportunity and Affirmative Action Employers. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender perception or identity, national origin, age, marital status, protected veteran status, or disability status. Headhunters and recruitment agencies may not submit resumes/CVs through this Web site or directly to managers. Salesforce.com and Salesforce.org do not accept unsolicited headhunter and agency resumes. Salesforce.com and Salesforce.org will not pay fees to any third-party agency or company that does not have a signed agreement with Salesforce.com or Salesforce.org.

Pursuant to the San Francisco Fair Chance Ordinance and the Los Angeles Fair Chance Initiative for Hiring, Salesforce will consider for employment qualified applicants with arrest and conviction records.

Share this job:
Lead Software Engineer
scala aws python machine learning big data junior Oct 24 2019

X-Mode Social, Inc. is looking for a full-time lead software engineer to work on X-Mode's data platform and join our rapidly growing team. For this position, you can work either remotely from anywhere in the U.S. or from our Reston, VA headquarters. Our technical staff is scattered across the U.S., so you'll need to be comfortable working remotely. We use videoconferencing and chat tools (like Slack and Google Meet) to coordinate, Jira for tasking, and Bitbucket for source control. We work in short sprints, and we'll count on you to provide estimates for tasks to be completed and delivered.

WHAT YOU'LL DO:

  • Use big data technologies, processing frameworks, and platforms to solve complex problems related to location
  • Build, improve, and maintain data pipelines that ingest billions of data points on a daily basis
  • Efficiently query data and provide data sets to help the Sales and Client Success teams with any data evaluation requests
  • Ensure high data quality through analysis, testing, and usage of machine learning algorithms
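As a sketch of the kind of batch transformation this role involves (hypothetical names throughout; a real pipeline here would express the same logic on a Spark Dataset rather than plain collections), deduplicating and quality-checking location pings in Scala might look like:

```scala
// Hypothetical sketch: deduplicate and sanity-check location pings.
// Plain Scala collections are used so the example is self-contained;
// the same combinators (groupBy, filter) carry over to Spark Datasets.
final case class Ping(deviceId: String, ts: Long, lat: Double, lon: Double)

object PingPipeline {
  // Keep one ping per (device, timestamp) pair
  def dedupe(pings: Seq[Ping]): Seq[Ping] =
    pings.groupBy(p => (p.deviceId, p.ts)).values.map(_.head).toSeq

  // Drop pings whose coordinates fall outside valid ranges
  def qualityFilter(pings: Seq[Ping]): Seq[Ping] =
    pings.filter(p => p.lat.abs <= 90 && p.lon.abs <= 180)
}
```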

WHO YOU ARE:

  • 3-5+ years of Spark and Scala experience
  • Experience working with very large databases and batch processing datasets with hundreds of millions of records
  • Experience with the Hadoop ecosystem, e.g. Spark, Hive, or Presto/Athena
  • Real-time streaming with Kinesis, Kafka, or similar technologies
  • 4+ years working with SQL and relational databases
  • 3+ years working in Amazon Web Services (AWS)
  • A self-motivated learner who is willing to self-teach
  • Willing to mentor junior developers
  • Self-starter who can maintain a team-centered outlook
  • BONUS: Experience with Python, Machine Learning
  • BONUS: GIS/Geospatial tools/analysis and any past experience with geolocation data

WHAT WE OFFER:

  • Cool people, solving cool problems.
  • Competitive Salary
  • Medical, Dental and Vision
  • 15 Days of PTO (Paid Time Off)
  • We value your input. This is a chance to get in on the "ground floor" of a growing company
Share this job:
Cloud Native Java Developer
java cloud javascript scala Oct 24 2019

Cloud Native Java Developer (Remote United States)

At Railroad19, we develop customized software solutions and provide software development services. 
We are currently seeking a Cloud Native Java Developer that is fluent in both Spring Boot and Java 8 to be a technical resource for the development of clean and maintainable code. In addition to contributing code and tangible deliverables, the role is expected to work as an adviser to help identify, educate, and foster best-in-class solutions. Creating these relationships requires strong communication skills.

At Railroad19, you are part of a company that values your work and gives you the tools you need to succeed. We are headquartered in Saratoga Springs, New York, but we are a distributed team of remote developers across the US. 
This is a full-time role with vacation, full benefits and 401k.  Railroad19 provides competitive compensation with excellent benefits and a great corporate culture.

The role is remote (U.S.-based candidates only) and full time (no contractors, Corp-to-Corp, or 1099).
Core responsibilities:

  • Understand our client's fast-moving business requirements
  • Negotiate appropriate solutions with multiple stake-holders
  • Write and maintain scalable enterprise quality software
  • Build web applications using Spring Boot
  • Build Microservices that connect to Oracle and NoSQL databases
  • Build software components that integrate with a workflow engine and/or ESB to execute asynchronous business processes
  • Manage the complete software development life cycle
  • Write functional and unit tests in order to maintain code quality
  • Work with Jenkins to perform continuous integration
  • Collaborate with other teams in order to deliver a high-performance application with few or no defects
  • Identify new opportunities, tools, and services to enhance the custom software platform
  • Support and troubleshoot issues (process & system), identify root cause, and proactively recommend sustainable corrective actions

Skills & Experience:

  • Advanced Java development experience
  • Hands on experience with Java 8 (especially streaming collections and functional interfaces)
  • Hands on with Scala is a plus
  • Hands on experience with NoSQL technologies is a plus
  • Hands on experience with Spring Boot, Spring Cloud, and Netflix OSS is a plus
  • Hands on experience with Oracle, ETL
  • Hands on experience with AngularJS and/or similar JavaScript frameworks is a plus
  • Demonstrates willingness to learn new technologies and takes pride in delivering working software
  • Excellent oral and written communication skills
  • Experience participating on an agile team
  • Is self-directed and can effectively contribute with little supervision
  • Bachelor's or master's degree in computer science, computer engineering, or other technical discipline; or equivalent work experience

No Agencies***

Share this job:
Senior Scala Engineer
Luna  
scala senior saas cloud design Oct 17 2019

Overview

Luna is looking for a senior Scala software engineer to take charge of the design, development, and evolution of the new SaaS offering for Luna, a project said by Singularity University to have the potential to change the lives of one billion people. If you bring strong technical skills and have a passion for collaboration, this role could be for you.

As a senior Scala engineer, you'll be leading the effort to design and develop our new SaaS offering, providing a web-based version of Luna to our clients. Your work will be integral to the next phase of Luna's development, as we expand our offering beyond the open-source project. You'll be able to work with a world-class team of skilled engineers, community managers, and business developers (from Bloomberg, GitHub and PayPal to name a few), and put your indelible stamp on Luna's future.

What You'll Do

As a senior Scala software engineer, you'll be in charge of building the SaaS offering for Luna, hosting both the language and its IDE in the cloud. This will involve:

  • Working closely with the internal teams to design a secure and scalable SaaS architecture.
  • Developing a SaaS solution based upon that design with robust tooling and reliability, as well as inbuilt support for collaboration.
  • Contributing to the evolution of this vibrant open-source project by bringing a new component to its ecosystem and product offering.

The Skills We're Looking For

We have a few particular skills that we're looking for in this role:

  • Solid understanding of the Scala language and the elements of functional programming.
  • Understanding of immutability.
  • Knowledge of JVM basics: the memory model, threads, and how the runtime works.
  • Knowledge of data structures and basic algorithms.
  • Systems design and networking (understanding of basic principles).
  • Experience working with Git and Linux.
  • Scala on the backend: servers, web services.
  • The Akka libraries: actors, streams, HTTP.
  • Git and source-management flows, deployment.
  • The Cats and cats-effect libraries.
  • Messaging systems, protocols, and design patterns.

It would be a big bonus if you also had:

  • Experience with cloud computing architectures (AWS, Google Cloud).
  • Experience with container technologies (Docker, Kubernetes, etc.).
  • Experience working in close conjunction with multiple product teams to ensure that the solutions you provide meet their needs.
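To illustrate the immutability and functional style the skills list above calls for (a minimal sketch with a hypothetical domain, not Luna's actual code): state is modelled with immutable case classes, and updates are pure functions that return new values rather than mutating in place.

```scala
// Minimal sketch of immutable modelling in Scala (hypothetical domain).
final case class Account(id: String, balance: BigDecimal)

object Ledger {
  // Pure function: returns a new Account, never mutates the argument
  def credit(a: Account, amount: BigDecimal): Account =
    a.copy(balance = a.balance + amount)

  // Folding a sequence of credits stays referentially transparent:
  // the same inputs always produce the same final Account
  def applyAll(a: Account, amounts: Seq[BigDecimal]): Account =
    amounts.foldLeft(a)(credit)
}
```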

Avoid the confidence gap. You don't have to match all of the skills above to apply!

Share this job:
Remote Senior Data Engineer
Hays  
scala senior python docker aws testing Oct 16 2019
Hays Specialist Recruitment is working in partnership with Security Scorecard to manage the recruitment of this position

The end client is unable to sponsor or transfer visas for this position; all parties authorized to work in the US without sponsorship are encouraged to apply.

This position is NOT eligible for subcontractors or those that require sponsorship.

Hays is conducting an exclusive search for a Senior Data Engineer for a cybersecurity company based in NYC. Security Scorecard builds a unique product that rates the cybersecurity posture of corporate entities through scored analysis of cyber threat intelligence signals, for the purposes of third-party management and IT risk management. They have a very modern technology stack and work in a dynamic, agile environment.

The position is 100% remote, and you'll be responsible for managing the analytic pipeline using Spark, Hadoop, etc.: leveraging cutting-edge technologies to support new and existing services and processes, driving projects through all stages of development, and improving the effective output of the engineering team by managing quality and identifying inconsistencies. Your experience should include 5+ years with Scala or another functional language (commercial environment preferred), 3+ years with Spark and the Hadoop ecosystem (or similar frameworks), familiarity with tools like AWS and Docker, experience working with third-party software, and expert skills with SQL.

Remote Senior Data Engineer - Perm - New York, NY

Remote Senior Data Engineer Skills & Requirements

Responsibilities
* Manage the analytic pipeline using Spark, Hadoop, etc
* Leverage cutting-edge technologies to support new and existing services and processes.
* Quickly and efficiently design and implement in an agile environment
* Work with other team members to implement consistent architecture
* Drive projects through all stages of development
* Actively share knowledge and responsibility with other team members and teams
* Improve the effective output of the engineering team by managing quality and identifying inconsistencies.

Requirements:
3+ years of experience with:
* Scala or Python, both preferred
* Distributed systems (e.g. Spark, Hadoop)
* Database systems (e.g. Postgres, MySQL)
Experience with the following is preferred:
* IP (v4/v6) allocation and addressing conventions
* DNS conventions and best practices
* Anti-abuse investigations
* Bachelor's degree (CS, CE/EE, Math, or Statistics preferred)

Why Hays?

You will be working with a professional recruiter who has intimate knowledge of the Information Technology industry and market trends. Your Hays recruiter will lead you through a thorough screening process to understand your skills, experience, needs, and drivers. You will also get support on resume writing, interview tips, and career planning, so when there's a position you really want, you're fully prepared to get it. Additionally, if the position is a consulting role, Hays offers you the opportunity to enroll in full medical, dental, or vision benefits.

* Medical
* Dental
* Vision
* 401K
* Life Insurance ($20,000 benefit)

Nervous about an upcoming interview? Unsure how to write a new resume?

Visit the Hays Career Advice section to learn top tips to help you stand out from the crowd when job hunting.

Hays is an Equal Opportunity Employer.

Drug testing may be required; please contact a recruiter for more information.

Share this job:
Remote Senior Engineer Scala/JavaScript/React
scala aws javascript senior saas docker Oct 15 2019

THE OPPORTUNITY

We are a young, lean, funded AngelPad company looking for an experienced Senior Full-stack Engineer with experience building SaaS products. We are a fully distributed team with people working all around the world. You will have the flexibility and freedom to work in the environment of your choosing, whether that be at home, a cafe or co-working space. We're looking for a senior engineer to help build our API-First core app using Scala/Play 2 and JavaScript/React.

ABOUT YOU

You love building new things. A large part of what you’ll do each day is building and maintaining Process Street’s core workflow engine and platform. This is a full-stack position, so you’ll be working across the entire Process Street codebase. You'll be working on backend development in Scala. You'll also be working with frontend tech in JavaScript and TypeScript. Process Street is built on Play 2, AngularJS 1, React, Redux, and AWS. We use modern tools, which means you’ll have the opportunity to work with software like PostgreSQL, Redis, CircleCI, Docker and much more.

You love shipping to customers. Your engineering projects will focus on understanding customers' needs and translating those needs from product specifications into functional, production ready code. You'll have the opportunity to ship code daily that will be used by hundreds of thousands of people.

WHAT YOU’LL DO

  • Plan and build product features - directly impact how our customers can be more productive.
  • Experiment: this is a startup so everything can change.
  • Ship code to hundreds of thousands of users every week.
  • Participate in code reviews and help to guide software architecture decisions.
  • Mentor and learn from the engineers, product managers, and designers on your team.

WHAT YOU’LL BRING

  • 5+ years of software development experience
  • Experience with building and maintaining a SaaS product at scale
  • An affinity for creating software that is extensible, performant, and easy to read
  • A degree in computer science, software engineering, or a related field
Share this job:
Senior Backend Engineer
Rollbar  
aws python scala node-js senior backend Oct 10 2019

At Rollbar, we help developers build better software faster - and make their lives easier. We are a small team based in San Francisco with big ambition and a global presence. Over 100,000 developers use our product to power all kinds of applications that affect people’s lives and livelihoods. Rollbar is used by some of the best engineering teams in the world, including Twilio, Salesforce, Zendesk, Instacart and Twitch.  We are looking for an experienced Backend Engineer to join our Scale team to not only help build and run our systems and services, but help define backend engineering at Rollbar.

Our tech stack:

    • React, Webpack, Sass
    • Python, Node.js, Scala
    • MySQL, Elasticsearch, Redis, Memcache, Spark
    • Google Cloud Platform, Kubernetes, Terraform, Ansible, Consul, CircleCI, Rollbar

You will:

    • Work with other engineers to design and build highly available and scalable systems
    • Measure and monitor system performance, availability, and reliability
    • Implement performance improvements to our processing pipeline
    • Be in the on-call rotation and be the first line of defense for major infrastructure issues
    • Help improve the tools we use to build and run Rollbar

You have:

    • 5+ years experience in a software engineering or SRE role
    • Experience building and scaling real-time streaming data pipelines
    • Experience operating services running on cloud providers like AWS or GCP
    • Attention to detail and a methodical approach - ensuring things rarely fall through the cracks
    • BS in Computer Science or equivalent work experience
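As a toy illustration of the windowed aggregation at the heart of many real-time streaming pipelines (hypothetical names; Rollbar's actual pipeline runs on its Spark-based stack, not plain collections), counting events per fixed time window in Scala might be sketched as:

```scala
// Hypothetical sketch: bucket event timestamps into fixed-size windows
// and count events per window - the core step of a streaming aggregation.
object WindowCounts {
  // timestamps in milliseconds; windowMs is the window width
  def perWindow(timestamps: Seq[Long], windowMs: Long): Map[Long, Int] =
    timestamps
      .groupBy(_ / windowMs)                // window index for each event
      .map { case (w, ts) => (w, ts.size) } // number of events per window
}
```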

Benefits and perks

    • Rapid career growth opportunities
    • Competitive salary and stock options
    • Medical, dental and vision health benefits
    • Parental leave - 12 weeks 
    • Generous hardware and software allowance
    • Casual work environment
    • Inclusive team-oriented culture
    • Have fun and make an impact
Share this job: