Remote big-data Jobs

This Month

Senior Partner Solution Architect
 
senior java python javascript big data ios May 20
We work with the world’s biggest enterprise customers focused on leading a revolution to transform their organizations to take advantage of the digital economy. The list includes Amadeus, Concur, eBay, GE, LinkedIn, and many others. You can learn more here: www.couchbase.com/customers

Are you an individual who is customer focused, innovative, solution oriented and enjoys working with technology partners and global SIs? If so, read on. Couchbase is looking for a talented Senior Partner Solution Architect with expertise in databases, big data, cloud and/or mobile technologies to support our product and partner organization. This position will cover a variety of exciting technologies, including big data, mobile, IoT, containers & orchestration, DevOps and cloud technology ecosystem partners.

Responsibilities

  • Working with partners to create technical integrations and/or end-to-end solutions between their products and Couchbase. Examples include: Red Hat/IBM, Grafana Labs and Prometheus, Informatica, Confluent/Kafka, Databricks/Spark, Elasticsearch, VMware/Pivotal/Spring, and public Cloud providers
  • Assisting our customers to deploy partner integrations and solutions
  • Supporting our direct sales teams when they leverage partner solutions at customers
  • Creating technical and marketing collateral describing partner integrations and/or solutions
  • Developing and delivering exceptional company/product presentations and demonstrations 1:1 and 1:many
  • Working with product management and engineering to drive enhancements to the product 
  • Engaging the partner community by attending technology events, writing blog posts and delivering presentations at trade shows
  • Educating partners and maximizing Couchbase’s success through effective coaching and product positioning

Requirements

  • 10+ years working in a customer-facing position such as presales, post-sales or consulting
  • 10+ years of experience with traditional RDBMS or NoSQL databases, including data modeling. Direct exposure to Couchbase, Cassandra, MongoDB, Aerospike, Redis and Hadoop/HBase is preferable, but not required
  • 10+ years of experience with Linux, Windows and their ecosystems, including Bash, Python and GitHub
  • Familiarity with programming languages such as Go, Python, JavaScript, Java, .NET or Objective-C
  • Bachelor's or Master's degree in Computer Science or a related field
  • Strong communication and presentation skills with an ability to present complex solutions concisely 1:1 and to a large audience
  • Fluency in speaking to the full range of IT stakeholders including the IT Director / CIO level
  • Enthusiastic and knowledgeable about some established and emerging trends across the cloud ecosystem. 
  • Continuously learning about exciting new technologies like Kubernetes, Apache Camel, Prometheus, AWS Lambda, OpenWhisk, Kafka, Spark, Quarkus, and Spring Data, among other Cloud Native Computing Foundation projects
  • Passionate about the mobile and IoT ecosystem, including Android, iOS, field gateways and distributed systems with intermittent connectivity
  • Good knowledge of data center architecture covering multi datacenter and global deployments
  • Organized and analytical, able to thrive under pressure with outstanding time management skills
  • Creative and adaptive approach to removing obstacles and accelerating the integration efforts
  • Ability to travel to both partner and customer sites 25% or more

About Couchbase

Couchbase's mission is to be the platform that accelerates application innovation. To make this possible, Couchbase created an enterprise-class, multi-cloud NoSQL database architected on top of an open source foundation. Couchbase is the only database that combines the best of NoSQL with the power and familiarity of SQL, all in a single, elegant platform spanning from any cloud to the edge.  
 
Couchbase has become pervasive in our everyday lives; our customers include industry leaders Amadeus, AT&T, BD (Becton, Dickinson and Company), Carrefour, Comcast, Disney, DreamWorks Animation, eBay, Marriott, Neiman Marcus, Tesco, Tommy Hilfiger, United, Verizon, Wells Fargo, as well as hundreds of other household names.

Couchbase’s HQ is conveniently located in Santa Clara, CA with additional offices throughout the globe. We’re committed to a work environment where you can be happy and thrive, in and out of the office.

At Couchbase, you’ll get:
* A fantastic culture
* A focused, energetic team with aligned goals
* True collaboration with everyone playing their positions
* Great market opportunity and growth potential
* Time off when you need it
* Regular team lunches and fully-stocked kitchens
* Open, collaborative spaces
* Competitive benefits and pre-tax commuter perks

Whether you’re a new grad or a proven expert, you’ll have the opportunity to learn new skills, grow your career, and work with the smartest, most passionate people in the industry.

Revolutionizing an industry requires a top-notch team. Become a part of ours today. Bring your big ideas and we'll take on the next great challenge together.

Want to learn more? Check out our blog: https://blog.couchbase.com/

Couchbase is proud to be an equal opportunity workplace. Individuals seeking employment at Couchbase are considered without regards to age, ancestry, color, gender (including pregnancy, childbirth, or related medical conditions), gender identity or expression, genetic information, marital status, medical condition, mental or physical disability, national origin, protected family care or medical leave status, race, religion (including beliefs and practices or the absence thereof), sexual orientation, military or veteran status, or any other characteristic protected by federal, state, or local laws.
Software Engineering Architect
Charitable Donations & Payments
api python php jenkins java big data May 20

We are Givelify®, where fintech meets philanthropy. We help people instantly find causes that inspire them to action so they can change the world—one simple, joyful gift at a time. 

The Software Engineering Architect is tasked with building payment systems at scale. At our core, we enable our Donors to give to the causes and organizations they are most passionate about. You will build systems that securely facilitate the movement of money through the credit, debit and ACH networks. You will build merchant on-boarding, verification, KYC and reporting systems. You will help develop and implement financial fraud detection systems.

Some of the meaningful work you will perform:

  • Build payment systems at scale. Build systems that help with the movement of money through the credit, debit and ACH networks. Build merchant on-boarding, verification, KYC and reporting systems. Assist in the development and implementation of financial fraud detection systems.
  • Write software that collects and queries data, and compose queries for investigation and analysis. Collect large volumes of data in real time from our applications and compose the ad hoc queries necessary to develop and support our products.
  • Architect and build APIs and libraries that other products and engineers will consume
  • Participate in and guide engineering teams on all things technical: architecture definition and design ownership covering not only technology but also data security, deployment and cloud strategy, CI/CD, and coding best practices
  • Understand our codebase and systems, and the business requirements they implement, to effectively make changes to our applications and investigate issues
  • Serve as an effective communicator who can inform, explain, enable, teach and persuade, and facilitate discussion via whiteboarding and other collaboration platforms
  • Effectively collaborate on and share ownership of your team’s codebase and applications. Fully engage in team efforts, speak up for what you think are the best solutions, and be able to converse respectfully and compromise when necessary.

We welcome your experience and talents:

  • BS/MS/PhD in Computer Science, Computer Engineering, Mathematics or Physics, or equivalent fintech work experience
  • 7+ years of building payment processing and KYC systems that connect with APIs from major payment acquirers and KYC service providers
  • Experience building web services and developing APIs for other engineers
  • Technical leader with 10+ years of work in software engineering
  • Strong object-oriented design and development skills and advanced knowledge of PHP, Python, Java or similar programming languages
  • Familiarity working in Agile/Scrum environments
  • Familiarity with DevOps and configuration tools (Git, Jira, Jenkins, etc.)
  • Strong SQL composition skills. Knowledge of big data and NoSQL databases is a plus!
  • A distinguished member of the engineering community, whether through extracurricular activities, publications or associations with organizations like IEEE

Our People 
 
We are a virtual team of high-performing professionals who innovate & collaborate to fulfill our mission to help people instantly find causes that inspire them to action so they can change the world – one simple, joyful gift at a time. Our culture of integrity, heart, simplicity, & that "wow" factor fuel our aspiration to be among the tech industry's most inclusive & purpose-driven work environments. 
 
We take great pride in providing competitive pay, full benefits, amazing perks, and most importantly, the opportunity to put passion & purpose to work. 
 
Our Product 
 
From places of worship to world-changing nonprofit groups, Givelify harnesses the power of technology to bridge the gap between people and the causes they care about. Tap. Give. Done. Givelify's payment solution is designed to make the experience of giving as beautiful as the act of giving. 
 
Learn more about us at https://careers.givelify.com

Fraud Analyst - Fiat Team
Binance  
blockchain big data finance May 13
Please note, all positions at Binance require relevant experience. Applications without required experience will not be considered.

Binance is the global blockchain company behind the world’s largest digital asset exchange by trading volume and users, serving a greater mission to accelerate cryptocurrency adoption and increase the freedom of money for people around the world.

Are you looking to be a part of one of the most influential companies in the blockchain industry and contribute to the crypto-currency revolution that is changing the world?

Binance’s Fiat team is responsible for expanding global fiat initiatives for Binance, building and investing in the bridges that allow users from the traditional financial ecosystem to access crypto. We do this by building local fiat exchanges, seeking partnerships with banks and payment platforms for servicing fiat, and integrating strategic investments, JVs and acquisitions.

Job Scope

This role is responsible for overseeing day-to-day fraud management activities: conducting fraud monitoring of Binance customers using internal and external systems to reduce fraud-related losses and fines arising from inadequate fraud risk management.

This role will work closely with the product and big data teams on fraud prevention with tailored risk models, rules and plans, and will assist customers with chargeback (dispute) and fraud monitoring programs to mitigate losses and scheme sanctions.
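For illustration only (this is not Binance's system; every rule name and threshold below is invented), fraud management rules of the kind this role maintains are often expressed as small scoring functions layered over each transaction, with the thresholds later tuned against chargeback outcomes by the big data team:

```python
# Hypothetical rule-based fraud screen: each rule inspects a card
# transaction and returns a risk score; transactions whose total
# score crosses a threshold are flagged for manual review.

def amount_rule(txn):
    """Large single purchases carry more chargeback risk."""
    return 2 if txn["amount_usd"] > 2000 else 0

def velocity_rule(txn):
    """Many attempts in a short window suggests card testing."""
    return 3 if txn["attempts_last_hour"] > 5 else 0

def geo_rule(txn):
    """Card country differing from IP country is a weak signal."""
    return 1 if txn["card_country"] != txn["ip_country"] else 0

RULES = [amount_rule, velocity_rule, geo_rule]
REVIEW_THRESHOLD = 3

def screen(txn):
    score = sum(rule(txn) for rule in RULES)
    return {"score": score, "flag": score >= REVIEW_THRESHOLD}

result = screen({
    "amount_usd": 2500,
    "attempts_last_hour": 1,
    "ip_country": "DE",
    "card_country": "US",
})
print(result)  # score 3 (amount 2 + geo 1), flagged for review
```

Real programs replace static thresholds like these with risk models trained on dispute data, which is where the collaboration with the product and big data teams described above comes in.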

This position can be located in Asia or Europe.

Responsibilities

  • Maintain and monitor fraud management rules and strategies for credit cards and payment processors
  • Monitor and review suspected fraud and work with both internal and external stakeholders to conduct investigations and take appropriate actions
  • Monitor chargeback performance and provide early warning of Binance at risk of entering scheme chargeback and fraud monitoring program
  • Assist customers in case of fraud attack and provide follow up action plans
  • Work with 3rd party risk and fraud management platforms and support with development teams to create our own tools and solutions where needed
  • Support Fraud Manager to maintain and improve Binance fraud management policies, procedures and manuals

Requirements

  • 3-5 years of fraud management experience at payment processor, acquirer, bank or e-commerce platform
  • Experience in configuring and analysing results from risk and fraud management systems, strong data analytical and quantitative skills
  • Experience in dealing with card scheme, acquirer and 3rd party vendors
  • Familiar with card scheme rules, such as chargeback (dispute) and fraud monitoring program and local regulatory requirement
  • Experience in chargeback and fraud prevention with regards to cryptocurrency is a plus
  • Language: Fluent English is a must; Chinese is a plus
  • Attention to detail and accuracy
  • Proactive, strong prioritisation and execution skills
  • Ability to work effectively with different departments
  • Self-motivated and a good team player
Conditions

  • Do something meaningful; be a part of the future of finance technology and the No. 1 company in the industry
  • Fast-moving, challenging and unique business problems
  • International work environment and flat organisation
  • Great career development opportunities in a growing company
  • Possibility for relocation and international transfers mid-career
  • Competitive salary
  • Flexible working hours, casual work attire
Senior Data Engineer
 
senior big data cloud aws May 12
Atlassian is continuing to hire for all open roles with all interviewing and on-boarding done virtually due to COVID-19. Everyone new to the team, along with our current staff, will temporarily work from home until it is safe to return to our offices.

Job Duties:

Build software solutions using public cloud offerings, big data processing, and storage technologies to develop a world-class data solution that powers crucial business decisions throughout the organization. Collect, extract, clean, and organize data across multiple datasets based on a deep understanding of big data challenges and the ecosystem. Manage the process of making data actionable through solution building and architecting with public cloud offerings such as Amazon Web Services, Redshift, S3, EMR/Spark, and Presto/Athena. Own problems end-to-end by maintaining an understanding of the systems that generate data and automating the analyses and reporting, drawing on data pipeline expertise with workflow tools such as Airflow, Oozie, or Luigi. Design marketing data infrastructure based on the information architecture of the company’s website, drawing on experience with Spark and Hive. Improve data by adding new sources, coding business rules, and producing new metrics that support the business, using test automation and continuous delivery while ensuring data quality across the multiple datasets used for analytical purposes. Utilize knowledge of SQL, query tuning, schema design, ETL processes, test automation, continuous delivery, continuous integration, and source control systems such as Git. Possess a solid understanding of and experience in building RESTful APIs and microservices, e.g. with Flask.

Minimum Requirements:

Master’s degree in Computer Science or a related field of study plus two (2) years of experience in data engineering with test automation and continuous delivery; ensuring data quality across multiple datasets used for analytical purposes; solution building and architecting with public cloud offerings such as Amazon Web Services, Redshift, S3, EMR/Spark, and Presto/Athena; Lambda architecture or other big data architectural best practices; and Spark and Hive.

Alternate Requirements:

Bachelor’s degree in Computer Science or a related field of study plus five (5) years of progressive experience in data engineering with test automation and continuous delivery; ensuring data quality across multiple datasets used for analytical purposes; solution building and architecting with public cloud offerings such as Amazon Web Services, Redshift, S3, EMR/Spark, and Presto/Athena; Lambda architecture or other big data architectural best practices; and Spark and Hive.

Travel Requirements:

Up to 10% domestic travel.
More about our benefits

Whether you work in an office or a distributed team, Atlassian is highly collaborative and yes, fun! To support you at work (and play) we offer some fantastic perks: ample time off to relax and recharge, flexible working options, five paid volunteer days a year for your favourite cause, an annual allowance to support your learning & growth, unique ShipIt days, a company paid trip after five years and lots more.

More about Atlassian

Creating software that empowers everyone from small startups to the who’s who of tech is why we’re here. We build tools like Jira, Confluence, Bitbucket, and Trello to help teams across the world become more nimble, creative, and aligned—collaboration is the heart of every product we dream of at Atlassian. From Amsterdam and Austin, to Sydney and San Francisco, we’re looking for people who want to write the future and who believe that we can accomplish so much more together than apart. At Atlassian, we’re committed to an environment where everyone has the autonomy and freedom to thrive, as well as the support of like-minded colleagues who are motivated by a common goal to: Unleash the potential of every team.

Additional Information

We believe that the unique contributions of all Atlassians is the driver of our success. To make sure that our products and culture continue to incorporate everyone's perspectives and experience we never discriminate on the basis of race, religion, national origin, gender identity or expression, sexual orientation, age, or marital, veteran, or disability status.

All your information will be kept confidential according to EEO guidelines.

Pursuant to the San Francisco Fair Chance Ordinance, we will consider for employment qualified applicants with arrest and conviction records.
Software Engineer - Data Platform
 
big data cloud May 11
Atlassian is continuing to hire for all open roles with all interviewing and on-boarding done virtually due to COVID-19. Everyone new to the team, along with our current staff, will temporarily work from home until it is safe to return to our offices.

Are you passionate about data platforms and tools? Are you a forward-thinking, structured problem solver who is passionate about building systems at scale? Do you understand data tools, know how to use them, and want to help our users to make data actionable? If so, this role with our team at Atlassian is for you.

We are looking for a Software Engineer to join our Data Platform Team and build a world-class data solution that powers crucial business decisions throughout the organization.

You will partner with analytical teams, data engineers and data scientists across various initiatives working with them to understand the gaps, and bring your findings back to the team to work on building these capabilities.
In this role, you will be part of the Discovery and Consumption team under the Data Platform. The team focuses on improving the discoverability and trustworthiness of data. We are building frictionless data experiences for all Atlassian employees by offering services that help generate impactful insights, such as the Atlassian data portal, data quality framework, metrics store, and much more.

More about you
You have proven experience working with big data ecosystems (AWS is an advantage). You’ve probably been in the industry as an engineer for 2+ years and have developed a passion for the data that drives businesses. You've got industry experience working with large datasets, and you're interested in self-serve analytics platforms and tools.

On your first day, we'll expect you to have:

  • A deep understanding of big data challenges
  • Experience building solutions using public cloud offerings such as Amazon Web Services
  • Experience with big data processing and storage technologies such as Spark, S3 and Druid
  • SQL knowledge
  • A solid understanding of and experience in building RESTful APIs and microservices, e.g. with Flask
  • Experience with test automation and ensuring data quality across multiple datasets used for analytical purposes
  • Experience with continuous delivery, continuous integration, and source control systems such as Git
  • Experience with Python
  • A degree in Computer Science, EE, or a related STEM field

It's great, but not required, if you have:

  • Experience with Databricks
  • Experience with React

More about the team
Data is a BIG deal at Atlassian. We ingest over 180 billion events each month into the data platform, and we have dozens of teams driving their decisions and guiding their operations based on the data and services we provide.
It’s our team's job to make more Atlassians data-informed and to facilitate R&D. We do this by providing an ambitious data platform, as well as services and data products that help teams better self-serve and improve their time to reliable insights.

You’ll be joining a team that is crazy smart and very direct. We ask hard questions and challenge each other to constantly improve our work. We're all about enabling growth by delivering the right data and insights in the right way to partners across the company.
More about our benefits

Whether you work in an office or a distributed team, Atlassian is highly collaborative and yes, fun! To support you at work (and play) we offer some fantastic perks: ample time off to relax and recharge, flexible working options, five paid volunteer days a year for your favourite cause, an annual allowance to support your learning & growth, unique ShipIt days, a company paid trip after five years and lots more.

More about Atlassian

Creating software that empowers everyone from small startups to the who’s who of tech is why we’re here. We build tools like Jira, Confluence, Bitbucket, and Trello to help teams across the world become more nimble, creative, and aligned—collaboration is the heart of every product we dream of at Atlassian. From Amsterdam and Austin, to Sydney and San Francisco, we’re looking for people who want to write the future and who believe that we can accomplish so much more together than apart. At Atlassian, we’re committed to an environment where everyone has the autonomy and freedom to thrive, as well as the support of like-minded colleagues who are motivated by a common goal to: Unleash the potential of every team.

Additional Information

We believe that the unique contributions of all Atlassians is the driver of our success. To make sure that our products and culture continue to incorporate everyone's perspectives and experience we never discriminate on the basis of race, religion, national origin, gender identity or expression, sexual orientation, age, or marital, veteran, or disability status.

All your information will be kept confidential according to EEO guidelines.
Cloud Partner Solutions Engineer/Evangelist - AWS/GCP
cloud aws java big data linux May 05
Dubbed an "open-source unicorn" by Forbes, Confluent is the fastest-growing enterprise subscription company our investors have ever seen. And how are we growing so fast? By pioneering a new technology category with an event streaming platform, which enables companies to leverage their data as a continually updating stream of events, not as static snapshots. This innovation has led Coatue Management, Altimeter Capital and Franklin Templeton to join earlier investors Sequoia Capital, Benchmark, and Index Ventures in the recent Series E financing of a combined $250 million at a $4.5B valuation. Our product has been adopted by Fortune 100 customers across all industries, and we’re being led by the best in the space—our founders were the original creators of Apache Kafka®. We’re looking for talented and amazing team players who want to accelerate our growth, while doing some of the best work of their careers. Join us as we build the next transformative technology platform!
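The contrast between static snapshots and a continually updating stream of events is easy to see in miniature. The sketch below is plain Python rather than Kafka, and the event shapes are invented for illustration; the point is that the current state is just a fold over an append-only log, which is the model an event streaming platform scales out:

```python
# An append-only log of events, as a topic on a streaming
# platform would hold them.
events = [
    {"type": "account_opened", "account": "a1", "balance": 0},
    {"type": "deposited", "account": "a1", "amount": 100},
    {"type": "withdrawn", "account": "a1", "amount": 30},
]

def apply(state, event):
    """Fold one event into the current snapshot of all accounts."""
    if event["type"] == "account_opened":
        state[event["account"]] = event["balance"]
    elif event["type"] == "deposited":
        state[event["account"]] += event["amount"]
    elif event["type"] == "withdrawn":
        state[event["account"]] -= event["amount"]
    return state

# The "snapshot" is a view derived from the stream; replaying the
# log from the start always reproduces it, and new events simply
# keep the view up to date.
snapshot = {}
for e in events:
    snapshot = apply(snapshot, e)

print(snapshot)  # {'a1': 70}
```

Because the log, not the snapshot, is the source of truth, any number of downstream consumers can derive their own views from the same stream.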

Confluent’s Business Development team is the voice of the company to our partners and the voice of our partners to our internal product and engineering teams. For our Cloud Partner Solutions Architect, we’re looking for a strong technologist who will grow and lead the technical relationship with our Cloud Partners. You’ll jointly build enterprise streaming solutions that highlight Confluent’s unique features, enable cloud technical sellers and be the technical face of Confluent to AWS or GCP.

Successful Cloud Partner Solutions Architects typically have backgrounds as developers, systems engineers, or product specialists, but they all share a passion for expanding Confluent’s partner ecosystem and delivering the best of that world to our customers.

Responsibilities

  • Work with AWS/GCP to build differentiated solutions and offerings that include Confluent technology
  • Build and manage relationships with key technical leaders at AWS/GCP
  • Provide architecture guidance and recommendations across solutions, offerings and customer opportunities, including by understanding how to optimise for economic impact as well as performance
  • Educate and enable Cloud partner architects on Confluent products
  • Serve as a subject matter expert to guide technology strategy and influence product direction by working across Product Management, Engineering, Sales, Marketing, etc.
  • Participate in webinars and public speaking
  • Author whitepapers, technical articles and blog posts
  • Create content, and organize and deliver technical workshops to enable and educate partners

Requirements

  • 10+ years working in partner or customer facing engineering roles
  • Deep knowledge of AWS/GCP strategy, products, organizational and operating models
  • Bachelor’s degree in Computer Science, a related field or equivalent practical experience
  • Demonstrated experience architecting enterprise solutions for customers and partners on AWS/GCP
  • Experience with messaging, streaming and ETL products commonly used in the enterprise
  • Experience authoring, presenting and delivering technical material
  • Experience operating within and across cross-functional teams including product managements, engineering, sales, marketing, etc
  • Familiarity with Linux, Java and software design principles
  • Excellent verbal and written communication skill, with focus on identifying shared business value around complex software solutions
  • Ability to quickly learn, understand and work with new and emerging technologies, methodologies and solutions
  • Passion for the role and strong commitment to excellence

What gives you an edge

  • Knowledge of Apache Kafka and/or other streaming technologies
  • Experience serving as Technical Sales/Systems Engineer in a cloud environment or equivalent experience in a customer and/or partner-facing role.
  • Experience designing and building big data, stream processing and/or other distributed systems for Fortune 1000 companies
  • Experience working with global teams

Come As You Are

At Confluent, equality is a core tenet of our culture. We are committed to building an inclusive global team that represents a variety of backgrounds, perspectives, beliefs, and experiences. The more diverse we are, the richer our community and the broader our impact.

Click here to review our California Candidate Privacy Notice, which describes how and when Confluent, Inc., and its group companies, collects, uses, and shares certain personal information of California job applicants and prospective employees.
Data Engineer
 
java python scala big data aws May 04
Atlassian is continuing to hire for all open roles with all interviewing and on-boarding done virtually due to COVID-19. Everyone new to the team, along with our current staff, will temporarily work from home until it is safe to return to our offices.

Atlassian is looking for a Data Engineer to join our Go-To Market Data Engineering (GTM-DE) team which is responsible for building our data lake, maintaining our big data pipelines / services and facilitating the movement of billions of messages each day. We work directly with the business stakeholders and plenty of platform and engineering teams to enable growth and retention strategies at Atlassian. We are looking for an open-minded, structured thinker who is passionate about building services that scale.

On a typical day you will help our stakeholder teams ingest data into our data lake faster, find ways to make our data pipelines more efficient, or even come up with ideas to help instigate self-serve data engineering within the company. Then you will move on to building microservices and architecting, designing, and enabling self-serve capabilities at scale to help Atlassian grow.

You’ll get the opportunity to work on an AWS-based data lake backed by a full suite of open source projects such as Presto, Spark, Airflow and Hive. We are a team with little legacy in our tech stack, so you’ll spend less time paying off technical debt and more time identifying ways to make our platform better and improve our users’ experience.
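As a rough, stdlib-only sketch of what one stage of such a pipeline does (illustrative, not Atlassian's code; the event fields are invented, and Airflow/Spark are stood in for by plain Python), an extract-and-aggregate step over raw events landed in the lake might look like:

```python
import json
from collections import Counter

# Raw JSON lines as they might land in object storage from an
# upstream ingestion job, including one malformed row.
raw = """\
{"user": "u1", "event": "page_view"}
{"user": "u2", "event": "page_view"}
not-json-garbage
{"user": "u1", "event": "signup"}
"""

def extract(lines):
    """Parse what we can; a real job would route bad rows to a
    dead-letter location instead of silently dropping them."""
    for line in lines.splitlines():
        try:
            yield json.loads(line)
        except json.JSONDecodeError:
            continue

def transform(records):
    """Aggregate event counts, the kind of metric a downstream
    table or dashboard would be built from."""
    return Counter(r["event"] for r in records)

counts = transform(extract(raw))
print(dict(counts))  # {'page_view': 2, 'signup': 1}
```

In practice each stage like this becomes a task in a workflow tool such as Airflow, with Spark doing the heavy lifting over much larger datasets; the shape of the work (extract, validate, aggregate, publish) stays the same.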

More about you
As a data engineer in the GTM-DE team, you will have the opportunity to apply your strong technical experience building highly reliable services on managing and orchestrating a multi-petabyte scale data lake. You enjoy working in a fast paced environment and you are able to take vague requirements and transform them into solid solutions. You are motivated by solving challenging problems, where creativity is as crucial as your ability to write code and test cases.

On your first day, we'll expect you to have:

  • At least 3 years professional experience as a software engineer or data engineer
  • A BS in Computer Science or equivalent experience
  • Strong programming skills (some combination of Python, Java, and Scala preferred)
  • Experience with data modeling
  • Knowledge of data warehousing concepts
  • Experience writing SQL, structuring data, and data storage practices
  • Experience building data pipelines and microservices
  • Experience with Spark, Hive, Airflow and other technologies for processing large volumes of streaming data
  • A willingness to accept failure, learn and try again
  • An open mind to try solutions that may seem crazy at first
  • Experience working on Amazon Web Services (in particular using EMR, Kinesis, RDS, S3, SQS and the like)

It's preferred, but not technically required, that you have:

  • Experience building self-service tooling and platforms
  • Built and designed Kappa architecture platforms
  • A passion for building and running continuous integration pipelines.
  • Experience building pipelines using Databricks and familiarity with their APIs
  • Contributed to open source projects (Ex: Operators in Airflow)
More about the team
Data is a BIG deal at Atlassian. We ingest over 180 billion events each month into our analytics platform and we have dozens of teams across the company driving their decisions and guiding their operations based on the data and services we provide.

It’s the data engineering team’s job to make more Atlassians data-driven and to facilitate growth. We do this by providing metrics and other data elements which are reliable and trustworthy, as well as services and data products that help teams better self-serve and improve their time to reliable insights.

You’ll be joining a team with a brand new mission, expanding into a new office. There will be plenty of challenges and scope to grow. We work very closely with Sales, Marketing and Commerce teams. We value when people ask hard questions and challenge each other to constantly improve our work. We are independent but love highly collaborative team environments, so you'll get the opportunity to work with lots of other awesome people just like you. We're all about enabling teams to execute growth and customer retention strategies by providing the right data fabrics and tools.

More about our benefits

Whether you work in an office or a distributed team, Atlassian is highly collaborative and yes, fun! To support you at work (and play) we offer some fantastic perks: ample time off to relax and recharge, flexible working options, five paid volunteer days a year for your favourite cause, an annual allowance to support your learning & growth, unique ShipIt days, a company paid trip after five years and lots more.

More about Atlassian

Creating software that empowers everyone from small startups to the who’s who of tech is why we’re here. We build tools like Jira, Confluence, Bitbucket, and Trello to help teams across the world become more nimble, creative, and aligned—collaboration is the heart of every product we dream of at Atlassian. From Amsterdam and Austin, to Sydney and San Francisco, we’re looking for people who want to write the future and who believe that we can accomplish so much more together than apart. At Atlassian, we’re committed to an environment where everyone has the autonomy and freedom to thrive, as well as the support of like-minded colleagues who are motivated by a common goal to: Unleash the potential of every team.

Additional Information

We believe that the unique contributions of all Atlassians are the driver of our success. To make sure that our products and culture continue to incorporate everyone's perspectives and experience we never discriminate on the basis of race, religion, national origin, gender identity or expression, sexual orientation, age, or marital, veteran, or disability status.

All your information will be kept confidential according to EEO guidelines.

This Year

Software Architect
Numbrs  
aws kubernetes docker java apache-kafka machine learning Apr 28

Numbrs is reshaping the future of the workplace. We are a fully remote company where every employee is free to live and work wherever they want.

Numbrs was founded with the vision to revolutionise banking. Therefore, from day one Numbrs has been a technology company, driven by a strong entrepreneurial spirit and the urge to innovate. We live and embrace technology.

At Numbrs, our engineers don’t just develop things – we have an impact. We change the way people manage their finances by building the best products and services for our users.

Numbrs engineers are innovators, problem-solvers, and hard-workers who are building solutions in big data, mobile technology and much more. We look for professional, highly skilled engineers who evolve, adapt to change and thrive in a fast-paced, value-driven environment.

Join our dedicated technology team that builds massively scalable systems, designs low latency architecture solutions and leverages machine learning technology to turn financial data into action. Want to push the limit of personal finance management? Join Numbrs.

Job Description

You will work in the Architecture team, supporting the Head of Technology in all activities of the Technology department. You will be responsible and accountable for the oversight of all aspects of engineering operations, the architecture and design of the Numbrs platform, and the delivery of services and solutions within Technology.

Key Qualifications

  • a Bachelor's or higher degree in a technical field of study, or equivalent practical experience
  • a minimum of 5 years' experience architecting, developing, evolving and troubleshooting large-scale distributed systems
  • hands-on experience with micro-service based architecture
  • experience with software engineering best practices, coding standards, code reviews, testing and operations
  • hands-on experience with Java
  • knowledge of AWS, Kubernetes, and Docker
  • leadership experience
  • excellent troubleshooting and creative problem-solving abilities
  • excellent written and oral communication and interpersonal skills

Ideally, candidates will also have

  • experience with systems for automating deployment, scaling, and management of containerised applications, such as Kubernetes and Mesos
  • experience with machine learning and big data technologies, such as Kafka, Storm, Flink and Cassandra
  • experience with encryption and cryptography standards

Location: Remote

Site Reliability Engineer
Numbrs  
go kubernetes aws docker devops sysadmin Apr 21

Numbrs is reshaping the future of the workplace. We are a fully remote company where every employee is free to live and work wherever they want.

Numbrs was founded with the vision to revolutionise banking. Therefore, from day one Numbrs has been a technology company, driven by a strong entrepreneurial spirit and the urge to innovate. We live and embrace technology.

At Numbrs, our engineers don’t just develop things – we have an impact. We change the way people manage their finances by building the best products and services for our users.

Numbrs engineers are innovators, problem-solvers, and hard-workers who are building solutions in big data, mobile technology and much more. We look for professional, highly skilled engineers who evolve, adapt to change and thrive in a fast-paced, value-driven environment.

Join our dedicated technology team that builds massively scalable systems, designs low latency architecture solutions and leverages machine learning technology to turn financial data into action. Want to push the limit of personal finance management? Join Numbrs.

Job Description

You will be a part of a team that is responsible for deploying, supporting, monitoring and troubleshooting large scale micro-service based distributed systems with high transaction volume; documenting the IT infrastructure, policies and procedures. You will also be part of an on-call rotation.

Key Qualifications

  • a Bachelor's or higher degree in a technical field of study
  • a minimum of 5 years' experience deploying, monitoring and troubleshooting large-scale distributed systems
  • background in Linux administration (mainly Debian)
  • scripting/programming knowledge, at minimum Unix shell scripting
  • good networking understanding (TCP/IP, DNS, routing, firewalls, etc.)
  • good understanding of technologies such as Apache, Nginx, Databases (relational and key-value), DNS servers, SMTP servers, etc.
  • understanding of cloud-based infrastructure, such as AWS
  • experience with systems for automating deployment, scaling and management of containerised applications, such as Kubernetes
  • quick to learn and fast to adapt to changing environments
  • excellent communication and documentation skills
  • excellent troubleshooting and creative problem-solving abilities
  • excellent communication and organisational skills in English

Ideally, candidates will also have

  • experience deploying and supporting big data technologies, such as Kafka, Spark, Storm and Cassandra
  • experience maintaining continuous integration and delivery pipelines with tools such as Jenkins and Spinnaker
  • experience implementing, operating and supporting open source tools for network and security monitoring and management on Linux/Unix platforms
  • experience with encryption and cryptography standards
Confluent Kafka Production Engineer
python big data Apr 21
Dubbed an "open-source unicorn" by Forbes, Confluent is the fastest-growing enterprise subscription company our investors have ever seen. And how are we growing so fast? By pioneering a new technology category with an event streaming platform, which enables companies to leverage their data as a continually updating stream of events, not as static snapshots. This innovation has led Sequoia Capital, Benchmark, and Index Ventures to recently invest a combined $125 million in our Series D financing. Our product has been adopted by Fortune 100 customers across all industries, and we’re being led by the best in the space—our founders were the original creators of Apache Kafka®. We’re looking for talented and amazing team players who want to accelerate our growth, while doing some of the best work of their careers. Join us as we build the next transformative technology platform!

About the Team:

The next big goal for the company is to make it as easy as possible for anyone in the world to use Confluent’s products to build their next killer streaming application. To do that we need to offer Confluent’s products as a Platform as a Service (PaaS). In order for this product to be successful, we absolutely have to bring in world-class talent that is passionate about running large scale, multi-tenant distributed data systems for customers who expect a very high level of availability.

About the Role:

A Kafka Production Engineer (KPE) is a key member of the Kafka team at Confluent. You will work closely with the team and other Confluent engineers to continuously build out and improve our PaaS offering. You will be part of the team responsible for key operational aspects (availability, reliability, performance, monitoring, emergency response, capacity planning, disaster recovery) of our Kafka systems in production. If you love the hum of big data systems, think about how to make them run as smoothly as possible, and want to have a big influence on the architecture and operational design points of this new product, then you will fit right in.
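
A concrete everyday example of the operational work described above is watching consumer lag, i.e. how far a consumer group trails the head of each partition's log. This is only an illustrative sketch of the arithmetic (the offset values are made up; real deployments read them from Kafka's admin APIs or metrics pipelines):

```python
def consumer_lag(log_end_offsets, committed_offsets):
    """Per-partition lag: log-end offset minus the consumer group's
    committed offset. Both arguments map partition id -> offset."""
    return {
        p: log_end_offsets[p] - committed_offsets.get(p, 0)
        for p in log_end_offsets
    }

# Partition 0 is caught up; partition 1 trails by 250 messages.
print(consumer_lag({0: 1000, 1: 1500}, {0: 1000, 1: 1250}))
# {0: 0, 1: 250}
```

Sustained growth in these numbers is typically the first signal that capacity planning or emergency response is needed.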

Who You Are:

  • Smart, humble, and empathetic
  • Have a strong sense of teamwork and put the team’s and company’s interests first
  • Driven and excited about the challenges of a fast-paced, innovative software startup environment

What We're Looking For:

  • Strong fundamentals in distributed systems design and operations
  • Familiarity with Kafka or similar high-scale distributed data systems
  • Experience building automation to operate large-scale data systems
  • Solid experience working with large private or public clouds
  • A self-starter with the ability to work effectively in teams
  • Excellent spoken / written communication
  • Proficiency with Python/Java, shell scripting, and system diagnostic and automation tooling preferred
  • Bachelor's degree in Computer Science or similar field or equivalent

What Gives You An Edge:

  • Experience operating Kafka at scale is a big plus
  • Experience working with JVMs a plus
  • Experience with systems performance a plus
Come As You Are

At Confluent, equality is a core tenet of our culture. We are committed to building an inclusive global team that represents a variety of backgrounds, perspectives, beliefs, and experiences. The more diverse we are, the richer our community and the broader our impact.

Click here to review our California Candidate Privacy Notice, which describes how and when Confluent, Inc. and its group companies collect, use, and share certain personal information of California job applicants and prospective employees.
Commercial Sales Engineer Intern
big data cloud Apr 15
Dubbed an "open-source unicorn" by Forbes, Confluent is the fastest-growing enterprise subscription company our investors have ever seen. And how are we growing so fast? By pioneering a new technology category with an event streaming platform, which enables companies to leverage their data as a continually updating stream of events, not as static snapshots. This innovation has led Sequoia Capital, Benchmark, and Index Ventures to recently invest a combined $125 million in our Series D financing. Our product has been adopted by Fortune 100 customers across all industries, and we’re being led by the best in the space—our founders were the original creators of Apache Kafka®. We’re looking for talented and amazing team players who want to accelerate our growth, while doing some of the best work of their careers. Join us as we build the next transformative technology platform!

Join us as we pursue our mission of putting an event streaming platform at the heart of every business. We are a company filled with people who are passionate about our product and seek to deliver the best experience for our customers. At Confluent, we’re committed to our work, customers, having fun and most importantly to each other’s success. Learn more about Confluent careers and how you can become a part of our journey!

What is a Sales Engineer?

Confluent Pre-Sales Engineers drive the technical evaluation stage of the overall sales process, making them critical drivers of customer success as real-time data streams become increasingly important in the modern enterprise. In this role you will be the key technical advisor to the sales team, work very closely with the product management and engineering teams, and serve as a vital product advocate in front of prospects, customers, and the wider Kafka and Big Data communities.

As a Sales Engineer, you’ll combine your technical aptitude, exceptional communication skills, and creative problem solving to drive product adoption and success. Sales Engineers work alongside software engineers, product managers, and the sales team to engage with our customers in order to solve their most challenging problems using Confluent. Sales Engineers work with clients to understand real-world business problems and solve them by building & architecting technology.

What you'll work on:

  • Develop a high-impact project that contributes to real-life improvements.
  • Evangelize our product and services to customers.
  • Collaborate with various Confluent teams.
  • Receive online/classroom training on the Confluent platform.

What we're looking for:

  • You are entering your final year in a BS degree program in computer science, engineering or a related discipline.
  • You have strong written and communication skills. 
  • You’re interested in, and/or have domain expertise in cloud technologies and IaaS platforms (AWS, GCP, Azure), data streaming technologies, or other integration platforms.
  • You have the ability to explain technical concepts to a wide range of audiences, you’re passionate about learning, and thrive under pressure.
Come join us if you’re looking for an internship that will allow you to use your technical skills, business acumen and entrepreneurial instincts! We’re looking for Sales Engineering Interns that can be a connector between people, technology, and business.

Come As You Are

At Confluent, equality is a core tenet of our culture. We are committed to building an inclusive global team that represents a variety of backgrounds, perspectives, beliefs, and experiences. The more diverse we are, the richer our community and the broader our impact.
Solutions Architect - Australia
java big data Apr 14
Dubbed an "open-source unicorn" by Forbes, Confluent is the fastest-growing enterprise subscription company our investors have ever seen. And how are we growing so fast? By pioneering a new technology category with an event streaming platform, which enables companies to leverage their data as a continually updating stream of events, not as static snapshots. This innovation has led Sequoia Capital, Benchmark, and Index Ventures to recently invest a combined $125 million in our Series D financing. Our product has been adopted by Fortune 100 customers across all industries, and we’re being led by the best in the space—our founders were the original creators of Apache Kafka®. We’re looking for talented and amazing team players who want to accelerate our growth, while doing some of the best work of their careers. Join us as we build the next transformative technology platform!

Solutions Architects drive customer success by helping them realise business value from the burgeoning flow of real-time data streams in their organisations. In this role you’ll interact directly with our customers to provide expert consultancy, leveraging deep knowledge of best practices in the use of Apache Kafka, the broader Confluent Platform, and complementary systems like Hadoop, Spark, Storm, relational databases, and various NoSQL databases. Throughout all of these interactions, you’ll build strong relationships with customers, ensure exemplary delivery standards, and have a lot of fun building state-of-the-art data infrastructure alongside colleagues who are widely recognised as leaders in this space.

The role requires travel across APAC, to work on-site with our customers in the region. You'll be based in Australia with the ability to travel to client engagements as required.

What we're looking for:

  • Deep experience designing, building, and operating in-production Big Data, stream processing, and/or enterprise data integration solutions using Apache Kafka
  • Exceptional interpersonal communications capabilities demonstrated through a history of successful B2B infrastructure software development projects
  • Strong desire to tackle hard technical problems and proven ability to do so with little or no direct daily supervision
  • Bachelor’s level degree in Computer Science or an engineering, mathematics, or other quantitative field
  • Proficiency in Java or Python
  • A minimum of 5 years' experience in a Professional Services role
  • Prior experience with regular business travel around the region
  • Ability to travel up to 50% of your time to client engagements

What gives you an edge:

  • Previous experience building solutions that use Apache Kafka alongside Hadoop, relational and NoSQL databases, message queues, and related products
  • Solid understanding of basic systems operations (disk, network, operating systems, etc)
  • Experience building and operating large-scale systems
  • Additional languages, such as Cantonese or Mandarin
Culture is a huge part of Confluent; we’re searching for the best people, who not only excel at their role but also contribute to the health, happiness and growth of the company. Inclusivity and openness are important traits, and we hold regular company-wide and team events. Here are some of the personal qualities we’re looking for:

  • Smart, humble and empathetic
  • Hard working, you get things done
  • Hungry to learn in a field which is ever evolving
  • Adaptable to the myriad of challenges each day can present
  • Inquisitive and not afraid to ask all the questions, no matter how basic
  • Ready to roll up your sleeves and help others, getting involved in projects where you feel you can add value
  • Strive for excellence in your work, your team and the company

Come and build with us. We are one of the fastest growing software companies in the market. A company built on the tenets of transparency, direct communication and inclusivity. Come meet the streams dream team and have a direct impact on how we shape Confluent.

Come As You Are

At Confluent, equality is a core tenet of our culture. We are committed to building an inclusive global team that represents a variety of backgrounds, perspectives, beliefs, and experiences. The more diverse we are, the richer our community and the broader our impact.
Growing FinTech company looking to hire a fully remote Senior DevOps
aws kubernetes docker terraform linux devops Apr 11

We are experiencing strong demand for our e-commerce payment service and are looking for a number of highly skilled individuals to join our DevOps team. Please only apply if you're located within ±1 hour of CET/CEST.

We are constantly developing and always striving to improve our software solutions, automating as many processes as possible. You will work both independently and as part of a dedicated DevOps team of 3 colleagues from all around Europe. At the moment we have some services in AWS and a large part at a local hosting partner; one of the team's tasks will be to change this distribution. Furthermore, we're entering new markets this year, which also requires further enhancements of our current setup and passing compliance audits.

Expectations: You will be working in a fast-paced environment where change is normal. You must be able to keep a cool head in a hectic and busy environment. You have a high degree of independence, and it is important that you are able to manage several tasks at the same time, even when deadlines are short.

We are looking for talents with:

  • Experience as a Linux technical specialist
  • Experience with AWS services: EC2, EKS, RDS (MariaDB/MySQL), DynamoDB, and networking, as well as AWS big-data analytics services (Athena, S3, Glue, Redshift, etc.)
  • Hands-on experience with Kubernetes
  • Experience in configuration management tools (Ansible, Terraform are preferable)
  • Maintenance of monitoring tools (InfluxDB/Graphite/Prometheus + Grafana)
  • Experience with migrations to AWS
  • Experience with microservices in the cloud
  • Understanding of cloud networking principles
  • Experience with CI/CD pipelines (GitLab)
  • Administration of Java and Spring Boot applications
  • Familiarity with messaging systems (ActiveMQ, Camel, Kafka)
  • Good scripting skills (at least 1 language)
  • An eye for clean code
  • Experience with compliance processes like ISO27001 and PCI DSS

Our technology stack:

  • Docker
  • Kubernetes (EKS)
  • Terraform
  • AWS
  • Ansible
  • Grafana
  • Prometheus
  • GitLab
  • Kafka
  • ActiveMQ

Some of the upcoming tasks will be:

  • Take part in the dockerization of Spring Boot applications
  • Organize container orchestration with Kubernetes
  • Refactor our constantly changing code base
  • Implement best practices for our daily infrastructure operations
  • Align our infrastructure with compliance requirements
  • Manage CI/CD processes with the team
  • Setup and maintain new environments in AWS
  • Improve and automate infrastructure development
  • Monitor metrics and develop ways to improve
  • Work closely with the BI team to provide an AWS analytics platform

Requirements:

  • You probably have a B.Sc. or M.Sc. in computer science or similar
  • You have experience with highly automated systems
  • You are able to see solutions from the perspective of the end-user
  • You speak and write English fluently

About our team: We are a team of highly motivated developers who work remotely from our own offices. We collaborate much like open-source projects, with core maintainers for our services. Each developer has a lot of freedom, working in a flat hierarchy and a very streamlined process where the domain experts are easily available on Slack or via Hangouts. We work with a very rapid release schedule, often releasing multiple times per day. This gives us a quick and motivating feedback loop, and makes it very easy for a developer to see their effect on the business! It also allows us to experiment and adopt new trends/frameworks quickly.

Big Data Engineer
big data python data science machine learning aws Apr 09

At CrowdStrike we’re on a mission - to stop breaches. Our groundbreaking technology, services delivery, and intelligence gathering together with our innovations in machine learning and behavioral-based detection, allow our customers to not only defend themselves, but do so in a future-proof manner. We’ve earned numerous honors and top rankings for our technology, organization and people – clearly confirming our industry leadership and our special culture driving it. We also offer flexible work arrangements to help our people manage their personal and professional lives in a way that works for them. So if you’re ready to work on unrivaled technology where your desire to be part of a collaborative team is met with a laser-focused mission to stop breaches and protect people globally, let’s talk.

About the Role

We are looking to hire a Big Data Engineer for the Data Engineering team at CrowdStrike. The Data Engineering team operates within the Data Science organization, and provides the necessary infrastructure and automation for users to analyze and act on vast quantities of data effortlessly. The team has one of the most critical roles to play in ensuring our products are best-in-class in the industry. You will interact with product managers and other engineers in building both internal and external facing services.

This position is open to candidates in Bucharest (Office or Romania Remote), Brasov, Cluj, Iasi and Timisoara (Remote)

You will:

  • Write jobs using PySpark to process billions of events per day
  • Fine-tune existing Hadoop / Spark clusters
  • Rewrite some existing Pig jobs in PySpark
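
The Pig-to-PySpark rewrites mentioned above usually boil down to re-expressing a `GROUP ... GENERATE COUNT(...)` dataflow as map/shuffle/reduce steps. A pure-Python sketch of that core step (PySpark itself is deliberately omitted so the example stays self-contained, and the record shape is hypothetical):

```python
from itertools import groupby
from operator import itemgetter

def count_by_key(records):
    """Count records per key -- the logic PySpark would run as
    rdd.map(lambda r: (r["key"], 1)).reduceByKey(operator.add)."""
    pairs = [(r["key"], 1) for r in records]   # map
    pairs.sort(key=itemgetter(0))              # shuffle: co-locate keys
    return {k: sum(c for _, c in group)        # reduce
            for k, group in groupby(pairs, key=itemgetter(0))}

records = [{"key": "a"}, {"key": "b"}, {"key": "a"}]
print(count_by_key(records))  # {'a': 2, 'b': 1}
```

At the billions-of-events scale described in this posting, the same three steps are simply distributed across executors rather than run in one process.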

Key Qualifications

You have:

  • BS degree in Computer Science or related field
  • 7+ years of relevant work experience
  • Experience in building data pipelines at scale (Note: we process over 1 trillion events per week)
  • Good knowledge of Hadoop / Spark / Apache Kafka, Python, AWS, PySpark and other tools in the Big Data ecosystem
  • Good programming skills – Python
  • Operational experience tuning clusters for optimal data processing
  • Experience in building out ETL jobs at scale
  • Good knowledge of distributed system design and associated tradeoffs
  • Good knowledge of CI / CD and associated best practices
  • Familiarity with Docker-based development and orchestration

Bonus points awarded if you have:

  • Created automated / scalable infrastructure and pipelines for teams in the past
  • Contributed to the open source community (GitHub, Stack Overflow, blogging)
  • Prior experience with Spinnaker, Relational DBs, or KV Stores
  • Prior experience in the cybersecurity or intelligence fields

Benefits of Working at CrowdStrike:

  • Market leader in compensation
  • Comprehensive health benefits
  • Working with the latest technologies
  • Training budget (certifications, conferences)
  • Flexible work hours and remote friendly environment
  • Wellness programs
  • Stocked fridges, coffee, soda, and lots of treats
  • Peer recognition
  • Inclusive culture focused on people, customers and innovation
  • Regular team activities, including happy hours, community service events

We are committed to building an inclusive culture of belonging that not only embraces the diversity of our people but also reflects the diversity of the communities in which we work and the customers we serve. We know that the happiest and highest performing teams include people with diverse perspectives and ways of solving problems so we strive to attract and retain talent from all backgrounds and create workplaces where everyone feels empowered to bring their full, authentic selves to work.

CrowdStrike is an Equal Opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex including sexual orientation and gender identity, national origin, disability, protected veteran status, or any other characteristic protected by applicable federal, state, or local law.

Federal Solutions Architect - Secret Clearance
java python scala big data linux cloud Apr 06
Dubbed an "open-source unicorn" by Forbes, Confluent is the fastest-growing enterprise subscription company our investors have ever seen. And how are we growing so fast? By pioneering a new technology category with an event streaming platform, which enables companies to leverage their data as a continually updating stream of events, not as static snapshots. This innovation has led Sequoia Capital, Benchmark, and Index Ventures to recently invest a combined $125 million in our Series D financing. Our product has been adopted by Fortune 100 customers across all industries, and we’re being led by the best in the space—our founders were the original creators of Apache Kafka®. We’re looking for talented and amazing team players who want to accelerate our growth, while doing some of the best work of their careers. Join us as we build the next transformative technology platform!

We are looking for a Solutions Architect to join our Customer Success team. As a Solutions Architect (SA), you will help customers leverage streaming architectures and applications to achieve their business results. In this role, you will interact directly with customers to provide software architecture, design, and operations expertise that leverages your deep knowledge of and experience in Apache Kafka, the Confluent platform, and complementary systems such as Hadoop, Spark, Storm, relational and NoSQL databases. You will develop and advocate best practices, gather and validate critical product feedback, and help customers overcome their operational challenges.

Throughout all these interactions, you will build a strong relationship with your customer in a very short space of time, ensuring exemplary delivery standards. You will also have the opportunity to help customers build state-of-the-art streaming data infrastructure, in partnership with colleagues who are widely recognized as industry leaders, as well as optimizing and debugging customers existing deployments.

Location:
You will be based in LOCATION, with 50% travel expected.

Responsibilities

  • Helping a customer determine their platform and/or application strategy for moving to a more real-time, event-based business. Such engagements often involve remote preparation; presenting an onsite or remote workshop for the customer’s architects, developers, and operations teams; investigating (with Engineering and other coworkers) solutions to difficult challenges; and writing a recommendations summary doc.
  • Providing feedback to the Confluent Product and Engineering groups
  • Building tooling for another team or the wider company to help us push our technical boundaries and improve our ability to deliver consistently with high quality
  • Testing performance and functionality of new components developed by Engineering
  • Writing or editing documentation and knowledge base articles, including reference architecture materials and design patterns based on customer experiences
  • Honing your skills, building applications, or trying out new product features
  • Participating in community and industry events

Requirements

  • Deep experience designing, building, and operating in-production Big Data, stream processing, and/or enterprise data integration solutions, ideally using Apache Kafka
  • Demonstrated experience successfully managing multiple B2B infrastructure software development projects, including driving expansion, customer satisfaction, feature adoption, and retention
  • Experience operating Linux (configure, tune, and troubleshoot both RedHat and Debian-based distributions)
  • Experience using cloud providers (Amazon Web Services, Google Cloud, Microsoft Azure) for running high-throughput systems
  • Experience with Java Virtual Machine (JVM) tuning and troubleshooting
  • Experience with distributed systems (Kafka, Hadoop, Cassandra, etc.)
  • Proficiency in Java
  • Strong desire to tackle hard technical problems, and proven ability to do so with little or no direct daily supervision
  • Excellent communication skills, with an ability to clearly and concisely explain tricky issues and complex solutions
  • Ability to quickly learn new technologies
  • Ability and willingness to travel up to 50% of the time to meet with customers
  • TS/SCI clearance required

Bonus Points

  • Experience helping customers build Apache Kafka solutions alongside Hadoop technologies, relational and NoSQL databases, message queues, and related products
  • Experience with Scala, Python, or Go
  • Experience working with a commercial team and demonstrated business acumen
  • Experience working in a fast-paced technology start-up
  • Experience managing projects, using any known methodology to scope, manage, and deliver on plan no matter the complexity
  • Bachelor-level degree in computer science, engineering, mathematics, or another quantitative field


Come As You Are

At Confluent, equality is a core tenet of our culture. We are committed to building an inclusive global team that represents a variety of backgrounds, perspectives, beliefs, and experiences. The more diverse we are, the richer our community and the broader our impact.
Share this job:
Senior Software Engineer
 
senior golang java python javascript c Apr 02
At Couchbase, big things happen. Every day, we’re translating vision into reality by tackling innovative and exciting challenges head-on with a team that prioritizes honesty, transparency, and humility. This is a breakthrough stage in our company, where the enthusiasm of our employees and leadership team is infectious and growing. You’ll have the opportunity to learn new skills, grow your career, and work with the smartest, most passionate people in the industry. At Couchbase, you can go home knowing that you have an impact. What we do matters. Enjoy the journey.

This role is also open to remote work within the US, as our teams are distributed all over the world!

What are we up to?
If you like working on high performance enterprise software, then you’ll like this! As a Senior Software Engineer on our R&D team, you will help build the cutting-edge distributed, shared-nothing architecture of our NoSQL software. You will be part of a team creating NoSQL database products used by hundreds of modern enterprises and applications, tackling the hardest problems our customers have, and employing your skills in Golang, Java, C/C++, Erlang, JavaScript and/or Python (with possibly a few other languages thrown in for good measure). This is a very exciting time to build something new and innovative for the databases of the future. Like open source? So do we: Couchbase and its engineers are active open source contributors to Couchbase, memcached, and other projects.
While other NoSQL vendors may run into architectural limitations, Couchbase architects all of its own systems with a laser focus on high-performance use cases for the largest modern enterprise applications. Your engineering contributions will be key to helping Couchbase keep its performance advantage over the competition.

RESPONSIBILITIES - You will
Translate product requirements into engineering requirements and write high-quality, performant code
Design and implement mission-critical code coverage as it pertains to the data model for a scale-out database
Debug and fix issues by participating in high-quality code reviews
Align with the Indexing, Support, Mobile, Search, Storage, and Clustering teams to integrate new features into our data platform
Engineer needle-moving tools and features with simplicity, elegance, and economy
Be agile! Think quality! Think leverage!

PREFERRED QUALIFICATIONS
You are passionate about database architecture and systems
You can hack in several of your preferred languages, from C/C++, Java, and Python to Erlang and Go
You have multiple years of commercial and/or open source software experience
You think that distributed systems are amazing
You’re a self-motivated, independent quick learner who likes to take on challenges
You like working in organizations that strive to stay ahead of the curve by rapidly driving technological innovation.

MINIMUM QUALIFICATIONS
Master's degree in Computer Science or commensurate experience
You’re an excellent teammate
Excellent written and verbal communication skills
About Couchbase

Unlike other NoSQL databases, Couchbase provides an enterprise-class, multicloud-to-edge database that offers the robust capabilities required for business-critical applications on a highly scalable and available platform. Couchbase is built on open standards, combining the best of NoSQL with the power and familiarity of SQL, to simplify the transition from mainframe and relational databases.

Couchbase’s HQ is conveniently located in Santa Clara, CA with additional offices throughout the globe. We’re committed to a work environment where you can be happy and thrive, in and out of the office.

At Couchbase, you’ll get:
* A fantastic culture
* A focused, energetic team with aligned goals
* True collaboration with everyone playing their positions
* Great market opportunity and growth potential
* Time off when you need it
* Regular team lunches and fully-stocked kitchens
* Open, collaborative spaces
* Competitive benefits and pre-tax commuter perks

Whether you’re a new grad or a proven expert, you’ll have the opportunity to learn new skills, grow your career, and work with the smartest, most passionate people in the industry.

Revolutionizing an industry requires a top-notch team. Become a part of ours today. Bring your big ideas and we'll take on the next great challenge together.

Want to learn more? Check out our blog: https://blog.couchbase.com/

Couchbase is proud to be an equal opportunity workplace and is dedicated to pursuing, hiring and developing a diverse workforce. Individuals seeking employment at Couchbase are considered without regard to age, ancestry, color, gender (including pregnancy, childbirth, or related medical conditions), gender identity or expression, genetic information, marital status, medical condition, mental or physical disability, national origin, protected family care or medical leave status, race, religion (including beliefs and practices or the absence thereof), sexual orientation, military or veteran status, or any other characteristic protected by federal, state, or local laws.
Share this job:
Backend Engineer Data Team
aws java apache-spark hadoop hbase backend Mar 26

Sonatype’s mission is to enable organizations to better manage their software supply chain. We offer a series of products and services including the Nexus Repository Manager and Nexus Lifecycle Manager. We are a remote, talented product development group, and we work in small autonomous teams to create high-quality products. Thousands of organizations and millions of developers use our software. If you have a passion for challenging problems, software craftsmanship, and having an impact, then Sonatype is the right place for you.

We are expanding our Data team, which is responsible for unlocking insight from vast amounts of software component data and powering our suite of products, enabling our customers to make informed and automated decisions in managing their software supply chain. As a Backend Engineer, you will lead or contribute to the design, development, and monitoring of systems and solutions for collecting, storing, processing, and analyzing large data sets. You will work in a team made up of Data Scientists and other Software Engineers.

No one is going to tell you when to get up in the morning, or dole out a bunch of small tasks for you to do every single day. Members of Sonatype's Product organization have the internal drive and initiative to make the product vision a reality. Flow should be the predominant state of mind.

Requirements:

  • Deep software engineering experience; we primarily use Java.
  • Database and data manipulation skills working with relational or non-relational models.
  • Strong ability to select and integrate appropriate tools, frameworks, systems to build great solutions.
  • Deep curiosity for how things work and desire to make them better.
  • Legally authorized to work (without sponsorship) in Canada, Colombia, or the United States of America and are currently residing in the corresponding country.

Nice To Haves:

  • Degree in Computer Science, Engineering, or another quantitative field.
  • Knowledge and experience with non-relational databases (e.g., HBase, MongoDB, Cassandra).
  • Knowledge and experience with large-scale data tools and techniques (e.g., MapReduce, Hadoop, Hive, Spark).
  • Knowledge and experience with AWS Big Data services (e.g., EMR, Elasticsearch).
  • Experience working in a highly distributed environment, using modern collaboration tools to facilitate team communication.

What We Offer:

  • The opportunity to be part of an incredible, high-growth company, working on a team of experienced colleagues
  • Competitive salary package
  • Medical/Dental/Vision benefits
  • Business casual dress
  • Flexible work schedules that ensure time for you to be you
  • 2019 Best Places to Work Washington Post and Washingtonian
  • 2019 Wealthfront Top Career Launch Company
  • EY Entrepreneur of the Year 2019
  • Fast Company Top 50 Companies for Innovators
  • Glassdoor ranking of 4.9
  • Come see why we've won all of these awards
Share this job:
Senior Software Engineer, Backend
Numbrs  
java backend microservices kubernetes machine-learning senior Mar 25

At Numbrs, our engineers don’t just develop things – we have an impact. We change the way people manage their finances by building the best products and services for our users.

Numbrs engineers are innovators, problem-solvers, and hard-workers who are building solutions in big data, mobile technology and much more. We look for professional, highly skilled engineers who evolve, adapt to change and thrive in a fast-paced, value-driven environment.

Join our dedicated technology team that builds massively scalable systems, designs low latency architecture solutions and leverages machine learning technology to turn financial data into action. Want to push the limit of personal finance management? Join Numbrs.

Job Description

You will be a part of a team that is responsible for developing, releasing, monitoring and troubleshooting large scale micro-service based distributed systems with high transaction volume. You enjoy learning new things and are passionate about developing new features, maintaining existing code, fixing bugs, and contributing to overall system design. You are a great teammate who thrives in a dynamic environment with rapidly changing priorities.

All candidates will have

  • a Bachelor's or higher degree in a technical field of study or equivalent practical experience
  • experience with high volume production grade distributed systems
  • experience with micro-service based architecture
  • experience with software engineering best practices, coding standards, code reviews, testing and operations
  • hands-on experience with Spring Boot
  • professional experience in writing readable, testable and self-sustaining code
  • strong hands-on experience with Java (minimum 8 years)
  • knowledge of AWS, Kubernetes, and Docker
  • excellent troubleshooting and creative problem-solving abilities
  • excellent written and oral communication in English and interpersonal skills

Ideally, candidates will also have

  • experience with Big Data technologies such as Kafka, Spark, and Cassandra
  • experience with CI/CD toolchain products like Jira, Stash, Git, and Jenkins
  • fluency with functional, imperative and object-oriented languages
  • experience with Scala, C++, or Golang
  • knowledge of Machine Learning

Location: residence in the UK mandatory; home office

Share this job:
Data Engineer - Lead Subject Matter Expert
data science big data Mar 13
Role:

At Springboard, we are on a mission to bridge the skills gap by delivering high-quality, affordable education for new economy skills. We’ve already launched hundreds of our students into Data Science careers through our top-rated Data Science course that pairs students with an industry mentor and offers them a Job Guarantee.  

Now we’re expanding our Data Science course offerings, and we’re looking for an expert who has a strong background in Data Engineering to help us build a new Data Engineering course in the coming months. This is a unique opportunity to put your expertise into action to educate the next generation of Data Engineers and increase your domain mastery through teaching. 

The course will be an online 6-to-9 month program designed to help students find a job within 6 months of completion. You’ll set the vision to ensure we’re teaching all that is needed to succeed as a Data Engineer. Your work will include creating projects and other materials to define students’ learning experiences.  

This is a part-time contract role for 3-4 months (starting immediately), with potential for ongoing consulting work. We estimate a workload of roughly 15-20 hours/week. You can work with us out of our office in San Francisco or remotely. This is a paid engagement.

Responsibilities:

You’ll work with our curriculum development team to create a Data Engineering course.

As part of this role, you will

  • Set the vision for effectively teaching key data engineering concepts and skills
  • Define learning objectives and course structure (units and projects)
  • Collaborate with the instructional designers and other subject matter experts to build the full curriculum. This includes:
      • Designing, writing, and building course projects (and associated resources)
      • Curating high-quality resources (videos, articles) that effectively teach course topics
      • Writing descriptions that summarize and explain the importance of each topic covered in the course
  • Create rubrics for mentors to evaluate student work (especially course projects)

Experience

  • Currently working as a Data Engineer in the U.S., with 3+ years of experience including data warehousing, ETL, big data systems, data modeling and schema design, and owning data quality.
  • 1+ years of experience hiring and/or managing Data Engineers
  • Passion for teaching. Previous teaching experience is a huge bonus.

Skills

  • Understanding of Data Engineering landscape and how the field varies across companies
  • Ability to identify the tools and industry practices students need to learn to successfully become Data Engineers
  • Clear point-of-view on what skills are needed for an entry level Data Engineer role and how to teach them in a structured manner 
  • Proven ability to create projects with clear instructions and documentation
  • Excellent verbal & written communication skills

You are

  • Able to work independently and produce high-quality work without extensive supervision 
  • Diligent about meeting deadlines
  • A collaborator working efficiently with a diverse group of individuals
  • Receptive and responsive to feedback and are willing to iterate on work
  • Passionate about education

Availability

  • 15-20 hours of work/week for 3-4 months, starting immediately
  • Must be available to connect synchronously during PST working hours on weekdays
  • Can be remote or work from our SF office
Share this job:
Backend Engineer, Data Processing Rust
backend java data science machine learning big data linux Mar 13
About Kraken

Our mission is to accelerate the adoption of cryptocurrency so that you and the rest of the world can achieve financial freedom and inclusion.  Founded in 2011 and with over 4 million clients, Kraken is one of the world's largest, most successful bitcoin exchanges and we're growing faster than ever. Our range of successful products are playing an important role in the mainstream adoption of crypto assets.  We attract people who constantly push themselves to think differently and chart exciting new paths in a rapidly growing industry. Kraken is a diverse group of dreamers and doers who see value in being radically transparent. Let's change the way the world thinks about money! Join the revolution!

About the Role

This is a fully remote role, we will consider applicants based in North America, South America and Europe

Our Engineering team is having a blast while delivering the most sophisticated crypto-trading platform out there. Help us continue to define and lead the industry.

As part of Kraken's Backend Data team, you will work within a world-class team of engineers building Kraken's infrastructure using Rust. As a Backend Engineer in Data Processing, you will help design and build fraud and security detection systems leveraging big data pipelines, machine learning, and Rust.

Responsibilities:

  • Design and implementation of micro-services in Rust
  • Writing reusable, testable, and efficient code
  • Implementation of risk evaluation and anti-fraud systems, or similar scoring and anomaly detection systems
  • Select and design appropriate data storage and processing pipelines
  • Work with our Fraud/Data Science team or provide the Data Science know-how to support Product requirements

Requirements:

  • At least 5 years of experience in software engineering
  • Experience with Rust
  • Experience writing network services or asynchronous code
  • Python, Java or similar work experience
  • Working knowledge using Kafka, Pulsar or similar
  • Experience using a Linux server environment
  • Ability to independently debug problems involving the network and operating system

A strong candidate will also:

  • Be familiar with deployment using Docker
  • Have previous work experience on Risk scoring or anomaly detection systems
  • Have experience with Machine Learning and its ecosystem
  • Have experience with other strongly typed programming languages
  • Have experience using SQL and distributed data solutions like Spark, Hadoop or Druid
  • Be passionate about secure, reliable and fast software
We’re powered by people from around the world with their own unique backgrounds and experiences. We value all Krakenites and their talents, contributions, and perspectives.

Check out all our open roles at https://jobs.lever.co/kraken. We’re excited to see what you’re made of.  

Share this job:
Full Stack Engineer - DSS
Dataiku  
full stack java python javascript scala big data Mar 13
Dataiku’s mission is big: to enable all people throughout companies around the world to use data by removing friction surrounding data access, cleaning, modeling, deployment, and more. But it’s not just about technology and processes; at Dataiku, we also believe that people (including our people!) are a critical piece of the equation.



As a full stack developer in the Dataiku engineering team, you will play a crucial role in helping us have a real impact on the daily life of data analysts and scientists. You will be joining one of 3 teams that develop new features and improve existing parts of Data Science Studio (DSS) based on user feedback.

DSS is an on-premises application that connects together all big data technologies. We work with SQL databases, Spark, Kubernetes, Hadoop, Elasticsearch, MLlib, scikit-learn, Shiny, … and many more. Basically, our technological stack is made of all the technologies present in Technoslavia!

Our backend is mainly written in Java but also includes large chunks in Scala, Python and R. Our frontend is based on Angular and also makes vast usage of d3.js.

One of the most unique characteristics of DSS is the breadth of its scope and the fact that it caters both to data analysts (with visual and easy to use analytics) and data scientists (with deep integration in code and libraries, and a web-based IDE).

This is a full-time position, based in France either in our Paris office or remote.

Your missions

  • Turn ideas or simplistic specifications into full-fledged product features, including unit and end-to-end tests.
  • Tackle complex problems that range from performance and scalability to usability, so that complicated machineries look straightforward and simple to use for our users.
  • Help your coworkers: review code, spread your technical expertise, improve our tool chain
  • Bring your energy to the team!

You are the ideal recruit if

  • You have mastered a programming language (Java, C#, Python, JavaScript, you name it, ...).
  • You know that low-level Java code and slick web applications in JavaScript are two sides of the same coin and are eager to work on both.
  • You know that ACID is not a chemistry term.
  • You have prior experience (either professional or personal) building a real product or working with big data or cloud technologies.

Hiring process

  • Initial call with the talent acquisition manager
  • On-site meeting (or video call) with the hiring manager
  • Home test to show your skills
  • Final on-site interviews


To fulfill its mission, Dataiku is growing fast! In 2019, we achieved unicorn status, went from 200 to 400 people and opened new offices across the globe. We now serve our global customer base from our headquarters in New York City as well as offices in Paris, London, Munich, Amsterdam, Denver, Los Angeles, Singapore, Sydney and Dubaï. Each of them has a unique culture, but underpinning local nuances, we always value curiosity, collaboration, and can-do attitudes!
Share this job:
Site Reliability Engineer
Numbrs  
go kubernetes aws docker devops machine learning Mar 11

At Numbrs, our engineers don’t just develop things – we have an impact. We change the way people manage their finances by building the best products and services for our users.

Numbrs engineers are innovators, problem-solvers, and hard-workers who are building solutions in big data, mobile technology and much more. We look for professional, highly skilled engineers who evolve, adapt to change and thrive in a fast-paced, value-driven environment.

Join our dedicated technology team that builds massively scalable systems, designs low latency architecture solutions and leverages machine learning technology to turn financial data into action. Want to push the limit of personal finance management? Join Numbrs.

Job Description

You will be a part of a team that is responsible for deploying, supporting, monitoring and troubleshooting large scale micro-service based distributed systems with high transaction volume; documenting the IT infrastructure, policies and procedures. You will also be part of an on-call rotation.

All candidates will have

  • a Bachelor's or higher degree in a technical field of study
  • a minimum of 5 years' experience deploying, monitoring and troubleshooting large scale distributed systems
  • a background in Linux administration (mainly Debian)
  • scripting/programming knowledge of at least Unix shell scripting
  • a good understanding of networking (TCP/IP, DNS, routing, firewalls, etc.)
  • a good understanding of technologies such as Apache, Nginx, databases (relational and key-value), DNS servers, SMTP servers, etc.
  • an understanding of cloud-based infrastructure, such as AWS
  • experience with systems for automating deployment, scaling and management of containerised applications, such as Kubernetes
  • the ability to learn quickly and adapt to changing environments
  • excellent communication and documentation skills
  • excellent troubleshooting and creative problem-solving abilities
  • excellent communication and organisational skills in English

Ideally, candidates will also have

  • experience deploying and supporting big data technologies, such as Kafka, Spark, Storm and Cassandra
  • experience maintaining continuous integration and delivery pipelines with tools such as Jenkins and Spinnaker
  • experience implementing, operating and supporting open source tools for network and security monitoring and management on Linux/Unix platforms
  • experience with encryption and cryptography standards

Location: Zurich, Switzerland

Share this job:
Director of Sales Engineering - Central Europe
Dataiku  
executive python machine learning big data Mar 11
Dataiku’s mission is big: to enable all people throughout companies around the world to use data by removing friction surrounding data access, cleaning, modeling, deployment, and more. But it’s not just about technology and processes; at Dataiku, we also believe that people (including our people!) are a critical piece of the equation.



Dataiku is hiring a Director of Sales Engineering to oversee our Central Europe team of Sales Engineers. The position should be based in Frankfurt, Berlin or Munich. 

The Sales Engineering function at Dataiku is the primary technical function within the Sales organization, providing technical support (both for presales and post sales) to the Account Executives and directly contributing to Dataiku’s revenue objectives. As the “trusted advisors” in the sales process, the Sales Engineers help to build interest in Dataiku, build the solution to the prospect’s needs, and then build the evaluation to provide the prospect/customer with the proof that they need to make their purchasing decision. 

The Director role is key to growing Dataiku’s business in Central Europe; they should work as an individual contributor and lead the team. They will support objectives related to our ability to deliver compelling, highly technical customer engagements in the field. Key responsibilities in the coming months will be the enablement of the existing team, the hiring and retention of top talent, and ensuring excellence in our execution.

You’ll report directly to the Regional Vice President of Sales Engineering for EMEA.

RESPONSIBILITIES:

  • Lead a team of Sales Engineers helping to ensure technical success throughout the sales process
  • Be the main technical point of contact for the VP of Sales for Central Europe: strategize on opportunities, give reliable visibility into the pipeline, and train/coach the sales team on technical topics
  • Mentor/coach team members during on-boarding and subsequent phases to ensure proper ramping of skills and capabilities
  • Mentor/coach team members on a day-to-day basis: brainstorm on the strategy to adopt for each opportunity and provide constructive feedback
  • Interact with customers and prospects to understand their business challenges and engage in the evaluation process
  • Build strong working relationships with cross functional teams to ensure alignment between pre and post sales activities
  • Work with cross functional teams, product management, R&D, and other organizations to ensure alignment, provide process and product feedback, and resolve critical customer situations

REQUIREMENTS

  • 5+ years' experience in sales engineering of enterprise software products; big data tech experience preferred
  • 2+ years of related Sales Engineering management experience preferred
  • Experience in complex / large-scale enterprise analytics deployments
  • Familiarity with Python and/or R
  • Experience in data storage and computing infrastructure for data of all sizes (SQL, NoSQL, Hadoop, Spark, on-premise, and cloud)
  • Knowledge of machine learning libraries and techniques
  • Experience with visualization and dashboarding solutions
  • Excellent communication and public speaking skills
  • Native level in German and good communication skills in English
  • Ability to travel 10 to 40%
To fulfill its mission, Dataiku is growing fast! In 2019, we achieved unicorn status, went from 200 to 400 people and opened new offices across the globe. We now serve our global customer base from our headquarters in New York City as well as offices in Paris, London, Munich, Amsterdam, Denver, Los Angeles, Singapore, Sydney and Dubaï. Each of them has a unique culture, but underpinning local nuances, we always value curiosity, collaboration, and can-do attitudes!
Share this job:
Technical Support Engineer
Dataiku  
python data science big data docker cloud azure Mar 10
Dataiku’s mission is big: to enable all people throughout companies around the world to use data by removing friction surrounding data access, cleaning, modeling, deployment, and more. But it’s not just about technology and processes; at Dataiku, we also believe that people (including our people!) are a critical piece of the equation.



Dataiku is looking for an experienced Technical Support engineer to join its rapidly growing international team (with members distributed across the US, EMEA, and APAC regions). The ideal candidate is an autonomous individual who is passionate about getting big data and data science technologies working together to solve business problems, and who will efficiently help customers solve their technical issues with Dataiku DSS. It is a great opportunity to join Dataiku early on and help scale that critical function for the company.

As a Technical Support Engineer, you are a polished communicator and a trusted technical resource. You have worked with sophisticated/demanding customers, and you demonstrate excellent judgment in prioritization and are a multi-tasker. You love learning new cutting-edge technologies and getting your hands dirty to solve challenging technical problems. You are naturally driven to become the expert in the space.

Responsibilities

  • Providing technical solutions and responding to technical requests from customers through our different channels: email, chat, web conference, and support portal
  • Managing and resolving support issues with a high degree of technical complexity
  • Acting as a liaison between clients and other Dataiku teams (Customer Success, Engineering, Data Science, etc.) to help deliver a fast and efficient resolution to issues or questions raised by various customers
  • Improving efficiency by documenting and standardizing support processes for our customers, along with capturing and developing best practices
  • Developing tools that will help in diagnosing, resolving or triaging hard-to-get-at problems as efficiently and promptly as possible
  • Documenting knowledge in the form of incident notes, technical articles, and contributions to knowledge base or forums within specific areas of expertise
  • Timely follow-up on customer commitments, effectively prioritizing process / product refinements; relaying lessons learned and feedback internally to our other client-facing and technical teams

Requirements

  • BS in an Engineering or advanced analytics field, or equivalent practical experience
  • A strong competency in technical problem solving, with experience in working with advanced log analysis and various debugging techniques
  • Working proficiency with Unix-based operating systems and general systems administration knowledge (e.g., command-line interface, SSH, handling permissions, file limits, networking, resource utilization, etc.)
  • Experience working with a programming language
  • Experience working with at least one type of relational database and SQL
  • Excellent problem solving and analytical skills with an aptitude for learning new technologies
  • Ability to be autonomous, resourceful, and a proactive self-starter, while also remaining process-oriented and a team player
  • Strong communication skills and the ability to interface both with technical and non-technical individuals as needed

Nice to haves...

  • 3-5 years of experience in a client-facing engineering or technical role, ideally involving a complex and rapidly evolving software product
  • Technical understanding of the analytics and big data technologies (Hadoop, Spark, SQL databases and Data Warehouses) is a definite plus
  • Prior experience with and demonstrated interest in staying up to date on the latest data technologies (Python, R, Hadoop, Jupyter notebooks, Spark, H2O, Docker/Kubernetes, etc.)
  • Hands-on experience with Python and/or R
  • Experience working with various APIs
  • Experience with authentication and authorization systems like LDAP, SAML, and Kerberos
  • Working knowledge of various cloud technologies (AWS, Azure, GCP, etc.)
  • Some knowledge in data science and/or machine learning

Benefits

  • Opportunity to join Dataiku early on and help scale the company
  • Competitive compensation package, equity, health benefits, and paid vacation
  • Trips to Paris (our European HQ)
  • Opportunity to work with a smart, passionate and driven team
  • Dataiku has a strong culture based on key values: Ownership, Passion, Autonomy and Friendliness
To fulfill its mission, Dataiku is growing fast! In 2019, we achieved unicorn status, went from 200 to 400 people and opened new offices across the globe. We now serve our global customer base from our headquarters in New York City as well as offices in Paris, London, Munich, Amsterdam, Denver, Los Angeles, Singapore, Sydney and Dubaï. Each of them has a unique culture, but underpinning local nuances, we always value curiosity, collaboration, and can-do attitudes!
Share this job:
Technical Support Engineer
Dataiku  
python data science big data docker cloud azure Mar 10
Dataiku’s mission is big: to enable all people throughout companies around the world to use data by removing friction surrounding data access, cleaning, modeling, deployment, and more. But it’s not just about technology and processes; at Dataiku, we also believe that people (including our people!) are a critical piece of the equation.



Dataiku is looking for an experienced Technical Support Engineer to join its rapidly growing international team (with members distributed across the US, EMEA, and APAC regions). The ideal candidate is an autonomous individual who is passionate about getting big data and data science technologies working together to solve business problems, and who will efficiently help customers solve their technical issues with Dataiku DSS. It is a great opportunity to join Dataiku early on and help scale this critical function for the company.

As a Technical Support Engineer, you are a polished communicator and a trusted technical resource. You have worked with sophisticated, demanding customers, and you demonstrate excellent judgment in prioritization while juggling multiple tasks. You love learning cutting-edge technologies and getting your hands dirty to solve challenging technical problems. You are naturally driven to become the expert in the space.

We are looking for someone in the US to help provide world-class support to our Federal customer base. In particular, this position requires the individual to be either a US citizen or a qualified green card holder. Clearance is not necessary but would be a plus.

Responsibilities:

  • Providing technical solutions and responding to technical requests from customers through our different channels: email, chat, web conference, and support portal
  • Managing and resolving support issues with a high degree of technical complexity
  • Acting as a liaison between clients and other Dataiku teams (Customer Success, Engineering, Data Science, etc.) to help deliver a fast and efficient resolution to issues or questions raised from various customers
  • Improving efficiency by documenting and standardizing support processes for our customers, along with capturing and developing best practices
  • Developing tools that will help in diagnosing, resolving or triaging hard-to-get-at problems as efficiently and promptly as possible
  • Documenting knowledge in the form of incident notes, technical articles, and contributions to knowledge base or forums within specific areas of expertise
  • Following up on customer commitments in a timely manner, effectively prioritizing process/product refinements, and relaying lessons learned and feedback internally to our other client-facing and technical teams
  • Providing support to some of our largest, most challenging Federal and Enterprise accounts

Requirements:

  • BS in an Engineering or advanced analytics field, or equivalent practical experience
  • A strong competency in technical problem solving, with experience in working with advanced log analysis and various debugging techniques
  • Working proficiency with Unix-based operating systems and general systems administration knowledge (i.e. command line interface, SSH, handling permissions, file limits, networking, resource utilization, etc.)
  • Experience working with a programming language
  • Experience working with at least one relational database and SQL
  • Excellent problem solving and analytical skills with an aptitude for learning new technologies
  • Ability to be autonomous, resourceful, and a proactive self-starter, while also remaining process-oriented and a team player
  • Strong communication skills and the ability to interface with both technical and non-technical individuals as needed
  • US citizen or green card holder

Bonus Points:

  • 3-5 years of experience in a client-facing engineering or technical role, ideally involving a complex and rapidly evolving software product
  • Technical understanding of analytics and big data technologies (Hadoop, Spark, SQL databases, and data warehouses)
  • Prior experience with and demonstrated interest in staying up to date on the latest data technologies (Python, R, Hadoop, Jupyter notebooks, Spark, H2O, Docker/Kubernetes, etc.)
  • Hands-on experience with Python and/or R
  • Experience working with various APIs
  • Experience with authentication and authorization systems like LDAP, SAML, and Kerberos
  • Working knowledge of various cloud technologies (AWS, Azure, GCP, etc.)
  • Some knowledge in data science and/or machine learning
  • Experience or proven track record working with Federal clients

Benefits:

  • Opportunity to join Dataiku at an early stage and help scale the Support organization
  • Competitive compensation package, equity, health benefits, and paid vacation
  • Trips to our different offices (Paris, NYC, etc.)
  • Opportunity to work with a smart, passionate, and driven team
  • Startup atmosphere: Free food and drinks, foosball/FIFA/ping pong, company happy hours and team days, and more
  • Strong culture based on key values: Ownership, Passion, Autonomy and Friendliness
To fulfill its mission, Dataiku is growing fast! In 2019, we achieved unicorn status, went from 200 to 400 people and opened new offices across the globe. We now serve our global customer base from our headquarters in New York City as well as offices in Paris, London, Munich, Amsterdam, Denver, Los Angeles, Singapore, Sydney and Dubaï. Each of them has a unique culture, but underpinning local nuances, we always value curiosity, collaboration, and can-do attitudes!
Enterprise Account Executive - Financial Services
executive c saas big data Feb 25
Dubbed an "open-source unicorn" by Forbes, Confluent is the fastest-growing enterprise subscription company our investors have ever seen. And how are we growing so fast? By pioneering a new technology category with an event streaming platform, which enables companies to leverage their data as a continually updating stream of events, not as static snapshots. This innovation has led Sequoia Capital, Benchmark, and Index Ventures to recently invest a combined $125 million in our Series D financing. Our product has been adopted by Fortune 100 customers across all industries, and we’re being led by the best in the space—our founders were the original creators of Apache Kafka®. We’re looking for talented and amazing team players who want to accelerate our growth, while doing some of the best work of their careers. Join us as we build the next transformative technology platform!

Enterprise Account Executives play a key role in driving Confluent’s sales activities in their region. The role includes developing and executing the go-to-market strategy for your territory. The ideal candidate has experience selling complex Database, Messaging, Big Data, Open Source and/or SaaS products into large corporate and multinational companies.

What you will do:

  • Build awareness for Kafka and the Confluent Platform within large enterprises
  • Aggressively prospect, identify, qualify and develop sales pipeline
  • Close business to exceed monthly, quarterly and annual bookings objectives
  • Build strong and effective relationships, resulting in growth opportunities
  • Build and maintain relationships with new and existing Confluent partners

What we are looking for:

  • An ability to articulate and sell the business value of big data and the impact on businesses of all sizes
  • Deep experience selling within the Database, Open Source, Messaging or Big Data space
  • 5+ years of experience selling enterprise technology in a fast-paced and competitive market
  • Experience selling to developers and C-level executives
  • Highly motivated, over achiever, team player
  • Strong analytical and writing abilities
  • Exceptional presentation skills
  • Entrepreneurial spirit/mindset, flexibility toward dynamic change
  • Goal oriented, with a track record of overachievement (President’s Club, Rep of the Year, etc.)

Why you will enjoy working here:

  • We’re solving hard problems that are relevant in every industry
  • Your growth is important to us, we want you to thrive here
  • You will be challenged on a daily basis
  • We’re a company that truly values a #oneteam mindset
  • We have great benefits to support you AND your family
Culture is a huge part of Confluent; we’re searching for the best people, who not only excel at their role but also contribute to the health, happiness and growth of the company. Inclusivity and openness are important traits, and we hold regular company-wide and team events. Here are some of the personal qualities we’re looking for: 

  • Smart, humble and empathetic
  • Hard working, you get things done
  • Hungry to learn in a field which is ever evolving
  • Adaptable to the myriad of challenges each day can present
  • Inquisitive and not afraid to ask all the questions, no matter how basic
  • Ready to roll up your sleeves and help others, getting involved in projects where you feel you can add value
  • Striving for excellence in your work, your team and the company 

Come and build with us. We are one of the fastest growing software companies in the market, a company built on the tenets of transparency, direct communication and inclusivity. Come meet the streams dream team and have a direct impact on how we shape Confluent.


Come As You Are

At Confluent, equality is a core tenet of our culture. We are committed to building an inclusive global team that represents a variety of backgrounds, perspectives, beliefs, and experiences. The more diverse we are, the richer our community and the broader our impact.
Project Management Curriculum Writer
project-management agile kanban data science big data cloud Feb 22

Project Management Curriculum Writer

  • Education
  • Remote
  • Contract

Who We Are

Thinkful is a new type of school that brings high-growth tech careers to ambitious people everywhere. We provide 1-on-1 learning through our network of industry experts, hiring partners, and online platform to deliver a structured and flexible education. Thinkful offers programs in web development, data science, and design, with in-person communities in up-and-coming tech hubs around the U.S. To join the Thinkful network visit thinkful.com.

Job Description

Thinkful is launching a new Technical Project Management program which aims to be the best-in-class remote, part-time Technical Project Management program offered today. As part of this effort, we're looking for a Technical Project Management subject matter expert to join us in executing on our content roadmap for this exciting new program. You will be creating the backbone of a new program that propels people from a background in academia and the sciences into an impactful career as a Technical Project Manager. You'll produce lesson plans (including instructor notes and student activity descriptions), presentation decks, assessments, learning objectives, and written content, all to support our students as they learn the core skills of technical project management. Your work product will be extremely impactful, as it forms the core asset around which the daily experience of our students will revolve. 

Responsibilities

  • Consistently deliver content that meets spec and is on time to support our program launch roadmap.
  • Create daily lesson plans consisting of:
      • Presentation decks that instructors use to lecture students on a given learning objective
      • Instructor notes that instructors use alongside the presentation decks
      • Activity descriptions — these are notes describing tasks students complete together in order to advance the learning objective in a given lecture
  • Create curriculum checkpoint content on specific learning objectives. In addition to the in-class experience, our students also spend time reading and completing tasks for a written curriculum hosted on the Thinkful platform. 
  • Create code assets where necessary to support lesson plans, student activities, and written curriculum content.
  • Iterate on deliverables based on user feedback

Requirements

  • 3+ years of hands-on Technical Project Management industry experience 
  • Demonstrated subject matter expert in Technical Project Management 
  • Experience managing projects using Agile, Kanban and Six Sigma methodologies
  • Experience working on multiple projects, at all complexity levels, in an environment with changing priorities
  • Change management expertise 
  • Web application development experience 
  • Experience running large-scale big data projects and/or AWS cloud-based projects
  • Collaborative. You enjoy partnering with people and have excellent project management skills and follow-through
  • Excellent writing skills. You've got a gift for writing about complicated concepts in a beginner-friendly way. You can produce high-quality prose as well as high-quality presentations.

Compensation and Benefits

  • Contract position with a collaborative team
  • Ability to work remotely with flexible hours 
  • Access to all available course curriculum for personal use
  • Membership to a global community of over 500 Software Engineers, Developers, and Data Scientists who, like you, want to keep their skills sharp and help learners break into the industry
Big Data ETL, Architecture
amazon-redshift amazon-redshift-spectrum postgis amazon-s3 data-structures big data Feb 20

Lean Media is looking for experts to help us with the ongoing import, enrichment (including geospatial), and architecture of big datasets (millions to billions of records at a time).

Our infrastructure and tech stack includes:

  • Amazon Redshift, Spectrum, Athena
  • AWS Lambda
  • AWS S3 Data Lakes
  • PostgreSQL, PostGIS
  • Apache Superset

We are looking for expertise in:

  • Building efficient ETL pipelines, including enrichments
  • Best practices regarding the ongoing ingestion of big datasets from disparate sources
  • High performance enrichment of geospatial data
  • Optimizing data structures as they relate to achieving performant queries via analytics tools
  • Architecting a sustainable data infrastructure supporting all of the above
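To give a flavor of the enrichment work described above, here is a minimal, hypothetical sketch (not Lean Media's actual code; all names are illustrative) of a pure-Python transform step that tags incoming records with a region via a point-in-bounding-box lookup before they are bulk-loaded into the lake:

```python
# Hypothetical ETL enrichment step: tag each record with a region based on a
# point-in-bounding-box lookup. Real geospatial enrichment would typically use
# PostGIS or similar; this sketch only shows the pipeline shape.

# Regions as (name, min_lon, min_lat, max_lon, max_lat) -- illustrative values.
REGIONS = [
    ("pacific_nw", -125.0, 42.0, -116.0, 49.0),
    ("northeast", -80.0, 39.0, -66.0, 47.5),
]

def enrich_region(record):
    """Attach a 'region' field to a record carrying 'lon' and 'lat' keys."""
    for name, min_lon, min_lat, max_lon, max_lat in REGIONS:
        if min_lon <= record["lon"] <= max_lon and min_lat <= record["lat"] <= max_lat:
            record["region"] = name
            break
    else:
        record["region"] = None  # unmatched points flow through for later review
    return record

def run_pipeline(records):
    """Extract -> transform (enrich) -> return rows ready for bulk load."""
    return [enrich_region(dict(r)) for r in records]
```

At billions of records, the same per-record transform would be pushed down into the warehouse or a Lambda/Spectrum layer rather than run row-by-row in Python.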

While this posting is for a contract position, we are open to short projects, ongoing engagements, and even full time employment opportunities. If you have a high degree of skill and experience in the area of big data architecture in AWS, then please let us know!

Senior Data Engineer
apache machine-learning algorithm senior python scala Feb 19

SemanticBits is looking for a talented Senior Data Engineer who is eager to apply computer science, software engineering, databases, and distributed/parallel processing frameworks to prepare big data for the use of data analysts and data scientists. You will mentor junior engineers and deliver data acquisition, transformations, cleansing, conversion, compression, and loading of data into data and analytics models. You will work in partnership with data scientists and analysts to understand use cases, data needs, and outcome objectives. You are a practitioner of advanced data modeling and optimization of data and analytics solutions at scale. You are an expert in data management, data access (big data, data marts, etc.), programming, and data modeling, and are familiar with analytic algorithms and applications (like machine learning).

Requirements

  • Bachelor’s degree in computer science (or related) and eight years of professional experience
  • Strong knowledge of computer science fundamentals: object-oriented design and programming, data structures, algorithms, databases (SQL and relational design), networking
  • Demonstrable experience engineering scalable data processing pipelines.
  • Demonstrable expertise with Python, Spark, and wrangling of various data formats - Parquet, CSV, XML, JSON.
  • Experience with the following technologies is highly desirable: Redshift (w/Spectrum), Hadoop, Apache NiFi, Airflow, Apache Kafka, Apache Superset, Flask, Node.js, Express, AWS EMR, Scala, Tableau, Looker, Dremio
  • Experience with Agile methodology, using test-driven development.
  • Excellent command of written and spoken English
  • Self-driven problem solver
Cloud Architect for Enterprise AI - Remote
Dataiku  
cloud data science big data linux aws azure Feb 18
Dataiku’s mission is big: to enable all people throughout companies around the world to use data by removing friction surrounding data access, cleaning, modeling, deployment, and more. But it’s not just about technology and processes; at Dataiku, we also believe that people (including our people!) are a critical piece of the equation.



Dataiku is looking for an experienced Cloud Architect to join its Field Engineering Team to support the deployment of its Enterprise AI Platform (Dataiku DSS)  to an ever growing customer base. 

As a Cloud Architect, you’ll work with customers at every stage of their relationship with Dataiku - from the initial evaluations to enterprise-wide deployments. In this role, you will help customers to design, build and run their Data Science and AI Enterprise Platforms.

This role requires adaptability, inventiveness, and strong communication skills. Sometimes you will work with clients on traditional big data technologies such as SQL data warehouses and on-premise Hadoop data lakes, while at other times you will be helping them to discover and implement the most cutting-edge tools: Spark on Kubernetes, cloud-based elastic compute engines, and GPUs. If you are interested in staying at the bleeding edge of big data and AI while maintaining a strong working knowledge of existing enterprise systems, this will be a great fit for you.

The position can be based remotely.

Responsibilities

  • Evangelize the challenges of building Enterprise Data Science Platforms to technical and non-technical audiences
  • Understand customer requirements in terms of scalability, availability and security and provide architecture recommendations
  • Deploy Dataiku DSS in a large variety of technical environments (on-prem/cloud, Hadoop, Kubernetes, Spark, …)
  • Design and build reference architectures, how-tos, scripts, and various helpers to make the deployment and maintenance of Dataiku DSS smooth and easy
  • Automate operation, installation, and monitoring of the data science ecosystem components in our infrastructure stack
  • Provide advanced support for strategic customers on deployment and scalability issues
  • Coordinate with Revenue and Customer teams to deliver a consistent experience to our customers
  • Train our clients and partners in the art and science of administering a bleeding-edge Elastic AI platform

Requirements

  • Strong Linux system administration experience
  • Grit when faced with technical issues. You don’t rest until you understand why it does not work.
  • Comfort and confidence in client-facing interactions
  • Ability to work both pre and post sale
  • Experience with cloud based services like AWS, Azure and GCP
  • Hands-on experience with the Hadoop and/or Spark ecosystem for setup, administration, troubleshooting and tuning
  • Hands-on experience with the Kubernetes ecosystem for setup, administration, troubleshooting and tuning
  • Some experience with Python
  • Familiarity with Ansible or other application deployment tools

Bonus points for any of these

  • Experience with authentication and authorization systems like LDAP, Kerberos, AD, and IAM
  • Experience debugging networking issues such as DNS resolutions, proxy settings, and security groups
  • Some knowledge in data science and/or machine learning
  • Some knowledge of Java

Benefits

  • Work on the newest, best, big data technologies for a unicorn startup
  • Consult on AI infrastructure for some of the largest companies in the world
  • Equity
  • Opportunity for international exchange to another Dataiku office
  • Attend and present at big data conferences
  • Startup atmosphere: Free food and drinks, international atmosphere, general good times and friendly people


To fulfill its mission, Dataiku is growing fast! In 2019, we achieved unicorn status, went from 200 to 400 people and opened new offices across the globe. We now serve our global customer base from our headquarters in New York City as well as offices in Paris, London, Munich, Amsterdam, Denver, Los Angeles, Singapore, Sydney and Dubaï. Each of them has a unique culture, but underpinning local nuances, we always value curiosity, collaboration, and can-do attitudes!
Site Reliability Engineer
hadoop linux bigdata python ruby c Feb 14

The Wikimedia Foundation is hiring two Site Reliability Engineers to support and maintain (1) the data and statistics infrastructure that powers a big part of decision making in the Foundation and in the Wiki community, and (2) the search infrastructure that underpins all search on Wikipedia and its sister projects. This includes everything from eliminating boring things from your daily workflow by automating them, to upgrading a multi-petabyte Hadoop or multi-terabyte Search cluster to the next upstream version without impacting uptime and users.

We're looking for an experienced candidate who's excited about working with big data systems. Ideally you will already have some experience working with software like Hadoop, Kafka, ElasticSearch, Spark and other members of the distributed computing world. Since you'll be joining an existing team of SREs you'll have plenty of space and opportunities to get familiar with our tech (Analytics, Search, WDQS), so there's no need to immediately have the answer to every question.

We are a full-time distributed team with no one working out of the actual Wikimedia office, so we are all together in the same remote boat. Part of the team is in Europe and part in the United States. We see each other in person two or three times a year, either during one of our off-sites (most recently in Europe), the Wikimedia All Hands (once a year), or Wikimania, the annual international conference for the Wiki community.

Here are some examples of projects we've been tackling lately that you might be involved with:

  •  Integrating an open-source GPU software platform like AMD ROCm in Hadoop and in the Tensorflow-related ecosystem
  •  Improving the security of our data by adding Kerberos authentication to the analytics Hadoop cluster and its satellite systems
  •  Scaling the Wikidata query service, a semantic query endpoint for graph databases
  •  Building the Foundation's new event data platform infrastructure
  •  Implementing alarms that alert the team of possible data loss or data corruption
  •  Building a new and improved Jupyter notebooks ecosystem for the Foundation and the community to use
  •  Building and deploying services in Kubernetes with Helm
  •  Upgrading the cluster to Hadoop 3
  •  Replacing Oozie by Airflow as a workflow scheduler
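As a concrete flavor of the alerting item above, here is a minimal, hypothetical sketch (not Wikimedia's actual implementation; names and thresholds are illustrative) of a data-loss check that flags days whose record counts fall far below recent history:

```python
# Hypothetical data-loss alarm: flag days whose record count sits more than
# `threshold` standard deviations below the mean of the preceding days.
from statistics import mean, stdev

def loss_alerts(daily_counts, threshold=3.0):
    """Return the indices of days that look like possible data loss."""
    alerts = []
    # Start at index 3 so there are at least three days of history to compare against.
    for i in range(3, len(daily_counts)):
        history = daily_counts[:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and daily_counts[i] < mu - threshold * sigma:
            alerts.append(i)
    return alerts
```

A production version would run as a scheduled job against the ingestion metrics and page an on-call SRE instead of returning a list, but the core check is the same shape.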

And these are our more formal requirements:

  •    A couple of years of experience in an SRE/Operations/DevOps role as part of a team
  •    Experience in supporting complex web applications running highly available and high traffic infrastructure based on Linux
  •    Comfortable with configuration management and orchestration tools (Puppet, Ansible, Chef, SaltStack, etc.), and modern observability infrastructure (monitoring, metrics and logging)
  •    An appetite for the automation and streamlining of tasks
  •    Willingness to work with JVM-based systems  
  •    Comfortable with shell and scripting languages used in an SRE/Operations engineering context (e.g. Python, Go, Bash, Ruby, etc.)
  •    Good understanding of Linux/Unix fundamentals and debugging skills
  •    Strong English language skills and ability to work independently, as an effective part of a globally distributed team
  •    B.S. or M.S. in Computer Science, related field or equivalent in related work experience. Do not feel you need a degree to apply; we value hands-on experience most of all.

The Wikimedia Foundation is... 

...the nonprofit organization that hosts and operates Wikipedia and the other Wikimedia free knowledge projects. Our vision is a world in which every single human can freely share in the sum of all knowledge. We believe that everyone has the potential to contribute something to our shared knowledge, and that everyone should be able to access that knowledge, free of interference. We host the Wikimedia projects, build software experiences for reading, contributing, and sharing Wikimedia content, support the volunteer communities and partners who make Wikimedia possible, and advocate for policies that enable Wikimedia and free knowledge to thrive. The Wikimedia Foundation is a charitable, not-for-profit organization that relies on donations. We receive financial support from millions of individuals around the world, with an average donation of about $15. We also receive donations through institutional grants and gifts. The Wikimedia Foundation is a United States 501(c)(3) tax-exempt organization with offices in San Francisco, California, USA.

The Wikimedia Foundation is an equal opportunity employer, and we encourage people with a diverse range of backgrounds to apply.

U.S. Benefits & Perks*

  • Fully paid medical, dental and vision coverage for employees and their eligible families (yes, fully paid premiums!)
  • The Wellness Program provides reimbursement for mind, body and soul activities such as fitness memberships, baby sitting, continuing education and much more
  • The 401(k) retirement plan offers matched contributions at 4% of annual salary
  • Flexible and generous time off - vacation, sick and volunteer days, plus 19 paid holidays - including the last week of the year.
  • Family friendly! 100% paid new parent leave for seven weeks plus an additional five weeks for pregnancy, flexible options to phase back in after leave, fully equipped lactation room.
  • For those emergency moments - long and short term disability, life insurance (2x salary) and an employee assistance program
  • Pre-tax savings plans for health care, child care, elder care, public transportation and parking expenses
  • Telecommuting and flexible work schedules available
  • Appropriate fuel for thinking and coding (aka, a pantry full of treats) and monthly massages to help staff relax
  • Great colleagues - diverse staff and contractors speaking dozens of languages from around the world, fantastic intellectual discourse, mission-driven and intensely passionate people

*Eligible international workers' benefits are specific to their location and dependent on their employer of record

Data Engineer
NAVIS  
hadoop web-services python sql etl machine learning Feb 11

NAVIS is excited to be hiring a Data Engineer for a remote, US-based position. Candidates based outside of the US are not being considered at this time. This is a NEW position due to growth in this area. 

Be a critical element of what sets NAVIS apart from everyone else!  Join the power behind the best-in-class Hospitality CRM software and services that unifies hotel reservations and marketing teams around their guest data to drive more bookings and revenue.

Our Guest Experience Platform team is seeking an experienced Data Engineer to play a lead role in the building and running of our modern big data and machine learning platform that powers our products and services. In this role, you will be responsible for building the analytical data pipeline, data lake, and real-time data streaming services. You should be passionate about technology and complex big data business challenges.

You can have a huge impact on everything from the functionality we deliver for our clients, to the architecture of our systems, to the technologies that we are adopting. 

You should be highly curious with a passion for building things!



DUTIES & RESPONSIBILITIES:

  • Design and develop business-critical data pipelines and related back-end services
  • Identify and participate in simplifying and addressing scalability issues for enterprise-level data pipelines
  • Design and build big data infrastructure to support our data lake

QUALIFICATIONS:

  • 2+ years of extensive experience with Hadoop (or similar) Ecosystem (MapReduce, Yarn, HDFS, Hive, Spark, Presto, HBase, Parquet)
  • Experience with building, breaking, and fixing production data pipelines
  • Hands-on SQL skills and background in other data stores like SQL-Server, Postgres, and MongoDB
  • Experience with continuous delivery and automated deployments (Terraform)
  • ETL experience
  • Able to identify and participate in addressing scalability issues for enterprise level data
  • Python programming experience

DESIRED, BUT NOT REQUIRED SKILLS:

  • Experience with machine learning libraries like scikit-learn, Tensorflow, etc., or an interest in picking it up
  • Experience with R to mine structured and unstructured data and/or building statistical models
  • Experience with Elasticsearch
  • Experience with AWS services like Glue, S3, SQS, Lambda, Fargate, EC2, Athena, Kinesis, Step Functions, DynamoDB, CloudFormation and CloudWatch will be a huge plus

POSITION LOCATION:

There are 3 options for the location of this position (candidates based outside the US are NOT being considered at this time):

  • You can work remotely in the continental US with occasional travel to Bend, Oregon
  • You can be based at a shared office space in the heart of downtown Portland, Oregon
  • You can be based at our offices in Bend, Oregon (relocation assistance package available)



NAVIS OFFERS:

  • An inclusive, fun, values-driven company culture – we’ve won awards for it
  • A growing tech company in Bend, Oregon
  • Work / Life balance - what a concept!
  • Excellent benefits package with a Medical Expense Reimbursement Program that helps keep our medical deductibles LOW for our Team Members
  • 401(k) with generous matching component
  • Generous time off plus a VTO day to use working at your favorite charity
  • Competitive pay + annual bonus program
  • FREE TURKEYS (or pies) for every Team Member for Thanksgiving (hey, it's a tradition around here)
  • Your work makes a difference here, and we make a huge impact to our clients’ profits
  • Transparency – regular All-Team meetings, so you can stay in-the-know with what’s going on in all areas of our business
Share this job:
VP, Data Science & Engineering
machine-learning hadoop data science c machine learning big data Feb 10

The Wikimedia Foundation is seeking an experienced executive to serve as Vice President of Data Science & Engineering for our Technology department. At the Wikimedia Foundation, we operate the world’s largest collaborative project: a top ten website, reaching a billion people globally every month, while incorporating the values of privacy, transparency and community that are so important to our users. 

Reporting to the Chief Technology Officer, the VP of Data Science & Engineering is a key member of the Foundation’s leadership team and an active participant in the strategic decision making framing the work of the technology department, the Wikimedia Foundation and the Wikimedia movement.

This role is responsible for planning and executing an integrated multi-year data science and engineering strategy spanning our work in artificial intelligence, machine learning, search, natural language processing and analytics. This strategy will interlock with and support the larger organization and movement strategy in service of our vision of enabling every human being to share freely in the sum of human knowledge.

Working closely with other Technology and Product teams, as well as our community of contributors and readers, you’ll lead a team of dedicated directors, engineering managers, software engineers, data engineers, and data scientists who are shaping the next generation of data usage, analysis and access across all Wikimedia projects.

Some examples of our teams’ work in the realm of data science and data engineering can be found on our blog, including deeper info on our work improving edit workflows with machine learning, our use of Kafka and Hadoop, and our analysis of people falling into the “Wikipedia rabbit hole”. Lately we have been thinking about how to best identify traffic anomalies that might indicate outages or, possibly, censorship.
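The traffic-anomaly problem mentioned above can be pictured with a simple rolling z-score over request counts. This is purely illustrative; the function name, window size, and threshold below are assumptions, not a description of Wikimedia's actual pipeline:

```python
from statistics import mean, stdev

def traffic_anomalies(counts, window=24, threshold=3.0):
    """Flag indices whose count deviates more than `threshold` standard
    deviations from the trailing `window` of observations."""
    anomalies = []
    for i in range(window, len(counts)):
        baseline = counts[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(counts[i] - mu) / sigma > threshold:
            anomalies.append(i)  # sudden drop (outage?) or spike
    return anomalies

# A steady, slightly noisy series with one sharp drop at index 30:
series = [100, 102, 98, 101, 99] * 6 + [10]
print(traffic_anomalies(series))  # → [30]
```

A real deployment would of course work on much richer signals (per-country request rates, seasonality-adjusted baselines), but the core idea of comparing each observation against a recent baseline is the same.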

You are responsible for:

  • Leading the technical and engineering efforts of a global team of engineers, data scientists and managers focused on our efforts in productionizing artificial intelligence, data science, analytics, machine learning and natural language processing models as well as data operations. These efforts currently encompass three teams: Search Platform, Analytics and Scoring Platform (Machine Learning Engineering)
  • Working closely with our Research, Architecture, Security, Site Reliability and Platform teams to define our next generation of data architecture, search, machine learning and analytics infrastructure
  • Creating scalable engineering management processes and prioritization rubrics
  • Developing the strategy, vision, plan, and cross-functional teams to create a holistic data strategy for the Wikimedia Foundation, taking into account our fundamental values of transparency, privacy, and collaboration, in partnership with internal and external stakeholders and community members
  • Ensuring data is reliable, consistent, accessible, and secure, and that it is available in a timely manner for external and internal stakeholders in accordance with our privacy policy
  • Negotiating shared goals, roadmaps and dependencies with finance, product, legal and communication departments
  • Contributing to our culture by managing, coaching and developing our engineering and data teams
  • Illustrating your success in making your mark on the world by collaboratively measuring and adapting our data strategy within the technology department and the broader Foundation
  • Managing up to 5 direct reports with a total team size of 20

Skills and Experience:

  • Deep experience leading data science, machine learning, search or data engineering teams, with the judgment to separate the hype in the artificial intelligence space from the reality of delivering production-ready data systems
  • 5+ years senior engineering leadership experience
  • Demonstrated ability to balance competing interests in a complex technical and social environment
  • Proven success at all stages of the engineering process and product lifecycle, leading to significant, measurable impact.
  • Previous hands-on experience in production big data and machine learning environments at scale
  • Experience building and supporting diverse, international and distributed teams
  • Outstanding oral and written English language communications

Qualities that are important to us:

  • You take a solutions-focused approach to challenging data and technical problems
  • A passion for people development, team culture and the management of ideas
  • You have a desire to show the world how data can be done well while honoring users’ right to privacy

Additionally, we’d love it if you have:

  • Experience with modern machine learning, search and natural language processing platforms
  • A track record of open source participation
  • Fluency or familiarity with languages in addition to English
  • Experience living or working outside your country of origin
  • Experience as a member of a volunteer community

The Wikimedia Foundation is... 

...the nonprofit organization that hosts and operates Wikipedia and the other Wikimedia free knowledge projects. Our vision is a world in which every single human can freely share in the sum of all knowledge. We believe that everyone has the potential to contribute something to our shared knowledge, and that everyone should be able to access that knowledge, free of interference. We host the Wikimedia projects, build software experiences for reading, contributing, and sharing Wikimedia content, support the volunteer communities and partners who make Wikimedia possible, and advocate for policies that enable Wikimedia and free knowledge to thrive. The Wikimedia Foundation is a charitable, not-for-profit organization that relies on donations. We receive financial support from millions of individuals around the world, with an average donation of about $15. We also receive donations through institutional grants and gifts. The Wikimedia Foundation is a United States 501(c)(3) tax-exempt organization with offices in San Francisco, California, USA.

The Wikimedia Foundation is an equal opportunity employer, and we encourage people with a diverse range of backgrounds to apply.

U.S. Benefits & Perks*

  • Fully paid medical, dental and vision coverage for employees and their eligible families (yes, fully paid premiums!)
  • The Wellness Program provides reimbursement for mind, body and soul activities such as fitness memberships, baby sitting, continuing education and much more
  • The 401(k) retirement plan offers matched contributions at 4% of annual salary
  • Flexible and generous time off - vacation, sick and volunteer days, plus 19 paid holidays - including the last week of the year.
  • Family friendly! 100% paid new parent leave for seven weeks plus an additional five weeks for pregnancy, flexible options to phase back in after leave, fully equipped lactation room.
  • For those emergency moments - long and short term disability, life insurance (2x salary) and an employee assistance program
  • Pre-tax savings plans for health care, child care, elder care, public transportation and parking expenses
  • Telecommuting and flexible work schedules available
  • Appropriate fuel for thinking and coding (aka, a pantry full of treats) and monthly massages to help staff relax
  • Great colleagues - diverse staff and contractors speaking dozens of languages from around the world, fantastic intellectual discourse, mission-driven and intensely passionate people

*Eligible non-US benefits are specific to location and dependent on employer of record

Share this job:
Senior Data Engineer
Acast  
senior java scala big data docker cloud Feb 10
Acast is the world leading technology platform for on-demand audio and podcasting with offices in Stockholm, London, New York, Los Angeles, Sydney, Paris, Oslo and Berlin. We have over 150M monthly listens today, and are growing rapidly. At our core is a love of audio and the fascinating stories our podcasters tell.

We are a flat organization that supports a culture of autonomy and respect, and find those with an entrepreneurial spirit and curious mindset thrive at Acast. 

We are looking for a Senior Data Engineer to join a new purpose-driven team that will create data-driven products to help other teams provide smarter solutions to our end customers, as well as core datasets for business-critical use cases such as payouts to our podcasters. This team’s ambition is to transform our data into insights. The products you build will be used by our mobile apps, by the product suite we have for podcast creators and advertisers, and by other departments within Acast. 

In this role you will work with other engineers and product owners within a cross-functional agile team.

You

  • Have 3+ years of experience building robust big data ETL pipelines within the Hadoop ecosystem: Spark, Hive, Presto, etc.
  • Are proficient in Java or Scala, and in Python
  • Have experience with the AWS cloud environment: EMR, Glue, Kinesis, Athena, DynamoDB, Lambda, Redshift, etc.
  • Have strong knowledge of SQL and NoSQL database design and modelling, and understand the differences between modern big data systems and traditional data warehousing
  • Have DevOps and infrastructure-as-code experience (a plus), and are familiar with tools like Jenkins, Ansible, Docker, Kubernetes, CloudFormation, Terraform, etc.
  • Advocate agile software development practices and balance trade-offs in time, scope and quality
  • Are curious and a fast learner who can adapt quickly and enjoy a dynamic, ever-changing environment
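The ETL-pipeline experience described above can be made concrete with a deliberately tiny, pure-Python sketch: extract raw listen events, transform them into an aggregate, load the result into a store. In production this would run on Spark or similar; the JSON schema and all names here are invented for illustration:

```python
import json

def extract(raw_lines):
    """Extract: parse raw JSON listen events, one per line."""
    return [json.loads(line) for line in raw_lines]

def transform(events):
    """Transform: aggregate listens per podcast, the kind of core
    dataset a payout calculation might consume."""
    listens = {}
    for event in events:
        listens[event["podcast"]] = listens.get(event["podcast"], 0) + 1
    return listens

def load(table, store):
    """Load: write the aggregate into a key-value store stand-in."""
    store.update(table)
    return store

raw = ['{"podcast": "a", "user": 1}',
       '{"podcast": "a", "user": 2}',
       '{"podcast": "b", "user": 1}']
warehouse = {}
load(transform(extract(raw)), warehouse)
print(warehouse)  # → {'a': 2, 'b': 1}
```

The value of the three-stage split is that each stage can be tested, scaled, and scheduled independently, which is exactly what frameworks like Spark and Glue formalize.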

Benefits

  • Monthly wellness allowance
  • 30 days holiday
  • Flexible working
  • Pension scheme
  • Private medical insurance
Our engineering team is mostly located in central Stockholm, but with a remote-first culture we’re able to bring on people who prefer full-time remote work from Sweden, Norway, the UK, France and Germany.

Do you want to be part of our ongoing journey? Apply now!

Share this job:
Solutions Architect - Pacific Northwest
java python scala big data linux cloud Feb 07
Dubbed an "open-source unicorn" by Forbes, Confluent is the fastest-growing enterprise subscription company our investors have ever seen. And how are we growing so fast? By pioneering a new technology category with an event streaming platform, which enables companies to leverage their data as a continually updating stream of events, not as static snapshots. This innovation has led Sequoia Capital, Benchmark, and Index Ventures to recently invest a combined $125 million in our Series D financing. Our product has been adopted by Fortune 100 customers across all industries, and we’re being led by the best in the space—our founders were the original creators of Apache Kafka®. We’re looking for talented and amazing team players who want to accelerate our growth, while doing some of the best work of their careers. Join us as we build the next transformative technology platform!

We are looking for a Solutions Architect to join our Customer Success team. As a Solutions Architect (SA), you will help customers leverage streaming architectures and applications to achieve their business results. In this role, you will interact directly with customers to provide software architecture, design, and operations expertise that leverages your deep knowledge of and experience in Apache Kafka, the Confluent platform, and complementary systems such as Hadoop, Spark, Storm, relational and NoSQL databases. You will develop and advocate best practices, gather and validate critical product feedback, and help customers overcome their operational challenges.

Throughout all these interactions, you will build a strong relationship with your customer in a very short space of time, ensuring exemplary delivery standards. You will also have the opportunity to help customers build state-of-the-art streaming data infrastructure, in partnership with colleagues who are widely recognized as industry leaders, as well as optimizing and debugging customers existing deployments.
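The "continually updating stream of events, not static snapshots" idea at the heart of this role can be illustrated in a few lines: state is just a replay of an append-only log, so a snapshot can be rebuilt at any point in the stream. This is a toy stand-in for the concept, not the Kafka API:

```python
log = []  # append-only event log (stand-in for a Kafka topic)

def emit(key, value):
    """Producers append immutable events; nothing is updated in place."""
    log.append((key, value))

def snapshot(upto=None):
    """Replay the log to materialize current state: a 'table' view
    derived from the stream, at any point in its history."""
    state = {}
    for key, value in log[:upto]:
        state[key] = value
    return state

emit("order-1", "created")
emit("order-2", "created")
emit("order-1", "shipped")

print(snapshot())        # → {'order-1': 'shipped', 'order-2': 'created'}
print(snapshot(upto=2))  # earlier point in the stream: both still 'created'
```

Because the log, not the materialized table, is the source of truth, any number of consumers can derive their own views independently; that stream-table duality is what event streaming platforms build on.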

Location:
You will be based anywhere in the Pacific Northwest, with 60-70% travel expected.

Responsibilities

  • Helping a customer determine their platform and/or application strategy for moving to a more real-time, event-based business. Such engagements often involve remote preparation; presenting an onsite or remote workshop for the customer’s architects, developers, and operations teams; investigating (with Engineering and other coworkers) solutions to difficult challenges; and writing a recommendations summary doc.
  • Providing feedback to the Confluent Product and Engineering groups
  • Building tooling for another team or the wider company to help us push our technical boundaries and improve our ability to deliver consistently with high quality
  • Testing performance and functionality of new components developed by Engineering
  • Writing or editing documentation and knowledge base articles, including reference architecture materials and design patterns based on customer experiences
  • Honing your skills, building applications, or trying out new product features
  • Participating in community and industry events

Requirements

  • Deep experience designing, building, and operating in-production Big Data, stream processing, and/or enterprise data integration solutions, ideally using Apache Kafka
  • Demonstrated experience successfully managing multiple B2B infrastructure software development projects, including driving expansion, customer satisfaction, feature adoption, and retention
  • Experience operating Linux (configure, tune, and troubleshoot both RedHat and Debian-based distributions)
  • Experience using cloud providers (Amazon Web Services, Google Cloud, Microsoft Azure) for running high-throughput systems
  • Experience with Java Virtual Machine (JVM) tuning and troubleshooting
  • Experience with distributed systems (Kafka, Hadoop, Cassandra, etc.)
  • Proficiency in Java
  • Strong desire to tackle hard technical problems, and proven ability to do so with little or no direct daily supervision
  • Excellent communication skills, with an ability to clearly and concisely explain tricky issues and complex solutions
  • Ability to quickly learn new technologies
  • Ability and willingness to travel up to 50% of the time to meet with customers

Bonus Points

  • Experience helping customers build Apache Kafka solutions alongside Hadoop technologies, relational and NoSQL databases, message queues, and related products
  • Experience with Scala, Python, or Go
  • Experience working with a commercial team and demonstrated business acumen
  • Experience working in a fast-paced technology start-up
  • Experience managing projects, using any known methodology to scope, manage, and deliver on plan no matter the complexity
  • Bachelor-level degree in computer science, engineering, mathematics, or another quantitative field


Come As You Are

At Confluent, equality is a core tenet of our culture. We are committed to building an inclusive global team that represents a variety of backgrounds, perspectives, beliefs, and experiences. The more diverse we are, the richer our community and the broader our impact.
Share this job:
Data Science Engineer
data science java python scala big data cloud Feb 05
Contrast Security is the world’s leading provider of security technology that enables software applications to protect themselves against cyber attacks. Contrast's patented deep security instrumentation is the breakthrough technology that enables highly accurate analysis and always-on protection of an entire application portfolio, without disruptive scanning or expensive security experts. Only Contrast has intelligent agents that work actively inside applications to prevent data breaches, defeat hackers and secure the entire enterprise from development, to operations, to production.

Our Application Security Research (Contrast Labs) team is hyper-focused on continuous vulnerability and threat research affecting the world's software ecosystem. As a Data Science Engineer on the Research team, you will be responsible for expanding and optimizing data from our real-time security intelligence platform, as well as optimizing data flow and collection for cross-functional teams.

The Data Science Engineer will support our research team, software developers, database architects, marketing associates, product team, and other areas of the company on data initiatives, and will ensure optimal data delivery architecture is consistent throughout ongoing projects. You must be self-directed and comfortable supporting the data needs of multiple teams, systems and products. The right candidate will be excited by the prospect of optimizing or even re-designing our company's data architecture to support our next generation of products and data initiatives. The role also presents an opportunity to contribute original research, as a data scientist, through data correlation.

The Data Science Engineer is responsible for supporting and contributing to Contrast’s growing original security research efforts relevant to the development communities associated with the Contrast Assess, Protect, and OSS platforms. Original research will be published in company blogs, papers and presentations.

If you're amazing but missing some of these, email us your résumé and cover letter anyway. Please include a link to your Github or BitBucket account, as well as any links to some of your projects if available.

Responsibilities

  • Conduct basic and applied research on important and challenging problems in data science as it relates to the problems Contrast is trying to solve.
  • Assemble large, complex data sets that meet functional / non-functional business requirements. 
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and big data technologies.
  • Build analytics tools that utilize the data pipeline to provide actionable insights into threats, vulnerabilities, customer usage, operational efficiency and other key business performance metrics.
  • Help define and drive data-driven research projects, either on your own or in collaboration with others on the team.
  • Engage with Contrast’s product teams and customers to promote and seek out new data science research initiatives.
  • Create data tools for analytics and research team members that assist them in building and optimizing our product into an innovative industry leader.
  • Apply advanced working knowledge of Structured Query Language (SQL), experience with relational databases, and query authoring across a variety of database systems.
  • Development and presentation of content associated with the research through conference speaking and/or blogging.

About You

  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  • Strong analytic skills related to working with unstructured datasets. 
  • Experience supporting and working with cross-functional teams in a dynamic environment.
  • You should also have experience using some of the following software/tools:
  • Big data tools: Hadoop, Spark, Kafka, etc.
  • Relational SQL and NoSQL databases, including MongoDB and MySQL.
  • Data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
  • AWS cloud services: EC2, EMR, RDS, Redshift
  • Stream-processing systems: Storm, Spark-Streaming, etc.
  • Object-oriented/functional scripting languages: Python, Java, C++, Scala, etc.
  • 5+ years of experience in a Data Science role
  • Strong project management and organizational skills.
  • Nice to have understanding of the OWASP Top 10 and SANS/CWE Top 25.
  • You ask questions, let others know when you need help, and tell others what you need.
  • Attained a minimum Graduate degree in Computer Science, Statistics, Informatics, Information Systems or another quantitative field. 

What We Offer

  • Competitive compensation
  • Daily team lunches (in office)
  • Meaningful stock options
  • Medical, dental, and vision benefits
  • Flexible paid time off 
By submitting your application, you are providing Personally Identifiable Information about yourself (cover letter, resume, references, or other employment-related information) and hereby give your consent for Contrast Security, and/ or our HR-related Service Providers, to use this information for the purpose of processing, evaluating and responding to your application for current and future career opportunities. Contrast Security is an equal opportunity employer and our team is comprised of individuals from many diverse backgrounds, lifestyles and locations. 

The California Consumer Privacy Act of 2018 (“CCPA”) will go into effect on January 1, 2020. Under CCPA, businesses must be overtly transparent about the personal information they collect, use, and store on California residents. CCPA also gives employees, applicants, independent contractors, emergency contacts and dependents (“CA Employee”) new rights to privacy.

In connection with your role here at Contrast, we collect information that identifies, reasonably relates to, or describes you (“Personal Information”). The categories of Personal Information that we collect, use or store include your name, government-issued identification number(s), email address, mailing address, emergency contact information, employment history, educational history, criminal record, demographic information, and other electronic network activity information by way of mobile device management on your Contrast-issued equipment. We collect and use those categories of Personal Information (the majority of which is provided by you) about you for human resources and other business-driven purposes, including evaluating your performance here at Contrast, evaluating you as a candidate for promotion within Contrast, managing compensation (including payroll and benefits), record keeping in relation to recruiting and hiring, conducting background checks as permitted by law, and ensuring compliance with applicable legal requirements for Contrast. We collect, use and store the minimal amount of information possible.

We also collect Personal Information in connection with your application for benefits. In addition to the above, Personal Information also identifies those on behalf of whom you apply for benefits. During your application for benefits, the categories of Personal Information that we collect include name, government-issued identification number(s), email address, mailing address, emergency contact information, and demographic information. We collect and use those categories of Personal Information for administering the benefits for which you are applying and ensuring compliance with applicable legal requirements and Contrast policies.
As a California resident, you are entitled to certain rights under CCPA:

-You have the right to know what personal information we have collected from you as a California employee;
-You have the right to know what personal information is sold or disclosed and to whom. That said, we do not sell your information. We do, however, disclose information to third parties in connection with the management of payroll, employee benefits, etc., to fulfill our obligations to you as an employee of Contrast. Each of those third parties has been served with a Notice to Comply with CCPA or has entered into a CCPA Addendum with Contrast that precludes them from selling your information;
-You have the right to opt out of the sale of your personal information. Again, we do not sell it, but you might want to be aware of that right as a "consumer" in California with respect to other businesses; and
-You have the right to be free from retaliation for exercising any of these rights

If you have any questions, please let us know!
Share this job:
Data Visualization Engineer
data science machine learning big data linux mysql backend Jan 31
We are looking for a dynamic and talented Data Visualization Engineer who has a passion for data and for using cutting-edge tools and data-based insights to turn their vision and ability into results and actionable solutions for our clients. The successful candidate will leverage their talents and skills to design, develop and implement graphical representations of information and data by using visual elements like charts, graphs, and maps, and a variety of data visualization tools. You will own, architect, design, and implement a data visualization platform that leverages big data, data warehouses, data visualization suites, and cutting-edge open source technologies. You will drive the vision of our big data visualization platform, which is scalable, interactive, and real-time, to support our state-of-the-art data processing framework for our geospatial-oriented platform. You must have a proven ability to drive results with your data-based insights. The right candidate will have a passion for discovering solutions hidden in large datasets and for working with stakeholders to improve mission outcomes. Do you want to take your ideas and concepts into real-life mission-critical solutions? Do you want to work with the latest bleeding-edge technology? Do you want to work with a dynamic, world-class team of engineers, while learning and developing your skills and your career? You can do all those things at Prominent Edge! 

We are a small company of 24+ developers and designers who put themselves in the shoes of our customers and make sure we deliver strong solutions. Our projects and the needs of our customers vary greatly; therefore, we always choose the technology stack and approach that best suits the particular problem and the goals of our customers. As a result, we want developers who do high-quality work, stay current, and are up for learning and applying new technologies when appropriate. We want engineers who have an in-depth knowledge of Amazon Web Services and are up for using other infrastructures when needed. We understand that for our team to perform at its best, everyone needs to work on tasks that they enjoy. Most of our projects are web applications, which often have a geospatial aspect to them. We also really take care of our employees as demonstrated in our exceptional benefits package. Check out our website at https://prominentedge.com for more information.

Required Skills:

  • A successful candidate will have experience in many (if not all) of the following technical competencies including: data visualization, data engineering, data science, statistics and machine learning, coding languages, databases, and reporting technologies.
  • Ability to design, develop, and implement graphical representations of information and data, using visual elements like charts, graphs, and maps, and a variety of data visualization tools.
  • At least 5 years of experience in data engineering, data science, and/or data visualization.
  • Design and develop ETL and storage for the new big data platform with open source technologies such as Kafka/RabbitMQ/Redis, Spark, Presto, Splunk.
  • Create insightful visualizations with dashboarding and charting tools such as Kibana, Plotly, Matplotlib, Grafana, or Tableau.
  • Strong proficiency with a backend database such as Postgres, MySQL, and/or familiarity with NoSQL databases such as Cassandra, DynamoDB or MongoDB.
  • Strong background in scripting languages.
  • Capable of working in a linux server environment.
  • Experience or interest in working on multiple projects with multiple product teams.
  • Excellent verbal and written communication skills along with the ability to present technical data and enjoys working with both technical and non-technical audiences.
  • Bachelor's Degree in Computer Science, Data Science, Machine Learning, AI or related field or equivalent experience.
  • Current U.S. security clearance, or ability to obtain a U.S. security clearance.

Desired skills:

  • Master's Degree or PhD. in Computer Science, Data Science, Machine Learning, AI or related field is a plus.

W2 Benefits:

  • Not only do you get to join our team of awesome, playful ninjas, we also have great benefits:
  • Six weeks paid time off per year (PTO+Holidays).
  • Six percent 401k matching, vested immediately.
  • Free PPO/POS healthcare for the entire family.
  • We pay you for every hour you work. Need something extra? Give yourself a raise by doing more hours when you can.
  • Want to take time off without using vacation time? Shuffle your hours around in any pay period.
  • Want a new MacBook Pro laptop? We'll get you one. If you like your MacBook Pro, we’ll buy you the new version whenever you want.
  • Want some training or to travel to a conference that is relevant to your job? We offer that too!
  • This organization participates in E-Verify.

Share this job:
Consulting Engineer
java python scala big data linux azure Jan 17
Dubbed an "open-source unicorn" by Forbes, Confluent is the fastest-growing enterprise subscription company our investors have ever seen. And how are we growing so fast? By pioneering a new technology category with an event streaming platform, which enables companies to leverage their data as a continually updating stream of events, not as static snapshots. This innovation has led Sequoia Capital, Benchmark, and Index Ventures to recently invest a combined $125 million in our Series D financing. Our product has been adopted by Fortune 100 customers across all industries, and we’re being led by the best in the space—our founders were the original creators of Apache Kafka®. We’re looking for talented and amazing team players who want to accelerate our growth, while doing some of the best work of their careers. Join us as we build the next transformative technology platform!

Consulting Engineers drive customer success by helping them realize business value from the burgeoning flow of real-time data streams in their organizations. In this role you’ll interact directly with our customers to provide software, development and operations expertise, leveraging deep knowledge of best practices in the use of Apache Kafka, the broader Confluent Platform, and complementary systems like Hadoop, Spark, Storm, relational databases, and various NoSQL databases.  

Throughout all of these interactions, you’ll build strong relationships with customers, ensure exemplary delivery standards, and have a lot of fun building state-of-the-art streaming data infrastructure alongside colleagues who are widely recognized as leaders in this space.

Promoting Confluent and our amazing team to the community and wider public audience is something we invite all our employees to take part in.  This can be in the form of writing blog posts, speaking at meetups and well known industry events about use cases and best practices, or as simple as releasing code.

While Confluent is headquartered in Palo Alto, you can work remotely from any location on the East Coast of the United States as long as you are able to travel to client engagements as needed

A typical week at Confluent in this role may involve:

  • Preparing for an upcoming engagement, discussing the goals and expectations with the customer and preparing an agenda
  • Researching best practices or components required for the engagement
  • Delivering an engagement on-site, working with the customer’s architects and developers in a workshop environment
  • Producing and delivering the post-engagement report to the customer
  • Developing applications on the Confluent Platform
  • Deploying, augmenting, and upgrading Kafka clusters
  • Building tooling for another team and the wider company
  • Testing performance and functionality of new components developed by Engineering
  • Writing or editing documentation and knowledge base articles
  • Honing your skills, building applications, or trying out new product features

Required Skills:

  • Deep experience building and operating in-production Big Data, stream processing, and/or enterprise data integration solutions using Apache Kafka
  • Experience operating Linux (configure, tune, and troubleshoot both RedHat and Debian-based distributions)
  • Experience with Java Virtual Machine (JVM) tuning and troubleshooting
  • Experience with distributed systems (Kafka, Hadoop, Cassandra, etc.)
  • Proficiency in Java
  • Excellent communication skills, with an ability to clearly and concisely explain tricky issues and complex solutions
  • Ability and willingness to travel up to 50% of the time to meet with customers
  • Bachelor-level degree in computer science, engineering, mathematics, or another quantitative field

Nice to have:

  • Experience using Amazon Web Services, Azure, and/or GCP for running high-throughput systems
  • Experience helping customers build Apache Kafka solutions alongside Hadoop technologies, relational and NoSQL databases, message queues, and related products
  • Experience with Python, Scala, or Go
  • Experience with configuration and management tools such as Ansible, Terraform, Puppet, or Chef
  • Experience writing to network-based APIs (preferably REST/JSON or XML/SOAP)
  • Knowledge of enterprise security practices and solutions, such as LDAP and/or Kerberos
  • Experience working with a commercial team and demonstrated business acumen
  • Experience working in a fast-paced technology start-up
  • Experience managing projects, using any known methodology to scope, manage, and deliver on plan no matter the complexity
Come As You Are

At Confluent, equality is a core tenet of our culture. We are committed to building an inclusive global team that represents a variety of backgrounds, perspectives, beliefs, and experiences. The more diverse we are, the richer our community and the broader our impact.
Share this job:
Senior Back End DevOps Engineer
aws security kubernetes shell python devops Jan 16

As more companies adopt public cloud infrastructure and cyber attacks grow in sophistication and harm, the ability to safeguard companies from these threats has never been more urgent.

Lacework’s novel approach to security fundamentally converts cyber security into a big data problem. They are a Silicon Valley startup that applies large-scale data mining and machine learning to public cloud security. Within a cloud environment (AWS, GCP, Azure), their technology captures all communication between processes, users, and external machines, and uses advanced data analytics and machine learning techniques to detect anomalies that indicate potential security threats and vulnerabilities. The company is led by an experienced team who have built large-scale systems at Google, ParAccel (Amazon Redshift), Pure Storage, Oracle, and Juniper Networks. Lacework is well funded by a tier-one VC firm and is based in San Jose, CA.

They are looking for a Senior DevOps engineer with strong AWS and Kubernetes experience who is excited about building an industry leading, next generation Cloud Security System.

You will be a part of the team that architects, designs, and implements highly scalable distributed systems that provide availability, scalability, and performance guarantees. This is a rare opportunity to get in on the ground floor and help shape their technologies, products, and business.

Roles/Responsibilities

  • Assist in managing Technical Operations, Site Reliability, production operations, and engineering environments
  • Run production operations for their SaaS product
    • Manage the monitoring system
    • Debug live production issues
    • Manage software release roll-outs
  • Use your engineering skills to promote platform scalability, reliability, manageability, and cost efficiency
  • Work with the engineering and QA teams to provide valuable feedback on how to improve the product
  • Participate in on-call rotations (though there won’t be much to do, since you will automate everything!)

Requirements:

  • 4+ years of relevant experience (Technical Operation, SRE, System Administration)
  • AWS experience 
  • Strong scripting skills in Shell and/or Python
  • Eager to learn new technologies
  • Ability to define and follow procedures
  • Great communication skills
  • Computer Science degree 
Share this job:
Principal Product Manager - Couchbase Server, Analytics
 
manager product manager big data cloud Jan 15
Forbes listed Couchbase as one of the market’s next billion dollar 'Unicorns' and the Couchbase NoSQL data platform is widely used by leading enterprises to power their business-critical applications.
 
We are looking for a Principal Product Manager with a strong technical background in database systems – someone with product management experience and a good understanding of mission-critical transactional and analytical use cases.
 
As the Principal Product Manager for Couchbase Analytics, you will define the product roadmap and requirements for our industry-leading NoSQL data platform. You will also work with our marketing team to position and generate awareness for our platform, and enable the field teams to help our customers successfully deploy our solutions.

Responsibilities

  • Drive Couchbase Analytics strategy and roadmap including recommendations on tools, vendors/partners, and technologies
  • Engage with customers to understand their use cases and requirements, influence and develop the product roadmap and identify high-value integrations with the broader analytics ecosystem
  • Work with the engineering teams to prioritize and drive feature specifications from concept to general availability
  • Work with internal functional groups (engineering, marketing, support, sales, etc.) as well as customers and partners to drive feature priorities, product releases and customer engagements
  • Contribute to internal and external product-related content like sales collateral, feature blogs and documentation
  • Define and track KPIs to measure success of product launches and new features

Requirements

  • 5+ years of experience in the information management industry
  • 5+ years of focused product management experience in a fast-paced technology company
  • BS or MS degree in Computer Science
  • Database domain expertise is a must, with deep knowledge of the database landscape
  • Experience with database and analytics systems, including BI and ML tools
  • Critical thinker with a strong bias towards action, able to go deep into technology and relate technical enhancements to customer use cases
  • Track record of exceptional performance and collaborative teamwork
  • Clear, crisp communicator with strong written, oral, and presentation skills
  • Experience in a start-up environment a strong plus
About Couchbase

Couchbase's mission is to be the platform that accelerates application innovation. To make this possible, Couchbase created an enterprise-class, multi-cloud NoSQL database architected on top of an open source foundation. Couchbase is the only database that combines the best of NoSQL with the power and familiarity of SQL, all in a single, elegant platform spanning from any cloud to the edge.  
 
Couchbase has become pervasive in our everyday lives; our customers include industry leaders Amadeus, AT&T, BD (Becton, Dickinson and Company), Carrefour, Comcast, Disney, DreamWorks Animation, eBay, Marriott, Neiman Marcus, Tesco, Tommy Hilfiger, United, Verizon, Wells Fargo, as well as hundreds of other household names.

Couchbase’s HQ is conveniently located in Santa Clara, CA with additional offices throughout the globe. We’re committed to a work environment where you can be happy and thrive, in and out of the office.

At Couchbase, you’ll get:
* A fantastic culture
* A focused, energetic team with aligned goals
* True collaboration with everyone playing their positions
* Great market opportunity and growth potential
* Time off when you need it
* Regular team lunches and fully-stocked kitchens
* Open, collaborative spaces
* Competitive benefits and pre-tax commuter perks

Whether you’re a new grad or a proven expert, you’ll have the opportunity to learn new skills, grow your career, and work with the smartest, most passionate people in the industry.

Revolutionizing an industry requires a top-notch team. Become a part of ours today. Bring your big ideas and we'll take on the next great challenge together.


Want to learn more? Check out our blog: https://blog.couchbase.com/

Couchbase is proud to be an equal opportunity workplace. Individuals seeking employment at Couchbase are considered without regard to age, ancestry, color, gender (including pregnancy, childbirth, or related medical conditions), gender identity or expression, genetic information, marital status, medical condition, mental or physical disability, national origin, protected family care or medical leave status, race, religion (including beliefs and practices or the absence thereof), sexual orientation, military or veteran status, or any other characteristic protected by federal, state, or local laws.
Share this job:
Big Data Engineer
Infiot  
java sql bigdata big data python linux Jan 08

We are looking for a Software Engineer to work with us primarily in the data ingestion pipelines and analytics databases of our cloud platform. The qualified candidate will join a team of full stack engineers who work in front end, back end and devops initiatives needed to accomplish the Infiot vision. 

The ideal candidate would have most of the following qualifications. We will also consider candidates who have only some of them but are interested in developing this skillset.

  • Experience with scalable cloud native multi-tenant architectures
  • Fluency in Java
  • Experience with HTTP based APIs (REST or GraphQL)
  • Experience with big data frameworks (Apache Beam, Spark, etc.)
  • Experience with SQL and NoSQL databases, with a focus on scale
  • Some Linux command-line, Python, and make experience
  • Ability to work in a team setting
  • Passion for automation
  • Passion for personal productivity improvement
  • Passion for quality and customer satisfaction
  • Passion for development driven testing
  • MS/PhD in Computer Science or equivalent knowledge/experience
Share this job:
Senior Software Engineer, Data Pipeline
java scala go elasticsearch apache-spark senior Dec 31 2019

About the Opportunity

The SecurityScorecard ratings platform helps enterprises across the globe manage the cyber security posture of their vendors. Our SaaS products have created a new category of enterprise software and our culture has helped us be recognized as one of the 10 hottest SaaS startups in NY for two years in a row. Our investors include both Sequoia and Google Ventures. We are scaling quickly but are ever mindful of our people and products as we grow.

As a Senior Software Engineer on the Data Pipeline Platform team, you will help us scale, support, and build the next-generation platform for our data pipelines. The team’s mission is to empower data scientists, software engineers, data engineers, and threat intelligence engineers to accelerate the ingestion of new data sources and present the data in a meaningful way to our clients.

What you will do:

  • Design and implement systems for ingesting, transforming, connecting, storing, and delivering data from a wide range of sources with various levels of complexity and scale
  • Enable other engineers to deliver value rapidly with minimum duplication of effort
  • Automate the data pipeline's supporting infrastructure as code and improve CI/CD pipelines for deployments
  • Monitor, troubleshoot, and improve the data platform to maintain stability and optimal performance

Who you are:

  • Bachelor's degree or higher in a quantitative/technical field such as Computer Science, Engineering, Math
  • 6+ years of software development experience
  • Exceptional skills in at least one high-level programming language (Java, Scala, Go, Python or equivalent)
  • Strong understanding of big data technologies such as Kafka, Spark, Storm, Cassandra, Elasticsearch
  • Experience with AWS services including S3, Redshift, EMR and RDS
  • Excellent communication skills to collaborate with cross functional partners and independently drive projects and decisions

What to Expect in Our Hiring Process:

  • Phone conversation with Talent Acquisition to learn more about your experience and career objectives
  • Technical phone interview with hiring manager
  • Video or in person interviews with 1-3 engineers
  • At-home technical assessment
  • Video or in person interview with engineering leadership
Share this job:
Senior Machine Learning - Series A Funded Startup
machine-learning scala python tensorflow apache-spark machine learning Dec 26 2019
About you:
  • Care deeply about democratizing access to data.  
  • Passionate about big data and are excited by seemingly-impossible challenges.
  • At least 80% of people who have worked with you put you in the top 10% of the people they have worked with.
  • You think life is too short to work with B-players.
  • You are entrepreneurial and want to work in a super fast-paced environment where the solutions aren’t already predefined.
About SafeGraph: 

  • SafeGraph is a B2B data company that sells to data scientists and machine learning engineers. 
  • SafeGraph's goal is to be the place for all information about physical Places.
  • SafeGraph currently has 20+ people and has raised a $20 million Series A.  CEO previously was founder and CEO of LiveRamp (NYSE:RAMP).
  • Company is growing fast, over $10M ARR, and is currently profitable. 
  • Company is based in San Francisco but about 50% of the team is remote (all in the U.S.). We get the entire company together in the same place every month.

About the role:
  • Core software engineer.
  • Reporting to SafeGraph's CTO.
  • Work as an individual contributor.  
  • Opportunities for future leadership.

Requirements:
  • You have at least 6 years of relevant work experience.
  • Deep understanding of machine learning models, data analysis, and both supervised and unsupervised learning methods. 
  • Proficiency writing production-quality code, preferably in Scala, Java, or Python.
  • Experience working with huge data sets. 
  • You are authorized to work in the U.S.
  • Excellent communication skills.
  • You are amazingly entrepreneurial.
  • You want to help build a massive company. 
Nice to haves:
  • Experience using Apache Spark to solve production-scale problems.
  • Experience with AWS.
  • Experience with building ML models from the ground up.
  • Python, Database and Systems Design, Scala, TensorFlow, Apache Spark, Hadoop MapReduce.
Share this job:
Senior Big Data Software Engineer
scala apache-spark python java hadoop big data Dec 23 2019
About you:
  • Care deeply about democratizing access to data.  
  • Passionate about big data and are excited by seemingly-impossible challenges.
  • At least 80% of people who have worked with you put you in the top 10% of the people they have worked with.
  • You think life is too short to work with B-players.
  • You are entrepreneurial and want to work in a super fast-paced environment where the solutions aren’t already predefined.
  • You live in the U.S. or Canada and are comfortable working remotely.
About SafeGraph: 

  • SafeGraph is a B2B data company that sells to data scientists and machine learning engineers. 
  • SafeGraph's goal is to be the place for all information about physical Places.
  • SafeGraph currently has 20+ people and has raised a $20 million Series A.  CEO previously was founder and CEO of LiveRamp (NYSE:RAMP).
  • Company is growing fast, over $10M ARR, and is currently profitable. 
  • Company is based in San Francisco but about 50% of the team is remote (all in the U.S.). We get the entire company together in the same place every month.

About the role:
  • Core software engineer.
  • Reporting to SafeGraph's CTO.
  • Work as an individual contributor.  
  • Opportunities for future leadership.

Requirements:
  • You have at least 6 years of relevant work experience.
  • Proficiency writing production-quality code, preferably in Scala, Java, or Python.
  • Strong familiarity with map/reduce programming models.
  • Deep understanding of all things “database” - schema design, optimization, scalability, etc.
  • You are authorized to work in the U.S.
  • Excellent communication skills.
  • You are amazingly entrepreneurial.
  • You want to help build a massive company. 
Nice to haves:
  • Experience using Apache Spark to solve production-scale problems.
  • Experience with AWS.
  • Experience with building ML models from the ground up.
  • Experience working with huge data sets.
  • Python, Database and Systems Design, Scala, Data Science, Apache Spark, Hadoop MapReduce.
Share this job:
Manager, Solutions Engineering - East
java big data linux cloud dot net Dec 12 2019
The Couchbase Solutions Engineering Manager leads a pre-sales engineering team through any and all sales engagements. This role directs the activities and goals of the Regional Sales Engineering team they are responsible for. 

The position works closely with their direct reports, Regional Sales Director, Regional Solutions Engineering Director, and territory-based Enterprise Sales Representatives to qualify prospective clients for Couchbase products and services within the assigned territory. Our Solutions Engineers are the primary technical field experts, responsible for actively driving and managing the technical part of a sales engagement. 

The Solutions Engineering Manager will provide guidance/mentoring to Solutions Engineers. The role requires a high degree of organization as it involves managing a team of 5-10 Solutions Engineers and multiple opportunities at once. In this exciting role, you will become an expert explaining NoSQL advantages, how Couchbase Server works and how it can be used to solve the customer’s problems; all with a good dose of getting customers excited about using this new approach for fast & scalable databases. 

The Solutions Engineering Manager is responsible for owning or guiding their team through the POC, RFI and/or RFP processes. The role will interface with many different types of Enterprise customers and their management teams. Ensuring the advancement of Solutions Engineers in their skills and knowledge to effectively compete in a global marketplace is an important aspect of this role. Additionally, a proven track record of success and demonstrated ability to effectively engage with sales teams are also key. 

Location: Eastern USA - Remote

Responsibilities

  • Hire and lead a world-class team focused on delivering a unique, differentiated customer experience

  • Identify technical and soft skill training needs, perform assessments, and provide feedback to direct reports. Handle escalations and address conflicts

  • Grow capability of overall team to deliver training and consulting to customers to grow their product adoption

  • Partner with the team to evaluate new technical solutions to meet or exceed prospect and customer requirements

  • Build development plans for team members to ensure successful on-boarding and continuing education of the team

  • Demonstrate to customers how to solve their problems and meet their requirements with Couchbase Server and get them excited about NoSQL database technology

  • Develop and maintain an expert understanding of all Couchbase products and services. Establish and continuously update best practices for technical customer engagements in the fast-paced world of NoSQL 

  • Work closely with the sales team on account strategy and identifying additional opportunities in existing accounts including strategizing digital transformation initiatives for customers

  • Balance the workload of the Solutions Engineering team in concert with input from the Enterprise Sales Executive and Regional Sales Leaders

  • Mentor Solutions Engineering teams by providing hands-on technical guidance for building solutions such as microservices, Linux, Kubernetes, cloud deployments, databases, and messaging systems for pre-sales engagements 

  • Ensure the success of customer POC / Pilots through effective management of acceptance criteria and issue escalation/resolution

  • Support and participate with the Solutions Engineers in performing advanced technical presentations for customers and prospects, remotely and in person

  • Develop and deliver exceptional company/product presentations and demonstrations to manage and maintain strong relationships with key customers

  • Work with all technical levels, from managers to architects and developers, to explain Couchbase Server and its uses

  • Be the technical product expert for customers and stay up to date on the NoSQL competitive landscape

  • Work with Product Management and Engineering to provide feedback from the field and represent the customer perspective as well as identify and write internal and external technical collateral

  • Establish, track, monitor and report on actionable metrics and KPIs for product adoption 

  • Represent Couchbase at conferences, industry, and sales events

Qualifications

  • 7+ years of experience serving in the capacity of a pre-sales engineer

  • Ability to teach other members of the team and effectively manage a team of highly skilled Sales Engineers 

  • Experience with traditional RDBMS including schema modeling, performance tuning and configuration

  • Proven ability to provide technical leadership to the account team and engineers

  • Hands-on administration and troubleshooting experience with x86 operating systems (Linux, Windows, Mac OS), networking and storage architectures

  • Familiarity with NoSQL databases or other distributed high-performance systems

  • Must be able to coordinate across various groups and functional teams

  • Ability to apply solutions, technology, and products to a business opportunity

  • Willingness to travel throughout the assigned region both by air and by car

Minimum Qualifications

  • Excellent communication and presentation skills with an ability to present technical solutions concisely to any audience

  • Experience engaging with developers and programming experience in at least one of the following: Java/.NET/PHP

  • Demonstrated passion for diving into technical issues and solving customer problems

  • Demonstrated critical thinking and advanced troubleshooting skills and qualities 

  • Ability to travel a minimum of 25% of the time is required
About Couchbase

Couchbase's mission is to be the platform that accelerates application innovation. To make this possible, Couchbase created an enterprise-class, multi-cloud NoSQL database architected on top of an open source foundation. Couchbase is the only database that combines the best of NoSQL with the power and familiarity of SQL, all in a single, elegant platform spanning from any cloud to the edge.  
 
Couchbase has become pervasive in our everyday lives; our customers include industry leaders Amadeus, AT&T, BD (Becton, Dickinson and Company), Carrefour, Comcast, Disney, DreamWorks Animation, eBay, Marriott, Neiman Marcus, Tesco, Tommy Hilfiger, United, Verizon, Wells Fargo, as well as hundreds of other household names.

Couchbase’s HQ is conveniently located in Santa Clara, CA with additional offices throughout the globe. We’re committed to a work environment where you can be happy and thrive, in and out of the office.

At Couchbase, you’ll get:
* A fantastic culture
* A focused, energetic team with aligned goals
* True collaboration with everyone playing their positions
* Great market opportunity and growth potential
* Time off when you need it
* Regular team lunches and fully-stocked kitchens
* Open, collaborative spaces
* Competitive benefits and pre-tax commuter perks

Whether you’re a new grad or a proven expert, you’ll have the opportunity to learn new skills, grow your career, and work with the smartest, most passionate people in the industry.

Revolutionizing an industry requires a top-notch team. Become a part of ours today. Bring your big ideas and we'll take on the next great challenge together.


Want to learn more? Check out our blog: https://blog.couchbase.com/

Couchbase is proud to be an equal opportunity workplace. Individuals seeking employment at Couchbase are considered without regard to age, ancestry, color, gender (including pregnancy, childbirth, or related medical conditions), gender identity or expression, genetic information, marital status, medical condition, mental or physical disability, national origin, protected family care or medical leave status, race, religion (including beliefs and practices or the absence thereof), sexual orientation, military or veteran status, or any other characteristic protected by federal, state, or local laws.
Share this job:
Manager, Solutions Engineering - West
java big data linux cloud dot net Dec 12 2019
The Couchbase Solutions Engineering Manager leads a pre-sales engineering team through any and all sales engagements. This role directs the activities and goals of the Regional Sales Engineering team they are responsible for. 

The position works closely with their direct reports, Regional Sales Director, Regional Solutions Engineering Director, and territory-based Enterprise Sales Representatives to qualify prospective clients for Couchbase products and services within the assigned territory. Our Solutions Engineers are the primary technical field experts, responsible for actively driving and managing the technical part of a sales engagement. 

The Solutions Engineering Manager will provide guidance/mentoring to Solutions Engineers. The role requires a high degree of organization as it involves managing a team of 5-10 Solutions Engineers and multiple opportunities at once. In this exciting role, you will become an expert explaining NoSQL advantages, how Couchbase Server works and how it can be used to solve the customer’s problems; all with a good dose of getting customers excited about using this new approach for fast & scalable databases. 

The Solutions Engineering Manager is responsible for owning or guiding their team through the POC, RFI and/or RFP processes. The role will interface with many different types of Enterprise customers and their management teams. Ensuring the advancement of Solutions Engineers in their skills and knowledge to effectively compete in a global marketplace is an important aspect of this role. Additionally, a proven track record of success and demonstrated ability to effectively engage with sales teams are also key. 

Location: Western USA, Remote

Responsibilities

  • Hire and lead a world-class team focused on delivering a unique, differentiated customer experience

  • Identify technical and soft skill training needs, perform assessments, and provide feedback to direct reports. Handle escalations and address conflicts

  • Grow capability of overall team to deliver training and consulting to customers to grow their product adoption

  • Partner with the team to evaluate new technical solutions to meet or exceed prospect and customer requirements

  • Build development plans for team members to ensure successful on-boarding and continuing education of the team

  • Demonstrate to customers how to solve their problems and meet their requirements with Couchbase Server and get them excited about NoSQL database technology

  • Develop and maintain an expert understanding of all Couchbase products and services. Establish and continuously update best practices for technical customer engagements in the fast-paced world of NoSQL 

  • Work closely with the sales team on account strategy and identifying additional opportunities in existing accounts including strategizing digital transformation initiatives for customers

  • Balance the workload of the Solutions Engineering team in concert with input from the Enterprise Sales Executive and Regional Sales Leaders

  • Mentor Solutions Engineering teams by providing hands-on technical guidance for building solutions such as microservices, Linux, Kubernetes, cloud deployments, databases, and messaging systems for pre-sales engagements 

  • Ensure the success of customer POC / Pilots through effective management of acceptance criteria and issue escalation/resolution

  • Support and participate with the Solutions Engineers in performing advanced technical presentations for customers and prospects, remotely and in person

  • Develop and deliver exceptional company/product presentations and demonstrations to manage and maintain strong relationships with key customers

  • Work with all technical levels, from managers to architects and developers, to explain Couchbase Server and its uses

  • Be the technical product expert for customers and stay up to date on the NoSQL competitive landscape

  • Work with Product Management and Engineering to provide feedback from the field and represent the customer perspective as well as identify and write internal and external technical collateral

  • Establish, track, monitor and report on actionable metrics and KPIs for product adoption 

  • Represent Couchbase at conferences, industry, and sales events

Qualifications

  • 7+ years of experience serving in the capacity of a pre-sales engineer

  • Ability to teach other members of the team and effectively manage a team of highly skilled Sales Engineers 

  • Experience with traditional RDBMS including schema modeling, performance tuning and configuration

  • Proven ability to provide technical leadership to the account team and engineers

  • Hands-on administration and troubleshooting experience with x86 operating systems (Linux, Windows, Mac OS), networking and storage architectures

  • Familiarity with NoSQL databases or other distributed high-performance systems

  • Must be able to coordinate across various groups and functional teams

  • Ability to apply solutions, technology, and products to a business opportunity

  • Willingness to travel throughout the assigned region both by air and by car

Minimum Qualifications

  • Excellent communication and presentation skills with an ability to present technical solutions concisely to any audience

  • Experience engaging with developers and programming experience in at least one of the following: Java/.NET/PHP

  • Demonstrated passion for diving into technical issues and solving customer problems

  • Demonstrated critical thinking and advanced troubleshooting skills and qualities

  • Ability to travel a minimum of 25% of the time is required
About Couchbase

Couchbase's mission is to be the platform that accelerates application innovation. To make this possible, Couchbase created an enterprise-class, multi-cloud NoSQL database architected on top of an open source foundation. Couchbase is the only database that combines the best of NoSQL with the power and familiarity of SQL, all in a single, elegant platform spanning from any cloud to the edge.  
 
Couchbase has become pervasive in our everyday lives; our customers include industry leaders Amadeus, AT&T, BD (Becton, Dickinson and Company), Carrefour, Comcast, Disney, DreamWorks Animation, eBay, Marriott, Neiman Marcus, Tesco, Tommy Hilfiger, United, Verizon, Wells Fargo, as well as hundreds of other household names.

Couchbase’s HQ is conveniently located in Santa Clara, CA with additional offices throughout the globe. We’re committed to a work environment where you can be happy and thrive, in and out of the office.

At Couchbase, you’ll get:
* A fantastic culture
* A focused, energetic team with aligned goals
* True collaboration with everyone playing their positions
* Great market opportunity and growth potential
* Time off when you need it
* Regular team lunches and fully-stocked kitchens
* Open, collaborative spaces
* Competitive benefits and pre-tax commuter perks

Whether you’re a new grad or a proven expert, you’ll have the opportunity to learn new skills, grow your career, and work with the smartest, most passionate people in the industry.

Revolutionizing an industry requires a top-notch team. Become a part of ours today. Bring your big ideas and we'll take on the next great challenge together.

Want to learn more? Check out our blog: https://blog.couchbase.com/

Couchbase is proud to be an equal opportunity workplace. Individuals seeking employment at Couchbase are considered without regard to age, ancestry, color, gender (including pregnancy, childbirth, or related medical conditions), gender identity or expression, genetic information, marital status, medical condition, mental or physical disability, national origin, protected family care or medical leave status, race, religion (including beliefs and practices or the absence thereof), sexual orientation, military or veteran status, or any other characteristic protected by federal, state, or local laws.
REMOTE Senior Big Data Engineers
Surge  
aws python big data senior Dec 08 2019

SURGE is looking for smart, self-motivated, experienced Senior Engineers who enjoy the freedom of telecommuting and flexible schedules, to work as long-term, consistent (40 hrs/week) independent contractors on a variety of software development projects.

Senior Big Data Engineers, Hadoop, AWS, Python

Must be located in the US or Canada to be considered for this role. Sorry, no visas.

For immediate consideration, email resume with tech stack under each job and include your full name, cell phone number, email address and start date to: jobs@surgeforward.com

Business Development Director - Head of Americas Partners
executive saas big data cloud Dec 05 2019
Couchbase is looking for an experienced Business Development/Partner Executive to successfully recruit and manage new partners in North America and South America. 

Couchbase is building out our Business Development Team and has an exciting role for someone to build relationships with Route to Market Partners in the Americas. Successful candidates will have experience working in a company selling applications, middleware, database, data warehouse, data integration technology or big/fast data technologies. Experience with open source and SaaS or enterprise subscription software is also a key requirement. In addition, the ideal candidate will be passionate about recruiting and managing Partners to drive new revenue streams. You will have a demonstrated ability to think strategically and will help define and build the partner model at Couchbase and most importantly, be partner sales focused.

The Director, Head of Americas Partners needs to be adept at working with multiple organizations within the company to accomplish these activities.

In collaboration with Sales, Sales Enablement and Marketing, your responsibilities will include:

  • Drive the route to market partner strategy for Couchbase.
  • Identify the most important Partners in the Americas Region, with a focus on the United States.
  • Lead the expansion of the partner program and prioritization.
  • Establish contractual relationships with these partners.
  • Enable the partners to deliver Couchbase products and services to Enterprise Customers.
  • Develop joint marketing activities (such as webinars, conferences, meetups, lunch-and-learns) with partners to build pipeline.
  • Enable joint sales activities at the field level.
  • Work with the field sales team to close partner sourced and influenced deals.
  • Manage the partner relationships across all these activities.

Desired Skills and Experience:

  • 10+ years of business development (partner) experience
  • Experience recruiting and building out new partner channels in a growth stage private company
  • Experience working in a company selling applications, middleware, database, data warehouse, data integration technology or big/fast data technologies. Experience with open source and enterprise subscription software is also highly desirable.
  • Experience with Global Systems Integrators, ISVs, Regional System Integrators, VARs/Resellers
  • Enterprise software sales experience
  • Excellent writing and presentation skills
  • Strong project management skills with a focus on building new relationships. Ability to think strategically, develop tactics and execute
  • Ability to influence and identify champions, both internally and externally
Software Engineer - .NET Platform Developer
Percona  
dot net java python scala php big data Dec 02 2019
If you like working with the developer community for an Engagement Database and being on the front lines of integrating our product into various technology stacks, this is for you. This is your chance to disrupt a multi-billion-dollar industry, change how the world accesses information, and reinvent the way businesses deliver amazing customer experiences. As a Software Engineer on the SDK and Connector engineering team, you'll work on the developer interface to Couchbase Server for JVM platform languages, including the Java SDK and future platforms like Scala and Kotlin, and contribute to connectors and frameworks such as Apache Spark and Spring Data. In your daily work, you will help the developer community innovate on top of our Engagement Database. You will have one of those rare positions of working with a market-leading product and an open source community of users and contributors. The skill set and expectations are…

Responsibilities

  • Take on key projects related to the development, enhancement and maintenance of Couchbase’s products built on the JVM platform core-io, including the Java SDK and new platforms we add. Create, enhance and maintain other JVM-related projects such as the Kotlin client, the Spring Data connector and others.
  • Contribute to the creation, enhancement and maintenance of documentation and samples that demonstrate how Java based languages and platforms work with Couchbase.
  • Create, enhance and maintain various documentation artifacts designed to make it easy for developers and system architects to quickly become productive with Couchbase.
  • Maintain, nurture and enhance contributions to the Couchbase community and forums.
  • Work with the growing community of developers who want to know how to build applications on Couchbase using Java, Kotlin, Spring, .NET, Node.js, PHP, Python and higher-level frameworks.

Qualifications

  • The right person for this role will be a self-motivated, independent, and highly productive individual with the ability to learn new technologies and become proficient quickly.
  • Must have a minimum of 5 years of software development experience in a professional software development organization.  Ideally, this would be working on platform level software.
  • Should be familiar with modern, reactive, asynchronous software development paradigms such as Reactor and Reactive Streams.
  • Should have experience with binary streaming wire protocols, such as those in Couchbase. Experience with data formats such as Apache Avro and streaming platforms such as Apache Kafka would be a plus.
  • Should be familiar with web application development beyond the Spring Framework, such as the Play Framework. The ideal candidate would have familiarity with web application or mobile integration development on at least one other platform, such as .NET or Java.
  • Must be familiar with consuming and producing RESTful interfaces.  May be familiar with GraphQL interfaces as well.
  • Would ideally be able to demonstrate experience in large-scale, distributed systems and understand the techniques involved in making these systems scale and perform.
  • Able to work in a fast-paced environment and be an outstanding team player.
  • Familiarity with distributed networked server systems that run cross-platform on Linux and Windows is highly desired.
  • Experience with Git and tools such as Atlassian JIRA and Jenkins CI is also strongly desired.
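The reactive, asynchronous paradigm named in the qualifications (Reactor, Reactive Streams) can be sketched with the JDK's built-in `java.util.concurrent.Flow` API, used here as a stand-in for Reactor. This is a minimal illustration, not Couchbase SDK code; the class name and sample items are made up:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class ReactiveSketch {
    // Subscribes to a publisher and collects transformed items.
    // Demand is signalled one item at a time, which is the backpressure
    // idea at the heart of Reactive Streams.
    static List<String> collect(List<String> source) {
        List<String> received = new CopyOnWriteArrayList<>();
        CountDownLatch done = new CountDownLatch(1);
        try (SubmissionPublisher<String> publisher = new SubmissionPublisher<>()) {
            publisher.subscribe(new Flow.Subscriber<String>() {
                private Flow.Subscription subscription;
                public void onSubscribe(Flow.Subscription s) {
                    subscription = s;
                    s.request(1);                // ask for exactly one item
                }
                public void onNext(String item) {
                    received.add(item.toUpperCase());
                    subscription.request(1);     // signal readiness for the next one
                }
                public void onError(Throwable t) { done.countDown(); }
                public void onComplete() { done.countDown(); }
            });
            source.forEach(publisher::submit);   // items are delivered asynchronously
        }                                        // close() completes the stream
        try {
            done.await();                        // wait for onComplete
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return received;
    }

    public static void main(String[] args) {
        System.out.println(collect(List.of("get", "upsert", "query")));
        // prints [GET, UPSERT, QUERY]
    }
}
```

Reactor's `Flux`/`Mono` types follow the same publisher/subscriber contract, adding a rich operator vocabulary on top of it.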
Senior DevOps Engineer
python ruby docker aws senior devops Nov 13 2019

Senior DevOps Engineer (Contract)

New Context is a rapidly growing consulting company in the heart of downtown San Francisco. We specialize in Lean Security: an approach that leads organizations to build better, safer software through hands-on technical and management consulting. We are a group of engineers who live and breathe Agile Infrastructure, Systems Automation, Cloud Orchestration, and Information & Application Security.

As a New Context Senior DevOps Engineer, you will provide technical leadership with a hands-on approach. Our clients look to us to guide them to a solution that makes sense for them, and you should expect to provide thought leadership and to design and implement that solution.

Expect to make heavy use of Open Source software to take on challenges like delivering highly secured containers, managing IoT devices, or building Big Data ecosystems at petabyte scale and beyond. You will apply our core methodologies (Agile, Lean, TDD, and Pair Programming), along with your fluency in DevOps, to implement robust and reliable systems for our clients.

You will work with our clients and other New Context team members while working from the New Context office, at client sites, or from your home. We foster a tight-knit, highly-supportive environment where there are no stupid questions. Even if you may not know the answer immediately, you'll have the entire company supporting you via Slack, Zoom, or in-person. We also host a daily, all-company stand-up via Zoom, and a weekly company Retro, so you won't just be a name on an email.

At New Context, our core values are Humility, Integrity, Quality & Passion! Our employees live these values every single day.

Who you are:

  • A seasoned technologist with 5+ years work experience in a DevOps, SRE, or Continuous Integration role;
  • Experienced in Open Source web technologies, especially in the areas of highly-available, secure systems;
  • Accustomed to implementing cloud-based solutions (AWS, Google Cloud, Azure) with significant work experience in public cloud technologies;
  • Have developed production-quality applications in an Agile environment;
  • Fluent in one or more high-level languages, ideally Ruby and/or Python;
  • Familiar with Infrastructure as Code (IaC) and automated server provisioning technologies;
  • Experienced as a technical lead on technical projects;
  • An excellent communicator, experienced working with external clients and customers and able to communicate productively with customers to explain technical aspects and project status;
  • Able to think on your feet and learn quickly on-the-job in order to meet the expectations of our clients;
  • A great teammate and a creative and independent thinker.

Bonus points if you are:

  • Comfortable as a technically hands-on Project Manager;
  • Experienced managing teams;
  • Happy and effective in a consulting role;
  • Familiar with: TCP/IP, firewall policy design, social engineering, intrusion detection, code auditing, forensic analysis;
  • A believer in automated tests and their role in software engineering;
  • Able to translate complex concepts to business customers

Technology we use:

We tailor solutions to our customers. You might work on projects using any of the following technologies:

  • Automation: Chef, Puppet, Docker, Ansible, Salt, Terraform, Automated Testing
  • Containerization Ecosystem: Docker, Mesosphere, Rancher, CoreOS, Kubernetes
  • Cloud & Virtualization: AWS, Google Compute Engine, OpenStack, Cloudstack, kvm, libvirt
  • Tools: Jenkins, Atlassian Suite, Pivotal Tracker, Vagrant, Git, Packer
  • Monitoring: SysDig, DataDog, AppDynamics, New Relic, Sentry, Nagios, Prometheus
  • Databases/Datastores: Cassandra, Hadoop, Redis, PostgreSQL, MySQL
  • Security: Compliance standards, Application Security, Firewalls, OSSEC, Hashicorp Vault
  • Languages: Ruby, Python, Go, JavaScript

All applicants must be authorized to work in the U.S. We will not sponsor visas for this position.

We are committed to equal-employment principles, and we recognize the value of committed employees who feel they are being treated in an equitable and professional manner. We are passionate about finding ways to attract, develop and retain the talent and unique viewpoints needed to meet business objectives, and to recruit and employ highly qualified individuals representing the diverse communities in which we live, because we believe that this diversity results in conversations which stimulate new and innovative ideas.

Employment policies and decisions on employment and promotion are based on merit, qualifications, performance, and business needs. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.
