
We're looking for a graduate data scientist to join our data team.
The Data Team
The data team is composed of Data Scientists and Data/Software Engineers at the forefront of generating value from a rich repository of data sources. Its function is critical to the day-to-day running of the business, and it also spearheads the development of novel ideas to exploit new commercial opportunities.
You can expect to work on:
- The Machine Learning based transaction categorisation engine that underpins the budgeting capabilities of the app and affordability checking, e.g. for mortgages.
- Analysis of the characteristics of the user base, i.e. user segmentation, to support business decisions. For instance, what’s the current financial situation of the users of the app, what are their goals and how should the product be developed to aid the user in their journey?
- The development of targeted nudges based on current and historical data to ensure financial wellness. As an example, a well known principle is that users should spend 50% of their net income on essential expenses, 30% on wants and 20% on savings & investments; you could be developing techniques to measure how well someone aligns with this viewpoint and helping the user achieve a state of financial zen.
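The 50/30/20 principle mentioned above is simple arithmetic, and a minimal alignment check can be sketched in a few lines of Python (the team's stated language). The function name, the category labels, and the scoring approach are illustrative assumptions, not the actual engine described in the role:

```python
# Illustrative sketch of a 50/30/20 budget-alignment score.
# Categories and scoring are assumptions for this example only.

TARGETS = {"essentials": 0.50, "wants": 0.30, "savings": 0.20}

def budget_alignment(spend_by_category: dict, net_income: float) -> float:
    """Return a score in [0, 1]; 1.0 means spending exactly matches 50/30/20."""
    total_deviation = sum(
        abs(spend_by_category.get(category, 0.0) / net_income - target)
        for category, target in TARGETS.items()
    )
    # The worst case puts all spending mass in the wrong buckets,
    # giving a total deviation of 2.0, so normalise by that.
    return max(0.0, 1.0 - total_deviation / 2.0)

score = budget_alignment(
    {"essentials": 1200.0, "wants": 900.0, "savings": 300.0},
    net_income=2400.0,
)
# score is roughly 0.925: wants are over target, savings under.
```

A production version would of course sit on top of the categorisation engine's output rather than a hand-built dictionary.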
Requirements
You should:
- Have 3 or more years of experience as a Data Scientist.
- Be fluent writing code in Python and queries in SQL. We use Jupyter notebooks, Python and Pandas frequently for analysis.
- Understand a wide range of classification/regression algorithms and have applied them using libraries such as scikit-learn.
- Have implemented regression analysis and understand how to interpret the results. More generally, you know how to apply appropriate statistical techniques to assess the significance of results.
- Be able to take raw data sets and build narratives around insights you discover; presenting results in a way a non-technical audience can understand and act on.
- Appreciate the importance of data cleaning and have previously applied a range of approaches to achieve this.
- Have a bachelor's or master’s degree in a numerical or engineering subject such as Data Science, Mathematics or Computer Science.
Benefits
We champion flexibility, and we trust and respect our employees to deliver results in a way that best suits them, working around their own lives and commitments. As well as a truly flexible approach, we also offer a fantastic range of benefits, including:
- 10% contribution towards your Pension from your very first day with us;
- 25 days of holiday (plus bank hols), rising to 30 days after two years;
- the option to take your UK bank holiday entitlement at other times more appropriate to your own celebrations or days of significance;
- private medical insurance, including cover for pre-existing conditions, plus dental and optical benefit;
- 6 week Moneyhubber Family Pay when you become a new parent;
- permanent health insurance and life cover (death in service);
- employee assistance programme;
- professional development support;
- life event leave;
- holiday purchase and more.
We embrace flexible working, including remote working on UK-hours (and can help with the cost of setting up your home office if needed). If you prefer to come in sometimes, we have an office in the centre of Bristol with all of the usual facilities. On occasion, we like to get together in person - either at HQ in Bristol or elsewhere - and would like you to join us for these sessions if your personal circumstances allow.
Interview Process
We are building amazing and diverse teams. We always strive to create a fair selection process that gives every candidate an equal chance. CVs are anonymised prior to screening, and the process itself is designed so that we can be consistent when evaluating candidates and minimise the effects of unconscious bias.
We have the following interview stages:
- (Optional) initial 30 minute interview for you to gain a better understanding of the role, and the culture and benefits we offer at Moneyhub. This is usually with the Lead Data Scientist.
- Take-home technical exercise relevant to the role, carried out in your own time (1-3 hours).
- Final interview with members of the team you are most likely to be working with, to go through your technical exercise and in depth on the areas most relevant to this role.
We want you to finish the process feeling that you were able to show yourself at your best, so at each stage we will try to make sure you are at ease and as comfortable as possible. If you are concerned by any part of the process, please let us know and we can adjust accordingly.
Adjustments
If you require any reasonable adjustments to do this role, please include details in your application.

Who is Current Health?

Current Health is a leader in the movement to transition more healthcare into the home. We’re a technology company, but beyond our best-in-class hardware and software capabilities, we help healthcare organisations operationalise care at home and deliver excellent patient experiences through helping with the last mile of care logistics and services. We partner with leading healthcare organisations to advance home-based care in the US, the UK, and around the world.
More than doubling in size in the last year, we’re building a global team united in one purpose: helping people live healthier, longer lives by predicting disease and enabling earlier treatment. At Current Health, we seek to put patients first, solve problems with curiosity and creativity, pursue uncommon collaboration, and never stop growing. We’re a remote-first company, which means that everything from how we work, communicate, and socialise is shaped by tools and culture that prioritise collaboration and support independence.
What does a Data Engineer at Current Health do?
As a specialist in data engineering, you will help us build a new data platform and associated data products to meet new challenges as we grow as a business and gain increasing numbers of customers and use-cases. As well as helping support our services across our software platform and applications, you will also help support and evolve our existing data products and provide analytical support to external clients and Current Health teams.
We are currently building a new data platform that makes all our data accessible in one place, together with data products built on top of it, allowing our engineers, data scientists, bio-medical engineers and clinical researchers to perform analytics that help us drive patient engagement, clinical impact, and client growth.
How we work as a team
We would describe ourselves as self-organising and collaborative, with a continuous improvement mindset. We value knowledge sharing and making decisions as a team, regardless of role or level of seniority. We encourage collaboration through practices such as pair programming and regular knowledge-sharing sessions. We seek to continuously improve as a team and to support each other's growth.
Requirements
We want to hear from you if…
You meet some or many of the points below. This is some of the experience we would be looking for to help you be successful in our team:
- Experience in real-time stream or batch processing.
- Designing and implementing systems with scale in mind.
- Deep knowledge of at least one modern programming language (we use Python and Java).
- Proficiency in data manipulation using e.g. SQL, Spark, pandas or similar.
- An eagerness to learn new languages and technologies as required.
- Writing code with a focus on testability.
- Confidence reviewing and troubleshooting code.
- Experience implementing solutions in a cloud computing and networking environment (we use AWS along with Terraform/IaC).
- Collaborating outside your own team. We work closely with data scientists, product teams and other consumers of data products.
Our tech stack:
- Querying and analytics: Python, SQL, dbt, Databricks, AWS Athena, AWS Glue
- Orchestration: Airflow
- Storage: PostgreSQL (RDS), Redshift, DynamoDB, AWS S3
- Monitoring: DataDog, CloudWatch
Benefits
- Spec your own environment
- Competitive bonus scheme
- Salary Exchange Pension scheme (5% employee, 3% employer contribution)
- Private Medical Insurance through Vitality
- 2 x Life Assurance cover
- Critical Illness cover
- Employee Assistance Program
- £10 pcm flex pot to use toward benefits in our Benni benefits portal
- On call allowance (Only payable if and for so long as you provide on call services)
- Flexible, autonomous working environment
- Bike to work scheme
- Give as you earn through payroll


Location: Remote, US
Data Specialist (Remote)
Job Category: Operations
Requisition Number: DATAS002077
Full-Time
Remote United States
Job Details
Description
At Truckstop.com, we have transformed the entire freight-moving lifecycle with our SaaS solutions. From freight matching to payments and everything in between, we are the trusted partner for carriers, brokers, and shippers alike.
We lead this industry forward with our One Team mindset committing to principles such as assume positive intent, have each other’s back, and be your authentic self. Our drive for greatness produces high expectations, yet our regard for humans is even higher.
Join a team of brilliant minds and generous hearts who care deeply about others' success.
Position Summary:
The Data Specialist will examine certificates of insurance, analyze existing information and, if needed, update certificates of insurance. This position is critical to ensuring that we are providing accurate information to our clients so they can make effective decisions when hiring a carrier. In addition, this data is also crucial to the carrier to be eligible for hire.
Essential Job Functions:
- Analyze the information listed on the certificate of insurance to determine if the new certificate will need to be updated
- Validate the certificate is attached to the proper Insured in our system
- Identify, log, and correct certificates attached to the incorrect Insured
- Prioritize and process client certificates; some may require rush processing
- Accurately input data from each certificate into our database
- Request revised certificate from insurance agency if necessary
- Research internal notes within our system prior to taking action on verifications
- Call insurance producer for verification
- Compare information on FMCSA, SAFER, and Intrastate websites to the information on the certificate/s to determine that our database has the most accurate data
- Provide feedback and suggestions for improving the team's performance
- Process cancellations and reinstatement notices
- Meet daily quota with a high degree of accuracy
- Verify daily work to ensure accuracy
Position Requirements:
- High school diploma or equivalent
- Experience with commercial insurance is preferred
- Effective problem-solving skills
- Ability to multi-task and learn quickly
- Excellent written and verbal communication skills and the ability to give and receive feedback
- Ability to perform repetitive tasks with a high degree of accuracy
- Comfortable working independently with little supervision
- Demonstrate effective time management and dependability
- Support a team environment, focused on the greater good and eager to pitch in to help when needed
The above description covers the most significant duties performed but does not include other related occasional work that may be assigned or is completed by the employee.
Truckstop.com provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, or local laws.
POINT Biopharma is an innovative, high-performance company focused on a hard problem: creating new cancer therapies that are more effective and have fewer side effects than traditional treatments. Our products are targeted radioligands, a technology that holds significant promise to treat cancer. Our company is growing rapidly: we just announced a phase 3 clinical trial for our novel radioligand-based prostate cancer treatment, established our U.S. manufacturing center in Indianapolis, and have a pipeline of next generation oncology products in development. We are a creative and agile team accountable to a high standard: to be uniquely better. If you are searching for a new and uniquely better career path, you should consider joining the POINT team.
TO BE CONSIDERED, APPLICANTS MUST BE LEGALLY ENTITLED TO WORK IN THE USA.
Location of work
The position will be remote, with travel anticipated for team and client meetings. Currently, our employees mostly work remotely and are located within the Eastern Time Zone. To work in synergy, it is preferred that applicants also work within the Eastern Time Zone to accommodate team and client meetings.
Reporting to the Senior Director, Statistics, the Associate Director will provide expert-level, hands-on technical support to all clinical development programs at POINT. The Associate Director will apply statistical expertise to the design, analysis, and interpretation of clinical trial programs; the submission and defense of regulatory filings; and the generation of evidence to support product commercialization and reimbursement. This position will work with cross-functional teams on multiple clinical development programs, health authority submissions, presentations, and publications across internal and external stakeholders.
Objectives
- Apply expert skills to investigate and use novel, statistical approaches for relevant statistical issues. This includes application of modelling and simulation to improve study design, and identification of opportunities to improve statistical methodology and/or provide practical solutions for problems.
- Conduct analyses of internal and external information to inform design decisions and the development of decision criteria.
- Author study Statistical Analysis Plans (SAPs), including table, figure, and listing (TFL) shells.
- Collaborate with statistical programming and other functions to ensure timely delivery of high-quality TFLs.
- Interpret study results and review statistical summary reports of study results.
- Play a critical role in the development and review of the study synopsis, protocol, statistical analysis plan, clinical study report, and other regulatory submission documents, ensuring accurate and statistically valid deliverables.
- Contribute to the development of best practice to improve quality, efficiency and effectiveness.
Requirements
- PhD in statistics or biostatistics with at least 7 years of relevant experience or MS in statistics or biostatistics with at least 10 years of relevant experience.
- Experience in using or sound understanding of the value of Bayesian methods in scientific research preferred.
- Prior experience providing statistical support to early phase clinical studies preferred.
- Strong computational skills in R or SAS or other statistical applied software systems.
- Experience in oncology drug development within a pharmaceutical or biotechnology company is required.
Why joining POINT today will be the right career move for you:
There is no shortage of demand for smart, qualified and hardworking people like yourself – and we strongly believe POINT is the right career move for you. Here is why:
- You will help establish and define an organization that will change the course of cancer therapies and help thousands of people each year
- You will be challenged, and are expected to challenge us, as part of a results-oriented, high performance team, where everyone leaves their egos at the door and lets the best ideas win
- You will be provided the opportunity to participate in the financial value creation that occurs as POINT grows and becomes a mature pharmaceutical company
- Most importantly, you will spend every day working on something important
If what we are saying resonates with you, please submit a resume.
Benefits
- Health Care Plan (Medical, Dental & Vision) - effective day 1
- Retirement Plan (401k)
- Income Protection Insurance – (Life AD&D, Short & Long-term disability) – 100% employer paid
- Voluntary Income Protection – Premiums paid by employee
- Paid Time Off (Vacation, Sick & Public Holidays)
- $500 fitness reimbursement per year
- $100 cell phone reimbursement per month
All applicants who are offered employment with POINT Biopharma will be subject to a background investigation. Offers of employment are contingent on the successful completion of a background investigation conducted in accordance with POINT Biopharma policy and state law.
POINT Biopharma does not discriminate on the basis of age, race, color, religion, gender, sexual orientation, gender identity, gender expression, national origin, protected veteran status, disability or any other legally protected status.


USA only
What we are looking for –
Someone with an eye for design who is also technically savvy. As a Business Intelligence Specialist, you will need solid experience consulting with clients and delivering modern dashboards, reports, and analytics. This Business Intelligence Specialist is responsible for applying new technologies, so you must be intuitive and decisive, with the ability to gather requirements from clients and lead the design process. This is a brand-new position at a young startup experiencing rapid growth, bringing with it the opportunity to help shape this role and to create a consulting firm that makes a difference for our clients.
What you will do –
You will be responsible for working with clients to provide modern BI solutions and to guide them through the design process. In addressing complex client needs, you will be integrated into appropriately sized and skilled teams. This will give you the opportunity to analyze requirements, design data and analytical solutions, present future-state visions to clients and execute as part of the project team.
Your Duties and Responsibilities –
- Lead requirements gathering, design, and development of business intelligence solutions, specifically within Power BI
- Help customers define their reporting and dashboarding strategies
- Mentor other Data & Analytics team members when it comes to working with visualization & reporting tools
- Form a trusted advisor relationship with the project and technology leaders both internally and externally
Requirements
What you must have to be considered –
- 3-5+ years of experience building data models & visualizations, preferably within Power BI
- 2+ years of experience working with Microsoft SQL
- Advanced knowledge of the DAX Language
- Experience in requirements analysis and project delivery methodologies
- Familiarity with dimensional modeling concepts
What would be nice for you to have –
- Background in consulting or a client-facing role
- Experience working with M (Power Query)
- Experience with Python
- Strong Excel skills
- Knowledge and experience working with Azure based data services, such as Azure Data Factory, Azure Data Lake, Azure Synapse Analytics, Azure Analysis Services, etc.
- Strong communication skills tying together technologies and architectures to business results. Passion for documentation & collaboration!
Benefits
Benefits and Perks –
- Competitive salary and benefits commensurate with experience.
- Mentorship from highly regarded industry specialists
- Exposure to the latest and greatest Microsoft technologies
- High growth potential for those with an entrepreneurial spirit
About OmniData –
OmniData is a Portland-based Data and Analytics consulting firm leveraging the Microsoft technology stack to help organizations build Modern Data Estates designed to serve their digital innovation needs for many years to come. To do this, we apply deep experience in Solution Architecture, Data, Analytics, and technology to simplify the complex.
OmniData is offering you the opportunity to work with the entire lifecycle of large Data Projects, focused on next generation data warehousing, with surface points to Analytics, Machine Learning and AI. We offer a collaborative work culture that enables you to produce client results with a safety net from your team. You will get to work closely with very experienced consultants who will be able to provide mentorship and career guidance. At the same time, you will be rewarded for learning fast and executing within our teams to provide solutions for OmniData clients.
OmniData Is An Equal Opportunity Employer And All Qualified Applicants Will Receive Consideration For Employment Without Regard To Race, Color, Religion, Sex, National Origin, Disability Status, Protected Veteran Status, Or Any Other Characteristic Protected By Law.


We are re-engineering the laundry process from the ground up; our aim is for clinically-clean, net-zero carbon emissions for the whole process, from the collection through washing and back to delivery. This has never been accomplished and we are determined to be the first truly sustainable and hygienic on-demand laundry company in the world.
We are looking for a senior data engineer to help build and expand Oxwash’s data collection and infrastructure.
Data powers both our customer-facing web app helping customers arrange the collection, delivery, and payment for their laundry; and internally-used tooling to help our operatives (laundry and delivery staff) manage their workload. Behind the scenes, our data is also responsible for driving growth marketing and operational decision making.
You will join a small, friendly and passionate data team and will be able to shape the future of Oxwash. Our ideal candidate is someone who can help us solve and identify the problems which will inevitably crop up as we scale and expand our operations and systems to handle more orders, from more customers, in more places.
All Oxwashers are encouraged and expected to work together across our ‘Lagoons’ (our dry- and wet-cleaning hubs, where the magic happens), to propel us from strength to strength. As part of this, we would like you to work from a lagoon once every 2 weeks. We currently have lagoons in London (Battersea), Cambridge and Oxford.
Challenges you'll work on
- Develop and improve the current data architecture with ETL processes that emphasise data security, quality, timeliness, scalability, and extensibility
- Maintain and expand use cases of Oxwash’s customer data platform (Twilio Segment), including adding new sources of data and destinations of data
- Identify key events in the lifecycle of both customers and garments to capture high value insights on our processes at Oxwash
- Implement and simplify self-service data query and analysis capabilities of our business intelligence tools (e.g. MixPanel)
- Collaborate with product, growth marketing, operations, and software development stakeholders to develop, implement, and validate KPIs
- Manage data transformation and troubleshoot data processing issues
Requirements
You have
- The ability to think about the strategic business value of decisions across the data management lifecycle, from data collection and processing to storage and analysis
- A strong grasp of the different data needs of analytics and engineering teams and how they can be met
- A solid understanding of event-driven architecture
- A desire to build and maintain clear and detailed documentation of processes and approaches
- Technical qualifications:
- Proficient in Python and SQL
- Experience in database construction in Amazon Redshift
- Experience with a Customer Data Platform (e.g. Segment, Google Analytics, Snowflake, MixPanel etc.)
- 3-5 years of experience as a data engineer, supporting operational and marketing teams
What you'll need to be successful in the role
- You are able to clearly vocalise your opinions and are willing to listen to others
- You have an interest in working with modern technologies and engineering practices
- You are able to work autonomously and with teams located across our lagoons
- You are comfortable working in a startup environment
Our interview process
- Introductory phone call (30-60 minutes)
- Take-Home Case which involves a typical Oxwash problem (4 days to do the case)
- Technical Interview: 45-60 minute discussion of Take-Home Case Results
- Values Interview (30 minutes)
- Job offer or feedback about the interview process
Benefits
What’s in it for you?
- 💷 A salary of GBP 55,000 to 65,000 per year, depending on experience
- 📆 30 days of holiday (including bank holidays), plus your birthday off too!
- 🧩 Share options following successful probationary period
- 📊 Regular reviews and support in reaching your development goals.
- 🌎 Help transform the world in a high-impact, well-funded and fast-growing startup.
- 📞 Free, on-demand mental health support through Spill.
- 💧 Free laundry (of course!). No more pairing socks in your spare time...
- 💻 We'll invest in the tech you need to do the job to the best of your ability.
- 🚲 Salary sacrifice cycle scheme
We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, colour, national origin, sex, sexual orientation, age, pregnancy or maternity, marital status or disability status. This job description is intended to convey information essential to understanding the scope of the job and the general nature and level of work performed by job holders within this job. However, this job description is not intended to be an exhaustive list of qualifications, skills, efforts, duties, responsibilities or working conditions associated with the position.

DefinedTalent is proud to be supporting our client, a top Salesforce implementation partner.
As a Salesforce Data Architect (Remote), you are responsible for designing, developing, and managing data models for enterprise level reporting systems. You will design, implement and execute scalable integrations between customer enterprise applications that enforce data quality and adhere to strict consistency standards. At the same time, you will serve as an engineer/expert in ETL/Integrations, Data Warehousing, Business Intelligence, Master Data Management, and database design. Our client wants someone who is willing to work in a group setting with other technical and non-technical project members.
***Multiple Salesforce.com Certifications (Admin, Sales Cloud Consultant, Service Cloud Consultant, App Builder, Platform developer 1, Marketing Cloud, CPQ etc.) required
Responsibilities
- Establish integration and data standards and processes (ensuring compliance) and operationalize integration solutions; assist in developing a secure, scalable governance and operating model, including assessment of new applications and technologies to be integrated, protocols to support, and underlying infrastructure components to execute on
- Lead data management and migration projects
- Develop, maintain, and review data processes and architecture for both on-premise and cloud-based data systems
- Conduct team reviews and recommend data modeling, administrative, and design improvements
- Demonstrate ability to learn and research advanced data technologies and concepts, learning new skills and software as necessary
- Work closely with team members to optimize database queries, data mapping and data migration
- Demonstrate high-level knowledge of enterprise IT organizational, business, and technical environments
- Produce highly detailed document artifacts and maintain them for the duration of projects
- Assess all risks and offer mitigation strategies, communicating impacts to project across multiple channels
- Interpret client functional and informational needs and turn them into data requirements, process models, and active systems
- Support assigned systems throughout transition periods
Requirements
- Must have Salesforce experience.
- Multiple Salesforce.com Certifications (Admin, Sales Cloud Consultant, Service Cloud Consultant, App Builder, Platform developer 1, Marketing Cloud, CPQ etc.) required.
- Bachelor’s Degree in computer science, information systems, or related IT field with coursework in an information technology specialization related to data modeling.
- 5+ years of experience in an information technology area related to data architecture and data modeling.
- 5+ years of experience with enterprise-level ETL tools/platforms such as Informatica, Power Center, Cast Iron, Oracle Warehouse Builder (OWB), etc.
- 5+ years of experience working with complex relational databases, including normalization of data models.
- 5+ years of experience with Data Warehousing data population techniques for target structures such as Snowflake Schemas
- 2+ years of experience with Customer Relationship Management platforms
- Highly developed knowledge of Data Warehouse design principles with dimensional data modeling for Business Intelligence tools.
- Knowledge of performance tuning, database monitoring, SQL optimization, data archiving, table partitioning, indexes, etc.
- Experience in enterprise-level data migration/upgrade projects
- Experience in database administration and analyzing impact of underlying data structural changes.
- Experience in managing metadata and supporting data governance initiatives.
- Effective written and verbal communication skills, including presentation skills
- Self-motivated, able to work independently, and capable of responsibly maintaining schedules for time-sensitive projects.
- Strong attention to detail and organizational skills.
- Experience with Salesforce.com and its suite of products (Marketing Cloud, Service Cloud, Tableau, etc.) preferred
- Experience in multiple industries, with a proven ability to deliver on strategic and tactical business objectives
All qualified applicants will receive consideration for employment without regard to race, sex, ethnicity, religion, national origin, protected veteran status, gender identity, sexual orientation, or physical ability.
*Candidate must be legally authorized to work in the United States.
*Verification of employment eligibility will be required at the time of hire.
Benefits
- Health Care Plan (Medical, Dental & Vision)
- Retirement Plan (401k, IRA)
- Paid Time Off (Vacation, Sick & Public Holidays)
- Life Insurance (Basic, Voluntary & AD&D)
- Family Leave (Maternity, Paternity)
- Short Term & Long Term Disability
- Training & Development
- Work From Home

< class="h3">Company Description

Orchestrated by adept technical architects with over fifty years of applied expertise, KYNITE is an advanced technology company specializing in the disciplines of Blockchain, Cloud Services, Big Data & Analytics, Artificial Intelligence, Enterprise, Staff Augmentation and Managed Services.
- We are BigData Experts
- We are Cloud Experts
- We are Enterprise Architects
- We are Artificial Intelligence Innovators
- We are Technological Evangelists
- We are Doers
- We are Kynite
- 5+ years of experience designing, building, deploying, testing, maintaining, monitoring and owning scalable, resilient and distributed data pipelines.
- High proficiency in at least two of Scala, Python, Spark or Flink applied to large-scale data sets.
- Strong understanding of workflow management platforms (Airflow or similar).
- Familiarity with advanced SQL.
- Expertise with big data technologies (Spark, Flink, Data Lake, Presto, Hive, Apache Beam, NoSQL, ...).
- Knowledge of batch and streaming data processing techniques.
- Obsession for service observability, instrumentation, monitoring and alerting.
- Understanding of the Data Lifecycle Management process to collect, access, use, store, transfer and delete data.
- Strong knowledge of AWS or similar cloud platforms.
- Expertise with CI/CD tools (CircleCI, Jenkins or similar) to automate building, testing and deployment of data pipelines and to manage the infrastructure (Pulumi, Terraform or CloudFormation).
- Understanding of relational databases (e.g., MySQL, PostgreSQL), NoSQL databases (e.g., key-value stores like Redis, DynamoDB, RocksDB), and search engines (e.g., Elasticsearch), with the ability to decide, based on the use case, when to use one over the other.
- Familiarity with recommendation and search techniques to personalize the experience for millions of users across millions of items.
- Master's degree in Computer Science or a related discipline
Qualifications
Nice to have:
- Databricks
- AWS
- Python programming
- Spark
Additional Information
- This job is only for individuals residing in the US
- US Citizens, Green Card holders, and EAD holders can apply
- W2
All your information will be kept confidential according to EEO guidelines.


Location: Remote, US
Title: Data Entry Specialist (Independent Contractor)
Location: United States
Wolfram, creator of Mathematica, Wolfram|Alpha and the Wolfram Language, is seeking a Data Entry Specialist to join its Sales Initiatives department. The Data Entry Specialist is responsible for ensuring data is accurately entered into contact/customer databases, assigning and classifying leads to route to the appropriate salesperson for follow-up and researching contact information when needed to help categorize or assign leads properly.
Responsibilities
- Entering, cleaning and assigning leads to the Sales team from web forms, email campaigns, program signups and other automatic-lead-creation sources
- Cleaning and structuring lists for importing as leads for the Sales team from webinars, events and other manual-lead-creation sources
- Adding account and/or territory classifications to all newly created accounts so they are properly assigned a territory/salesperson
- Compiling email addresses for communication
- Generating reports and refining/cleaning the data
- Creating campaign codes to track effectiveness in converting sales leads to sales opportunities
Preferred qualifications
- Experience with Microsoft Excel
- Experience with SugarCRM
**We are requesting U.S. based applicants only please**
AdQuick is building the operating system for advertising in the real world. One of our major components is capturing attribution and ROI data for the outdoor advertising medium.
Location: Flexible, ideally LA, NYC, SF
Responsibilities:
- Drive analytics capabilities from idea to execution: you’ll be working with our product and engineering team to bring data generated in the real world into dashboards and other data sources to make life easy for customers
- Interface with customers: analytics in outdoor advertising can be new to many customers and explaining why and how they should utilize AdQuick attribution capabilities is a major component of the role
- Analyze data: you should be proficient handling large data sets, drawing threads between separate sources of data, and translating those into insights
Background:
- Familiarity with real world movement and location data sets
- Excel expertise is a must and SQL proficiency is very nice to have
- Digital marketing, marketing analytics, media and advertising experience are great
- Brand-facing roles and interaction with customers on a frequent basis
- Experience owning product and interfacing with product, design and engineering to create new products
Personal characteristics:
- Customer obsessed - you can listen to customer pain points and translate them into product, process and user experience solutions
- Analytical: you have comfort and proficiency with numbers, can simplify complexities, and use data to inform strategy and product improvements
- Versatile: thrives in a rapidly-changing environment, able to wear multiple hats simultaneously without losing track of priorities
- Product and tech savvy: You’ll be the biggest power user of the platform and will be able to deftly illustrate to customers all the benefits of the platform and product
- Persistent and resilient: startups are hard and fast-changing, and require a level of grit many people lack
What you’ll be doing:
- Joining customer meetings to explain the value of AdQuick’s attribution capabilities
- Developing analytics capabilities roadmap
- Working with sales to educate the team on the latest attribution capabilities, case studies, and application for these insights
- Being creative in new applications of ROI and attribution data across the industry
- Vetting data partners and integrating new data sets into the AdQuick planning and attribution platform

About Us:
Resilience Lab is creating a world where communities of clinicians work collaboratively to produce better outcomes and improve mental health access.
Our diverse team of best-in-class therapists is committed to improving clients’ care experience throughout their therapeutic journey by sharing peer insights, treatment plans, and outcome measurements. Technology powers Resilience Lab care delivery, automating all non-clinical work to foster therapeutic alliance and team-based care. Data, analytics, and machine learning will take us from a tech-enabled business to a tech-driven one, with structured and unstructured data powering outcome-based care and continuous clinical learning.
Read our Mission, Vision & Values at https://www.resiliencelab.us/our-mission-values/
About The Role:
We are looking for an experienced Data Engineering Manager to build, lead and manage a team of data engineers and analysts working on building scalable data and analytics solutions. This role is a great fit for candidates with several years of software engineering experience delivering and building on data pipelines, warehouses, and analytics tools. This person will participate in architecture and design discussions, own end-to-end solution delivery, stakeholder management, partner with platform engineering, collaborate with other engineering teams, and will guide and mentor data engineers. The ideal candidate will have outstanding communication skills, proven data design and implementation capabilities, good eye for business, and a highly developed drive to deliver results. The person in this role will be technically proficient and excel at collaborating with engineers, analysts, and partners. They will be a self-starter, comfortable with ambiguity, and will enjoy working in a fast-paced dynamic environment. In this role, you will be reporting to the Head of Technology & Product.
Responsibilities:
- Use your knowledge of Data Architecture to build data stores for critical product features such as clinical outcome based care & monitoring, machine augmented clinical documentation, care coordination, clinical learning, automated consumer outreach, telemedicine, consumer self-scheduling.
- Collaborate with the rest of the technical team in architecture & design discussions to process & store high volume data sets.
- Contribute to all aspects of data engineering from ingestion, transformation, and consumption in addition to designing and building test-driven development, reusable frameworks, automated workflows, and libraries at scale to support analytics products.
- Uphold high engineering standards and bring consistency to the codebases and processes you will encounter.
- Be 100% remote. Work from anywhere; your office is wherever you are
Requirements
- 10+ years relevant work experience in the Data Engineering field
- 4+ years experience in leading and managing engineering teams
- 4+ years experience in building efficient & secure data stores in the cloud (AWS) and in cloud ETL processes.
- 4+ years experience working with Hadoop and Big Data processing frameworks (Spark, Hive, Nifi, Spark-Streaming, Flink, etc.)
- 2+ years experience building scalable, real-time and high-performance cloud data lake solutions
- Solid experience with relational SQL and programming languages such as Python.
- Experience with source control tools such as GitHub and related CI/CD processes
- Experience working with Big Data streaming services such as Kinesis, Kafka, etc
- Experience working with NoSQL data stores such as HBase, DynamoDB, etc
- Experience provisioning RESTful APIs to enable real-time data consumption
- Experience working in AWS environment primarily EMR, S3, Kinesis, Redshift, Athena, etc
- Experience with cloud data warehouses & RDBMS solutions.
- Experience with workflow scheduling tools like Airflow
- Solid understanding of algorithms, data structures, data architecture, and technical designs
- A passion for improving the mental healthcare experience.
- An understanding of the business impact of projects you build and the ability to prioritize projects by balancing between craft, speed, and the bottom line.
- A track record of strong ownership, initiative, tenacity, integrity, and humility.
- Placing a high value on collaboration, knowing that we can be better as a team than as individuals.
What would set you apart:
- Healthcare industry experience.
- Experience building and scaling up multiple applications from the ground up.
- Experience with secure code guidelines, HIPAA and web accessibility (a11y).
Benefits
Compensation and Benefits:
- Competitive compensation and benefits (Medical, Dental and Vision) with FSA or HSA options.
- Quarterly performance bonus potential
- 401k
- Equity with refreshes at new funding rounds
- Discretionary Time Off and Company Paid Holidays
- The opportunity to be part of a mission driven company and make a difference in people’s lives.
- Work at a high-growth company

Our Mission
Tyk is on a mission to connect every system in the world. We’ve started by building an API Management platform.
The role:
We have a truly unique and exciting opportunity for a Head of Data and Analytics to come in and really make a difference. You will be a natural leader with a passion for data and analysis as well as a clear vision for how data can transform company strategy, working closely with the CEO to get it implemented.
You’ll get fully immersed, being actively involved and effective at delivering in a startup environment; hands-on leadership is necessary to grow a team through the startup phase into a mature organisation. You’ll be fundamental to the business, responsible for designing, implementing, deploying, and scaling the data function in a mixed environment of cloud/SaaS and self-managed products and systems.
We’re a globally distributed, remote-first business so you’ll need to be able to consider global operations, as well as be able to set up and manage a remote, geographically dispersed team - but we know you’ll be able to do that!
Here’s what you’ll be getting up to:
- Ensure the Company’s data is centralised into a single data lake and modelled to support data analysis requirements from all functional groups
- Create a common data framework so that all company data can be analysed in a unified manner
- Create a plan to develop and mature our ability to measure and optimise usage growth, mapped to our user journey and then implement it
- Develop a roadmap for the data and analytics function that clearly defines ownership and responsibility between the central data function and the functional groups
- Work closely with the executive team to deliver and implement the tools, processes and operations required to derive insights from our data
- Work directly with the SRE and DevOps teams to plan, provision and deploy a data stack
- Regularly meet with wider business stakeholders to identify data requirements and data resources
- Work independently or with engineering teams to build ETL pipelines for the data stack
- Work closely with our Research function to identify and analyse data resources that can enhance, educate, or provide insights for the wider company
- Hire and build an effective data team as Tyk matures and grows
Here’s what we’re looking for:
- Self-motivated: Independent and senior operator that can deliver results from the start with minimum guidance from the Founder Team
- Articulate: a strong communicator to be able to talk to business stakeholders
- Realistic: extremely hands-on and pragmatic (you don’t necessarily need to be a programming guru, but should be able to write solid SQL queries with several years of experience in doing so)
- Agile: experience working with APIs, bonus points if you’ve worked in or with API Management tools
- Entrepreneurial: has worked in a startup / scale-up environment before
- Technical: can demonstrate knowledge in all elements of a data & analytics setup:
- (Web) analytics
- ETL/ELT
- Databases
- Data Visualisation
Competencies:
- Natural leader: experience in a leadership position managing an analytics team
- Dynamic: experience with a high growth company using on-premise tools and on-demand (SaaS) transactional systems
- Practical: hands on experience with Python, SQL and relational databases
- Self-starter: previously led or worked on a corporate data platform project
- Masterful: experience with open source data & analytics tools.
- Collaborative: experience working with multiple executive level business stakeholders
- Capable: must have experience with analytic and data visualisation tools
Benefits
Here’s why you should join us:
- Everyone has unlimited paid holidays.
- We have total flexibility in hours, as we believe creativity flows better when our people are given freedom to decide when they are most productive. Everyone is unique after all.
- Employee share scheme
- Generous maternity and paternity leave
- Company retreats
We all share the same vision - we value authenticity, respect, responsibility, independence, honesty, diversity and inclusion and most importantly treating others how you wish to be treated. We look for like-minded people who bring their personalities to work everyday, strive to achieve their personal goals and who are willing to challenge the way we do things. Why? To make what we do even better!
Our values tell the story of Tyk - here’s how:
- It’s ok to screw up!
We’ve found that it’s often the ‘stupid’ or unexpected ideas that turn out to be the successful ones - so try it, at least we can say we have!
- The only stupid idea, is the untested one!
It’s in our DNA - starting a business with founders 12 hours apart, giving our gateway away for free - sure, we did that, and we’d do it again!
- Trust starts with you - make it count!
Trust is a two-way street - instil it from day one!
- Assume best intent!
We have each other’s back - we’re all on the same team. Think before you speak or act.
- Make things better!
Always try to leave things better than when you found them - change is constant, inevitable and embraced! Be that change we want to see.
Who are Tyk, and what do we do?
The Tyk API Management platform is helping to drive the connected world and power new products and services. We’re changing the way that organisations connect any number of their systems and services. Whether internal, external, public or highly encrypted systems, Tyk helps businesses drive value across the retail, finance, telecoms, healthcare, or media industries (to name just a few!) If you’ve banked online, used an app to check the news, or perhaps even driven a connected car, APIs, and by extension, Tyk, make that possible. Founded in 2015 with offices in London, Atlanta and Singapore, we have many thousands of users of our B2B platform across the globe. Brands using Tyk range from Lotte, Bell, Dominos, Starbucks, to RBS and Societe Generale. We have a varied user base hailing from every continent – even Antarctica.
Total flexibility, default remote, radical responsibility
We offer unlimited paid holidays and remote working from anywhere in the world, for everyone. Why? Tyk was founded on the principle of offering flexibility and autonomy to our employees; we believe this allows our employees to achieve their best results. It also means we can build the best possible team, as location and working hours are no barrier.
What’s it like to work here?! check it out: https://tyk.io/worklife/
Tyk is an equal opportunities employer and we are determined to ensure that no applicant or employee receives less favourable treatment on the grounds of gender, age, disability, religion, belief, sexual orientation, marital status, or race, or is disadvantaged by conditions or requirements which cannot be shown to be justifiable.
You can see more about us here https://tyk.io

Our Mission
Clarity AI brings societal impact to markets, and what that means to us is that we illuminate paths to a more sustainable world.
We do that by building a customizable, scalable sustainability tech kit into our clients’ existing workflows, which empowers them to efficiently and confidently assess, analyze and report on anything valuable to them or their clients, and everything required by regulation.
We are a sustainability tech company, founded in 2017. We leverage AI and machine learning technologies to ensure sustainability dimensions are a focal point of decision making for professional investors, corporates and consumers.
In financial markets, participants have used ESG (Environmental, Social and Governance) indicators to evaluate dimensions of risk and impact. However, this kind of assessment is limited: it only considers how companies behave in these three dimensions but doesn’t consider the products and services the companies provide as part of their contribution, positive or negative, to society. Leveraging scientific research and the latest technologies, we provide decision makers with the most transparent, reliable and comprehensive capabilities and tools to assess, analyze and report on social and environmental impact.
We have received several awards that show the value that we are providing to the market:
- World Economic Forum Pioneer
- One of the most innovative projects in the US. Harvard Innovation Lab
- Top startup with impact worldwide by IMPACT Growth 2017
- Top 10 Fintech startup worldwide by BBVA Open Talent
- Young Global Leader awarded to Founder and CEO Rebeca Minguela, 2017
Our Values
Our mission drives us forward. Our values guide us along the way. In order to meet our goals, we require passion from everyone on the team that is driven by our purpose. We require excellence in everything we do with inidual expertise as a key dimension to our success. We require everyone to not just talk the talk when it comes to ethics and values, but also to walk the walk and live those values by example.
We are different. We work hard to become the best place to work and pride ourselves on our culture:
- Data-driven: Promoting objective, fact-based and solution-oriented discussions.
- Independent: Accessible and unbiased, we aren’t satisfied with the traditional way of doing things.
- Transparent: Communicating feedback transparently, constructively and in real-time.
- Achievement-oriented: Demanding excellence and celebrating and rewarding the best.
- Flexible: Working flexibly in the broadest sense (e.g., schedule, location, vacation, styles).
Our Team
Our people are our main asset. Having doubled our headcount in 2021, we are now a team of more than 200 highly passionate individuals coming from 30 different nationalities and composed of professionals from leading tech, consulting and banking firms, entrepreneurs, PhDs from top research institutions, and MBA graduates from top business schools. Our headquarters are in New York City, and we have additional offices in London, Madrid and Abu Dhabi. Additionally, we have a strong remote workforce of team members located in 13 different countries. Together, we have established Clarity AI as a leading company backed by investors and strategic partners such as BlackRock, SoftBank and Deutsche Börse who believe in us and share our goals.
Our leadership inspires. Our Founder and CEO, Rebeca Minguela, is a successful entrepreneur who has been recognized as one of the most distinguished leaders under the age of 40 by prestigious institutions like the World Economic Forum. Rebeca is joined by a superior leadership team coming from erse backgrounds, countries and experiences.
We care about our people and think they deserve the best. We currently offer equity packages and other benefits on top of cash compensation. We aim to continue improving and shaping our offering in all areas of compensation and total rewards to ensure our teams’ efforts are rewarded and we remain competitive.
Challenges for this role
Working as a Data Delivery Manager, your main responsibility will be to ensure the success of data collection outcomes, helping us scale and build the most complete and reliable database of sustainability information. To accomplish this, you will interface with vendors and internal teams for all data obtained through data collection efforts.
The role provides an opportunity to contribute to the mission of Clarity by improving and transforming processes and scaling data collection projects while working in a multicultural, flexible and fast-paced environment.
Role Responsibilities:
Maximize performance of data collection vendors:
- Increase data collection agencies’ performance in terms of quality and efficiency (SLA definition, quality and efficiency KPI follow-up, identifying and implementing improvement measures) based on monitoring of vendor efficiency and quality
- Define and monitor quality and efficiency KPIs using business intelligence tools and Python/SQL
Optimize internal data research and data quality assurance efforts:
- Define and ensure execution of internal efforts targets and scope
- Maximize synergies of internal efforts in terms of increased coverage of updated data, and assess vendor quality to prevent overloading the internal team.
Drive the end-to-end delivery of collection of all data types (sustainability, funds and financial data):
- Internally communicate how collection efforts translate into company outcomes (% of updated data points in product)
- Create and maintain documented data standards and dictionaries, including how new data elements are created, stored and labeled
- Coordinate with internal data teams to ensure alignment between data collection deliveries and the Clarity AI data platform
- Support the execution of the technical data delivery
Requirements
- Excellent oral and written communication skills in English
- Bachelor/University level degree (computer engineer, financial engineer, finance, economics, accounting, or similar field of study)
- At least 5 years of relevant working experience (ESG-related experience is highly preferred)
- Intermediate level of SQL and Python
- Expertise in driving vendor performance reporting and creating dashboards with BI tools (Quicksight, Power BI, Tableau, etc.)
- Good understanding of outsourcing models, delivery guidelines and processes
- Strong stakeholder management and presentation/oral communication skills
- Strong collaborative, organizational, and time-management skills with the ability to succeed in a fast-paced environment with changing requirements and priorities.
- Highly motivated and independent
This position can be based in Madrid or London. Remote opportunities are available as well.
*Only CVs in English will be taken into consideration


At Clarity AI, we are committed to bringing social impact to markets. We are a tech company started in 2017, leveraging AI and machine learning technologies to ensure environmental, social and governance dimensions are the focal point of decision making within financial markets.
We are now a team of more than 180 highly passionate individuals coming from all over the world. Together, we have established Clarity AI as a leading startup backed by investors and strategic partners such as SoftBank, BlackRock, and Deutsche Börse who believe in us and share our goals. We have plans to continue growing our teams in Spain, Portugal, the UK, and the US this year, so if you would like to join us on this rocket ship, keep reading!
Clarity is looking for a (Senior) Data Engineer for the Tech team. If you’re a Software/Data Engineer specialized in data who enjoys solving complex problems with code and isn’t afraid of learning new things, we are looking for you.
You will be part of the team delivering the different parts of our production-ready product while co-designing and implementing an architecture that can scale up with the product and the company.
Our tech stack is documented here: https://stackshare.io/clarity-ai/clarity-ai-data
Location: This position can be based within Europe or Remote.
KEY RESPONSIBILITIES:
- You will help to build a data platform to enable other teams to self-serve high-quality data while performing data operations to support their day-to-day actions.
- You will join the data engineering team, which is responsible for developing the models, practices and systems that support the data lifecycle.
- You’ll support designing and creating all aspects of our ever-growing set of external and internal data pipelines, understand the problems, and tie them back to data engineering solutions
- You will be responsible for designing and building our ETLs
- You’ll transform raw data from different sources (batch and near-real-time) into intuitive data models, using a diverse set of tools (such as Spark, Hadoop, Redshift, Kafka, etc.) to build robust, high-quality data pipelines in a scalable fashion (regarding volume and maintainability).
- To develop product features, you will form part of cross-functional squads including people from many different teams.
- Work closely with Platform, Data Science and Backend teams, as well as product and tech teams.
- Troubleshoot and fix bugs and issues
Requirements
WHAT WE ARE LOOKING FOR:
We like engineers who adapt quickly to new challenges and break them down into smaller pieces that a highly productive team will find high-quality solutions for. As an Engineer, you will have a solid technical foundation and a strong focus on growing your development skills. You will use best practices to ship high-quality code and continue to push your knowledge.
- You have more than 5 years of professional experience as a Software Developer/Data Engineer, and hold a degree in a quantitative field (CS, Engineering, Math, Physics, …).
- You have built production data pipelines in the cloud, setting up data-lake and server-less solutions; on top of that, you have hands-on experience with schema design and data modeling.
- Proven knowledge of Python (Pandas) is required; Java/Scala is a plus
- You have strong SQL skills, plus knowledge of and familiarity with other distributed data stores such as Redis, Elasticsearch, Druid, Cassandra and HBase.
- You are comfortable working in containerized cloud environments, and familiar with the integration of public and private APIs.
- Strong communication skills. The right candidate can thrive in an environment of asynchronous conversation, since part of the Clarity team is based remotely.
- Open to trying new technologies and programming languages.
- Fluency in English
It will be a plus if you had:
- Experience in a fast-growing SaaS business would be preferable, although not required.
- Ideally, experience in the financial sector or regulated markets.
- Familiarity with other technologies/frameworks (Spring Boot, Jersey, Play, …) is a plus
Benefits
WHAT WE OFFER:
Our people are our main asset. We have built a unique team and we all share 3 differential aspects: Excellence, Passion and Values:
- Our team comprises professionals from leading tech, consulting and banking firms, entrepreneurs, PhDs from top research institutions, and MBA graduates from top business schools.
- Clarity’s Founder and CEO, Rebeca Minguela, is a successful entrepreneur who has been recognized as one of the most distinguished leaders under the age of 40 by prestigious institutions like the World Economic Forum. Rebeca is joined by an excellent leadership team coming from erse backgrounds, countries and experiences.
We are different. We work hard to become the best place to work and pride ourselves on our culture:
- Fact based: Promoting objective, fact-based and solution-oriented discussions.
- Diverse: Encouraging ersity of personalities, cultures and experiences.
- Transparent: Communicating feedback transparently, constructively and in real-time.
- Meritocratic: Striving for excellence and rewarding the best.
- Flexible: Working flexibly in the broadest sense (schedule, location, vacation, styles).
We care about our employees and think they deserve the best.
- ESOP/ Phantom stock: We want our employees to really feel part of Clarity. That’s why we give them the opportunity to participate in our success with ESOP or phantom stock options.
- Training budget: Continued learning is important to us so we provide everyone with a yearly training budget.
- Additional benefits are available depending on your location.
We are a US company with a UK and Spanish subsidiary. We have offices in New York City, London and Madrid and have possibilities for remote working for many of our roles.
Clarity has received several awards:
- World Economic Forum Pioneer
- One of the most innovative projects in the US. Harvard Innovation Lab
- Top startup with impact worldwide by IMPACT Growth 2017
- Top 10 Fintech startup worldwide by BBVA Open Talent
- Young Global Leader to Rebeca Minguela 2017
Privacy Policy
Responsibility: Clarity AI Inc. 160 Greentree Drive, Suite 101, City of Dover, County of Kent, Delaware 19904, United States.
Purpose: management of selection processes for new employees.
Legitimation: necessary processing for contract execution.
Recipients: company of the Clarity Group in the country that would carry out the contract or where the position is being offered.
Rights: the interested party may exercise the rights of access, rectification, opposition, limitation, suppression, portability and not to be the object of automated decisions according to the Privacy Policy.
More info: to consult the privacy policy in detail, click on the following link (Privacy Policy)

Company Description

You will join a world-class team of engineers and data scientists from Facebook, Uber, Amazon and Google. We are a fast growing consulting firm based in Toronto with clients ranging from leading startups building impactful technologies to Fortune 500 companies looking to scale their engineering and data capabilities.
Job Description
We are looking for a data engineer who is passionate about analytics and helping companies build and scale data. You enjoy working with data and are motivated to produce high-quality data tools and pipelines that help empower other data scientists. You are experienced in architecting data ETL workflows and schemas. Critical thinking and problem-solving skills are essential for this role.
Qualifications
- BS (or higher, e.g., MS, or PhD) in Computer Science, Engineering, Math, or Statistics
- Hands on experience working with user engagement, social, marketing, and/or finance data
- Proficient in Python (e.g., Pandas, NumPy, scikit-learn), R, TensorFlow, and other data science tools and libraries
- Extensive experience working on relational databases, designing complex data schemas, and writing SQL queries
- Deep knowledge on performance tuning of ETL Jobs, SQL, and databases
- Working knowledge of Snowflake
- Experience working with Airflow is a strong plus
- DevOps experience is a plus
We have very competitive compensation.
Work on cool projects based on your interests and skills. We believe in accountability and NOT micro-management.

Company Description

You will join a world-class team of engineers and data scientists from Facebook, Uber, Amazon and Google. We are a fast growing consulting firm based in Toronto with clients ranging from leading startups building impactful technologies to Fortune 500 companies looking to scale their engineering and data capabilities.
Job Description
We are looking for a Data Science Lead who will:
- Partner closely with our clients' executive and technical teams to guide their decisions relating to AI, analytics, data engineering and data in general
- Proactively identify and champion projects that solve complex problems across multiple domains
- Apply specialized skills and fundamental data science methods (e.g. experimental design, recommender systems and machine learning when needed) to inform improvements to our clients' businesses
- Design and implement end-to-end data pipelines: work closely with stakeholders to build models, tables or schemas that support our clients' business processes
- Empower teams both internal and external to build and maintain actionable KPIs, production-quality dashboards, informative deep dives, and scalable data products
Qualifications
- BS (or higher, e.g., MS or PhD) in Computer Science, Engineering, Math, or Statistics
- 3-7 years of commercial experience as a data science lead solving high impact business problems, including 1+ years leading and maintaining high performance data science teams
- Extensive experience using Python including a strong grasp of object oriented programming (OOP) fundamentals
- Experience building data pipelines and driving ETL design decisions
- Experience delivering business impact through analysis and data products
- Technical leadership and management experience; you can jump into the code at a level your peers admire and respect, and you are interested in helping mentor them
- Experience creating data product strategies, shipping data products, iterating after launch, and trying again
- Strong ability to prioritize and communicate to technical and non-technical audiences alike
- Working knowledge of Snowflake
- DevOps experience is a plus
We offer very competitive compensation.
You will work on cool projects based on your interests and skills. We believe in accountability, NOT micro-management.


Senior Data Scientist
Adarga is a rapidly scaling organisation, and we are growing our technology department. As a Senior Data Scientist within the product team, you will bring your expertise to a full stack technology team working to solve Natural Language Processing (NLP) linguistic challenges. Adarga is product focussed and is developing the world’s leading AI software to provide effortless access to knowledge from real world data and enable our customers to make better decisions.
We are a remote first organisation with 90 people based across the United Kingdom.
What will you be doing?
We are looking for an experienced Data Scientist with a strong mathematical background and experience in NLP. You will be hands-on with day-to-day project work focussed on developing the best solutions to deliver a roadmap which is aligned to core product features. You will also work alongside a product manager to develop new features. The senior data scientist will be working on projects which continually develop and improve our AI software using the appropriate MLOps tools to deploy, scale and monitor models in production.
As the Senior Data Scientist, you will report to the product tech lead and Data Science Manager.
Essential skills
- Detailed knowledge of AI techniques, including how to train, fine tune and apply deep learning models.
- NLP research is advancing at a rapid pace, so an enthusiasm for continual learning is required to maintain your expertise.
- An experimentation mindset to solve business problems.
- Experience building Data Science solutions in a commercial environment and the ability to quantify improvements based upon analysis.
- Good awareness of software engineering and coding best practices.
- Excellent communication skills and awareness of project management techniques, capable of operating within a team of 6-7 people to influence and develop best practice.
- Ability to thrive in a predominantly remote working environment.
- PhD or equivalent professional experience in a data science role.
Nice to have skills
- Experience of modern NLP techniques is an advantage.
- Experience of using data science platforms and frameworks, for data tagging, model training and benchmarking.
- Experience building and deploying solutions to the Cloud.
- Experience with the following technologies would be beneficial: Python, Seldon, Helm, Pachyderm, AWS or other cloud technologies, Kubernetes, Docker, knowledge graphs, graph databases, SQL and relational databases.
Further information
At Adarga we use Kanban principles and daily stand-ups to track work tasks; it will be your responsibility to attend these and to document your work to ensure it is repeatable.
It will be necessary to take part in cross-functional and cross-team communication to enhance collaboration; this will take the form of knowledge shares, reading groups, chapter group meetings, workshops and innovation projects. You will be set quarterly OKRs, and these will be reviewed frequently with your line manager. As a senior member of the team, you may be required to mentor junior team members, contribute to technology discussions, and support academic and AI community engagements, marketing events, and recruitment activities.
Contact
Find out more about Adarga: https://adarga.ai/careers
Adarga Benefits
- Hybrid working model available
- Adarga anywhere scheme offers you the chance to work wherever you like for part of the year
- Generous holiday package with an opportunity to buy and sell holiday
- Adarga share scheme available for all employees
- £1000 annual training budget
- Culture of knowledge sharing with Team Lunch and Learns
- Innovation Time- Collaborate with like-minded colleagues on projects that are important to you and the business
- Company MacBook
- Competitive Pension Scheme
Diversity & Inclusion at Adarga
Adarga recruits, trains, promotes & employs regardless of race, background, gender, colour, religion, disability, age, gender identity or any other protected status. In addition to this, we actively promote and encourage diversity of thought through multiple avenues. Adarga is dedicated to building and maintaining an inclusive environment where people feel valued, appreciated, and free to be themselves without prejudice.
Recruitment company statement
We approach everything with transparency and integrity. To avoid wasting anyone’s time, here is our policy on working with recruitment companies:
At present, we are not working with any new recruitment agencies, because we only want to work directly with individual applicants, who can contact our People Team through [email protected]
Recruitment Privacy Policy

WHO WE ARE
Optimize Your Buildings and Infrastructure with Cartegraph.
Cities and Counties. Colleges and Universities. Parks, Utilities, and More.
Cartegraph builds safer, more resilient, and more sustainable communities through better stewardship of critical infrastructure. We design, build, and sell powerful, intuitive software solutions that help local government agencies, utilities, and educational institutions manage their physical assets and associated operations. With Cartegraph, users optimize the performance of their assets, drive efficiencies, maximize resources, and reduce costs through smarter, data-driven decisions.
JOB SUMMARY
As a Data Conversion Engineer, you will have personality to share and technical expertise to spare. We’re building and selling the most innovative operations management technology in the world, and we need someone to ensure data transformation and system integration for our clients is smooth and seamless. Interested?
THE WORK YOU WILL DO
- You’ll work full-time for great pay in a collaborative environment.
- You’ll take ownership of design, development and testing of all data transformation, extraction, and migration activities to the Cartegraph system.
- You’ll provide project delivery that leaves our users raving about the great experiences they have working with Cartegraph.
ADDITIONAL RESPONSIBILITIES
- Reviews technical publications, articles, and abstracts to stay abreast of technical developments in industry and works toward continuous quality improvement.
- Attends meetings as scheduled.
- Performs miscellaneous related duties as requested by management.
- Complies with all company policies and procedures
Requirements
WHAT YOU BRING TO THE TABLE
- You have a 4-yr. degree in an applicable field OR a 2-yr. degree with at least two years of relevant experience with data transformation.
- You know the ins-and-outs of T-SQL, MS SQL Server Management Studio, database structure and design methodology, data import techniques, and Microsoft Excel.
- You have experience creating extract-transform-load capabilities using MS SQL Server, FME, Python, or related technologies.
- You understand how to effectively discover and capture customer requirements as they relate to data conversion and other complex configurations.
- You’re familiar with data management and interoperability concepts and standards to help customers fully realize the value of their enterprise data.
- You’re an effective verbal communicator with the ability to explain complex issues clearly and simply.
- You’re an effective written communicator with the ability to build system design documentation.
- You’re a good listener and loyal advocate of customer success.
- You’re organized, you manage your time well, and you stand behind your work.
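The data-transformation skills above lend themselves to a tiny worked example. The sketch below is illustrative only: it uses Python's built-in sqlite3 in place of MS SQL Server or FME, and the `raw_assets` / `assets` tables and their columns are hypothetical.

```python
import sqlite3

def run_conversion(src_conn, dst_conn):
    """Minimal extract-transform-load pass: pull raw asset rows,
    normalize them, and bulk-load them into a target table."""
    # Extract: read raw rows from the (hypothetical) source table.
    rows = src_conn.execute(
        "SELECT asset_id, asset_name, install_year FROM raw_assets"
    ).fetchall()

    # Transform: trim whitespace, uppercase IDs, drop rows with no name.
    cleaned = [
        (asset_id.strip().upper(), name.strip(), year)
        for asset_id, name, year in rows
        if name and name.strip()
    ]

    # Load: create the target table if needed and insert the cleaned rows.
    dst_conn.execute(
        "CREATE TABLE IF NOT EXISTS assets "
        "(asset_id TEXT PRIMARY KEY, asset_name TEXT, install_year INTEGER)"
    )
    dst_conn.executemany("INSERT INTO assets VALUES (?, ?, ?)", cleaned)
    dst_conn.commit()
    return len(cleaned)
```

In a real migration the transform step would enforce the customer's target data model, but the overall shape (extract, clean, bulk-load, commit) stays the same.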
TRAVEL
Ability and willingness to travel 10%.
Benefits
We tried to narrow down to just 5 reasons to work here, but it was impossible...
- Opportunities to apply your skills and experiences with an established industry leader
- Options to grow your career with Cartegraph's Career Pathing program
- Collaborative work environment
- Co-workers with a passion for what they do
- Casual and flexible work environment
- Competitive pay and benefits
- 15 days of PTO in just your first year (whoa)
- Paid parental leave
- Wellness & EAP Program
- One week sabbatical every five years
- Paid holidays (plus a couple extra days)
- 401k plan with generous employer match
- Social events, coffee and snacks, foosball, and more..
Learn more about working at Cartegraph and our Midwest-Nice culture:
https://www.glassdoor.com/Overview/Working-at-Cartegraph-EI_IE335255.11,21.htm
In our efforts to recruit those who will elevate our success and ensure a proper career fit, we have a multi-faceted application process. As part of our process, we invite you to complete the Culture Index Survey. The survey contains two questions and should take no more than 10-15 minutes. If you have questions, please contact [email protected].
Cartegraph is an equal opportunity employer. We celebrate ersity and are committed to creating an inclusive environment for all employees and do not tolerate discrimination of any kind. If you are unable to complete our online employment application or require an accommodation to do so, please call 563-587-3366 for assistance.

E&C has two openings for the Data Analyst position, one in Spain and one in Melbourne; you will join an international team working with energy data from all over the world.
As a Data Analyst you will work with ePoint, a unique web application developed by E&C which brings together and visualizes information on a global energy portfolio down to the last detail. It holds information on clients' consumption, contract situation, hedging situation, risk management and the energy markets. We are looking for an operator who will manage a set of client sites.
What will you do?
- Work with data to support E&C's contracting & risk management activities
- Deliver ePoint sites and reports
- Take responsibility for the correct and timely processing of energy prices, energy contracts and hedges of our clients around the world
- Make regular and random data quality checks
- Process and manage data of our clients in our SQL database
- Develop tailor-made reports/dashboards/graphs for our clients regarding their energy data portfolios, using Power BI
- Learn to become a specialist in different energy markets across the globe
- Perform various data management activities relating to data reporting.
What do we offer you?
- A long-term and full-time contract
- Balance Days resulting in 46 days off / year
- Flexible hybrid/remote work mode possible after initial onboarding
- Challenging and interesting environment in the continuously evolving energy sector
- International work environment with more than 25 nationalities and English as main working language
- Flat company structure with agile way of working, offering autonomy and empowerment from day 1
- Intensive onboarding procedure with a buddy and an internal training program
- Regular discussions on your role(s) and competencies with your team leader, allowing you to define a challenging career path that makes the best use of your talents and interests
- A competitive salary
What makes you a perfect match?
- Most of all, a can-do attitude and eagerness to learn
- 1-2 years of experience in data reporting or analytics is preferred
- Proficiency in Excel and a real interest in working with it every day; experience with Power BI is a plus
- Business English
- Accuracy, being meticulous and detail-focused
- High sense of ownership


USA only
Company Description

M3 is the global leader in digital solutions in healthcare. We work with a range of sectors including healthcare, life sciences, pharmaceuticals, biotechnology and charities; services provided to these sectors include market research, medical education programs, promotional programs, clinical development, job recruitment and clinic appointment services. M3 has grown by more than 20% year-on-year for the past 15 years, achieving over 1 billion dollars in revenue annually.
Founded in 2000, with start-up investment capital from Sony, M3’s mission is to make use of the internet to increase, as much as possible, the number of people who can live longer and healthier lives, and to reduce, as much as possible, the amount of unnecessary medical costs. Key achievements of M3 Inc. include:
- Named in Fortune’s 2020 ‘Future 50’ list, ahead of Facebook and Amazon
- Ranked in Forbes’ 2020 Global 2000 list of the world’s largest public companies
- M3 is the only company incorporated after the year 2000 to be included in the Nikkei 225 Index
- Listed in Forbes’ ‘Asia’s 200 Best Over a Billion’ in 2019
- In 2020 M3 founded the ‘M3: Stop COVID-19 Fund’ and pledged one billion yen to support COVID-19 related initiatives
M3’s legacy lies in the power of trusted physician platforms. Around the world, M3 organizations leverage these highly engaged digital communities to deliver medical education, job placement, and market research, to improve global patient outcomes.
Because of our continued growth, we are hiring for a Database Administrator at NAS Recruitment Innovation, an M3 company.
Job Description
- Manage databases (Azure SQL, MySQL, MongoDB) in Production/QA/Dev environments, including installation, configuration, upgrades, schema changes, etc.
- Provide best practices on design, availability, replication, security, audit, and backup/recovery methodologies. Continuously re-evaluate the infrastructure for opportunities for optimization
- Troubleshoot database issues and maintain database systems availability and scalability within production environments
- Perform capacity-planning exercises to properly identify required hardware, software, database configuration/architecture necessary to support application needs
- Experience (2-3 years or more) managing production database systems, preferably in a SaaS environment.
- Understanding of database packaged releases, patching strategy, and enforcing standards via tools and automation
- Experience with database server tuning, query optimization, replication strategies, storage engines, monitoring, capacity planning, backup, HA, etc.
- Experience in Analytics/BI/Data Warehouse, specifically Power BI/Azure Data Factory is a plus
- Ability to develop procedures, processes, and conventions to standardize database operations
- Understanding of RDBMS security administration
- Experience in scripting languages of your choice (PowerShell, Python, etc.) for monitoring and custom automation preferred
- Experience working with source code management tools (Git)
- Understanding of automated configuration management (Ansible, Terraform)
- Experience with Cloud based systems or Virtual environments (AWS, Azure, Google Cloud)
- Familiarity with working in a GDPR/CCPA/HIPAA-compliant environment, including how to design and follow procedures and write documentation to maintain compliance, preferred
- Previous Development/Operations experience a plus
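The scripting-for-monitoring requirement above might look like the following minimal sketch; it probes a SQLite connection purely for illustration (a production check would target the real database driver), and the latency threshold is an arbitrary assumption.

```python
import sqlite3
import time

def check_database(conn, max_latency_s=0.5):
    """Tiny health probe: run a trivial query, time it, and report status.
    The 0.5 s threshold is a placeholder, not a recommendation."""
    start = time.monotonic()
    try:
        conn.execute("SELECT 1").fetchone()
    except sqlite3.Error as exc:
        return {"status": "down", "error": str(exc)}
    latency = time.monotonic() - start
    return {"status": "ok" if latency <= max_latency_s else "slow",
            "latency_s": latency}
```

A scheduler (cron, a systemd timer, or an orchestration tool) would call a probe like this periodically and alert on any non-"ok" status.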
Benefits:
A career opportunity with M3USA offers competitive wages, and benefits such as:
- Health and Dental
- Life, Accident and Disability Insurance
- Prescription Plan
- Flexible Spending Account
- 401k Plan and Match
- Paid Holidays and Vacation
- Sick Days and Personal Day
*M3 reserves the right to change this job description to meet the business needs of the organization


Location: Remote, US
Authorization Coordinator
at Headspace Health
Remote
Headspace and Ginger have recently merged to become Headspace Health! While roles are still being recruited separately on our respective websites, new hires from this point forward will be joining Headspace Health. For more information, please speak with your recruiter!
About the Authorization Coordinator role at Headspace Health:
As we grow our health plan partnerships and clinical volume, the Authorization Coordinator will play an important role in ensuring insurance / EAP authorizations are in place, a key component of accessing clinical care and successfully billing payers. This role will be responsible for proactively and accurately coordinating and obtaining the information required in the authorization process. An enthusiastic, self-motivated and detail-oriented professional with a desire for a fast-paced, high-growth environment would do well in this role.
This is a remote role in the US.
How your skills and passion will come to life at Headspace Health:
- Proactively coordinate all information required of the authorization process, ensuring complete, accurate and timely information collection and entry
- Efficiently obtain and track authorization approvals from payers by phone, fax and electronic data exchange
- Accurately transcribe and verify fax authorizations to create eligibility files and electronic medical records
- Support the authorizations to billing pathway in submitting claims when needed
- Work claim rejections and denials related to authorizations
- Address questions from patients and other internal stakeholders (e.g. Member Support, Clinicians) regarding authorizations
- Effectively organize and manage workload to ensure timely completion
- Communicate responsively and positively with external partners and internal teammates
- Collaborate with the clinical operations team as needed in support of access to care and successful claims submission
- Meet weekly with manager and team to collaborate on individual and team objectives and outcomes
What you’ve accomplished:
- BA/BS in Social Sciences or related field; or at least three (3) years of relevant experience in a health care physician billing environment
- Proficient working knowledge of G-Suite and organizational skills required
- Strong organizational, time management, and problem-solving skills
- Strong attention to detail, fast learner
- Strong interpersonal and written communication skills
- Comfortable with ambiguity and excelling in a scaling start-up environment
- Self-directed & motivated
- Genuine connection to Ginger’s mission and behavioral health
Company Description

As the industry leader in compensation management, Payscale is on a mission to help job seekers, employees and businesses get pay right, and make sustainable fair pay a reality. Empowering more than 53 percent of the Fortune 500 in 198 countries, Payscale provides a combination of data-driven insights, best-in-class services, and innovative software to enable organizations such as Angel City Football Club, Perry Ellis International, United Healthcare, Vista, and The Washington Post to make fair and appropriate pay decisions. Pay is powerful. To learn more, visit www.payscale.com.
- Voted Seattle’s and Boston’s best places to work according to Built In 2022.
- Voted one of Seattle’s companies with the best benefits according to Built In 2022.
Does the opportunity to work with a truly unique compensation dataset that comprises nearly a third of the U.S. working population excite you? What about working on important issues like pay equity?
The Data team at Payscale works on a variety of different problems that have a direct impact on people's lives by improving compensation practices. We are a small team working on high impact projects that drive innovation in our customer facing products and across the organization. The Data team partners closely with data scientists, engineers, compensation professionals, product managers, and other internal stakeholders to design and develop the data sets that power our compensation tools.
What you’ll be doing:
- Serve as a project manager on larger and more complex updates and audits to our data sets. This can involve the creation of new international data sets
- Grow and maintain our job taxonomy by researching new and emerging jobs, writing job descriptions, and auditing existing jobs for improvements, consistencies, and reorganizations
- Work to improve our proprietary data sets by analyzing their structural similarities and differences and working to build crosswalks and translations between them
- Price jobs and conduct market analysis to determine competitive base and total compensation in multiple labor markets
- Audit existing and create new company job matches to our proprietary data sets
- Ensure the accuracy and quality of data sets by performing thorough and complex data analysis and validation
- Provide internal and external client support on all data sets including answering questions on data methodology, pulling sample data, compiling reports, creating data visualization dashboards, and joining client calls
- Work with cross functional teams to provide assistance on or insight to ongoing data analysis projects
- Understand the ins and outs of our compensation datasets to become a subject matter expert on the various factors that influence pay
- Participate in client-focused working sessions to help our customers align their company jobs to the jobs offered in our data sets
- Prepare files and partner with our Database Administrators to update our database tables
Technology we use:
- Excel, SQL, Snowflake, Tableau
Requirements
- This is an experienced-level position; prior experience in compensation and data analytics is required
- A background in statistics, economics, mathematics, compensation, or other related fields
- Customer service, focused on problem solving and troubleshooting
- Experience pricing jobs and conducting market analysis in multiple labor markets
- Research and data analysis skills and mindset
- An aptitude for data and experience writing well structured SQL queries to get information from disparate datasets
- Experience with relational databases
- Excellent Excel skills, including comfort with advanced functions such as VLOOKUPs and pivot tables; experience writing macros is a plus
- Experience with data visualization – Tableau experience preferred
- Ability to juggle multiple projects
- Project management skills
- Ability to effectively communicate and share results with both technical and non-technical partners
- A desire to be the best, with a positive work attitude and inquisitive nature. You will be working in a highly flexible and fast paced environment
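As a toy illustration of the SQL requirement above (a well-structured query pulling information from disparate datasets), here is a minimal sketch using Python's built-in sqlite3; the `company_jobs` and `market_pay` tables and their contents are invented for the example.

```python
import sqlite3

# Two (hypothetical) disparate datasets: a client's job list and a
# market pay survey, joined on a shared job code.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE company_jobs (job_code TEXT, title TEXT);
    CREATE TABLE market_pay   (job_code TEXT, median_base INTEGER);
    INSERT INTO company_jobs VALUES ('J100', 'Data Analyst'), ('J200', 'DBA');
    INSERT INTO market_pay   VALUES ('J100', 82000);
""")

# LEFT JOIN keeps unmatched company jobs, so gaps in market coverage
# show up as NULLs instead of silently disappearing.
rows = conn.execute("""
    SELECT c.title, m.median_base
    FROM company_jobs AS c
    LEFT JOIN market_pay AS m USING (job_code)
    ORDER BY c.title
""").fetchall()
print(rows)  # [('DBA', None), ('Data Analyst', 82000)]
```

The choice of LEFT JOIN over INNER JOIN is the kind of detail that matters when auditing data-set coverage: unmatched jobs stay visible for follow-up.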
Benefits and Perks
The Highlights:
All around awesome culture where together we strive to:
- Pursue excellence every day
- Create customer value
- Compete to win (and lose!) as a team
An open and inclusive culture where you’ll learn and grow through programs and resources like:
- Monthly company all hands meetings
- Regular opportunities for executive leadership exposure
- Access to top-notch learning courses through LinkedIn Learning
- Regular manager check-ins to drive performance and career growth
- A growing network of employee resource groups
- And more!
Our more standard benefits
- A remote-first company culture where you’re able to work from your home, with company-provided equipment to set you up for success
- Discretionary Paid Time Off, giving you flexibility to rest, relax and recharge away from work.
- 15 Paid Company Holidays, including an extended Fourth of July Break, World Mental Health Day, and Juneteenth
- A comprehensive benefits plan including medical, dental, life, vision, disability, 401(k) covered up to 100% by Payscale
- Up to 12 weeks of paid parental leave, including coverage for pregnancy loss
- Flexible Spending Account options for pre-tax employee allocations
Equal Opportunity Employer:
We embrace equal employment opportunity. PayScale is committed to a policy of equal employment opportunity for all applicants and employees. It is our policy that employees will not be subjected to unlawful discrimination on the basis of race, color, religion, sex, age, national origin, or ancestry, physical or mental disability, veteran or military status, marital status, sexual orientation, political ideology, and any other basis protected by federal, state, or local laws. This policy applies to all terms and conditions of employment, including but not limited to: recruitment, hiring, transfers, promotions, training, discipline, termination, compensation and benefits, performance appraisals, education, and social and recreational programs.

KORE Software is the global leader in engagement marketing solutions comprised of KORE Software, Hookit, Sports Alliance and KPI (KORE Planning and Insights). Over 900 brands, venues, and sports organizations trust KORE's tools and platform as a source of truth that drives smarter partnership decisions and deepens connections with their audiences. Through Sponsorship Management and Evaluation, Ticketing & Fan Engagement, and Data Management & Analytics, KORE's two-sided network unites corporate sponsors, sponsorship properties, and their fans with solutions that accelerate and inform an organization's ability to act. This allows teams to enhance the fan experience, maximize returns, and spend time where it matters.
Role Overview
KORE Planning & Insights (KPI) is KORE's consulting practice providing key insights and expertise to the Sport & Entertainment industry in addition to our best-in-class software. KPI works with rights-holders, brands and agencies delivering integrated analysis and strategy on all aspects of sports business.
This team member will report directly to the Sr. Director, Global Analytics & Strategy, and will focus on projects for rights holders and brand clients. The focus will be on partnership analytics solutions (partnership valuation and evaluation) as well as advisory solutions (go-to-market, partnership strategy). Furthermore, the role will work alongside KORE Software's insights, success, and product teams to identify opportunities to incorporate solutions into its suite of products, and work with the new business team to expand KPI's business.
The ideal candidate for this position will be self-motivated, able to meet deadlines, detail-oriented, data-focused, and able to work well in a team-based setting with internal and external stakeholders. They will be expected to take ownership of client projects, assume responsibility for delivering great work, and act as a trusted advisor to our clients. This will require strong project management, excellence in presentation building and delivery, and an in-depth understanding of sponsorship and marketing in the sports & entertainment industry. They are also expected to demonstrate the ability to build data visualizations (Tableau/PowerPoint) and the technical aptitude to expand their SQL scripting skills.
Duties and Responsibilities
Client Strategy
Partnership Advisory
- Leverage our deep relationships and knowledge base with agencies, brands, rights holders, and consumers across the globe to create strategies that drive both brand value and business results for our clients and their partners.
- Make informed recommendations to clients based upon their objectives, market trends, measurement, value drivers, potential opportunities etc.
- Facilitate the work to inform the strategy for key partner programs, and identification of growth opportunities to be used in renewals and negotiations.
Partnership Analytics
- Combine KPI's unique insights and technological expertise with additional research tools to inform strategy, partnership evaluation, and value associated with ROI & ROO.
- Provide partners with dynamic measurement and reporting across key metrics, and demonstrate performance of partners across a variety of data types and sources.
- Help implement the data analytics vision for business decision making through Tableau dashboard design for clients.
- Leverage data and reporting to drive greater adoption into customer software products
Account Management
- Lead the day-to-day management of select clients, with a focus on working directly with client team members, executing deliverables, measuring/analyzing data points, and providing strategic recommendations to both internal and external stakeholders.
- Ensure that projects are produced according to SOW, on-time, and are aligned with client expectations and budgets.
Departmental Development
Strategy & Offerings
- Identify offerings that KPI can provide clients & prospects to optimize commercial potential, and maximize current partnerships
- Work closely with KORE's success and sales organizations to understand current needs from both software and consulting clients, and design scalable product offerings to address client needs
- Monitor trends in the sports, entertainment, and technology landscapes to keep team members and executives up to date, and identify new opportunities.
Revenue Generation
- Design custom solutions for clients based on KPI's current offerings, along with the technical aspects needed to support them. Comfortable with how solutions can work both within and outside of the KORE Software suite of products.
- Work with sales and success team members to develop presentations and SOWs
- Lead presentations of proposals to clients both in written formats as well as face to face presentations
- Work with marketing team to develop internal education and sales material to be leveraged by the customer success and sales teams
Qualifications
- Bachelor's Degree required, MBA preferred.
- 3+ years of business strategy, consulting, or agency experience (preferably in the sports/entertainment industry)
- 1+ year(s) experience with data visualization tools (Tableau, PowerBI, Looker)
- Experience with SQL, SAS, R, Python, and/or another statistical programming language a plus
- Proficient in Microsoft Suite, particularly with PowerPoint, Word and Excel
- Strong organizational, analytical, and technical skills
- Knowledge of sponsorship and the sports industry
- Ability to communicate professionally in both a written and verbal context with external end-users and internal colleagues
We Provide
- Medical, vision, and dental insurance
- Peer to peer recognition program
- Cell Phone Allowance
- 401K Matching
Salary Range: $90,000 to $100,000 (dependent upon experience)

Company Description

We are Vericast. We create meaningful connections between businesses and the people they serve: how, when, and where it matters. By pushing the boundaries of data and insights, we spark discovery and inspire action to create profitable results.
Job Description
The API Platform Engineer – Data Mesh is a critical contributor to developing and growing a new data architecture and framework for scalable analytics and innovation called Data Mesh. This new architecture and framework enable an organization to build valuable datasets and make them discoverable, safe, reliable, and reusable by applying software product thinking to the job of making datasets easy and fun to use. This role will build the key architectural and feature components for the data mesh, including but not limited to search, catalog, data API and documentation, data lineage, and user access. Additionally, the project includes implementing the orchestration, transformation, and data movement of the datasets involved in the data mesh architecture.
Day to day responsibilities will include:
- Developing the architectural components of the Data Mesh using multiple frameworks, services, and languages such as Python, Java and JavaScript.
- Developing data pipelines using commercial and/or open-source data pipeline tools such as Apache Kafka, Apache Airflow, Confluent, AWS Data Pipeline, or AWS Glue.
- Developing microservices, authentication and authorization capabilities.
- Developing and documenting APIs using commercial, full API lifecycle management tools and best practices.
- Building unit test cases for data products and executing them for coverage.
- Contributing to a cross-functional Agile Scrum team using Agile Scrum best practices, ceremonies, and artifacts.
Qualifications:
- 3-5+ years of hands-on application engineering experience, preferably in a cloud or hybrid cloud environment.
- Extensive experience building APIs & micro-services using techniques like REST, OpenAPI, Swagger, and Functions (AWS Lambda) is required.
- Working knowledge of API security certification, authentication, authorization, IP security setup, and end point configuration is preferred.
- Proficiency in programming languages including Java, JavaScript, and Python is required.
- Experience building and documenting APIs with high availability and performance standards is required.
- Experience with APIs feeding frontend web applications is preferred but not required.
- Experience working in an Agile Scrum team is required.
- Experience building data pipelines and data orchestration systems is preferred.
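The data-orchestration side of the role comes down to running dependent pipeline steps in the right order, which is what tools like Apache Airflow automate. A minimal stdlib-only sketch of the idea (task names are invented for illustration, not taken from this posting):

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline: each task maps to the set of tasks it depends on.
pipeline = {
    "extract_transactions": set(),
    "clean_transactions": {"extract_transactions"},
    "build_features": {"clean_transactions"},
    "publish_dataset": {"build_features"},
}

def run_order(dag):
    """Return one valid execution order for the task graph."""
    return list(TopologicalSorter(dag).static_order())

print(run_order(pipeline))
# → ['extract_transactions', 'clean_transactions', 'build_features', 'publish_dataset']
```

Orchestrators layer scheduling, retries, and monitoring on top of exactly this kind of dependency graph.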
Vericast offers a generous total rewards benefits package that includes medical, dental and vision coverage, 401K matching and flexible PTO. A wide variety of additional benefits like life insurance, employee assistance and pet insurance are also available, not to mention smart and friendly coworkers!
At Vericast, we don’t just accept differences - we celebrate them, we support them, and we thrive on them for the benefit of our employees, our clients, and our community. As an Equal Opportunity employer, Vericast considers applicants for all positions without regard to race, color, creed, religion, national origin or ancestry, sex, sexual orientation, gender identity, age, disability, genetic information, veteran status, or any other classifications protected by law. Applicants who have disabilities may request that accommodations be made in order to complete the selection process by contacting our Talent Acquisition team at [email protected] or 616-541-3145.
#LI-LC1
#DICE

Company Description
Job Description

Choose challenge. Choose pace. Choose growth.
Our Tech department is the driving force behind one of the most successful start-ups in Europe since 2012. Based in Berlin, we have an innovative and driven team that is using cutting-edge technologies to redefine the used car market.
You will be part of a committed and enthusiastic team of Data Scientists, working with statistical modeling and dealing with challenging problems. As a Data Scientist at AUTO1 Group (which includes wirkaufendeinauto.de, AUTO1.com and Autohero), you have the opportunity to work on advanced topics (pricing, recommendation systems, etc) that utilize your machine learning techniques. You will build machine learning models and deploy the models in production environments to drive AUTO1 business forward.
Together with business analysts and production teams, you will help to revolutionize the process and user experience of how people sell and buy cars online and make a real impact on the market.
Are you already located in the EU and looking for a remote position? If we have an entity where you are, you are good to go. Otherwise, we are happy to assist you with your relocation to our Berlin headquarters!
Your role
- Translate business needs into practical data science questions, exercise your expert knowledge to deliver practical solutions, and bring positive impact
- Handle large amounts of data to drive data-driven business decisions
- Partner with engineering and product teams to drive actionable data insights
- Set priorities and support junior members to achieve business goals
- Identify business opportunities, collaborate with stakeholders and partners in tech to drive business growth
Your skills
- Advanced statistical and quantitative knowledge with practical experience, we welcome candidates with academic experience in a quantitative research field
- Demonstrated communication and visualization skills that enable you to present findings in a clear and unambiguous way
- Hands-on experience with a broad range of machine learning techniques and the ability to identify the appropriate technique in different scenarios
- Experience in effective collaboration, you are a natural team player and communicate fluently in English
- Proficient in SQL and Python. Experience in AWS/Azure/Google Cloud is a plus
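The pricing work mentioned above typically starts with a regression of price on vehicle attributes. A toy sketch with invented numbers, using closed-form ordinary least squares on a single feature (real pricing models would use many features and a library like scikit-learn):

```python
# Fit price ~ a + b * mileage by ordinary least squares (single feature).
def fit_ols(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope: covariance of x and y over variance of x.
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x  # intercept
    return a, b

# Made-up training data: mileage (km) vs. sale price (EUR).
mileage = [20_000, 40_000, 60_000, 80_000]
price = [18_000, 15_500, 13_000, 10_500]

a, b = fit_ols(mileage, price)
print(round(a + b * 50_000))  # predicted price at 50,000 km → 14250
```

In production, such a model would be retrained on fresh market data and served behind an API so the app can price cars in real time.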
Our offer
- Your work-life balance and working from home are part of our culture, so we offer you working from home options, flexible working hours and 30 days of vacation
- Relocation support to Germany includes visa assistance, apartment search, and help with costs
- Educational budget for your personal growth
- Team and company events like Hackathons, International Nights, Company Party, Football and more
- More than 90 different nationalities make for a truly international and diverse working environment
Apply by uploading your CV.
Contact: Igor Popov
AUTO1 Group is Europe's leading digital automotive platform. As a dynamic tech company, we are revolutionizing the automotive industry with our brands wirkaufendeinauto.de, AUTO1.com and Autohero. Our strong team of 5,000 people is dedicated to making Europe-wide car trading and transport as fast and stress-free as possible for our customers. Join us in building the best way to buy and sell cars online.
At AUTO1 Group we live an open culture, believe in direct communication, and value diversity. We welcome every applicant, regardless of gender, ethnic origin, religion, age, sexual identity, disability, or any other non-merit factor.
Company Description
Job Description

We’re looking for a Senior Data Reliability Engineer to join Procore’s Data Division. In this role, you’ll help build Procore’s next generation construction data platform for others to build upon including Procore developers, analysts, partners, and customers.
As a Senior Data Reliability Engineer, you’ll partner with other engineers and product managers across Product & Technology to develop data platform capabilities that enable the storage and management of data for use in transactional and analytical applications. To be successful in this role, you’re passionate about distributed data storage systems on the cloud with experience in relational and non-relational databases. If you’re a seasoned engineer comfortable and excited about building our next generation data platform and translating problems into pragmatic solutions that open up the boundaries of technical possibilities—we’d love to hear from you!
This position reports into the Senior Manager, Foundational Data Platform and can be based in any of our US offices or remote. We’re looking for someone to join us immediately.
What you’ll do:
- Participate in the design and implementation of our next generation data platform for the construction industry, primarily focused on building and supporting distributed databases
- Actively participate with our engineering team in all phases of the software development lifecycle, including requirements gathering, functional and technical design, development, testing and roll-out, and support
- Design and implement self-service tools for developers to more easily deploy and manage databases for their services
- Help identify and propose solutions for technical gaps in our data platform by running proof of concepts and experiments
- Work alongside our Product, UX, and IT teams, leveraging your experience and expertise in data systems to influence our product roadmap, developing innovative solutions that add additional capabilities to our tools
- Help uplevel teammates by conducting code reviews, providing mentorship, pairing, and training opportunities
- Stay up to date with the latest data technology trends
What we’re looking for:
- Bachelor’s Degree in Computer Science or a related field is preferred, or comparable work experience
- 2+ years of experience provisioning and operating databases at scale like Postgres, Elasticsearch, DynamoDB, Neo4j, Mongo, Cassandra, etc.
- 5+ years of experience building and operating cloud-based, highly available, distributed, and scalable databases with large amounts of data
- Experience and proficiency managing infrastructure as code
- Experience and proficiency in one or more scripting or programming languages (Python, Shell Scripting, Java, Ruby, Go, etc.)
- Experience implementing and supporting data lakes and streaming technologies (Kafka) is a nice-to-have
- Strong passion for learning, always open to new technologies and ideas
If you'd like to stay in touch and be the first to hear about new roles at Procore, join our Talent Community.
About Us
Procore Technologies is building the software that builds the world. We provide cloud-based construction management software that helps clients more efficiently build skyscrapers, hospitals, retail centers, airports, housing complexes, and more. At Procore, we have worked hard to create and maintain a culture where you can own your work and are encouraged and given resources to try new ideas. Check us out on Glassdoor to see what others are saying about working at Procore.
We are an equal opportunity employer and welcome builders of all backgrounds. We thrive in a diverse, dynamic, and inclusive environment. We do not tolerate discrimination against employees on the basis of age, color, disability, gender, gender identity or expression, marital status, national origin, political affiliation, race, religion, sexual orientation, veteran status, or any other classification protected by law.
Perks & Benefits
You are a person with dreams, goals, and ambitions—both personally and professionally. That's why we believe in providing benefits that not only match our Procore values (Openness, Optimism, and Ownership) but enhance the lives of our team members. Here are just a few of our benefit offerings: generous paid vacation, employee stock purchase plan, enrichment and development programs, and friends and family events.


usa only
Company Description

Etsy is the global marketplace for unique and creative goods. We build, power, and evolve the tools and technologies that connect millions of entrepreneurs with millions of buyers around the world. As an Etsy Inc. employee – whether a team member of Etsy, Reverb, Depop, or Elo7 – you’ll tackle unique, meaningful, and large-scale problems alongside passionate coworkers, all the while making a rewarding impact and Keeping Commerce Human.
Job Description
What’s the role?
Etsy's millions of buyers and sellers generate petabytes of data that we use for data science, experimentation, marketing, and powering seller and buyer experiences. The Data Engineering team's work helps Etsy continuously increase the quality of these experiences by providing powerful datasets and services.
The Data Engineering team is looking for an enthusiastic and motivated Engineering Manager to lead the Batch Engines team in our Data Platform department. This department owns all the infrastructure and developer tools that enable other engineers and data scientists to write data pipelines. The Batch Engines team is responsible for the infrastructure and tools around our core batch processing engines: Spark and Dataflow. Opportunities include improving reliability and efficiency, as well as the developer experience, by making things like environment setup, interactive exploration, job optimization, monitoring, and testing easy for our internal users.
This is a full-time position reporting to the Engineering Director of Data Platform & Infrastructure. For this role, we are considering candidates based in [country] who are either remote, flex, or office-based. Etsy offers different work modes to meet the variety of needs and preferences of our team. Learn more about our flexible work options and vaccination policy here.
What’s this team like at Etsy?
- Our team is dedicated to building software that reduces toil, increases access to data, and respects privacy.
- We seek to build scalable systems that are flexible but maintainable.
- Our Big Data stack includes Spark, Kubernetes, Python, Java/Scala, and we’re users of Google Cloud Platform, particularly Dataproc, Dataflow, and BigQuery.
What does the day-to-day look like?
- You are responsible for attracting, mentoring, supporting and retaining a team of top-notch engineers who design, build and support Etsy’s Big Data infrastructure.
- You drive the technical and strategic vision of the team, and help translate day-to-day issues into coherent and robust systems.
- You collaborate with external (sometimes non-technical) stakeholders and customers throughout the company.
- You plan and prioritize the team’s road map, and champion team projects internally.
- Of course, this is just a sample of the kinds of work this role will require! You should assume that your role will encompass other tasks, too, and that your job duties and responsibilities may change from time to time at Etsy’s discretion, or as otherwise required by applicable local law.
Qualities that will help you thrive in this role are:
- You have a passion for data related services, and understand good systems design.
- If you have specific experience building and supporting platforms and pipelines for data processing, that is a plus, as is any experience with cloud services.
- If you have experience building platforms or tools where the main users are other engineers or data scientists, that is a plus.
- You are comfortable translating sometimes ambiguous business problems into plans that our engineers can act on and feel passionate about.
- You care about facilitating learning and growth on the team - you put our engineers first.
- You are thoughtful about all the ways an engineering manager can have an impact.
What's Next
If you're interested in joining the team at Etsy, please share your resume with us and feel free to include a cover letter if you'd like. As we hope you've seen already, Etsy is a place that values individuality and variety. We don't want you to be like everyone else -- we want you to be like you! So tell us what you're all about.
Our Promise
At Etsy, we believe that a diverse, equitable and inclusive workplace furthers relevance, resilience, and longevity. We encourage people from all backgrounds, ages, abilities, and experiences to apply. Etsy is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. If, due to a disability, you need an accommodation during any part of the interview process, please let your recruiter know. While Etsy supports visa sponsorship, sponsorship opportunities may be limited to certain roles and skills.
For U.S. roles only:
Many Etsy roles are open to remote candidates, and you'll be able to identify which ones within the location header of each job description. We're open to remote hires from all U.S. states except Hawaii and Alaska. For candidates who will work remotely from Colorado, visit this link for information related to Colorado's Equal Pay for Equal Work Act.

Company Description

We are a multinational team of individuals who believe that with the right knowledge and approach, technology is the answer to the challenges that businesses face today. We have been bringing this knowledge and approach to our customers since 2016, helping them translate technology into their success.
With Swiss roots and our own development team in Lima and throughout the region, we offer the best of both cultural areas: the talent and passion of Latin American professionals combined with Swiss organization skills and mindset.
Job Description
We are looking for a Snowflake Data Engineer to work 100% remotely.
Qualifications
- Only for residents in Latin America
- At least 6 years of experience as a Data Engineer
- 5+ years ETL/SQL experience
- 1-2 years of hands-on experience with Snowflake/SnowSQL
- AWS experience preferred
- Advanced English level
Nice to have:
- Informatica PowerCenter

Company Description

We are a multinational team of individuals who believe that with the right knowledge and approach, technology is the answer to the challenges that businesses face today. We have been bringing this knowledge and approach to our customers since 2016, helping them translate technology into their success.
With Swiss roots and our own development team in Lima and throughout the region, we offer the best of both cultural areas: the talent and passion of Latin American professionals combined with Swiss organization skills and mindset.
Job Description
Bertoni Solutions is looking for a Senior Azure Data Engineer to work 100% remotely from Latin America. We are looking for strong experience, fluent English, and a high level of interest in working remotely for US clients.
Qualifications
- 8 years of experience
- Microsoft Azure: Databricks, ADF, ADLS, Synapse, etc.
- PySpark
- Spark
- Experience with Azure Data Factory
- Experience in EDW design with Synapse
- Strong SQL knowledge
- Desirable: Scala
- Advanced written and conversational English

Company Description

CoEnterprise is an award-winning B2B software and professional services company headquartered in New York City. Founded in 2010, CoEnterprise delivers Supply Chain and Business Analytics solutions and services that transform how companies connect and do business. CoEnterprise approaches each relationship and engagement from the perspective of three core values: collaboration, ownership, and excellence. We value collaboration with both our partners and clients in order to present the best possible outcome for our customers. Our vow to accept ownership ensures that our entire staff takes pride in our work and it is our commitment to excellence that ensures that this work is at the highest standard possible.
Job Description
Responsibilities
- Elicit, understand and restate complex business challenges related to realizing an organization’s advanced analytics data strategy
- Define and visualize data architecture patterns for analytics solutions, modeling both AS-IS and TO-BE data architecture topologies for both on-prem and the cloud
- Identify, propose and justify data warehousing, data modeling and analytics architectures for BI, data science, ad-hoc query analysis, data sharing and application development
- Synthesize customer analytics challenges into solutions for the Snowflake Data Cloud
- Establish confidence in recommendations via product expertise, custom product demonstrations, technical phone calls, RFP/RFI responses, product roadmap discussions, architectural topology options & business process diagrams
- Articulate and support selected recommendations consultatively and with a business-minded awareness of advancing a deal
- Partner with sales to deliver on revenue plan
- Demonstrate and advise in our core Analytics platforms including:
- Snowflake
- Tableau Desktop, Alteryx, Dataiku or equivalent BI tools
- Alteryx and/or related data preparation tools
- Demonstrate, advise and make solution recommendations similar to the following tools and skillsets:
- DataRobot, Dataiku, Databricks, Spark and related data science tools
- Python, ML/AI technology options and best practices
- The current ecosystem of data warehousing solutions including those on Azure, AWS and GCP
- API integrations
- Communicate and champion the methods, processes, and certifications which make CoEnterprise a leading provider of Cloud Analytics Services
- Engage with both internal teams and customers in a consultative and approachable manner
- Design and deliver presentation materials within established content and style parameters
Professional Skills
- Proficient in delivering software demonstrations in-person and virtually
- Proven experience working with employees at all levels of an organization
- Comfortable developing and presenting solutions
- Experience creating technical business documentation like workflow diagrams, proposals, SOWs, RFPs and RFIs, etc.
- Structured and methodical approach to creating and maintaining notes, deliverables, statements of work and other work artifacts in accordance with team standards
- Strong verbal and written communication skills
- Comfortable prioritizing and managing multiple, often competing, workstreams effectively.
- Must be a continually curious, committed, and efficient learner of new business and technology skills, highly responsive to emerging sales requirements
Other
- Willingness to travel 45% or more as needed
- 3+ years' prior experience within a mid-market or Enterprise level consulting, delivering, or selling SaaS solutions and concepts
- 3+ years building analytics solutions in the cloud, including design and delivery of data lakes, data warehouses and data marts
- 2+ years working with the Snowflake Data Cloud
- Advanced SQL skills
- Proficient coding skills in at least one of the following: Python, JavaScript, R or other data science language
- Demonstrable experience with Enterprise-class Analytics software systems like Tableau, Alteryx, and Snowflake
- Familiarity implementing solutions in at least two of the following cloud providers: AWS, Azure, Google Cloud, and IBM Cloud
- Familiarity w/system integration methods such as web services, SOAP APIs & REST APIs
- Familiarity with Advanced Analytics Applications
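The "advanced SQL" the role asks for usually means things like window functions. A small self-contained illustration using Python's built-in sqlite3 module (the schema and data are invented for the example): rank deals within each region by amount.

```python
import sqlite3

# In-memory database with a toy "deals" table.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE deals (region TEXT, client TEXT, amount INT)")
con.executemany(
    "INSERT INTO deals VALUES (?, ?, ?)",
    [("East", "A", 100), ("East", "B", 300), ("West", "C", 200)],
)

# RANK() OVER a per-region partition: the top deal in each region gets rank 1.
rows = con.execute("""
    SELECT region, client, amount,
           RANK() OVER (PARTITION BY region ORDER BY amount DESC) AS rnk
    FROM deals
    ORDER BY region, rnk
""").fetchall()

for row in rows:
    print(row)
# → ('East', 'B', 300, 1)
#   ('East', 'A', 100, 2)
#   ('West', 'C', 200, 1)
```

The same pattern (partitioned window functions over fact tables) carries over directly to Snowflake SQL.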
Come experience our spirited culture and work with a smart, dedicated and high-energy team in a stable and fast-growing company! Here is a small sample of our benefits and perks we offer:
- Comprehensive Health Insurance with generous employer contribution
- Matching 401(k) - $$$$
- Generous PTO Policy
- Virtual Team Lunches
- Wellness Program
- Monthly Mingles
- Birthday Celebrations
- Virtual Events- Happy Hours, Casino Night, Magic Show, Scavenger Hunt of National History Museum, Game Nights and more
At CoEnterprise, we believe ersity drives innovation. We are committed to creating and maintaining a workplace in which all employees have an opportunity to participate and contribute to the success of our business. In recruiting for our team, we welcome the unique contributions that you can bring. We value employees for their differences represented by a variety of dimensions including demographics, behaviors, work style and perspectives.
We are an AA/EOE employer.


location: remoteus
Administrative Coordinator (Remote)
Job Locations: US-Remote
Requisition ID: 2022-78712
# of Openings: 1
Job Function: Clinical
Job Schedule: Regular Full-Time
Job Introduction
Maximus is currently hiring for an Administrative Coordinator. In this position, you will be performing administrative tasks to ensure compliance with all contract level requirements. This is a fully remote position with a salary range of $17-22/hour; pay is based on overall experience and qualifications.
Job Description Summary
Perform administrative tasks to ensure compliance with all contract level requirements
Job Summary
Essential Duties and Responsibilities:
- Provide customer support to internal and external customers
- Responsible for assigning and coordinating referrals for contract work to appropriate parties
- Computer data entry
- Perform all job duties in compliance with Person First standards, HIPAA guidelines, and company confidentiality policies and procedures.
- Complete assignments within established compliance standards and timelines
- Monitor multiple work queues daily to ensure cases move quickly through each process stage.
- Identify and resolve data errors
- Perform other related duties as assigned.
- Excellent written and verbal communication skills
- Excellent interpersonal and customer service skills
- Proficient in Microsoft Office Suite
- Excellent organizational skills and attention to detail
- Ability to work in a fast-paced environment
- Ability to work independently
Minimum Requirements:
- High School Degree or equivalent and 0-2 years of relevant experience, or Associate Degree
- Clinical office experience preferred
Education and Experience Requirements
Minimum Requirements:
- High School Degree or equivalent and 2 years of relevant experience, or Associate Degree
- Clinical office experience preferred
- Self-starter with sense of urgency and the ability to work in fast-paced, complex, and deadline-driven environment
- Strong organizational skills including time management, calendar management, scheduling, project management, records and filing and using digital resources
- Ability to complete assignments with attention to detail and a high degree of accuracy
- Strong interpersonal skills including tact, diplomacy and flexibility to work effectively with all members of the organization
- Ability to work as a team member, as well as independently
- Demonstrated ability to communicate information clearly and accurately both verbally and in writing
- Ability to exercise judgment and discretion with highly sensitive and confidential information
- Proficient with Microsoft Office (intermediate level in Excel, PowerPoint, Sharepoint, and Microsoft Outlook)
MAXIMUS Introduction
Since 1975, Maximus has operated under its founding mission of Helping Government Serve the People, enabling citizens around the globe to successfully engage with their governments at all levels and across a variety of health and human services programs. Maximus delivers innovative business process management and technology solutions that contribute to improved outcomes for citizens and higher levels of productivity, accuracy, accountability and efficiency of government-sponsored programs. With more than 30,000 employees worldwide, Maximus is a proud partner to government agencies in the United States, Australia, Canada, Saudi Arabia, Singapore and the United Kingdom. For more information, visit https://www.maximus.com.
EEO Statement
EEO Statement: Active military service members, their spouses, and veteran candidates often embody the core competencies Maximus deems essential, and bring a resiliency and dependability that greatly enhances our workforce. We recognize your unique skills and experiences, and want to provide you with a career path that allows you to continue making a difference for our country. We’re proud of our connections to organizations dedicated to serving veterans and their families. If you are transitioning from military to civilian life, have prior service, are a retired veteran or a member of the National Guard or Reserves, or a spouse of an active military service member, we have challenging and rewarding career opportunities available for you. A committed and erse workforce is our most important resource. Maximus is an Affirmative Action/Equal Opportunity Employer. Maximus provides equal employment opportunities to all qualified applicants without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status or disabled status.
Pay Transparency
Maximus compensation is based on various factors including but not limited to a candidate’s education, training, experience, expected quality and quantity of work, required travel (if any), external market and internal value analysis including seniority and merit systems, as well as internal pay alignment. Annual salary is just one component of Maximus’s total compensation package. Other rewards may include short- and long-term incentives as well as program-specific awards. Additionally, Maximus provides a variety of benefits to employees, including health insurance coverage, life and disability insurance, a retirement savings plan, paid holidays and paid time off. Compensation shall be commensurate with job duties and relevant work experience. An applicant’s salary history will not be used in determining compensation.
What we are looking for –
A passionate, hungry, and motivated individual that is eager for a chance to join a young startup experiencing rapid growth. At OmniData, we are searching for a remote Senior Azure Data Engineer that has experience working on data warehousing and analytics projects, a strong technical aptitude, and the ability to provide direction for clients on the best strategies for their analytics goals. In return, we offer deep mentorship, a great work/life balance, and the opportunity to be part of creating a consulting firm that makes a difference for our clients!
What you will do –
You will work on various Big Data, Data Warehouse and Analytics projects for our world class customers. In addressing complex client needs, you will be integrated into appropriately sized and skilled teams. This will give you the opportunity to analyze requirements, develop data and analytical solutions, and execute as part of the project team, all while working with the latest tools, such as Azure Synapse Analytics and related Microsoft technologies.
Your Duties and Responsibilities –
- Contribute collaboratively to team meetings using your experience base to further the cause of innovating for OmniData clients.
- Instill confidence in the client as well as your teammates
- Work independently toward client success, at the same time knowing your own limitations and when to call on others for help.
What you must have to be considered –
- 2-3+ years of experience in Analytics and Data Warehousing on the Microsoft platform
- 2-3+ years working with Microsoft SQL Server
- Experience working with the Microsoft Azure stack (e.g. Synapse, Databricks, Data Factory, etc.)
What would be nice for you to have –
- Experience with Python
- Experience gathering requirements and working within various project delivery methodologies
- Experienced working as a customer facing consultant
- Exposure to DAX
- Strong communication skills tying together technologies and architectures to business results
- Some travel may be required (up to 20%) post-COVID-19
Benefits and Perks –
- Competitive salary and benefits commensurate with experience
- Mentorship from highly regarded industry specialists
- Exposure to the latest and greatest Microsoft technologies
- High growth potential for those with an entrepreneurial spirit.
About OmniData –
OmniData is a Portland based Data and Analytics consulting firm leveraging the Microsoft technology stack to help organizations build their Modern Data Estates, designed to serve their digital innovation needs for many years to come. To do this, we apply deep experience in Solution Architecture, Data, Analytics, and Technology to simplify the complex.
OmniData is offering you the opportunity to work with the entire lifecycle of large data projects, focused on next-generation data warehousing, with surface points to Analytics, Machine Learning, and AI. We offer a collaborative work culture that enables you to produce client results with a safety net from your team. You will work closely with very experienced consultants who can provide mentorship and career guidance. At the same time, you will be rewarded for learning fast and executing within our teams to provide solutions for OmniData clients.
OmniData Is An Equal Opportunity Employer And All Qualified Applicants Will Receive Consideration For Employment Without Regard To Race, Color, Religion, Sex, National Origin, Disability Status, Protected Veteran Status, Or Any Other Characteristic Protected By Law.

Ness Digital Engineering provides strategic IT consulting to global enterprises. Our DevOps and Infrastructure practice provides solutions, methodologies, and strategic guidance for digital transformation, containerization, and automation. Our Financial Services team offers strong domain expertise and technology acumen to deliver feature-focused solutions in Capital Markets.
We solve complex business problems with technology and insight. Our business domain knowledge, technology expertise, and Agile delivery process have delivered seamless Digital Transformations at some of the largest customers globally. We’re an AWS Premier Consulting Partner, a Premier Confluent Systems Integrator and a Snowflake Select Services Partner.
As a Data Engineer you will:
- Serve as an expert technologist in implementing ETL data pipelines, streaming data solutions, data lakes and data warehouses
- Work on the architecture, design, implementation, and testing of advanced data solutions for Ness clients
- Exhibit expertise in data modeling, data warehouses, data lakes, and building ETL data pipelines
- Modernize our clients’ data platforms, transitioning to cloud-hosted solutions using AWS, Azure, GCP, and Snowflake
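The extract → transform → load flow described above can be sketched end-to-end in a few lines. This is a toy illustration using only Python's standard library — the sample data, table name, and schema are invented for the example, and a real pipeline would target Redshift, Synapse, or Snowflake rather than SQLite:

```python
import csv
import io
import sqlite3

# Extract: read raw rows from a CSV source (here an in-memory sample;
# in practice this might be an S3 object or a database export).
RAW = """order_id,amount,region
1,19.99,us-east
2,,us-west
3,5.00,us-east
"""

def extract(text):
    return list(csv.DictReader(io.StringIO(text)))

# Transform: drop rows with missing amounts and cast types
# (a basic data-cleansing step).
def transform(rows):
    clean = []
    for row in rows:
        if not row["amount"]:
            continue
        clean.append((int(row["order_id"]), float(row["amount"]), row["region"]))
    return clean

# Load: write the cleaned rows into a warehouse-style table.
def load(rows, conn):
    conn.execute("CREATE TABLE orders (order_id INT, amount REAL, region TEXT)")
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW)), conn)
total = conn.execute("SELECT SUM(amount) FROM orders").fetchone()[0]
```

The same extract/transform/load separation scales up to tools like Glue, Data Factory, or Fivetran; only the endpoints and orchestration change.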
Requirements
- Strong relational database skills, including SQL and data modeling
- Experience using ETL / ELT tools such as EMR, Fivetran, Informatica
- Experience with data collection, data cleansing, and ETL processes
- UNIX/Linux skills including shell scripts and basic system administration
- Programming skills using modern programming languages like Python, Java, or JavaScript
- Cloud experience strongly preferred using AWS (Redshift, RDS, EMR, Glue) or Azure (Synapse Analytics, Azure SQL)
- Experience with data visualization tools such as PowerBI or Tableau
- Excellent verbal and written communication skills
- Ability to manage multiple projects simultaneously
Additional Desired Skills:
- Programming languages including Java, Scala, C++, C#, JavaScript, R
- Experience with software test automation
- Data warehouse technologies such as Snowflake, AWS Redshift, and/or Azure Synapse Analytics
- Experience with AWS ML technologies such as SageMaker
- Experience with big data analytics tools such as Spark or Databricks
- Experience with streaming data analytics, Kafka, and/or Kinesis Streams
Education and Certification Requirements:
- An undergraduate degree is usually required, preferably in a STEM discipline.
- AWS or Azure certifications desirable (Solution Architect, Machine Learning, or Big Data specialty certification)
- Snowflake SnowPro certification desirable
Benefits
- Flexible work environment with a globally distributed team
- Competitive compensation packages including performance bonuses
- Paid vacation and sick time off
- Employer-subsidized medical, dental, and vision insurance
- Company-paid short- and long-term disability insurance
- A culture of cooperation and support
- Continual professional and personal development through employer-paid training and certifications

About the project
Spate is the machine intelligence platform used by top industry beauty brands. We analyze over 20 billion search signals to spot the next big beauty trend and help brands with their marketing/product development strategies.
As we expand to new Asian markets (Japan), we are looking to hire a Data Operations Manager to help build and manage our expanding datasets. In this role, you will be in charge of overseeing Spate's data expansion and entry process. The role requires a keen eye for detail and a passion for organization and project management. We would be looking at 10 to 20 hours a week (we can be flexible depending on your availability).
Responsibilities
- Oversee data process and quality assurance for each vertical/market
- Manage relationships with the data entry team
- Analyze datasets and investigate discrepancies or inconsistencies
- Curate interesting and unique trends for Spate content; brainstorm compelling topic ideas for upcoming reports
Requirements
Minimum qualifications
- 1-3 years of experience
- Exceptional verbal and written skills
- Meticulous and organized, with a high level of attention to detail
- Proven problem-solving skills using deductive reasoning, understanding hierarchical relationships, and identifying gaps in logic
- Demonstrated project management skills and ability to manage multiple priorities
- Self-starter and ability to work independently
Preferred qualifications
- Experience in SEA/SEM and SEO, CRM, or copywriting
Benefits
About Spate
At Spate, we use data science to predict the next big consumer trend in beauty, personal care & food.
Spate was founded in 2018 by Yarden Horwitz & Olivier Zimmer, two ex-Googlers who led the trendspotting division at Google and uncovered trends such as turmeric, cold brew, and face masks. Spate has been funded by the prestigious Y Combinator incubator and Initialized Capital. We currently have ~90 clients in the U.S., mainly in the beauty space, from direct-to-consumer brands to big names such as L’Oréal, Estée Lauder, and Unilever.
As two ex-Googlers with a passion for using data to spot new patterns in consumer behavior, we have made it our mission to build the world’s greatest consumer trends prediction platform of all time. And not just because we want to be trendy, but because we want to help brands get better at giving consumers what they really want.
Brands waste over $200BN every year due to product launch failures and inventory waste. By spotting Turmeric, we were able to tell brands to stop wasting money on kale products and provide consumers with glorious golden milk lattes instead - because that’s what consumers want.
How do we do this? We tap into publicly available consumer data (anonymous and aggregated) to identify interesting shifts in consumer behavior. We leverage the latest available technology in ML to solve problems in ways that have never been explored before.
Why Spate?
- Join a well-funded company that is working with the top brands in consumer goods
- Work directly with the founders to set the direction of the company
- Grow in a fast-paced environment
- Always be up-to-date on the latest trends!
We enjoy a casual atmosphere, but our culture is about getting things done. We are passionate yet pragmatic when it comes to solving problems in a fast-paced environment. Our standards are high, but we thrive on working with people we respect and can learn from. We’re flexible on work styles, as long as everyone is getting their work done - and getting it done well.
We are an equal opportunity employer where diversity and inclusion are central pillars of our company strategy. We look for applicants who understand, embrace, and thrive in a multicultural and increasingly globalized world. We do not discriminate based on race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.

Here at Hugging Face, we’re on a journey to advance good Machine Learning and make it more accessible. Along the way, we contribute to the development of technology for the better.
We have built the fastest-growing open-source library of pre-trained models in the world. With over 100 million installs and 65K+ stars on GitHub, more than 10,000 companies are using HF technology in production, including leading AI organizations such as Google, Elastic, Salesforce, Algolia, and Grammarly.
About the Role
As a data engineer for vision datasets, you will work on a 3-6 month project to catalyze progress in computer vision for the open-source and research community.
The project will deal with:
- analyzing publicly available vision datasets,
- providing better access to selected datasets within the 🤗 Datasets library,
- improving vision data pre-and post-processing features within the 🤗 Datasets library,
- evaluating state-of-the-art computer vision systems on a variety of vision/image datasets.
During your project, you will closely work with the vision community. The goal is to catalyze research in computer vision by making image preprocessing as easy as possible for as many datasets as possible, as well as providing reproducible baselines for state-of-the-art computer vision systems and empowering the vision community to improve current dataset documentation practices.
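To give a flavor of what image pre-processing involves, here is a toy, dependency-free sketch of two common steps — nearest-neighbour resizing and normalisation — on a nested-list grayscale "image". A real pipeline would of course use the 🤗 Datasets / NumPy / Pillow stack; the function names and sample data here are invented for illustration:

```python
def resize_nearest(img, out_h, out_w):
    """Nearest-neighbour resize of a 2-D grayscale image (list of rows)."""
    in_h, in_w = len(img), len(img[0])
    return [
        [img[r * in_h // out_h][c * in_w // out_w] for c in range(out_w)]
        for r in range(out_h)
    ]

def normalize(img, max_val=255):
    """Scale pixel values into [0, 1], as most vision models expect."""
    return [[p / max_val for p in row] for row in img]

# A 4x4 gradient "image" with 8-bit pixel values.
image = [
    [0, 64, 128, 255],
    [0, 64, 128, 255],
    [0, 64, 128, 255],
    [0, 64, 128, 255],
]
small = resize_nearest(image, 2, 2)   # -> [[0, 128], [0, 128]]
scaled = normalize(small)
```

The same two operations — spatial resampling and value scaling — are what the 🤗 Datasets library's image features help apply consistently across millions of examples.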
About you
You'll love this internship if you are passionate about current trends in computer vision and view sharing your work with the research community as a necessity.
You should be well-versed in Python, have some experience in image preprocessing, and not be (too) afraid to process multiple terabytes of image data on a daily basis. Experience with some tabular data libraries, e.g. Apache Arrow, as well as open-source contributions and the ability to communicate feature requests to a diverse open-source community are a plus! It is advantageous if you are comfortable working remotely, as most of our collaborations are conducted in a remote setting.
We encourage students enrolled in university (Ph.D., Master, or Bachelor), data scientists, and ML/Data engineers looking for new opportunities to apply for this internship.
More about Hugging Face
We are actively working to build a culture that values diversity, equity, and inclusivity. We are intentionally building a workplace where you feel respected and supported—regardless of who you are or where you come from. We believe this is foundational to building a great company and community, as well as the future of machine learning more broadly. Hugging Face is an equal opportunity employer, and we do not discriminate based on race, ethnicity, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or ability status.
We value development. You will work with some of the smartest people in our industry. We are an organization that has a bias for impact and is always challenging ourselves to grow continuously. We provide all employees with reimbursement for relevant conferences, training, and education.
We care about your well-being. We offer flexible working hours and remote options. We offer health, dental, and vision benefits for employees and their dependents. We also offer parental leave and unlimited paid time off.
We support our employees wherever they are. While we have office spaces in NYC and Paris, we're very distributed, and all remote employees have the opportunity to visit our offices. If needed, we'll also outfit your workstation to ensure you succeed.
We want our teammates to be shareholders. All employees have company equity as part of their compensation package. If we succeed in becoming a category-defining platform in machine learning and artificial intelligence, everyone enjoys the upside.


👋 A little about us

GitBook is a modern documentation platform. Our ambition is to empower teams through a new document standard suited for modern work and collaboration.
GitBook is now used by over 2M users and thousands of teams such as DeliveryHero, Netflix, Decathlon, or Celonis. With close to 25,000 sign-ups per month and 10% revenue growth month-over-month, we're looking to expand our Data team to actively contribute to our success.
Who uses GitBook?
Some cool data applications use us to power their public documentation: Census, Castor, OpenMetadata, Snyk, and many more! We also have great organisations using us to build their internal knowledge base: Netflix, Decathlon, or Adobe.
🤔 Why are we opening this position?
You will be joining Rémi, our Head of Data, to spread insights across the company. As a product-led company with high growth, we strongly believe that data should be involved in any decision process: from product to sales.
Our team philosophy is that everyone should be interested in working on any part of the data journey: data engineering, data analytics, and data science. That doesn't mean being an expert in all three fields, but we believe any data person should aim to understand and cover most of the scope.
🙌 What will you be doing?
As a Data Analyst, you will:
- collaborate with stakeholders (Go-To-Market and Product) to define KPIs and success metrics to maximize Company and Team level performance
- discover and explain trends across data sources, potential opportunities for growth or improvement
- be a top contributor to our data warehouse using dbt and BigQuery
- perform advanced product analysis using Amplitude
- design and develop comprehensive dashboards to provide self-serve analytics solutions to different stakeholders across the business
- create trainings and documentation to ensure adoption of our data solutions and recommendations
- shape the dynamics and processes of the Data team, including our onboarding, hiring, and team goals
- maintain a full understanding of the data stack
🛠 What tooling environment will you be working on?
Data stack:
DBT, BigQuery, Tableau, Amplitude, Segment, Airflow, Cloud Functions, Zapier, Python, Stitch, GitHub
Main 3rd-party systems:
Hubspot, Stripe, Firestore
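For context on the warehouse side of this stack: a dbt model is, at its core, a SELECT statement materialised as a table or view in the warehouse. A toy sketch of that idea, with sqlite3 standing in for BigQuery and invented table and column names:

```python
import sqlite3

# Stand-in warehouse: load some raw event data, then define a "model"
# as a view over it, the way dbt materialises a model's SELECT.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE raw_signups (user_id INT, plan TEXT, signed_up_at TEXT);
INSERT INTO raw_signups VALUES
  (1, 'free', '2023-01-03'),
  (2, 'pro',  '2023-01-03'),
  (3, 'pro',  '2023-01-04');

-- The "model": signups per plan, ready for a dashboard or KPI.
CREATE VIEW signups_by_plan AS
SELECT plan, COUNT(*) AS n_signups
FROM raw_signups
GROUP BY plan;
""")

rows = conn.execute(
    "SELECT plan, n_signups FROM signups_by_plan ORDER BY plan"
).fetchall()
# rows == [('free', 1), ('pro', 2)]
```

dbt adds dependency management, testing, and documentation on top of that SELECT, but the mental model — SQL transformations layered over raw sources — is the same.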
🫶 You will be valued for:
- your curiosity and passion for the data field that push you to work on any data pillar
- your ability to get insights from product analytics data
- your ability to jump into any GitBook member's shoes to understand their needs
- your ability to build and improve our data warehouse models using Dbt
- your determination to make data the heart of GitBook's success (the evangelist part)
- your ability to successfully collaborate on cross-team work in a remote and async environment
- bonus: your knowledge of the B2B SaaS industry, allowing you to quickly understand our challenges
🌍 Location
Remote
✨ What's next?
First, we will take the time to review your application and we will get back to you within a week, regardless of our decision. We know that your time is valuable so we work to move the process along quickly and keep it casual. We're not believers in "gotcha" questions or checking for skills you'll never actually use at GitBook.Here's what our process will look like:
- Meeting with Rémi, our Head of Data (60 min) to look for healthy alignment
- Technical exercise (60-90 min) to deep e on your technical skills
- Meet the Founder call (45 min) with Samy, co-founder
- Meet the team (60 min) with a team member to confirm that you will thrive in our culture and answer any question about what it's like working here
< class="h1" style="border-width: 0px; border-style: solid; border-color: rgb(238 239 242/var(--tw-border-opacity)); border-image: initial; box-sizing: border-box; --tw-translate-x: 0; --tw-translate-y: 0; --tw-rotate: 0; --tw-skew-x: 0; --tw-skew-y: 0; --tw-scale-x: 1; --tw-scale-y: 1; --tw-transform: translateX(var(--tw-translate-x)) translateY(var(--tw-translate-y)) rotate(var(--tw-rotate)) skewX(var(--tw-skew-x)) skewY(var(--tw-skew-y)) scaleX(var(--tw-scale-x)) scaleY(var(--tw-scale-y)); --tw-scroll-snap-strictness: proximity; --tw-border-opacity: 1; --tw-ring-inset: var(--tw-empty, ); --tw-ring-offset-width: 0px; --tw-ring-offset-color: #fff; --tw-ring-color: rgba(59,130,246,0.5); --tw-ring-offset-shadow: 0 0 #0000; --tw-ring-shadow: 0 0 #0000; --tw-shadow: 0 0 #0000; --tw-shadow-colored: 0 0 #0000; --tw-blur: var(--tw-empty, ); --tw-brightness: var(--tw-empty, ); --tw-contrast: var(--tw-empty, ); --tw-grayscale: var(--tw-empty, ); --tw-hue-rotate: var(--tw-empty, ); --tw-invert: var(--tw-empty, ); --tw-saturate: var(--tw-empty, ); --tw-sepia: var(--tw-empty, ); --tw-drop-shadow: var(--tw-empty, ); --tw-filter: var(--tw-blur) var(--tw-brightness) var(--tw-contrast) var(--tw-grayscale) var(--tw-hue-rotate) var(--tw-invert) var(--tw-saturate) var(--tw-sepia) var(--tw-drop-shadow); --tw-backdrop-blur: var(--tw-empty, ); --tw-backdrop-brightness: var(--tw-empty, ); --tw-backdrop-contrast: var(--tw-empty, ); --tw-backdrop-grayscale: var(--tw-empty, ); --tw-backdrop-hue-rotate: var(--tw-empty, ); --tw-backdrop-invert: var(--tw-empty, ); --tw-backdrop-opacity: var(--tw-empty, ); --tw-backdrop-saturate: var(--tw-empty, ); --tw-backdrop-sepia: var(--tw-empty, ); --tw-backdrop-filter: var(--tw-backdrop-blur) var(--tw-backdrop-brightness) var(--tw-backdrop-contrast) var(--tw-backdrop-grayscale) var(--tw-backdrop-hue-rotate) var(--tw-backdrop-invert) var(--tw-backdrop-opacity) var(--tw-backdrop-saturate) var(--tw-backdrop-sepia); font-weight: 
var(--company-header-font-weight,700); margin: 0px 0px 8px; line-height: 1.2; color: #252525; background-color: #ffffff;">👥 Join GitBook
You will be joining during a pivotal moment for GitBook. We've enjoyed great success since we were founded, and now we're taking conscious steps to take our company to the next level. That means you will have the opportunity to build, positively impact the trajectory of the company and enjoy the benefits of helping grow our company 20x. Every single team member is a value addition to our culture, so it's important for us to state our company values:
- 🚀 Ambition (Aim higher)
- ✊ Ownership (Take control and Own it)
- 📈 Accountability (Be accountable to results)
- 🙋♀️ Care (Give a sh*t)
- 🏃♂️ Train (Grow yourself)
- 🤝 Genuine (Say it and accept it)
- 👯♂️ Team player (Leverage the team)
- 🏗 Architect (Plan & Build)

< class="h1">👋 A little about us

GitBook is a modern documentation platform. Our ambition is to empower teams through a new document standard suited for modern work and collaboration. GitBook is now used by over 2M users and thousands of teams such as DeliveryHero, Netflix, Decathlon, or Celonis. With close to 25,000 sign-ups per month and 10% revenue growth month-over-month, we're looking to expand our Data team to actively contribute to our success.
< class="h3">Who uses GitBook?
Some cool data applications use us to power their public documentation: Census, Castor, OpenMetadata, Snyk and many more! We also have great organisations using us to build their internal knowledge base: Netflix, Decathlon or Adobe.
< class="h2">🤔 Why are we opening this position?
You will be joining Rémi, our Head of Data, to spread insights across the company. As a product-led company with high growth, we strongly believe that data should be involved in any decision process: from product to sales.
Our team philosophy is that people should be interested in working on any part of the data journey: data engineering, data analytics and data science. That doesn't mean being an expert in all three fields, but we believe every data person should aim to understand and cover most of the scope.
< class="h1">🙌 What will you be doing?
As a Data Engineer, you will be involved in:
- Owning our data stack to maintain and improve it
- Bringing best practices from software engineering to empower the data team
- Being a top contributor to our growing data warehouse, built with dbt and BigQuery
- Focusing on solving business questions more than on setting up infrastructure
- Empowering everyone in the company by making data available to them
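The warehouse work above lives in dbt models on BigQuery. As a rough, hypothetical illustration of the kind of transformation such a model encodes (written here in plain Python rather than dbt SQL, with invented event and column names), think deduplicating raw events and aggregating them per day:

```python
# Hypothetical sketch of a warehouse-style transformation: deduplicate
# raw signup events by id, then count signups per date. In practice
# this logic would live in a dbt SQL model on BigQuery.
from collections import Counter

def daily_signups(raw_events):
    """Return a {date: signup_count} dict, ignoring duplicate event ids."""
    seen = set()
    counts = Counter()
    for event in raw_events:
        if event["id"] in seen:  # drop rows duplicated by the loader
            continue
        seen.add(event["id"])
        counts[event["date"]] += 1
    return dict(counts)

events = [
    {"id": 1, "date": "2022-05-01"},
    {"id": 1, "date": "2022-05-01"},  # duplicate load
    {"id": 2, "date": "2022-05-01"},
    {"id": 3, "date": "2022-05-02"},
]
print(daily_signups(events))  # {'2022-05-01': 2, '2022-05-02': 1}
```

A dbt model would express the same dedupe-and-aggregate step with `row_number()` and `group by`; the Python version just makes the logic easy to see.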
< class="h2">🛠 What tooling environment will you be working on?
Data stack:
DBT, BigQuery, Tableau, Amplitude, Segment, Airflow, Cloud Functions, Zapier, Python, Stitch, GitHub
Main 3rd-party systems:
Hubspot, Stripe, Firestore
< class="h1">🫶 You will be valued for:
- your ability to build and maintain any data layer, drawing on your past experience in the data engineering field
- your curiosity and passion for the data field, which has pushed you to work across all the data pillars
- your ability to build and improve our data warehouse models using dbt
- your determination to make data the heart of GitBook's success (the evangelist part)
- your ability to successfully collaborate on cross-team work in a remote and async environment
- bonus: your knowledge of the B2B SaaS industry, allowing you to quickly understand our challenges
< class="h1">✨ What's next?
First, we will take the time to review your application and we will get back to you within a week, regardless of our decision. We know that your time is valuable, so we work to move the process along quickly and keep it casual. We're not believers in "gotcha" questions or checking for skills you'll never actually use at GitBook. Here's what our process will look like:
- Meeting with Rémi, our Head of Data (60 min) to look for healthy alignment
- Technical exercise (60-90 min) to deep dive on your technical skills
- Meet the Founder call (45 min) with Samy, co-founder
- Meet the team (60 min) with a team member to confirm that you will thrive in our culture and answer any question about what it's like working here
< class="h1">👥 Join GitBook
You will be joining during a pivotal moment for GitBook. We've enjoyed great success since we were founded, and now we're taking conscious steps to take our company to the next level. That means you will have the opportunity to build, positively impact the trajectory of the company, and enjoy the benefits of helping grow our company 20x. Every single team member adds value to our culture, so it's important for us to state our company values:
- 🚀 Ambition (Aim higher)
- ✊ Ownership (Take control and Own it)
- 📈 Accountability (Be accountable to results)
- 🙋♀️ Care (Give a sh*t)
- 🏃♂️ Train (Grow yourself)
- 🤝 Genuine (Say it and accept it)
- 👯♂️ Team player (Leverage the team)
- 🏗 Architect (Plan & Build)

Who We Are
TetraScience is the Scientific Data Cloud company with a mission to accelerate scientific discovery and improve and extend human life. The Scientific Data Cloud is the only open, cloud-native platform purpose-built for science. It connects lab instruments, informatics software, and data apps across the biopharma value chain, delivering the foundation of harmonized, actionable scientific data necessary to transform raw data into accelerated and improved scientific outcomes. Through the Tetra Partner Network, market-leading vendors access the power of our cloud to help customers maximize the value of their data.
What You Will Do
- Own, prototype, and implement customer solutions
- Research and prototype data acquisition strategy for scientific lab instrumentation
- Research and prototype file parsers for instrument output files (.xlsx, .pdf, .txt, .raw, .fid, many other vendor binaries)
- Design and build data models
- Design and build Python data pipelines, unit tests, integration tests, and utility functions
- Work with the customer to test the solution and make sure it fulfills their requirements and solves their needs
- Coordinate project kickoff meetings, manage the customer relationship throughout the project, and conduct formal project closeout meetings
- Facilitate internal project post-mortems to identify areas of improvement on the next implementation
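Several of the bullets above involve writing file parsers for instrument output and the data models behind them. As a rough, hypothetical sketch (the `Well`/`OD600` field names and tab-separated format are invented for illustration; real vendor formats such as `.raw` or `.fid` binaries are far more involved), a minimal parser might look like:

```python
# Hypothetical sketch: parse a tab-separated plate-reader export into a
# simple record-based data model. Field names are invented; real
# instrument formats require vendor-specific parsing.
import csv
import io

def parse_readings(text):
    """Return a list of {'well': str, 'absorbance': float} records."""
    reader = csv.DictReader(io.StringIO(text), delimiter="\t")
    return [
        {"well": row["Well"], "absorbance": float(row["OD600"])}
        for row in reader
    ]

sample = "Well\tOD600\nA1\t0.42\nA2\t0.58\n"
records = parse_readings(sample)
print(records[0])  # {'well': 'A1', 'absorbance': 0.42}
```

The unit tests the pipelines bullet mentions would then be a few assertions against a small fixture string like `sample`, which keeps each parser verifiable in isolation.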
Requirements
What You Have Done
- 2+ years of experience with Python and SQL
- Passionate about science and about building solutions that make data more accessible to end users
- Undergraduate or graduate degree in chemistry, biology, computer science, statistics, public health, etc.
- Wet lab experience or experience with scientific instruments is a strong plus
- Excellent communication skills, attention to detail, and the confidence to take control of project delivery
- Ability to quickly understand a highly technical product and to communicate effectively with product management and engineering
- Strong problem-solving skills
- Intellectually curious: Unwavering drive to learn and know more every day
- Ability to think creatively about how to resolve project risks without reducing quality
- Team player with the ability to "roll up your sleeves" and do what it takes to make the team successful
Benefits
- 100% employer-paid benefits for all eligible employees and immediate family members
- Unlimited paid time off (PTO)
- 401K
- Flexible working arrangements - Remote work + office as needed
- Company paid Life Insurance, LTD/STD
No visa sponsorship is available for this position

Research shows that women and underrepresented groups apply to jobs only if they think they meet 100% of the qualifications on a job description. IMO is committed to considering all candidates, even if you don't think you meet 100% of the qualifications listed. We look forward to receiving your application!
Work that is meaningful. A job that has impact. Colleagues that inspire. That’s what you’ll find at Intelligent Medical Objects (IMO), a growing health IT company creating clinical terminology and insights solutions that are used by more than 740,000 US physicians and 4,500 US hospitals to power better patient care and support meaningful analytics.
The Business Analyst will be responsible for the end-to-end process related to his or her features, including developing functional and non-functional business requirements, understanding how requirements impact both the technical and functional nature of an application, and collaborating with engineering to build features according to business needs. This BA/PO is responsible for prioritizing the work of the software engineering team to ensure the most valued functionality is delivered. This role requires someone who understands technology and business, a high degree of organization and attention to detail, and the ability to professionally interact with a diverse blend of personalities to reach resolution and maintain strong relationships.
< class="h3">Responsibilities

- As the Business Analyst, lead an agile team to deliver forecasted sprint goals and solve problems efficiently and completely, according to the principles of Scrum development.
- Work closely with stakeholders to create and maintain a product backlog according to business value or ROI.
- Lead team sprints and road mapping processes
- Assess value, develop cases, and prioritize stories, epics, and themes to ensure work focuses on those with maximum value that are aligned with product strategy
- Provide vision and direction to the Agile development team and stakeholders.
- Keep abreast of Agile/Scrum best practices and new trends
- Own the product delivery cycle and the delivery of software solutions.
- Manage and respond promptly and professionally to defect reports. Aid support personnel as needed to determine system problems.
- Take responsibility for and lead new initiatives in content accuracy and quality.
- Focus the team towards utilizing standards/metrics that provide guidance and feedback.
- Oversee current state process capture, identify, recommend and implement process re-engineering to support automation.
- Responsible for delivering ad hoc projects to support leadership, working with the product managers on strategic initiatives.
- Ensure that all content released to IMO Clients meets our high standards and expectations.
- Acquire working knowledge of IMO terminology solutions and related technology tools.
- At least 3 years as a software Business Analyst working with an Agile team, or equivalent experience.
- Proficient in gathering business requirements, process flows and use cases.
- In-depth knowledge of Agile process and principles.
- Proficiency in Microsoft Office applications; experience with JIRA/Confluence/Aha! software is a plus.
- Outstanding communication, presentation, and leadership skills.
- Sharp analytical and problem-solving skills.
- Excellent planning, organizational, and time management skills.

The Last Mile (TLM) is looking for a Senior Manager of Research and Analytics dedicated to supporting TLM's mission through organizational learning and growth. This person will be responsible for the development of TLM's research and evaluation processes and projects across in-prison and reentry programs. The Senior Manager of Research and Analytics will ensure that a consistent culture of reflection and learning exists throughout the organization to inform the current and future iterations of our programming.
< class="h2">Responsibilities:
- Lead all efforts in designing and managing our internal program performance measurement work
- Analyze all indicators and outcomes to help the leadership team make appropriate strategic and programmatic decisions
- Work with department leads to ensure that all data is correctly gathered and inputted on a regular schedule by all programming staff
- Implement any other necessary evaluation methods as appropriate (e.g., personal development assessment surveys, focus groups, photo journaling, etc.)
- Research best practice implications based on evaluation findings
- Develop partnerships with external evaluators (e.g., university research partners) and other organizations that can help advance our evaluation and research capacities, serving as the liaison with such collaborators
- Conduct research and help develop recommendations related to various policy opportunities aimed at improving outcomes for justice-impacted folks
- Translate and disseminate evaluation results (e.g., writing policy briefs and publications) to help shift policy discussions based on promising and best practices revealed
- Lead the process of creating and facilitating necessary trainings for staff regarding data collection and management.
- Prior background in evaluation work required
- Experience designing and implementing Randomized Control Trials or other large-scale research projects preferred
- MA or equivalent work experience in research-related field
- Experience with Salesforce administration preferred
- Excellent oral and written communication skills
- Strong organizational skills and demonstrated high-level strategic thinking and planning
- Embrace challenges and new responsibilities with creativity, initiative, and self-direction
- Experience in justice reform or non-profit settings preferred
- Empathy, passion and understanding of TLM's mission, pillars, and community interests

Since its founding, Intuition Machines has been on the forefront of innovation, leveraging and developing new technologies to solve complex problems. Our team, composed of leading researchers and developers, is constantly innovating toward an improved future fueled by the promise of privacy, security, and performance. We work in a casual and fast-paced environment, with a team distributed around the world, hundreds of millions of users, and a rapidly growing customer base and product suite.
Join us as we transform security and machine learning online.
As a Lead Data Engineer you will be responsible for technical leadership of data engineering projects. You will do that by designing and improving high throughput data pipelines, promoting best practices in terms of high performance data processing, infrastructure setup and development process. You will have the ability to shape the data engineering capabilities for state of the art, large scale security and machine learning products.
What will you do:
- Lead the data engineering initiatives and projects
- Design, document and build scalable data infrastructure
- Collaborate with software engineers, ML engineers, product managers and growth teams
- Set up data quality, monitoring and alerting infrastructure
- Ensure high performance and availability of our data infrastructure
What we are looking for:
- Proven experience designing and implementing end-to-end data solutions (5+ years of experience)
- Strong Python programming skills with an emphasis on clean, readable, and testable code
- Experience with high throughput data systems and streaming architectures
- Experience working with Kafka infrastructure and applications
- Solid understanding of OLAP databases (preferably Clickhouse)
- Hands on experience with Kubernetes
- Familiarity with public cloud providers (AWS or Azure)
- Familiarity with security frameworks, attack vectors, botnets
- Experience working with IaC and GitOps solutions
- Experience with monitoring, observability and data quality tools
- Experience with exploratory data analysis and data science solutions
- Ability to work with cutting-edge technology
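High-throughput streaming work like this often comes down to windowed aggregation over event streams. As a minimal, hedged sketch (plain Python with no Kafka dependency; the event shape and window size are assumptions for illustration only):

```python
from collections import defaultdict

def tumbling_window_counts(events, window_s=60):
    """Group (timestamp, key) events into fixed 60-second windows and
    count occurrences per key, as a toy stand-in for a stream-processing
    aggregation stage."""
    counts = defaultdict(int)
    for ts, key in events:
        window_start = (ts // window_s) * window_s  # bucket the event
        counts[(window_start, key)] += 1
    return dict(counts)

events = [(0, "login"), (30, "login"), (75, "click"), (90, "login")]
print(tumbling_window_counts(events))
# {(0, 'login'): 2, (60, 'click'): 1, (60, 'login'): 1}
```

In a real Kafka/ClickHouse pipeline the same bucketing logic would run inside a consumer or a materialized view rather than over an in-memory list.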
What we offer:
- Fully remote position with flexible working hours
- An inspiring team spread all over the world
- A unique chance to be part of the #hCaptcha revolution

About us:

RevenueCat makes building, analyzing and growing mobile subscriptions easy. We launched as part of Y Combinator's summer 2018 batch and today are handling more than $1.2B of in-app purchases annually across thousands of apps.
We are a mission driven, remote-first company that is building the standard for mobile subscription infrastructure. Top apps like VSCO, Notion, and ClassDojo count on RevenueCat to power their subscriptions at scale.
Our 50 team members (and growing!) are located all over the world, from San Francisco to Madrid to Taipei. We're a close-knit, product-driven team, and we strive to live our core values: Customer Obsession, Always Be Shipping, Own It, and Balance.
We’re looking for a Staff Data Engineer to join our newly formed data engineering team. As a Staff Engineer, you will be responsible for leading the effort to design, architect and support our entire data platform and will play a key role in defining how our systems evolve as we scale.
About you:
- You have 8+ years of software engineering experience.
- You have 5+ years of experience working with and building enterprise-scale data platforms.
- You have an excellent command of at least one mainstream programming language and some experience with Python.
- You have helped define the architecture, data modeling, tooling, and strategy for a large-scale data processing system, data lakes or warehouses.
- You have used workflow management tools (e.g., Airflow, Glue) and have experience maintaining the infrastructure that supports them.
- You have hands-on experience building CDC-based (Change Data Capture) ingestion pipelines for highly transactional databases. Experience with Postgres and logical replication is a plus.
- You have a strong understanding of modern data processing paradigms and tooling, OLTP & OLAP database fundamentals.
- Experience with dimensional modeling and reporting tools like Looker is a plus, but not required.
- You have experience evolving batch architectures into streaming/real-time data pipelines.
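CDC-based ingestion, mentioned above, ultimately means replaying an ordered change log against a target table. A minimal sketch in pure Python (the event schema with `op`, `id`, and `row` fields is an invented illustration, not any specific tool's wire format):

```python
def apply_cdc(table, changes):
    """Apply an ordered list of change events to a dict keyed by
    primary key. Supported ops: insert, update, delete."""
    for event in changes:
        op, pk = event["op"], event["id"]
        if op == "delete":
            table.pop(pk, None)
        else:
            # insert or update: upsert the new row image
            table[pk] = event["row"]
    return table

table = {1: {"plan": "free"}}
changes = [
    {"op": "insert", "id": 2, "row": {"plan": "pro"}},
    {"op": "update", "id": 1, "row": {"plan": "pro"}},
    {"op": "delete", "id": 2},
]
print(apply_cdc(table, changes))  # {1: {'plan': 'pro'}}
```

With Postgres logical replication, the change events would come from a replication slot and ordering would be guaranteed by the WAL position.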
What you'll do:
- Help define a long-term vision for the Data Platform architecture and implement new technologies to help us scale our platform over time
- Help the team apply software engineering best practices to our data pipelines (testing, data quality, etc)
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources, using SQL and AWS technologies
- Clearly define data ownership & responsibility, audit and compliance framework, and general security of the data lake
- Partner with product managers, data scientists, and engineers across teams to solve problems that require data
- Drive the evolution of our data platform to support our data processing needs and provide frameworks and services for operating on the data
- Analyze, debug and maintain critical data pipelines
- Work with our core infrastructure team to create and improve frameworks that allow derived data to be used in production environments
- Contribute to standards that improve developer workflows, recommend best practices, and help mentor junior engineers on the team to grow their technical expertise
- Get up to speed on our architecture and learn the problem domain
- Understand our current data requirements and where things stand today
- Gain understanding of our current data pipelines
- Work with your team to help design and architect our data platform
- Work with product managers, engineers and data scientists to help come up with a plan to gain consensus on the approach
- Analyze, debug and maintain critical data pipelines
- Develop thorough understanding of our data platform
- Know all the major components of our system and be able to debug complex issues
- Be able to detect bottlenecks, profile, and come up with enhancements
- Start participating in hiring for the company
- Thoroughly understand our data processing needs and able to spec, architect, and build solutions accordingly
- Mentor other engineers joining the team
- $218,000 to $245,000 USD salary regardless of your location
- Competitive equity in a fast-growing, Series B startup backed by top tier investors including Y Combinator
- 10 year window to exercise vested equity options
- Fully remote work environment that promotes autonomy and flexibility
- Suggested 4 to 5 weeks time off to recharge and focus on mental, physical, and emotional health
- $2,000 USD to build your personal workspace
- $1,000 USD annual stipend for your continuous learning and growth

Data Entry – Quality Assurance Specialist
Location: US National – Virtual
Full-Time
The Data Entry-Quality Assurance Specialist in our Customer Boarding Department is responsible for reviewing merchant customer data in multiple databases to identify any inconsistencies that need to be corrected for newly boarded accounts. Success in this role requires a strong attention to detail while working in a fast paced environment.
A Quality Assurance Specialist in our Customer Boarding Department is responsible for maintaining data integrity while adhering to company policies and practices. This position is accountable for completing detailed data analysis of newly boarded customer accounts and identifying inconsistencies in the company CRM that need to be corrected. Success in this role requires an energetic, solution-oriented individual with strong attention to detail who consistently meets service levels. This position is key in building customer loyalty and ensuring revenue integrity.
What You’ll do:
- Review newly approved merchant account details to ensure and maintain data integrity within NAB systems, Global systems and/or First Data systems
- Verify that key data information is accurate in all systems matching the merchant application
- Identify and execute corrections for inaccurate information in a timely and accurate manner
- Daily support of the Customer Boarding call queue
- At a high level, manage and organize productivity through the effective use of all available resources including database systems and query reports
- Provide positive customer experiences while maintaining a high degree of ethical behavior in all aspects of daily business
- Build and maintain strong working relationships with all NAB and TMS employees and departments by keeping a positive attitude and a collaborative focus in all interactions
- Adhere to company policies as defined
- Accept and complete assignments with an open, cooperative, positive, and team-oriented attitude
- Perform special projects as assigned
What we Need from you:
- High School Diploma or G.E.D.
- Payment industry experience preferred
- Strong attention to detail
- Ability to communicate feedback, information, and directions both verbally and written
- The ability to work at speed and with accuracy
- Excellent organizational skills, multi-tasking and prioritization in a fast-paced work environment
- Thorough understanding of rates and fees preferred
- Strong customer service skills
- Flexibility in work schedule to accommodate business needs
- Ability to problem solve and de-escalate upset customers
- Inspiring and positive attitude

At Health IQ, our vision is to ensure that the 1.5B seniors live their golden years better than the previous generations. We believe in rewarding the health conscious through savings, literacy, and educational tools.
We are a diverse and innovative group of individuals who thrive on big data and proven results. Our approach has enabled us to grow from roughly 200 to 900+ employees over the last year and we expect continued growth and opportunities. If you believe that being health conscious can improve lives and want to make a tangible difference through your work, then you’ll love what we’re doing at Health IQ – apply and join the team!
Data Scientist
Health IQ has set out to completely change the way seniors choose their healthcare plans by using AI/data science and world-class user experience to bring transparency, objectivity, and intelligence to the insurance purchasing process. We need a Data Scientist to help drive this initiative end to end and establish Health IQ as a leader in the digital insurance market.
At Health IQ, a Data Scientist uses predictive analytics and innovative machine learning models to create value from data. This role is at the heart of finding and proving innovative solutions and is responsible for developing and driving strategic modeling initiatives while maintaining a close partnership with IT to ensure that our models can be deployed quickly and monitored in a flexible deployment framework.
As a Data Scientist, you will serve as a technical and thought leader on this diverse and highly skilled team. You will design and develop inventive solutions to drive innovation and the delivery of organizational value. You’ll synthesize large datasets and solve complex problems by using advanced machine learning and statistical modeling. You’ll work in a highly collaborative team environment, guiding and mentoring junior data scientists and collaborating with multiple stakeholders. You will assist management in the communication of insights and the implementation of impactful data science solutions across the organization.
You will deliver actionable insights from your models that can be incorporated into existing Health IQ products and new programs. The ideal candidate for this role will have a passion for creating solutions, an attitude of creativity, and continual learning.
What you will be doing:
- Build core analytical models that drive Health IQ digital insurance products.
- Bring core domain expertise about Medicare, health insurance and population insights.
- Communicate complex quantitative analyses in a clear, precise, and actionable manner to management and executive-level audiences while building relationships with their partners
- Collaborate with business leaders to understand business opportunities and formulate analytical solutions for problem-solving, working alongside other analytics individuals and teams
- Design innovative algorithms and machine-learning approaches for handling some of the most challenging and exciting datasets in today’s insurance industry
- Provide thought leadership on the practical application of machine learning and advanced analytical methods and cultivate a data-driven culture across the company
- Deliver clean, reusable, and scalable code
- Work closely with Data & Engineering to deploy models
What we’re looking for:
- Master’s Degree in Computer Science, Math, Statistics, Economics, or another technical field that provides a solid basis for analytics is required; a Master’s with relevant experience is acceptable
- 2+ years of experience in data science, statistics, computer science, or mathematics where you designed, developed, evaluated, and deployed predictive modeling, machine learning, and advanced analytics
- End to end experience from data wrangling to model deployment delivering added value with varying levels of ambiguity
- Extensive experience solving analytical problems using quantitative and qualitative approaches, especially related to Medicare, healthcare insurance plans, and/or senior-focused population insights.
- Experience with state-of-the-art techniques in machine learning algorithms, including deep neural networks, NLP, dimensionality reduction, ensemble methods, graph algorithms
- Excellent communication skills and experience in working with stakeholders
- Strong prioritization skills while being dynamic and agile
- Ability to advise one or more areas, programs, or functions
To make the world a healthier place, we started in our backyard. We created a health-conscious environment that allows each of our employees to reach their personal health goals. Below are a few of the employee-led programs that make working at Health IQ truly unique.
- Career Growth
As a rapidly growing company, new opportunities for growth and development continue to become available. We believe in promoting from within, and look to reward high performing employees with new opportunities.
- Celebration
We believe the key is to celebrate those who have improved their health rather than cajole those who haven’t. We look for employees who take this positive and optimistic view in their work lives.
- Service to Seniors
Our whole mission and vision is to serve seniors to improve their health. We want employees who believe true happiness comes from being in service to others. We call these employees Health Heroes.
- Personal Responsibility
We believe that only you can make the decision to improve your own health and no one else can do this for you. We look for employees that tend to do the same.
- Excellent benefits
Competitive rates for our employees' costs toward medical, dental and vision insurance. We offer a 401K, and pay 100% of your life insurance benefit option! We also offer various Flexible Spending Account (FSA) benefits to meet your and/or your family's needs. Only full-time employees are eligible for benefits.
- Join a Remote-first Culture
Our flexible, totally remote environment allows us to hire top talent throughout the U.S. The world has changed, and we’ve learned that being in an office is no longer the best way for our employees and our company to thrive.

We’re building a better health system! At Nice, we’re making healthcare accessible by delivering integrated primary, musculoskeletal, and mental health care to patients when they want it through a combination of in-home and virtual visits while also improving the quality of care by eliminating the complexity, poor management, and time constraints that hold clinicians captive.
Building a better health system for all requires the input and perspectives of all. Nice actively seeks a mixture of beliefs, backgrounds, education, and points of view to help us drive better, more informed design and business decisions. Nice is committed to building a diverse, inclusive, and equitable workforce and we diligently provide equal employment opportunities for all applicants and employees.
Product
The Product team at Nice Healthcare has an exciting opportunity for a Staff Data Analyst. In this role, you’ll wrangle our data, nurture our data analysis program, and partner with leaders to interpret and apply data that will drive us forward. As a Product team - made up of product managers, designers, researchers, and data analysts - we believe in succeeding as a team and setting measurable goals to guide our work together. We will cultivate your career through the investment of time and materials in the discovery of your career path.
What you’ve done before:
- You have significant experience with healthcare data and tools, including medical records, claims datasets, ICD-10 codes, CPT codes, and groupers (more than 3 years)
- You are familiar with the role of data on product teams, and in a startup or midsize business
- You have strong expertise in at least Python or R, as well as SQL
- You excel at your role by seeking to understand the business and its users
- You want to work on a wide range of problems and questions
What you’ll do at Nice:
- Work with medical record and claims datasets to accurately gauge the efficacy of various care models and interventions
- Become the caretaker of our existing datasets - keep them tidy and easy to leverage
- Transform and mine our data in support of key questions and decision points
- Nurture the development and iteration of our KPIs and other important metrics
- Collaborate with our internal engineers to capture data thoroughly and thoughtfully
- Build clean and informative dashboards and visualizations
- Spend time learning from and alongside other leaders at Nice
- Proactively derive and communicate insights to stakeholders
- Support key product decisions by investigating and representing data throughout the software development lifecycle
- Define best practices around how we capture data, how we organize it and how we use it
- Ensure the privacy and security of our data in collaboration with IT leadership
- Consult on methods of data collection and experimentation
- Actively mentor other analysts on the team
- Support the direction and prioritization of other analysts’ work (pulling reports, visualization, monthly reporting preparation).
What Nice offers you:
- 100% remote work environment -- work from anywhere in the U.S.
- Company paid Medical, Dental, Vision and Life Insurance
- Competitive salary
- 25 Days of PTO – that we actively encourage you to use
- Two "No Meeting" days every week
- Growth and development opportunities
- Personal enrichment & wellness stipend
- 401k with a 3% employer contribution
- Personal and family use of Nice Healthcare (in eligible cities)
- The nicest team members and work environment
- And much more!
About Nice Healthcare
Nice Healthcare is a technology-enabled full-service primary care clinic without a physical location that treats our patients in the comfort of their homes with in-person visits or online video calls.
We foster an open and supportive company culture that values the input and ideas of all team members no matter their role. We are an innovative company in that we are revolutionizing the way patients receive primary care services and we don’t settle for the status quo - we are always implementing new processes and technology to make our work more efficient and productive.
We are committed to building a workforce that is diverse and inclusive. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status.


location: remoteus
Title: Data Entry Specialist
Location: United States
JOB SNAPSHOT
- Employee Type: Full-Time
- Location: Work From Home
JOB DESCRIPTION
- Remote positions available – temporary
- $17.00 per hour
Founded in 1980, MultiPlan is the industry’s most comprehensive provider of healthcare cost management solutions, helping payers manage the cost of care.
We are seeking multiple temporary associates for an Intake position.
- This is a temporary position which is expected to last 60 to 120 days.
- The expected start date is August 16, 2022.
- A training class is provided during the first week of employment. New employees will need to work 8:00 am to 4:30 pm CT during the training.
- After training, there is flexibility in the work schedule
Responsibilities:
- Intake and create cases in all applicable systems.
- Perform timely data entry of necessary information
- Research appropriate systems to identify data needed to complete cases.
- Ensure compliance with HIPAA regulations and requirements.
- Demonstrate Company’s Core Competencies and values held within
- Please note due to the exposure of PHI sensitive data – this role is considered to be a High Risk Role.
- The position responsibilities outlined above are in no way to be construed as all encompassing. Other duties, responsibilities, and qualifications may be required and/or assigned as necessary.
JOB REQUIREMENTS
- High School diploma or equivalent.
- Ability to efficiently use a keyboard and quickly navigate software applications.
- High speed internet access.
- Quiet work area without distractions.
- Regular and consistent attendance and adherence to work schedule.
- Knowledge of medical insurance terminology preferred.
- Communication skills (verbal, written, listening).
- Ability to work without frequent supervision.
- Ability to maintain confidentiality in all required situations.
- Ability to use software, hardware, and peripherals related to job responsibilities.
Our client's Database Engineers are relied on to build the future of our direct-to-home service delivery platform. As a part of our Engineering department, based in beautiful Provo, UT, this role requires the ability to move quickly, think deeply and work well with others on your Agile Scrum team: engineers, UX, product owners, and stakeholders. As a Senior Database Engineer, you will engineer, code, and test resilient, highly-scalable database systems that support both our web applications as well as backend APIs for our mobile apps servicing our hundreds of thousands of customers. We are significantly building out the engineering teams at Aptive and are seeking talented coders who love growth to be a part of this expansion.
Responsibilities include:
- Work experience in writing complex SQL queries in MySQL and in building stored procedures and views.
- Manage different databases through multiple product lifecycle environments, from development to mission-critical production systems.
- Configure and maintain database servers and processes, including monitoring of system health and performance, to ensure high levels of performance, availability, and security.
- Apply data modeling techniques to ensure development and implementation support efforts meet integration and performance expectations.
- Independently analyze, solve, and correct issues in real-time, providing problem resolution end-to-end.
- Refine and automate regular processes, track issues, and document changes.
- Assist developers with complex query tuning and schema refinement.
- Build ER diagrams and help develop and refine DB standards, code reviews, and release processes.
- Provide support for critical production systems.
- Perform scheduled maintenance and support release deployment activities after hours if required.
- Capacity planning and delivering robust and scalable databases.
- Backup & restore databases, ensuring a well-defined and tested disaster recovery strategy.
- Share domain and technical expertise, providing technical mentorship and cross-training to other peers and team members.
- Solve technical problems by working closely with Agile scrum masters, UX, product managers, and remote teams.
- Review and correct code for quality and design.
- Design high-performance database schema and code architecture.
- Empower remote teams to deliver stable, high-performing, and reliable code. This requires regular interactions with remote teams in the mornings.
- Contribute significantly to sprints, meet sprint deadlines, help other team members with their sprint commitments, and take the lead on urgent tasks.
Required Qualifications:
- 7+ years of experience writing SQL queries and performance tuning.
- Experience required in designing, modeling, and implementing database DDL and DML.
- Working knowledge of database and architecture best practices.
- Strong written and verbal communication skills.
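The query-and-view work described above can be illustrated with SQLite standing in for MySQL (the table and column names are invented for the example):

```python
import sqlite3

# In-memory database standing in for a production MySQL instance.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL);
    INSERT INTO orders VALUES (1, 'a', 10.0), (2, 'a', 5.0), (3, 'b', 7.5);
    -- A view encapsulating an aggregate, like the views a database
    -- engineer would maintain alongside stored procedures.
    CREATE VIEW customer_totals AS
        SELECT customer, SUM(total) AS spend
        FROM orders GROUP BY customer;
""")
rows = conn.execute("SELECT * FROM customer_totals ORDER BY customer").fetchall()
print(rows)  # [('a', 15.0), ('b', 7.5)]
```

In MySQL the same view would typically be paired with EXPLAIN-driven index tuning on the underlying table.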
Benefits:
- Medical, Dental, and Vision Benefits.
- Group Health, Dental, and Vision plans.
- Paid holidays.
- Paid time off.
- Access to a full-sized indoor basketball court, game room with theater, pool table, golf simulator, and more.
- Upbeat and exciting company culture and much more!
We are a US software development company delivering high-quality, cost-effective custom application development to clients worldwide. As a technology consulting company, we also help our clients with their digital transformation process.
Currently, we are seeking a Tech Data Lead:
What You Will Be Doing:
- Identifying data sources, both internal and external, and working out a plan for data management that is aligned with organizational data strategy.
- Developing and implementing an overall organizational data strategy that is in line with business processes. The strategy includes data model designs, database development standards, implementation and management of data warehouses and data analytics systems.
- Coordinating and collaborating with cross-functional teams, stakeholders, and vendors for the smooth functioning of the enterprise data system.
- Managing end-to-end data architecture, from selecting the platform, designing the technical architecture, and developing the application to finally testing and implementing the proposed solution.
- Planning and executing big data solutions using technologies such as Hadoop, including complete life-cycle management of a Hadoop solution.
Your Profile Includes:
- Knowledge of the following data tools: Airflow, Postgres Aurora, Fivetran.
- Experience working with Python, AWS and Apple Search Ads.
- Experience generating data files in an internal format using data pipeline infrastructure.
- Ability to implement common data management and reporting technologies, as well as the basics of columnar and NoSQL databases, data visualization, unstructured data, and predictive analytics.
- Understanding of predictive modeling, NLP and text analysis, Machine Learning (Desirable).
Work Breakdown
- Ingestion: Implement the data pipeline from Fivetran source (Postgres Aurora) to internal file generation.
- Load: Implement the data pipeline from the generated internal file during ingestion to loading into the client environment's datastore. Below are the files that need to be loaded.
- Data source tables and related infrastructure preparation.
- Feature implemented as per requirements and as per Engineering Excellence guidelines.
- The implementation must follow documented playbook for integrating media sources through Fivetran.
- All code must pass the CI/CD pipeline, including Python linting, black formatting, and 100% test coverage for functional code using established patterns.
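The ingestion/load split above can be sketched as two small stages: one that serializes source rows into an internal file, and one that loads that file into a target store. This is a hedged illustration only (JSON Lines as the "internal format" is an assumption; Fivetran and Aurora specifics are omitted):

```python
import io
import json

def ingest(rows, out):
    """Stage 1: serialize source rows to an internal JSON Lines file."""
    for row in rows:
        out.write(json.dumps(row) + "\n")

def load(src, datastore):
    """Stage 2: read the internal file and upsert rows into a target store."""
    for line in src:
        row = json.loads(line)
        datastore[row["id"]] = row
    return datastore

buf = io.StringIO()  # stands in for the generated internal file
ingest([{"id": 1, "spend": 12.5}, {"id": 2, "spend": 3.0}], buf)
buf.seek(0)
print(load(buf, {}))
# {1: {'id': 1, 'spend': 12.5}, 2: {'id': 2, 'spend': 3.0}}
```

Keeping the two stages separate is what makes the pipeline testable under a CI/CD setup like the one described.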

Your work makes the difference between a user seeing a bus drive away or reaching it just in time. We feel a great sense of responsibility at Citymapper. Millions of users around the world trust our green app to be on time for work, a job interview or a date.
As one of our Data Analysts you will build the timetable data that powers the Citymapper app and help launch new cities. You'll use secret magic (and our internal tools) to transform often messy transit schedules into shiny and reliable user information whilst building, validating, and shipping data directly to users on a daily basis.
Working with our engineers to improve tools and automate tasks, you'll manage data in a range of French and European cities across different formats.
This role is designed for French-speaking contractors based anywhere in the world, but with an obsessive knowledge and adoration for public transport networks.
We are a diverse team of transport enthusiasts from all around the world with extensive language skills, who are not afraid to get our hands dirty with transit data. Join us for the ride!
Requirements
The position is open to applicants with all levels of experience as we'll teach you the technical skills to succeed. You'll need:
- A true passion for public transport and cities, good understanding of how public transport networks operate.
- A technical mindset, comfortable dealing with data, willingness to learn new data skills.
- A hands-on, proactive, practical, pragmatic attitude.
- An exceptional attention to detail with good organisational skills.
- Some familiarity with common transit data formats or data-wrangling is a plus (GTFS, TransXChange, SIRI, HAFAS, JSON, XML, etc.)
- French and English are a must, any other European language a plus.
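Transit formats like GTFS, listed above, are mostly plain CSV under the hood. A minimal sketch of reading a GTFS-style `stop_times` table (the sample rows are invented; column names follow the GTFS static spec):

```python
import csv
import io

# A tiny GTFS-style stop_times.txt fragment.
STOP_TIMES = """trip_id,arrival_time,departure_time,stop_id,stop_sequence
t1,08:00:00,08:00:30,S1,1
t1,08:05:00,08:05:30,S2,2
"""

def stops_for_trip(text, trip_id):
    """Return the ordered stop_ids for one trip from a stop_times CSV."""
    rows = [r for r in csv.DictReader(io.StringIO(text)) if r["trip_id"] == trip_id]
    rows.sort(key=lambda r: int(r["stop_sequence"]))
    return [r["stop_id"] for r in rows]

print(stops_for_trip(STOP_TIMES, "t1"))  # ['S1', 'S2']
```

Real feeds are messier, which is exactly the validation work the role describes.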
Benefits
- Contractor position in a remote-first team.
- Working on something interesting and meaningful - helping to make cities usable.
- Working with a not-too-big, diverse engineering team.
- Arcane public transport knowledge with which to dazzle your friends.
We believe that diverse teams are the best teams and we're proud to be an equal opportunities employer. We welcome and will consider all applications regardless of age, disability, gender re-assignment, marriage, pregnancy, maternity, race or nationality, religion or belief, sex and sexual orientation (and any other status protected by applicable law).

Hi!
We are Genesis Growth Accelerator. We are building a unique model of working with promising B2C IT products: we invest in projects at early stages, scale them up, and help build successful companies that serve millions worldwide.
Over 100 mln people across the world have already used our products and many more are yet to come.
Our mission is to transform Ukraine from an outsourced hub into a product state. The state, where ideas are born, developed, and owned from the first $1 of revenue to a unicorn IPO.
We are now looking for a Business Data Analyst, who will increase the capitalization of businesses at an early stage.
RESPONSIBILITIES:
- Hypothesis generation to create, scale and optimize business growth levers across a wide product range;
- Planning, coordination, and analysis of marketing and product A/B tests;
- Support in the development of analytical solutions for mobile and web products across Genesis Growth Accelerator.
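Analysing the A/B tests mentioned above typically reduces to a significance test on conversion rates. A hedged sketch of a two-proportion z-test in pure Python (the sample numbers are invented):

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates between
    variant A (conv_a successes of n_a) and variant B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF via math.erf.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Invented example: 120/1000 vs 150/1000 conversions.
z, p = two_proportion_z_test(120, 1000, 150, 1000)
print(round(z, 2), round(p, 3))
```

In practice you would also fix the sample size in advance with a power calculation before reading the p-value.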
HARD SKILLS:
- Knowledge of SQL/Excel;
- Knowledge of Python (NumPy, Pandas, Matplotlib / Seaborn, functional & OOP principles);
- Understanding the concept (architecture) of DWH class systems;
- Confident application of key statistical and probability theory concepts;
- Basic understanding of key product metrics for mobile applications;
- Advanced level English knowledge.
WOULD BE A PLUS:
- Experience with Tableau/PowerBI;
- Experience with REST API and HTTP API;
- Experience with Google Cloud Platform (Cloud Storage, BigQuery);
- Knowledge of mobile marketing intelligence products including their capabilities and limitations (SimilarWeb, Sensor Tower, Appannie, etc.).
SOFT SKILLS:
- Optimism. You can't wait to see the results of the 10th hypothesis test after the previous 9 have failed;
- Communication. You don't have a problem with telling stories starting from the end.
Join our team of dreamers, doers, and global changemakers!
