Untitled is seeking a data engineer to join our growing team. The ideal candidate is equal parts data wrangler and data pipeline builder. We are looking for an individual who can work in a variety of environments and execute effectively on complex, diverse problems.
A data engineer at Untitled will be responsible for implementing data pipelines in other companies’ environments. The ideal candidate is familiar with working in a consulting capacity and has cut their teeth on digital transformations or data migrations for client organizations.
It is paramount that an Untitled data engineer has relevant experience in data science, business analytics, and statistics. They must approach each unique problem with a holistic view of the organization we are working with and be able to prescribe the optimal solution and stack for each company. Our goal is for this data engineer to grow into a team lead who guides our analysts and data scientists along efficient, automated paths to success.
- Build and support data pipelines in Untitled clients’ environments
- Assemble large, complex data sets
- Build infrastructure for optimal extraction, transformation, and loading of data from a wide variety of sources and environments; automate ETL tasks and data jobs at enterprise scale
- Connect and support data sources that flow into data warehouses
- Connect and support data warehouses that flow into business analytics tools such as Power BI and Tableau
- Work alongside data scientists and analysts to wrangle, assemble, clean, and load data into workable environments and analytics tools
- Accurately scope projects end-to-end, and remain flexible about solution implementations given clients’ budget, time, and organizational-bandwidth constraints
- Communicate clearly with non-technical team members to facilitate accurate project tracking and management of client/stakeholder expectations
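To give a flavor of the pipeline work described above, here is a minimal, hypothetical extract-transform-load sketch in Python (the data, schema, and table names are invented for illustration only):

```python
import csv
import io
import sqlite3

# Hypothetical raw export from a client source system.
RAW_CSV = """order_id,amount,region
1001,250.00,US-East
1002,,US-West
1003,75.50,US-East
"""

def extract(raw: str) -> list[dict]:
    """Extract: parse the raw CSV feed into rows."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows: list[dict]) -> list[tuple]:
    """Transform: drop incomplete records and cast types."""
    clean = []
    for row in rows:
        if not row["amount"]:
            continue  # skip rows with missing amounts
        clean.append((int(row["order_id"]), float(row["amount"]), row["region"]))
    return clean

def load(rows: list[tuple], conn: sqlite3.Connection) -> None:
    """Load: write cleaned rows into a warehouse-style table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders (order_id INTEGER, amount REAL, region TEXT)"
    )
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW_CSV)), conn)
total = conn.execute("SELECT COUNT(*), SUM(amount) FROM orders").fetchone()
print(total)  # → (2, 325.5)
```

In practice the same extract/transform/load stages would be scheduled and automated (e.g. with an orchestration framework) against the client’s actual sources and warehouse rather than in-memory SQLite.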
- Specialization: Generalist or Pipeline-centric
- Programming Languages: SQL, Python, Java, Go, Scala, C++, R
- Skills: Data wrangling from APIs, ETL scripting and automation, database design, a deep understanding of information architecture, and experience across a variety of data warehouses and environments
- Environments: AWS, Azure, Google BigQuery, Salesforce, Anaconda, Apache Frameworks, Jupyter Notebooks
- AWS Cloud: S3, Redshift, Athena, EC2, RDS, Glacier, EMR
- Database Expertise: both SQL and NoSQL databases
- 2–5 years of professional experience. Undergraduate degree preferred but NOT required.
- Experience in Analytics Platforms: Power BI, Tableau, Periscope Data
- Experience with production-level machine learning implementations
If you believe you fit the description above, please contact Untitled. We look forward to reviewing your application.