Share People Hub
Share People Hub, in partnership with a global company, is searching for a Data Engineer to work remotely. This position is exclusively for people with disabilities (PwD).
About the Role
A global company is seeking an experienced data engineer to support its consumer behavior tracking project. Prior experience with Databricks and large databases is required. In addition, the ideal candidate will have experience working on end-to-end implementation projects and collaborating with product development teams.
What you’ll do
- Collaborate with team members to analyze complex datasets, develop and implement cutting-edge machine learning and deep learning algorithms for portfolio optimization, recommendations, personalization, promos, segmentation, imputation, and insights discovery.
- Stay up to date with the latest advancements in machine learning and deep learning algorithms, ensuring a strong technical foundation to solve complex business problems. Apply frontier algorithms to drive innovation across the team’s projects.
- Utilize cloud platforms like Azure, Databricks, and Spark to efficiently process and analyze large-scale datasets. Leverage the full potential of these platforms to deliver scalable and performant solutions.
- Utilize Python/PySpark for data manipulation, analysis, and modeling tasks. Implement automation workflows to streamline processes and enhance productivity using tools and frameworks.
- Collaborate closely with cross-functional teams, including data engineers, business stakeholders, and product managers, to understand project requirements and deliver high-quality solutions. Embrace an agile development approach to iterate quickly and efficiently.
What you’ll need
- Bachelor’s degree in computer science, engineering, mathematics, or any quantitative field. A master’s degree is a plus.
- Strong technical skills and deep understanding of frontier algorithms in machine learning and deep learning. Familiarity with techniques for portfolio optimization, recommendations, personalization, promos, segmentation, imputation, and insights discovery.
- Proficiency in machine learning frameworks such as TensorFlow or PyTorch. Experience in implementing and deploying models using these frameworks.
- Experience working with cloud platforms such as Azure, Databricks, and Spark for big data processing and analysis.
- Proficiency in Python/PySpark for data manipulation, analysis, and modeling tasks. Strong knowledge of relevant libraries and frameworks.
- Good knowledge of version control and CI/CD tools such as GitHub for collaboration. Familiarity with other collaboration tools is a plus.
- Understanding of workflows and automation tools to streamline processes and enhance efficiency.
It’s great if you have
- Experience in optimization
What We Offer:
- Performance based bonus*
- Attendance bonus*
- Private pension plan
- Meal Allowance
- Casual office and dress code
- Days off*
- Health, dental, and life insurance
- Discounts on medicines
- Wellhub partnership
- Childcare subsidies
- Discounts on Ambev products*
- Club Ben partnership
- Scholarship*
- School materials assistance
- Language and training platforms
- Transport allowance
*Rules applied
Equal Opportunity & Affirmative Action:
We are proud to be an Equal Opportunity and Affirmative Action employer. We do not discriminate on the basis of race, color, national origin, gender, gender identity, sexual orientation, protected veteran status, disability, age, or other applicable legally protected characteristics.