Fractal is hiring Imagineer (Hiring Women in Tech) (Batch: 2025)
Off-Campus program: Imagineer
Imagineers will go through a dynamic program offering them a variety of projects with exposure to the business value chain. This will give them an opportunity to experiment across our four career tracks, namely Data Visualization, Decision Science, Data Science & Data Engineering, for scaled problem solving.
Eligibility Criteria:
2025 year of passing
7 CGPA (or equivalent) & above across all academics for B.Tech & MCA.
6.5 CGPA (or equivalent) & above across all academics for BSc/BCA grads.
Compensation for B.Sc./BCA: Total 3-year compensation: INR 21 lac
We will disburse INR 50k of the retention bonus with the 25th month's salary. The remaining INR 50k will be tied to the first two promotions (INR 25k each) within the Imagineer program tenure, based on performance. If early promotions are not achieved, the INR 50k linked to the promotions will not be disbursed.
As part of the program, you will undergo a preboarding program that focuses on building breadth of knowledge across all aspects of data to make you Fractal-ready. Your performance in this program and business need will determine your career track (Information Architecture, Data Engineering, Data Science & Decision Sciences). A general overview of your role as an Imagineer in each track is as follows:
Data Visualization (Information Architecture): Deliver insight, innovation, and impact to our Fortune 500 clients through predictive analytics and visual storytelling.
• Understand business requirements and address them by conceptualizing and designing solutions
• Learn and deploy BI solutions with minimal guidance (post training)
• Derive insights from data
• Deliver impact through automation and innovation
• Take end-to-end ownership of delivery
• Maintain rigorous documentation and drive operational efficiency with guidance
Data Engineering: Work with a team that helps deliver our Data Engineering offerings at a large scale to our Fortune 500 clients worldwide.
• Learn and implement engineering best practices to build data pipelines that drive analytical insights, leveraging cloud-native and cloud-agnostic stacks (the native services of each cloud, plus open-source tools such as Python and PySpark)
• Build data pipelines for both structured and unstructured data, for real-time and batch-oriented systems
• Build composition-based data solutions leveraging service-oriented and microservices architectures
• Build real-time solutions leveraging digital and IoT data
• Build lakehouses and warehouses
• Develop next-generation warehouse solutions using knowledge graphs
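To make the pipeline responsibilities above more concrete, here is a minimal, illustrative sketch in plain Python (not a production framework, and not Fractal's actual tooling) of an extract-transform-load batch job of the kind an Imagineer on this track might build; the record shape and validation rule are assumptions for the example:

```python
from dataclasses import dataclass


@dataclass
class Order:
    """A structured record parsed from a raw source row (hypothetical schema)."""
    order_id: str
    amount: float


def extract(raw_rows):
    """Extract: parse raw CSV-like rows into structured records."""
    for row in raw_rows:
        order_id, amount = row.split(",")
        yield Order(order_id.strip(), float(amount))


def transform(orders):
    """Transform: keep only valid orders (positive amounts)."""
    return [o for o in orders if o.amount > 0]


def load(orders, target):
    """Load: write transformed records into an in-memory 'warehouse' table."""
    target.extend(orders)
    return len(orders)


warehouse = []
raw = ["A100, 250.0", "A101, -5.0", "A102, 99.5"]
loaded = load(transform(extract(raw)), warehouse)
print(loaded)  # prints 2: one invalid order was filtered out
```

In a real batch or streaming system the same extract/transform/load stages would be expressed with a distributed engine such as PySpark and a durable store rather than in-memory lists.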
Data Science: Work on independent statistical and machine learning research and projects.
• Understand a problem statement and implement analytical solutions and techniques independently
• Conduct research and prototype innovations; gather data and requirements; scope and architect solutions; consult clients and client-facing teams on advanced statistical and machine learning problems
• Collaborate and coordinate with different functional teams (engineering and product development) to implement models and monitor outcomes
Decision Sciences:
• Communicate with clients and understand their business needs
• Work with large, complex data sets to solve difficult, non-routine analysis problems, applying advanced analytical methods as needed
• Build data pipelines and dashboards that let marketers self-serve their data needs; enable automated tracking of marketing campaigns
• Harmonize data as required and set up models according to the defined business logic
• Use statistical tools to identify and evaluate relationships between data fields, analyse output and determine the next action