Deloitte is hiring experienced candidates for the post of Senior Consultant. Details are given below.
Deloitte is a leading global provider of audit and assurance, consulting, financial advisory, risk advisory, tax, and related services. With more than 150 years of hard work and commitment to making a real difference, our organization has grown in scale and diversity—approximately 286,000 people in 150 countries and territories, providing these services—yet our shared culture remains the same. Our organization serves four out of five Fortune Global 500® companies.
“Deloitte” is the brand under which tens of thousands of dedicated professionals in independent firms throughout the world collaborate to provide audit, consulting, financial advisory, risk advisory, tax and related services to select clients.
Deloitte has approximately 286,000 professionals at member firms delivering services in audit & assurance, tax, consulting, financial advisory, risk advisory, and related services in more than 150 countries and territories. Revenues for fiscal year 2018 were US$43.2 billion. Learn more about Deloitte in the 2018 Deloitte Touche Tohmatsu Limited Global Report.
Job Title: Consulting - Business Operations - GCP Data Engineer
Post: Senior Consultant
Job Description:
Deloitte is looking for experienced GCP practitioners to join its cloud engineering practice. Here you will join a team delivering transformative cloud-hosted data platforms for some of the world’s biggest organizations. The ideal candidate has a proven track record of implementing data ingestion and transformation pipelines on cloud-based platforms, preferably Google Cloud Platform. Deep technical skills and hands-on experience across multiple cloud components are required to develop solution prototypes and support their subsequent industrialization.
You will also be required to participate in stakeholder management, highlight risks, propose delivery plans, and estimate time and team size based on requirements. Strong communication skills and relevant experience in handling such situations are therefore desired.
Job Responsibilities:
• Designing and implementing highly performant data ingestion pipelines from multiple sources using Apache Spark, Hadoop, and Hive on Cloud Dataproc (a minimal sketch follows this list)
• Delivering and presenting proofs of concept of key technology components to project stakeholders
• Developing scalable and reusable frameworks for ingesting and enriching datasets
• Applying strong data modeling (dimensional modeling) skills and knowledge of ELT processes
• Integrating the end-to-end data pipeline to take data from source systems to target data repositories, ensuring data quality and consistency are maintained at all times
• Working with batch and event-based/streaming technologies to ingest and process data
• Working with other members of the project team to support delivery of additional project components (API interfaces)
• Evaluating the performance and applicability of multiple tools against customer requirements
• Working within an Agile delivery / DevOps methodology to deliver proof of concept and production implementation in iterative sprints.
• Assisting in sprint and release planning and coordinating with the team to ensure on-time delivery while maintaining the required quality
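For illustration, a batch ingestion job of the kind described above might look like the minimal PySpark sketch below. This is not Deloitte's code; the bucket, dataset, and table names are hypothetical placeholders, and the BigQuery write assumes the spark-bigquery connector is available on the Dataproc cluster.

# Minimal batch ingestion sketch for Cloud Dataproc (illustrative only;
# all bucket and table names are hypothetical placeholders).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("gcs-to-bigquery-ingest").getOrCreate()

# Read raw CSV files landed in a GCS bucket.
raw = (
    spark.read
    .option("header", "true")
    .csv("gs://example-landing-bucket/sales/2024-01-*.csv")
)

# Light transformation: cast the amount column and stamp the load time.
cleaned = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .withColumn("load_ts", F.current_timestamp())
)

# Write to BigQuery; the spark-bigquery connector stages data via a
# temporary GCS bucket before loading it into the target table.
(
    cleaned.write
    .format("bigquery")
    .option("table", "example_project.sales_dataset.daily_sales")
    .option("temporaryGcsBucket", "example-staging-bucket")
    .mode("append")
    .save()
)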
Educational Qualification: B.E. / B.Tech
Required Skills:
• Strong knowledge of Data Management principles
• Experience in building ETL / data warehouse transformation processes
• Proven experience building data pipelines using Cloud Dataproc and Cloud Dataflow (a minimal Dataflow sketch follows this list)
• Experience with Cloud Datastore and with block and blob storage
• Experience in building conceptual and physical data models, applying normalization and denormalization techniques, designing tabular models
• Experience using Apache Spark and associated design and development patterns
• GCP certification preferred
• Knowledge of IAM (Identity and Access Management) on Google Cloud
• Hands-on experience designing and delivering solutions using Cloud Dataproc, Dataprep, Data Fusion, and Cloud Dataflow
• Good command of SQL and working knowledge of BigQuery and Cloud Bigtable
• Experience with other open-source big data products such as Hadoop (incl. Hive, Pig, Impala)
• Experience working in a DevOps environment with tools such as DevOps Services and Terraform
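For the streaming side, a minimal Cloud Dataflow pipeline using the Apache Beam Python SDK might look like the sketch below; again, the project, subscription, and table names are hypothetical placeholders, not part of the actual role.

# Minimal streaming sketch for Cloud Dataflow (illustrative only;
# project, subscription, and table names are hypothetical placeholders).
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

def run():
    # Runner, project, and region are supplied via standard CLI flags.
    options = PipelineOptions(streaming=True)
    with beam.Pipeline(options=options) as p:
        (
            p
            | "ReadEvents" >> beam.io.ReadFromPubSub(
                subscription="projects/example-project/subscriptions/events-sub")
            | "ParseJson" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
            | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
                "example-project:events_dataset.raw_events",
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
                create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER)
        )

if __name__ == "__main__":
    run()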
Remuneration: ₹3,00,000 - 8,00,000 PA
Experience: 2-8 years
Job Location: Bangalore (India)
Interested candidates can apply through the mentioned link.
Headquarters
27-32, Tower 3, One International Center, Mumbai City, Maharashtra, India