
Big Data Training in Chennai


Learn Big Data in Chennai at Greens Technologys – the No. 1 Big Data training institute in Chennai. Call 89399-15577 for more details.


Greens Technologys offers Big Data training in Chennai with real-world solutions from experienced professionals, covering Hadoop 2.7, YARN, MapReduce, HDFS, Pig, Impala, HBase, Flume, and Apache Spark, and prepares you for Cloudera's CCA175 Big Data certification.




About The Trainer

- Imran has been working with data for more than 20 years.

Imran is a Chief Data Scientist who works with Amazon Web Services and Google Cloud Platform, specializing in Hadoop development.
As a data scientist, he is skilled in optimizing queries and processing large data sets.

Imran specializes in big data projects.
Imran has worked with AWS Athena, Aurora, Redshift, Kinesis, and the IoT.
He has also done production work with Databricks for Apache Spark and Google Cloud Dataproc, Bigtable, BigQuery, and Cloud Spanner.

In his current role at Amazon, he is working with the Hadoop development team.
He specializes in writing and deploying data processing improvements.
His accomplishments include programming enhanced metadata processing for A/B testing, optimizing jobs on a 1,000+ node cluster, and creating a distributed fault injection platform.

He has spoken on data and cloud technologies in North and South America, Europe, Africa, Asia, and Australia.

Flexible Timings / Weekend classes Available.

Talk to the Trainer @ +91-8939915577

About Big Data Training in Chennai



This Big Data training in Chennai is designed to make you a certified Big Data practitioner through rich hands-on training on the Hadoop ecosystem and best practices for HDFS, MapReduce, HBase, Hive, Pig, Oozie, and Sqoop.

This course is a stepping stone in your Big Data journey: you will work on a Big Data analytics project using a dataset of your choice and receive Hadoop certification on completing the project.


Who should go for this Big Data Training Course?


The market for Big Data analytics is growing across the world, and this strong growth translates into a great opportunity for IT professionals.
Here are a few professional IT groups that continue to benefit from moving into the Big Data domain:

  • Developers and Architects
  • BI /ETL/DW professionals
  • Senior IT Professionals
  • Testing professionals
  • Mainframe professionals
  • Freshers

Big Data Training for Beginners (Starts from Basic Java Training)


This Big Data training course covers Java fundamentals and teaches you everything you need to know about Big Data. Gain skills in Hadoop, MapReduce, Cassandra, Apache Spark, MongoDB, Hadoop administration, Apache HBase fundamentals, Kafka, Flink, data science, Scala, and Storm with our unique Big Data course content.

We provide the best Big Data training in Adyar, OMR, Tambaram, Porur, and Anna Nagar, and help you get certified as a Big Data Professional, Certified Big Data Scientist, or Certified Big Data Science Professional.



Big Data Training and Placement in Chennai


Rated the No. 1 Big Data training institute in Chennai for assured placements. Our job-oriented Big Data courses in Chennai are taught by experienced, certified professionals with extensive real-world experience, and our training focuses on practice rather than theory.


Placement tracks we offer under our Big Data training in Chennai (Adyar and OMR):

  • Big Data Training and Placement in Chennai
  • Big Data Hadoop Training and Placement in Chennai
  • Big Data Analytics Training and Placement in Chennai

Big Data Corporate Training in Chennai


Make your team data-smart. We offer customized online and onsite courses in analytics, data science, machine learning, and Big Data in Chennai for your employees.


Cross-training your data scientists, data architects, Big Data developers, analysts, and administrators also prepares you to optimize across the entire data value chain. Our instructors are recognized among India's top experts in Big Data and analytics, and they have access to industry data sets that are incorporated into each course.


Big Data Training courses in Chennai


  • Big Data Certification Training
  • Hadoop Project based Training
  • Apache Spark Certification Training
  • Hadoop Administration
  • NoSQL Databases for Big Data
  • CCA175 - Cloudera Spark and Hadoop Developer Certification
  • Spark, Scala and Storm combo
  • Apache Kafka
  • Apache Storm Introduction
  • Apache Hadoop and MapReduce Essentials
  • Apache Spark Advanced Topics
  • Hadoop Interview Preparation - Questions and Answers

The Big Data Hadoop Certification course is designed to give you in-depth knowledge of the Big Data framework using Hadoop and Spark, including HDFS, YARN, and MapReduce. You will learn to use Pig, Hive, and Impala to process and analyze large datasets stored in HDFS, and to use Sqoop and Flume for data ingestion with our big data training.

You will master real-time data processing using Spark, including functional programming in Spark, implementing Spark applications, understanding parallel processing in Spark, and using Spark RDD optimization techniques. With our big data course, you will also learn the various interactive algorithms in Spark and use Spark SQL for creating, transforming, and querying data frames.
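
To make the Spark portion concrete, here is a minimal, self-contained Scala sketch of the style of program covered in the course: it applies functional transformations to an RDD of key-value pairs and then runs the same aggregation through Spark SQL on a DataFrame. The sample lines, view name and column names are illustrative placeholders, not part of the official course material.

    import org.apache.spark.sql.SparkSession

    object SparkBasicsSketch {
      def main(args: Array[String]): Unit = {
        // Local session for practice; on a cluster the master is supplied by spark-submit.
        val spark = SparkSession.builder()
          .appName("spark-basics-sketch")
          .master("local[*]")
          .getOrCreate()
        import spark.implicits._

        // Functional programming on an RDD: the classic word count.
        val lines = spark.sparkContext.parallelize(Seq(
          "big data training in chennai",
          "spark and hadoop training"))
        val counts = lines
          .flatMap(_.split("\\s+"))          // transformation
          .map(word => (word, 1))            // key-value pairs
          .reduceByKey(_ + _)                // parallel aggregation
        counts.collect().foreach(println)    // action: pulls results to the driver

        // The same aggregation expressed with Spark SQL on a DataFrame.
        val df = counts.toDF("word", "cnt")
        df.createOrReplaceTempView("word_counts")
        spark.sql("SELECT word, cnt FROM word_counts ORDER BY cnt DESC").show()

        spark.stop()
      }
    }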

As a part of the big data course, you will be required to execute real-life industry-based projects using CloudLab in the domains of banking, telecommunication, social media, insurance, and e-commerce. This Big Data Hadoop training course will prepare you for the Cloudera CCA175 big data certification.


Big Data Courses, Certification & Training content  


  • 1 About Hadoop Training
  • 2 Hadoop Training Course Prerequisites
  • 3 Hardware and Software Requirements
  • 4 Hadoop Training Course Duration
  • 5 Hadoop Course Content


    Hadoop Training Course Prerequisites


    • Basic Unix Commands
    • Core Java (OOP concepts, collections, exceptions) – for MapReduce programming
    • SQL Query knowledge – For Hive Queries

    Hardware and Software Requirements


    • Any Linux flavor OS (e.g., Ubuntu/CentOS/Fedora/Red Hat Linux) with 4 GB RAM (minimum) and 100 GB HDD
    • Java 1.6+
    • Open-SSH server & client
    • MySQL database
    • Eclipse IDE
    • VMware (to run Linux OS alongside Windows OS)

    Hadoop Training Course Duration


    • 70 hours total, 1.5 hours daily

    Hadoop Course Content


    Introduction to Hadoop


    • High Availability
    • Scaling
    • Advantages and Challenges

    Introduction to Big Data


    • What is Big data
    • Big Data opportunities
    • Big Data Challenges
    • Characteristics of Big data

    Introduction to Hadoop


    • Hadoop Distributed File System
    • Comparing Hadoop & SQL.
    • Industries using Hadoop.
    • Data Locality.
    • Hadoop Architecture.
    • Map Reduce & HDFS.
    • Using the Hadoop single node image (Clone).

    The Hadoop Distributed File System (HDFS)


    • HDFS Design & Concepts
    • Blocks, Name nodes and Data nodes
    • HDFS High-Availability and HDFS Federation.
    • Hadoop DFS The Command-Line Interface
    • Basic File System Operations
    • Anatomy of File Read
    • Anatomy of File Write
    • Block Placement Policy and Modes
    • More detailed explanation about Configuration files.
    • Metadata, FS image, Edit log, Secondary Name Node and Safe Mode.
    • How to add New Data Node dynamically.
    • How to decommission a Data Node dynamically (Without stopping cluster).
    • FSCK Utility. (Block report).
    • How to override default configuration at system level and Programming level.
    • HDFS Federation.
    • ZOOKEEPER Leader Election Algorithm.
    • Exercise and a small use case on HDFS (a minimal file-system API sketch follows this list).
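
    The basic file-system operations above can be driven either from the hdfs dfs command line or programmatically through Hadoop's Java FileSystem API. Below is a minimal Scala sketch of the programmatic route; the NameNode URI and the paths are placeholders for whatever your cluster uses (they are normally resolved from core-site.xml).

    import java.net.URI
    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.fs.{FileSystem, Path}

    object HdfsBasicsSketch {
      def main(args: Array[String]): Unit = {
        val conf = new Configuration()
        // Placeholder NameNode address; usually picked up from core-site.xml instead.
        val fs = FileSystem.get(new URI("hdfs://localhost:9000"), conf)

        val dir = new Path("/user/training/demo")
        if (!fs.exists(dir)) fs.mkdirs(dir)          // like: hdfs dfs -mkdir -p

        // Copy a local file into HDFS (like: hdfs dfs -put).
        fs.copyFromLocalFile(new Path("/tmp/sample.txt"), new Path(dir, "sample.txt"))

        // List the directory (like: hdfs dfs -ls) with length and replication factor.
        fs.listStatus(dir).foreach { status =>
          println(s"${status.getPath} ${status.getLen} bytes, replication ${status.getReplication}")
        }

        fs.close()
      }
    }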

    Map Reduce


    • Functional Programming Basics.
    • Map and Reduce Basics
    • How Map Reduce Works
    • Anatomy of a Map Reduce Job Run
    • Legacy Architecture -> Job Submission, Job Initialization, Task Assignment, Task Execution, Progress and Status Updates
    • Job Completion, Failures
    • Shuffling and Sorting
    • Splits, Record reader, Partition, Types of partitions & Combiner
    • Optimization Techniques -> Speculative Execution, JVM Reuse and Number of Slots.
    • Types of Schedulers and Counters.
    • Comparisons between Old and New API at code and Architecture Level.
    • Getting the data from RDBMS into HDFS using Custom data types.
    • Distributed Cache and Hadoop Streaming (Python, Ruby and R).
    • YARN.
    • Sequential Files and Map Files.
    • Enabling Compression Codecs (see the configuration sketch after this list).
    • Map side Join with distributed Cache.
    • Types of I/O Formats: MultipleOutputs, NLineInputFormat.
    • Handling small files using CombineFileInputFormat.
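
    Several of the optimisation topics above, such as speculative execution and compression of intermediate output, come down to setting job configuration properties. The sketch below shows, in Scala against the Hadoop MapReduce API, how such properties are typically set when a job is built; the specific values are illustrative choices, not course-mandated defaults.

    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.io.compress.{CompressionCodec, SnappyCodec}
    import org.apache.hadoop.mapreduce.Job

    object JobTuningSketch {
      def main(args: Array[String]): Unit = {
        val conf = new Configuration()

        // Compress intermediate map output to reduce shuffle traffic.
        conf.setBoolean("mapreduce.map.output.compress", true)
        conf.setClass("mapreduce.map.output.compress.codec",
          classOf[SnappyCodec], classOf[CompressionCodec])

        // Speculative execution: re-run straggler tasks on spare capacity.
        conf.setBoolean("mapreduce.map.speculative", true)
        conf.setBoolean("mapreduce.reduce.speculative", false)

        val job = Job.getInstance(conf, "tuned-job")
        // ... set mapper, reducer, input and output paths here as in any normal job ...
        println(job.getConfiguration.get("mapreduce.map.output.compress"))
      }
    }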

    Map/Reduce Programming – Java Programming


    • Hands-on “Word Count” in MapReduce in standalone and pseudo-distributed mode (a Scala rendering follows this list).
    • Sorting files using Hadoop Configuration API discussion
    • Emulating “grep” for searching inside a file in Hadoop
    • DBInput Format
    • Job Dependency API discussion
    • Input Format API discussion
    • Input Split API discussion
    • Custom Data type creation in Hadoop.
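
    As a flavour of the word-count exercise referenced above, here is a compact rendering of the standard Hadoop MapReduce word count. The class sessions use plain Java; this version is written in Scala against the same Hadoop API purely to keep the examples on this page in one language. Input and output paths come from the command line, and the output directory must not already exist.

    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.fs.Path
    import org.apache.hadoop.io.{IntWritable, LongWritable, Text}
    import org.apache.hadoop.mapreduce.{Job, Mapper, Reducer}
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat

    // Mapper: emit (word, 1) for every token in the input line.
    class TokenizerMapper extends Mapper[LongWritable, Text, Text, IntWritable] {
      private val one  = new IntWritable(1)
      private val word = new Text()
      override def map(key: LongWritable, value: Text,
                       context: Mapper[LongWritable, Text, Text, IntWritable]#Context): Unit = {
        value.toString.toLowerCase.split("\\W+").filter(_.nonEmpty).foreach { token =>
          word.set(token)
          context.write(word, one)
        }
      }
    }

    // Reducer: sum the counts that arrive for each word after the shuffle and sort.
    class IntSumReducer extends Reducer[Text, IntWritable, Text, IntWritable] {
      private val result = new IntWritable()
      override def reduce(key: Text, values: java.lang.Iterable[IntWritable],
                          context: Reducer[Text, IntWritable, Text, IntWritable]#Context): Unit = {
        var sum = 0
        values.forEach(v => sum += v.get())
        result.set(sum)
        context.write(key, result)
      }
    }

    object WordCount {
      def main(args: Array[String]): Unit = {
        val job = Job.getInstance(new Configuration(), "word count")
        job.setJarByClass(WordCount.getClass)
        job.setMapperClass(classOf[TokenizerMapper])
        job.setCombinerClass(classOf[IntSumReducer])   // combiner runs the reduce logic map-side
        job.setReducerClass(classOf[IntSumReducer])
        job.setOutputKeyClass(classOf[Text])
        job.setOutputValueClass(classOf[IntWritable])
        FileInputFormat.addInputPath(job, new Path(args(0)))      // e.g. /user/training/input
        FileOutputFormat.setOutputPath(job, new Path(args(1)))    // must not already exist
        System.exit(if (job.waitForCompletion(true)) 0 else 1)
      }
    }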

    NOSQL


    • ACID in RDBMS and BASE in NoSQL.
    • CAP Theorem and Types of Consistency.
    • Types of NoSQL Databases in detail.
    • Columnar Databases in Detail (HBASE and CASSANDRA).
    • TTL, Bloom Filters and Compaction.

    HBase


    • HBase Installation
    • HBase concepts
    • HBase Data Model and Comparison between RDBMS and NOSQL.
    • Master & Region Servers.
    • HBase Operations (DDL and DML) through Shell and Programming and HBase Architecture.
    • Catalog Tables.
    • Block Cache and sharding.
    • SPLITS.
    • DATA Modeling (Sequential, Salted, Promoted and Random Keys).
    • Java APIs and REST Interface.
    • Client Side Buffering and Process 1 million records using Client side Buffering.
    • HBASE Counters.
    • Enabling Replication and HBASE RAW Scans.
    • HBASE Filters.
    • Bulk Loading and Coprocessors (Endpoints and Observers with programs).
    • Real-world use case consisting of HDFS, MR and HBase (a client API sketch follows this list).
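
    As a small taste of the HBase DML operations listed above, the sketch below uses the standard HBase client API from Scala to write and read one row. The table name, column family, row key and ZooKeeper quorum are placeholders, and the table is assumed to already exist (for example, created with create 'students', 'info' in the HBase shell).

    import org.apache.hadoop.hbase.{HBaseConfiguration, TableName}
    import org.apache.hadoop.hbase.client.{ConnectionFactory, Get, Put}
    import org.apache.hadoop.hbase.util.Bytes

    object HBaseClientSketch {
      def main(args: Array[String]): Unit = {
        val conf = HBaseConfiguration.create()
        conf.set("hbase.zookeeper.quorum", "localhost")   // placeholder quorum

        val connection = ConnectionFactory.createConnection(conf)
        try {
          // Assumes the table was created beforehand: create 'students', 'info'
          val table = connection.getTable(TableName.valueOf("students"))

          // DML: put one row with two columns in the 'info' column family.
          val put = new Put(Bytes.toBytes("row1"))
          put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("name"), Bytes.toBytes("Imran"))
          put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("city"), Bytes.toBytes("Chennai"))
          table.put(put)

          // DML: get the row back and read one cell.
          val result = table.get(new Get(Bytes.toBytes("row1")))
          val name = Bytes.toString(result.getValue(Bytes.toBytes("info"), Bytes.toBytes("name")))
          println(s"name = $name")

          table.close()
        } finally {
          connection.close()
        }
      }
    }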

    Hive


    • Installation
    • Introduction and Architecture.
    • Hive Services, Hive Shell, Hive Server and Hive Web Interface (HWI)
    • Meta store
    • Hive QL
    • OLTP vs. OLAP
    • Working with Tables.
    • Primitive data types and complex data types.
    • Working with Partitions.
    • User Defined Functions
    • Hive Bucketed Tables and Sampling.
    • External partitioned tables, Map the data to the partition in the table, Writing the output of one query to another table, Multiple inserts
    • Dynamic Partition
    • Differences between ORDER BY, DISTRIBUTE BY and SORT BY.
    • Bucketing and Sorted Bucketing with Dynamic partition.
    • RC File.
    • INDEXES and VIEWS.
    • MAPSIDE JOINS.
    • Compression on hive tables and Migrating Hive tables.
    • Dynamic substitution in Hive and different ways of running Hive
    • How to enable Update in HIVE.
    • Log Analysis on Hive.
    • Access HBASE tables using Hive.
    • Hands-on Exercises (a Spark-on-Hive sketch follows this list)
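
    The HiveQL topics above are normally practised in the Hive shell or Beeline. To keep the examples on this page in one language, the sketch below drives the same kind of HiveQL (a partitioned table, a dynamic-partition insert, and a DISTRIBUTE BY / SORT BY query) through Spark's Hive-enabled SQL engine in Scala. The table names and schema are made up, and the staging_sales source table is assumed to exist already.

    import org.apache.spark.sql.SparkSession

    object HivePartitionSketch {
      def main(args: Array[String]): Unit = {
        // enableHiveSupport() lets Spark use the Hive metastore and HiveQL syntax;
        // run via spark-submit, which supplies the master.
        val spark = SparkSession.builder()
          .appName("hive-partition-sketch")
          .enableHiveSupport()
          .getOrCreate()

        // Partitioned managed table (illustrative schema and names).
        spark.sql("""
          CREATE TABLE IF NOT EXISTS sales (id INT, amount DOUBLE)
          PARTITIONED BY (sale_date STRING)
          STORED AS ORC
        """)

        // Dynamic-partition insert: the partition value comes from the data itself.
        // Assumes a staging_sales table with matching columns already exists.
        spark.sql("SET hive.exec.dynamic.partition.mode=nonstrict")
        spark.sql("""
          INSERT INTO TABLE sales PARTITION (sale_date)
          SELECT id, amount, sale_date FROM staging_sales
        """)

        // DISTRIBUTE BY + SORT BY: cluster rows per reducer, then sort within each one.
        spark.sql("""
          SELECT sale_date, amount FROM sales
          DISTRIBUTE BY sale_date SORT BY amount DESC
        """).show()

        spark.stop()
      }
    }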

    Pig


    • Installation
    • Execution Types
    • Grunt Shell
    • Pig Latin
    • Data Processing
    • Schema on read
    • Primitive data types and complex data types.
    • Tuple schema, BAG Schema and MAP Schema.
    • Loading and Storing
    • Filtering
    • Grouping & Joining
    • Debugging commands (Illustrate and Explain).
    • Validations in PIG.
    • Type casting in PIG.
    • Working with Functions
    • User Defined Functions
    • Types of JOINS in pig and Replicated Join in detail.
    • SPLITS and Multiquery execution.
    • Error Handling, FLATTEN and ORDER BY.
    • Parameter Substitution.
    • Nested For Each.
    • User Defined Functions, Dynamic Invokers and Macros.
    • How to access HBASE using PIG.
    • How to Load and Write JSON DATA using PIG.
    • Piggy Bank.
    • Hands on Exercises

    SQOOP


    • Installation
    • Import data (full table, only a subset, target directory, protecting the password, file formats other than CSV, compressing, controlling parallelism, importing all tables)
    • Incremental import (importing only new data, last imported data, storing the password in the metastore, sharing the metastore between Sqoop clients)
    • Free-form query import
    • Export data to RDBMS, Hive and HBase
    • Hands-on Exercises (a Spark JDBC comparison sketch follows this list).
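
    Sqoop itself is driven from the command line (sqoop import --connect ... --table ...), so there is no Scala API to show for it. For comparison only, the sketch below does the closest equivalent with Spark's JDBC data source: a parallel read of an RDBMS table that is then landed in HDFS. The JDBC URL, credentials, table name and partitioning bounds are placeholders for your own database.

    import org.apache.spark.sql.SparkSession

    object JdbcImportSketch {
      def main(args: Array[String]): Unit = {
        // Run via spark-submit, which supplies the master.
        val spark = SparkSession.builder().appName("jdbc-import-sketch").getOrCreate()

        // Parallel read from an RDBMS table, split into 4 partitions on the id column.
        val employees = spark.read.format("jdbc")
          .option("url", "jdbc:mysql://dbhost:3306/hr")
          .option("dbtable", "employees")
          .option("user", "training")
          .option("password", "training")
          .option("partitionColumn", "id")
          .option("lowerBound", "1")
          .option("upperBound", "100000")
          .option("numPartitions", "4")
          .load()

        // Land the data in HDFS as Parquet (comparable to a Sqoop import into a target directory).
        employees.write.mode("overwrite").parquet("hdfs:///user/training/employees")

        spark.stop()
      }
    }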

    HCATALOG.


    • Installation.
    • Introduction to HCATALOG.
    • About HCatalog with Pig, Hive and MR.
    • Hands on Exercises.

    FLUME


    • Installation
    • Introduction to Flume
    • Flume Agents: Sources, Channels and Sinks
    • Log user information into HDFS from a Java program using Log4j and the Avro source
    • Log user information into HDFS from a Java program using the Tail source
    • Log user information into HBase from a Java program using Log4j and the Avro source
    • Log user information into HBase from a Java program using the Tail source
    • Flume commands
    • Flume use case: stream data from Twitter into HDFS and HBase, then do some analysis using Hive and Pig

    More Ecosystems


    • HUE.(Hortonworks and Cloudera).

    Oozie


    • Workflow (Start, Action, End, Kill, Fork and Join), Schedulers, Coordinators and Bundles.
    • Workflow to show how to schedule Sqoop Job, Hive, MR and PIG.
    • Real world Use case which will find the top websites used by users of certain ages and will be scheduled to run for every one hour.
    • ZooKeeper
    • HBASE Integration with HIVE and PIG.
    • Phoenix
    • Proof of concept (POC).

    SPARK


    • Overview
    • Linking with Spark
    • Initializing Spark
    • Using the Shell
    • Resilient Distributed Datasets (RDDs)
    • Parallelized Collections
    • External Datasets
    • RDD Operations
    • Basics, Passing Functions to Spark
    • Working with Key-Value Pairs
    • Transformations
    • Actions
    • RDD Persistence
    • Which Storage Level to Choose?
    • Removing Data
    • Shared Variables
    • Broadcast Variables
    • Accumulators
    • Deploying to a Cluster
    • Unit Testing
    • Migrating from pre-1.0 Versions of Spark
    • Where to Go from Here (a short Scala sketch covering several of these topics follows)
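
    The shared-variable and persistence topics above fit in a few lines of Scala. In this sketch a small lookup map is broadcast to the executors, an accumulator counts rows that miss the lookup, and the resulting RDD is persisted before being reused; all of the data and names are made up for illustration.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.storage.StorageLevel

    object SharedVariablesSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("shared-variables-sketch")
          .master("local[*]")
          .getOrCreate()
        val sc = spark.sparkContext

        // Broadcast variable: a read-only lookup table shipped once to each executor.
        val cityCodes = sc.broadcast(Map("MAA" -> "Chennai", "BLR" -> "Bengaluru"))

        // Accumulator: executors add to it, only the driver reads the total.
        val unknownCodes = sc.longAccumulator("unknown-codes")

        val bookings = sc.parallelize(Seq("MAA", "BLR", "MAA", "XXX"))
        val resolved = bookings.map { code =>
          cityCodes.value.getOrElse(code, { unknownCodes.add(1); "unknown" })
        }.persist(StorageLevel.MEMORY_ONLY)   // RDD persistence: reuse without recomputation

        println(resolved.countByValue())       // action: materialises and caches the RDD
        println(s"rows with unknown codes: ${unknownCodes.value}")  // read only after an action

        spark.stop()
      }
    }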

Big Data Certification Training in Chennai


Big Data Certification Training in Chennai will enable you to master the concepts of the Hadoop framework and its deployment in a cluster environment. You will learn to:

  • Understand the different components of the Hadoop ecosystem, such as Hadoop 2.7, YARN, MapReduce, Pig, Hive, Impala, HBase, Sqoop, Flume, and Apache Spark
  • Understand Hadoop Distributed File System (HDFS) and YARN architecture, and learn how to work with them for storage and resource management
  • Understand MapReduce and its characteristics and assimilate advanced MapReduce concepts
  • Ingest data using Sqoop and Flume
  • Create database and tables in Hive and Impala, understand HBase, and use Hive and Impala for partitioning
  • Understand different types of file formats, Avro Schema, using Avro with Hive and Sqoop, and schema evolution
  • Understand Flume, Flume architecture, sources, flume sinks, channels, and flume configurations
  • Understand and work with HBase, its architecture and data storage, and learn the difference between HBase and RDBMS
  • Gain a working knowledge of Pig and its components
  • Do functional programming in Spark, and implement and build Spark applications
  • Understand resilient distributed datasets (RDD) in detail
  • Gain an in-depth understanding of parallel processing in Spark and Spark RDD optimization techniques
  • Understand the common use cases of Spark and various interactive algorithms
  • Learn Spark SQL, creating, transforming, and querying data frames
  • Prepare for Cloudera CCA175 Big Data certification



Course advisor



Named by Onalytica as one of the three most influential people in Big Data, she is also an author for a number of leading Big Data and Data Science websites, including Datafloq, Data Science Central, and The Guardian, and she regularly speaks at renowned events.


What is Big Data?


The term ‘Big Data’ refers to extremely large data sets, structured or unstructured, that are so complex that they need more sophisticated processing systems than the traditional data processing application software.

It can also refer to the process of using predictive analytics, user behavior analytics or other advanced data analysis technology to extract value from a data set.
Big Data is often used by businesses and government agencies to find trends and patterns that can help them make strategic decisions or spot emerging trends across large populations.




Big Data training in Chennai Reviews


Greens Technologys reviews are given by students who have already completed their training with us. Please give your feedback as well if you are a student.


Big Data training in Chennai Reviews from our Students



Dear Karthik! This e-mail is to say a BIG THANK YOU for all the teaching you did in our Big Data training sessions. I GOT A JOB as a Big Data Developer after almost 2 months of struggle here in Chennai. I must thank you for such good and rocking lessons. To tell you frankly, you made me like, love and go crazy about the subject even though I had no idea about it before joining your classes. This is my first job in IT after my studies and I was a bit tensed about how things would be after joining the company. Your suggestions were very helpful for me to get on well in the company as a good developer.



Best Big Data Certification Training Syllabus



I attended the basic and advanced Big Data classroom sessions. The outline of each course was well prepared and presented using the latest video technology. The instructor is very talented and an expert in analytics concepts, both theoretically and practically. I would highly recommend this institute to anyone who wants to learn Big Data. I joined Greens Technology because of their proven expertise in practical training. Here, I learnt the magic of Big Data. The constant and personal interaction with the trainer, live projects, certification training and study material are the best part. The trainers are extremely proficient in their knowledge and understanding of the topics. The instructors I had were both skilful and possessed the knowledge required to present the material to the classes. The certification training program has provided me with the necessary skill sets to prepare me for the corporate world. Greens Technology is the stepping stone to my success in the IT world. The money invested is well worth the reward. From my personal experience, I heartily recommend Greens Technology as the best training institute for IT and business intelligence education. Thank you, Greens Technology, for helping me achieve my dream of becoming a Big Data Certified Professional.



Best Big Data Training center in Chennai





"The course delivery certainly is much better than what I expected. I am glad that I decided to choose Greens Technology for the Big Data course. Wonderful learning experience and I like the way classes are organized and good support staff. Greens Technology provides quality learning experience within affordable price. Also thanks to my educator Dinesh , his teaching inspires and motivates to learn..


Best Big Data Training and Placement In Chennai



"Friends I am from Manual testing background having 6+ years experienced. I planned to Move into R Business Intelligence (BI) . I Came to know about Greens technologies and Sai who is working in CTS . They Really helped me to clear the interview. Thanks to Sai Sir. Knowledgeable Presenters, Professional Materials, Excellent Support" what else can a person ask for when acquiring a new skill or knowledge to enhance their career. Greens Technology true to its name is the place to gather,garner and garden the knowledge for all around the globe. My Best wishes to Greens Technology team for their upcoming bright future in E-Learning sector.


Big Data Training Venue:

Are you located in any of these areas - Adyar, Mylapore, Nandanam, Nanganallur, Nungambakkam, OMR, Pallikaranai, Perungudi, Ambattur, Aminjikarai, Adambakkam, Anna Nagar, Anna Salai, Ashok Nagar, Besant Nagar, Choolaimedu, Chromepet, Medavakkam, Porur, Saidapet, Sholinganallur, St. Thomas Mount, T. Nagar, Tambaram, Teynampet, Thiruvanmiyur, Thoraipakkam,Vadapalani, Velachery, Egmore, Ekkattuthangal, Guindy, K.K.Nagar, Kilpauk, Kodambakkam, Madipakkam, Villivakkam, Virugambakkam and West Mambalam.

Our Adyar office is just a few kilometres away from your location. If you need the best Big Data training in Chennai, driving a couple of extra kilometres is worth it!



Big Data Related Training Courses in Chennai



Big data Placement Training in Chennai


  • More than 2,000 students trained
  • 95% placement record
  • 960+ interviews organized

Big data training Locations in Chennai


Our Big data Training centers


  • Adyar
  • Ambattur
  • Adambakkam
  • Anna Nagar
  • Anna Salai
  • Ashok Nagar
  • Choolaimedu
  • Chromepet
  • Ekkattuthangal
  • Guindy
  • Kodambakkam
  • Madipakkam
  • Mylapore
  • Porur
  • Saidapet
  • T. Nagar
  • Tambaram
  • Vadapalani
  • Velachery
  • Villivakkam
  • Virugambakkam

Big data training batch size in Chennai


Regular Batch ( Morning, Day time & Evening)


  • Seats Available : 8 (maximum)

Weekend Training Batch( Saturday, Sunday & Holidays)


  • Seats Available : 8 (maximum)

Fast Track batch


  • Seats Available : 5 (maximum)


Bigdata Hadoop Downloads


Bigdata Hadoop Syllabus Downloads


Tags: Big data Training in Chennai, Big data Training Chennai, Big data Training Institute in Chennai, Big data Training in Chennai Cost, Big Data Training center in Chennai, Hadoop Big data


Exela Guindy - Interview Questions


2nd round

1. Procedure vs function
2. What is a trigger?
3. Can we create an index on a view?
4. What is an index?
5. Write a query for a table t1 with a column containing 0,1,0,1: update 0 as 1 and 1 as 0
6. Delete duplicate values in a table
7. Write a query for the 2nd highest salary in a table


D & B ( Dun & Bradstreet ) INTERVIEW QUESTIONS

D & b second round interview questions

1.tell me ur roles and responsibilities
2. External table and table partitioning
3.in table partitioning using joing date of the employee do the interval partitioning
4.performance tuning
5. What are all the stored procedure u created
6. How to data migrate from one server to another server

BLUEALLY INFOTECH INDIA PRIVATE LIMITED

1. Tell about yourself
2. Roles and responsibilities
3. How to insert and update in a single query
4. Types of indexes
5. Explain B-tree and bitmap indexes
6. Difference between a view and a materialized view
7. Types of constraints
8. Explain foreign keys
9. Performance tuning
10. Explain sequences
11. What is a cursor?
12. Where are you using it in your project?
13. Is there any way to move data from one table to another without using a cursor?
14. Explain triggers
15. Explain row-level triggers and where you use them in your project
16. Explain partitioning
17. Explain range partitioning
18. Do you know Unix?
19. Do you know shell scripting?
20. We are looking for DBA developers

Exela technology interview questions.. Face to face interview

Tell about your project
Working timings
What is an index and what are its types?
What is a view?
What is a synonym and what is its purpose?
What are constraints?

Write a query whose output lists names like "Mahesh, Suresh"

Difference between a stored procedure and a procedure
Are you working now?
Why did you leave the job?
What does the "c" stand for in 12c, and the "g" in 10g and 11g?

Exela Technologies interview questions

1. Explain your project
2. Write a query to get the nth highest salary
3. Rank the employees using ROW_NUMBER
4. Analytical functions
5. Synonyms
6. Views
7. Materialized views – why do we use a materialized view?
8. Indexes – difference between bitmap and B-tree indexes
9. Forward declaration
10. PRAGMA AUTONOMOUS_TRANSACTION
11. Constraints – difference between primary key and foreign key
12. Stored procedures and local procedures
13. Update the gender column: F as M and M as F
14. Given an emp_name column with the values Raja and Mani, get the output as "raja,mani"

Tekclan Software Solutions

1st round
1. Views
2. Indexes
3. Difference between views and materialized views
4. Triggers
5. Procedures
6. Functions
7. Difference between a procedure and a function
8. Difference between joins and set operators
9. Join types
10. Cursors
11. Exceptions
12. Difference between a runtime error and a syntax error
13. Any idea about MySQL?
14. Write a program with a user-defined exception
15. Difference between UNION and UNION ALL
16. Difference between a statement-level trigger and a row-level trigger
17. Constraints
18. Difference between primary key and unique constraints


1. What is a materialized view? What types of refresh does it have?
2. What is an index? Types of indexes?
3. What is a collection? Types of collections?
4. Difference between an index-by table and a nested table?
5. If you declare an index-by table or a nested table, what will be the default value?
6. What is a cursor? Why do we use cursors? Why can't we use an index-by table or a nested table instead? What is the difference?
7. Difference between a function and a pipelined (PIPE ROW) function?
8. How does the explain plan work?
9. Do you know gather statistics? Explain.
10. How will you find a table lock?
11. What is a mutating table error?
12. What is an exception? Types?
13. Name some predefined exceptions you have come across
14. Why do we use OTHERS in exception handling?
15. What are hints?
16. What is table partitioning?
17. In BULK COLLECT, what is the use of SAVE EXCEPTIONS?
18. If you are using SAVE EXCEPTIONS, how will you find the records that were not inserted?
19. Which scan performs faster on a table: a unique scan or a full scan?
20. Difference between analytical functions and aggregate functions?
21. How do you write rank functions?

Questions:

1. Technical skills you have
2. How many incidents do you work on daily?
3. Tell us some frequently received incidents and how you resolve them
4. How are you using SQL queries in your current project?
5. The last issue you faced
6. Future career plans for the next two years
7. Are you willing to learn new technologies?
8. Which do you prefer: support or development?

CTS interview questions

1. What are the types of table partitions?
2. Do you know about composite partitioning?
3. Difference between a view and a materialized view
4. Difference between DELETE and TRUNCATE
5. Difference between aggregate functions and analytical functions
6. Difference between RANK and DENSE_RANK
7. What hints have you used?
8. How to insert bulk data
9. What is SAVE EXCEPTIONS?
10. Have you worked on performance tuning?
11. How will you do query optimization?
12. Where will an index scan increase performance, and where will a full scan increase performance?
13. Have you used UTL packages? Why do you use UTL packages?
14. About cursors

3i telephonic interview

1. About yourself
2. Project
3. Day-to-day activities
4. Do you have experience with BULK COLLECT and collections?
5. Do you have experience creating tables, views and constraints?
6. Where is your company located?
7. Company process
8. Notice period

2nd round:

1. Tell me about yourself
2. What is the difference between stored procedures?
3. What is a cursor?
4. What is a view?
5. What are DBMS jobs?
6. What are indexes and their types?
7. What is a package?
8. A salary table has a sal column with the values 15000, 20000 and 14000: write a query to get the first name and the maximum salary from the salary table.
9. Table T1 contains the values A, B, A, B: write a query to update B in place of A and A in place of B.

MSC Technology 1st round questions:

1. Self introduction
2. Day-to-day activities
3. Packages and forward declaration
4. Difference between a collection and a record
5. Identify the duplicate rows
6. Given the word "COMPLEXED", write a query that outputs each character on a separate row (C, O, M, P, L, E, X, E, D)
7. GTT (global temporary table)
8. In the GTT concept, a table is created with ON COMMIT PRESERVE ROWS. A user is doing some activity in session 1 and suddenly the system powers off; will the data be saved in the GTT or not?
9. Triggers
10. Which methodology are you working with?
11. In TOAD, what is the data module?

MSC TECHNOLOGY QUESTIONS

1. Tell about yourself
2. Synonyms and sequences
3. What is a primary key and how does it work?
4. What is a cursor and what are its types?
5. Tell how you work on release management
6. Write any procedure using %ROWTYPE
7. Views and materialized views
8. Write a correlated subquery
9. Exceptions and how they work
10. What are known (predefined) exceptions?
11. What is an index? Use of indexes in your project?
12. How do we insert data into a materialized view, and what are the types of materialized views?

MSC today interview questions, 1st round:

1. Tell about yourself
2. Explain your project
3. Difference between views and tables
4. What are triggers and their types?
5. Difference between a row-level trigger and a statement-level trigger
6. Difference between UNION and UNION ALL
7. What is an index and its types?
8. Difference between a B-tree index and a bitmap index
9. What is a REF CURSOR?
10. What is a join and its types?
11. Difference between an equi join and an outer join
12. What is a constraint and its types?
13. Difference between a primary key constraint and a unique constraint
14. What are materialized views?
15. What are packages and the uses of packages?
16. What is a sequence? Write the syntax
17. Performance tuning
18. List the new features in 11g and 12c
19. Column A contains the values 1, 2, 3, 6, 7. How many times does a row-level trigger fire, and how many times does a statement-level trigger fire, on this table?

MSC interview questions

1. What is a table?
2. What are constraints?
3. What is an index and its types?
4. What are joins and the types of joins?
5. Why do we use joins?
6. Give a join example based on your project
7. What are table partitions and why do we use them?
8. What is a view?
9. Types of views
10. Why do we use views?
11. What is a force view?
12. What is a materialized view and its types?
13. Why do we use materialized views?
14. Explain a materialized view you created, based on your project
15. How is the log file created?
16. What methodology are you using?
17. In performance tuning, how will you find the problem line?
18. What is a cursor, its types and attributes?
19. Write examples of explicit and implicit cursors
20. Why do we create packages?
21. What are parent and child tables? Name the tables you created
22. Explain correlated subqueries
23. Have you used the TOAD tool?
24. Have you raised tickets?
25. What methods do you follow to improve query performance?
26. Have you used triggers? Types of triggers
27. What triggers have you used in your project?
28. Write a trigger to restrict data insertion into a particular table
29. What are the collection types?
30. Where have you used collections in your project?

Virtusa interview

1st round questions

1. Tell about yourself
2. What ticketing tool have you used?
3. Difference between a problem and a request?
4. How did you respond to the incident?
5. Create two tables and perform an inner join
6. About aggregate functions
7. Write a query to get the count of students grouped by course id from the stud and course tables
8. Are you willing to work in a 24/7 shift?

2nd round

1. Tell about yourself
2. What is your role in your project?
3. Have you interacted with customers on calls?
4. Have you worked on performance tuning, and what is it?
5. Day-to-day activities?
6. What are the enhancements you have done?

Virtusa first round

1. Write a function with a REF CURSOR

2. Write dynamic SQL

3. In SQL, what are EXISTS and IN?

4. Write a Unix command to find SQL files

5. How do you connect Pro*C with the database?

6. Tell me what you know about Pro*C

7. How do you connect to your database?

8. SELECT 1 FROM employees: what is the result if the employees table contains 10 rows?

9. What was your role in support?

10. How will you handle the issues?

11. How many characters can a VARCHAR2 hold?

12. Write a query to change the datatype of a column

13. Have you done any performance tuning?

14. How do you know if there is a problem with an index?

15. Different types of indexes, and how will you decide which index is better?

Virtusa for Development

1. Tell about yourself
2. Day-to-day activities
3. Incident/problem ticketing tool
4. What is a package and its advantages?
5. Materialized views
6. How does data get updated in a materialized view?
7. How does the materialized view log get created?
8. Global temporary tables
9. Write a REF CURSOR program
10. Query for the nth maximum salary
11. How will you analyse query slowness and how will you rectify it?
12. Performance tuning experience and an example
13. Table t1 has columns a and b with the rows (1, 'A,b,c') and (2, 'A,b,c'). The output should list each value on its own row: (1, A), (1, B), (1, C), (2, A), (2, B), (2, C)
14. What are collections?
15. Dynamic SQL
16. How do you execute DDL in PL/SQL?
17. Indexes and when you use them
18. Do you know Pro*C?
19. Do you have experience writing complex SQL queries?
20. Refresh methods of a materialized view, explained with a real-time example
21. What is UTL_FILE? Where can the files be stored?
22. What is debugging? Explain
23. Explain plan?
24. Exception handling

Virtusa 1st round:

1. Tell about yourself
2. Sequence scenario-based question
3. Synonyms and their purpose
4. Difference between DELETE and TRUNCATE
5. How to load flat files into a table
6. REGEXP_SUBSTR scenario
7. Display negative and positive values from a table
8. Pro*C basics
9. Can DML operations be performed on external tables and materialized views?
10. Write a program combining BULK COLLECT and bulk binds with exception handling to fetch data and insert it into a table
11. Any Unix commands
12. How to copy a file from one server to another server
13. How to fetch the first 10 and last 10 rows
14. How to edit a particular character in a file, and the exact command
15. Some DBA privileges
16. Step-by-step performance tuning approach for a slow query on a table
17. Any table partitioning approach and syntax

Second round

1. Tell about your project activities
2. Are multiple joins possible on a table, and how many conditions can you give?
3. Self-join scenario to retrieve data from 3 tables, e.g. order, sales and supplier tables
4. Pro*C basics

Capgemini interview questions:

1. Self introduction
2. Day-to-day activities
3. Cursors and their advantages; which type of cursor are you using in your project and why?
4. Triggers and where you used them in your project
5. How well do you know exception handling? List a few errors
6. Collections and their types
7. Which concept are you using to load data into the Oracle database? Explain
8. Do you have performance tuning knowledge?
9. Have you connected to the server?
10. Do you have knowledge of Unix commands?

MSC Technology, 16/3/19

1. Self introduction
2. Day-to-day activities
3. Cursors and their advantages; which type of cursor are you using in your project and why?
4. Triggers and where you used them in your project
5. How well do you know exception handling? List a few errors
6. Collections and their types
7. Which concept are you using to load data into the Oracle database? Explain
8. Do you have performance tuning knowledge?
9. Have you connected to the server?
10. Do you have knowledge of Unix commands?

Capgemini interview questions

Tell about your project
Your role in your project
Difference between functions and procedures
How many return statements can we use in a function?
Cursors
Cursor attributes
Use of REF CURSOR
Table partitioning
Is it possible to partition an already-defined table? If yes, explain how
Can you create a primary key on an already-defined table?
Use of indexes
Hints
Where do you use indexes in your project?
Performance tuning
Do you know any Oracle packages?
Triggers
Use of UTL_FILE

Capgemini - First round

Joins query with the employee table
Difference between a view and a table
Difference between a nested table, a varray and an index-by table
Types of exceptions in collections
Update "Mr" in front of the employee name if male and "Mrs" if female in the employee table; use a cursor to process the records one by one, and use an error log table to insert the exception records captured
When does a transaction start in Oracle?
How will you identify a complex view, and how do you update a record in it?
DB architecture and normalisation
What happens when we get an exception in a PL/SQL block?
What happens internally when we perform DML and DDL operations in SQL?
How will you access a schema object from another schema?
Can a column with an alias name be used in the query stored in a view?
Query to display departments having more than two employees

Testimonials

"The best thing about Greens Technologys Big data classes is that, it uses real examples in class. This gives a deeper understanding of the material as against me just looking at slides.

Karthik! I am really delighted about the Big data course and i am surprised to see the depth of your knowledge in all aspects of the bigdata. I see that many architects with over 15+ yrs experience doesn't have the knowledge that you have. I really enjoyed your sessions, definitely look forward to learn more from you in the future. Thanks again."



""Dear Karthik, R training has been outstanding. You have covered every aspect of the R which would boost the confidence of the attendee to dive into greater depths and face the interviews subsequently. I feel confident after attending the R course. I am sure you would be providing us your valuable high level guidence in our initial realtime project . Each of your session is a eye opener and it is a great joy to attend your R training. Thanks and Kindest Regards ""



"I thought I knew R until I took this course. My company sent me here against my will. It was definitely worth and I found out how many things I was doing wrong. Karthik is awesome. but i got a lot inspired by you. I will keep in touch and will always try to learn from you as much as I can. Thanks once again Karthik"



Greens Technologys Overall Reviews


Greens Technologys is rated 5 out of 5, based on 17,981 user reviews.

"""I think this is the best R course I have taken so far..Well I am still in the process of learning new things but for me this learning process has become so easy only after I joined this course..as Sajin is very organized and up to the point.. he knows what he is teaching and makes his point very clear by explaining numerous times. I would definitely recommend anyone who has any passion for Cloud.." ""



MOST POPULAR REGIONS

  • Big Data Training in Velachery
  • Big Data Training in Adyar
  • Big Data Training in Guindy
  • Big Data Training in Taramani
  • Big Data Training in OMR
  • Big Data Training in Pallikarnai
  • Big Data Training in Saidapet
  • Big Data Training in Vadapalani
  • Big Data Training in Koyambedu
  • Big Data Training in Porur
  • Big Data Training institute in Tambaram
  • Big Data Training institute in Velachery
  • Big Data Training institute in Adyar
  • Big Data Training institute in Chennai
  • Big Data Training institute in OMR