Pig has two EXECUTION TYPES or run modes, local and Hadoop (currently called MapReduce)
% pig -x local
grunt>
For example, for a pseudo-distributed setup, the relevant properties are:
fs.default.name=hdfs://localhost/
mapred.job.tracker=localhost:8021
Once you have configured Pig to connect to a Hadoop cluster, you can launch Pig with the -x option set to mapreduce; alternatively, you can omit the option, since MapReduce mode is the default:
% pig
grunt>
This section shows you how to run Pig in local mode, using the Grunt shell, a Pig script, and an embedded program.
There are three ways of executing Pig programs, all of which work in both local and MapReduce mode:
1. Script – Pig can run a script file that contains Pig commands.
2. Grunt – an interactive shell for running Pig commands.
3. Embedded – you can run Pig programs from Java using the PigServer class, much like you can use JDBC to run SQL programs from Java.
You can execute Pig Latin statements in any of these three ways.
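As a minimal sketch of the embedded approach, the following Java program runs two Pig Latin statements through the PigServer class. It assumes the Pig libraries are on the classpath; the file names input.txt and output are hypothetical:

```java
import org.apache.pig.ExecType;
import org.apache.pig.PigServer;

public class EmbeddedPigExample {
    public static void main(String[] args) throws Exception {
        // Start a PigServer in local mode; use ExecType.MAPREDUCE for a cluster.
        PigServer pigServer = new PigServer(ExecType.LOCAL);
        // Each registerQuery call adds one Pig Latin statement to the plan.
        pigServer.registerQuery("A = LOAD 'input.txt' AS (line:chararray);");
        pigServer.registerQuery("B = FILTER A BY line IS NOT NULL;");
        // store() triggers execution and writes the relation to the given path.
        pigServer.store("B", "output");
        pigServer.shutdown();
    }
}
```

This mirrors the JDBC analogy above: the Java program builds up the query statement by statement, and execution happens only when a store (or similar) call is made.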
Local Mode and MapReduce (HDFS) Mode
In local mode, Pig expects all its inputs to come from the local filesystem and writes its output to the local filesystem as well, not to HDFS.
Syntax: pig -x local
In MapReduce mode, Pig expects all input files to be on an HDFS path and also writes its output to HDFS.
Syntax: pig (or) pig -x mapreduce
Grunt mode can also be called interactive mode. Grunt is Pig's interactive shell, and it is started when no script file is given for Pig to run. In the Grunt shell you see the output of each command then and there, as you enter it.
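For instance, a short Grunt session in local mode might look like this (the file name sample.txt and its fields are hypothetical):

```
grunt> records = LOAD 'sample.txt' AS (year:int, temperature:int);
grunt> filtered = FILTER records BY temperature > 25;
grunt> DUMP filtered;
```

Each statement is parsed as you enter it, and DUMP triggers execution and prints the result to the console.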
In script mode, pig runs the commands specified in a script file.
Here, we describe all the transformations in a single file whose name ends with .pig. All the commands in the .pig file are executed one after another.
Local mode: pig -x local abc.pig
MapReduce mode: pig abc.pig
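For example, a script file abc.pig might contain statements such as the following (the paths and fields are hypothetical):

```
-- abc.pig: load a file, keep rows above a threshold, store the result
records = LOAD 'input/data.txt' AS (name:chararray, score:int);
high = FILTER records BY score >= 60;
STORE high INTO 'output/high_scores';
```

Running pig -x local abc.pig executes these statements one after another against the local filesystem; pig abc.pig does the same against HDFS.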
In this mode, we write custom Pig code when the required functionality cannot be achieved through the built-in transformations. In that case, we can write Java code to achieve the same and reference that Java code (a .jar file) in our Pig script with:
“REGISTER xyz.jar”
Note: The REGISTER statement should be the first statement in our Pig script.
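A sketch of how a registered jar is then used in a script; the jar name xyz.jar and the UDF class com.example.UPPER are hypothetical placeholders:

```
REGISTER xyz.jar;                        -- must be the first statement
DEFINE UPPER com.example.UPPER();        -- short alias for the UDF class
records = LOAD 'input.txt' AS (name:chararray);
upper_names = FOREACH records GENERATE UPPER(name);
```

The DEFINE statement is optional but keeps the script readable when the same user-defined function is called in several places.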