Many PDI applications require third-party Java libraries to perform tasks within PDI jobs and transformations. These libraries must be on the classpath of the Hadoop mappers and reducers so PDI applications can use them in the Hadoop cluster. The best way to do this is to copy all dependent JARs to Hadoop's Distributed Cache and set the corresponding cache and classpath properties in the User Defined tab of the Pentaho MapReduce job step.
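As a sketch of what such User Defined entries might look like, the Hadoop 1.x (MR1) properties `mapred.cache.files` and `mapred.job.classpath.files` are the standard way to ship JARs via the Distributed Cache and put them on the task classpath; the JAR names and HDFS paths below are purely illustrative:

```properties
# Illustrative only: JARs previously uploaded to HDFS.
# mapred.cache.files ships the files to each task node via the Distributed Cache.
mapred.cache.files=/opt/pentaho/libs/commons-lang-2.6.jar,/opt/pentaho/libs/my-app-utils.jar
# mapred.job.classpath.files adds those cached files to the task classpath.
mapred.job.classpath.files=/opt/pentaho/libs/commons-lang-2.6.jar,/opt/pentaho/libs/my-app-utils.jar
```

Both properties take a comma-separated list of fully qualified HDFS paths, which is what the automated transformation below is designed to produce.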
The process of uploading the custom JARs can be automated by implementing a PDI transformation that does the following:
- Takes a list of the JARs the PDI application requires.
- Copies all of the files from the local filesystem to a configured HDFS directory.
- Sets a global variable containing the list of all the JARs with their fully qualified HDFS paths. This variable is then used to populate the User Defined properties of the Pentaho MapReduce job step described above.
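The final step, building the variable value from the uploaded JARs, amounts to joining the HDFS paths into a comma-separated list. A minimal sketch (the function name, JAR names, and HDFS directory are illustrative, not part of the PDI solution):

```python
def build_classpath_value(jar_names, hdfs_dir):
    """Return a comma-separated list of fully qualified HDFS JAR paths,
    suitable for a comma-delimited Hadoop property value."""
    base = hdfs_dir.rstrip("/")
    return ",".join(f"{base}/{name}" for name in jar_names)

# Illustrative JAR list and HDFS target directory.
jars = ["commons-lang-2.6.jar", "my-app-utils.jar"]
print(build_classpath_value(jars, "/opt/pentaho/libs/"))
# → /opt/pentaho/libs/commons-lang-2.6.jar,/opt/pentaho/libs/my-app-utils.jar
```

In the actual PDI transformation this logic would be expressed with steps such as a file-list input, a string concatenation, and a Set Variables step rather than code.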
You can download this PDI solution here.
This solution has been tested with PDI 4.4.