- General Info
- Code Repo
- Coding Standards
- Continuous Integration
- Paillier Benchmarking
- Release Process
- How To Contribute
To check out the code:
git clone http://git.apache.org/incubator-pirk.git/
Then check out the ‘master’ branch (which should be the default):
git checkout master
Apache License Header
Always add the current ASF license header as described here. Please use the provided ‘eclipse-pirk-template.xml’ code template file to automatically add the ASF header to new code.
Please do not use author tags; the code is developed and owned by the community.
Pirk follows coding style practices found in the eclipse-pirk-codestyle.xml file; please ensure that all contributions are formatted accordingly.
IDE Configuration Tips
- Import Formatter: Properties > Java Code Style > Formatter and import the eclipse-pirk-codestyle.xml file.
- Import Template: Properties > Java Code Style > Code Templates and import the eclipse-pirk-template.xml. Make sure to check the “Automatically add comments” box. This template adds the ASF header and so on for new code.
- Other IDEs: use a formatter plugin that can apply the Eclipse code style XML file (eclipse-pirk-codestyle.xml).
Pirk Javadocs may be found here.
Pirk currently follows a simple Maven build with a single-level pom.xml. As such, Pirk may be built via ‘mvn package’.
For convenience, the following POM files are included:
- pom.xml — Pirk pom file for Hadoop/YARN and Spark platforms
- pom-with-benchmarks.xml — Pirk pom file for running Paillier benchmarking testing
Pirk may be built with a specific pom file via ‘mvn package -f &lt;pom filename&gt;’.
JUnit in-memory unit and functional testing is performed by building with ‘mvn package’ or by running the tests with ‘mvn test’. A specific test may be run via ‘mvn -Dtest=&lt;test class name&gt; test’.
Distributed functional testing may be performed on a cluster with the desired distributed computing technology installed. Currently, distributed implementations include batch processing in Hadoop MapReduce and Spark with inputs from HDFS or Elasticsearch.
To run all of the distributed functional tests on a cluster, the following ‘hadoop jar’ command may be used:
hadoop jar <pirkJar> org.apache.pirk.test.distributed.DistributedTestDriver -j <full path to pirkJar>
Specific distributed test suites may be run by providing the corresponding command line options. The available options are given by the following command:
hadoop jar <pirkJar> org.apache.pirk.test.distributed.DistributedTestDriver --help
The Pirk functional tests for Spark run through the SparkLauncher, invoked via the ‘hadoop jar’ command (not directly with ‘spark-submit’). To run successfully, the ‘spark.home’ property must be set correctly in the ‘pirk.properties’ file; ‘spark.home’ is the directory containing ‘bin/spark-submit’.
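For example, if Spark were installed under ‘/opt/spark’ (a hypothetical path, so that ‘bin/spark-submit’ resolves to ‘/opt/spark/bin/spark-submit’), the corresponding ‘pirk.properties’ entry would be:

```properties
# Directory containing bin/spark-submit (path is an example only)
spark.home=/opt/spark
```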
Pirk uses log4j for logging. The log4j.properties file may be edited to enable a ‘debug’ log level.
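As a sketch, one way to enable debug logging in a log4j.properties file looks like the following; the appender name ‘stdout’ and the layout pattern here are illustrative assumptions, not necessarily Pirk’s shipped configuration:

```properties
# Send debug-level (and above) messages to the console.
# Appender name and pattern are examples; adjust to match the existing file.
log4j.rootLogger=DEBUG, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{ISO8601} %-5p %c - %m%n
```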
To build with benchmarks enabled, use:
mvn package -f pom-with-benchmarks.xml
To run the benchmarks, use:
java -jar target/benchmarks.jar
Optionally, you can reduce the number of times each benchmark is run (the default is 10) using the ‘-f’ flag. For example, to run each benchmark only twice: ‘java -jar target/benchmarks.jar -f 2’
Note: benchmark runs currently emit many logging errors, as the logger fails to work while the benchmarks are running. These stack traces may be ignored; statistics for the different benchmarks are printed once execution completes.
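For background on what these benchmarks exercise, a minimal textbook Paillier round trip can be sketched as below. This is an illustrative sketch only — the class name, toy 64-bit primes, and parameter choices are assumptions for readability, not Pirk’s Paillier implementation:

```java
import java.math.BigInteger;
import java.security.SecureRandom;

// Textbook Paillier encrypt/decrypt round trip (toy key size; real keys are much larger).
public class PaillierSketch {
    public static void main(String[] args) {
        SecureRandom rnd = new SecureRandom();

        // Key generation: n = p*q, g = n + 1 (a common choice).
        BigInteger p = BigInteger.probablePrime(64, rnd);
        BigInteger q = BigInteger.probablePrime(64, rnd);
        BigInteger n = p.multiply(q);
        BigInteger nSq = n.multiply(n);
        BigInteger g = n.add(BigInteger.ONE);
        BigInteger lambda = p.subtract(BigInteger.ONE).multiply(q.subtract(BigInteger.ONE));

        // Encrypt m = 42 with random r coprime to n: c = g^m * r^n mod n^2.
        BigInteger m = BigInteger.valueOf(42);
        BigInteger r;
        do {
            r = new BigInteger(n.bitLength(), rnd).mod(n);
        } while (r.signum() == 0 || !r.gcd(n).equals(BigInteger.ONE));
        BigInteger c = g.modPow(m, nSq).multiply(r.modPow(n, nSq)).mod(nSq);

        // Decrypt: m = L(c^lambda mod n^2) * mu mod n, where L(u) = (u - 1) / n.
        BigInteger L = c.modPow(lambda, nSq).subtract(BigInteger.ONE).divide(n);
        BigInteger mu = g.modPow(lambda, nSq).subtract(BigInteger.ONE).divide(n).modInverse(n);
        BigInteger dec = L.multiply(mu).mod(n);

        System.out.println(dec); // recovers the plaintext, 42
    }
}
```

Paillier is additively homomorphic — multiplying two ciphertexts mod n² yields an encryption of the sum of their plaintexts — which is the property Pirk’s encrypted queries rely on, and these modular exponentiations are what the benchmarks measure.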
How to Contribute
Please see the How to Contribute page.