11. GeoMesa NiFi Bundle
Apache NiFi is a dataflow system for routing and processing large batches and streams of files and data. GeoMesa-NiFi allows you to ingest data into GeoMesa directly from NiFi by leveraging custom processors.
11.1. Installation
11.1.1. Get the Processors
The GeoMesa NiFi processors are available for download from GitHub.
Alternatively, you may build the processors from source. First, clone the project from GitHub. Pick a reasonable directory on your machine, and run:
$ git clone https://github.com/geomesa/geomesa-nifi.git
$ cd geomesa-nifi
To build the project, run:
$ mvn clean install
The nar files contain bundled dependencies. To change the dependency versions, modify the version properties (<hbase.version>, etc.) in the pom.xml before building.
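Alternatively, since Maven command-line system properties take precedence over properties declared in the POM, a version can usually be overridden without editing the file (the version shown here is illustrative, not a tested combination):
$ mvn clean install -Dhbase.version=2.5.8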
11.1.2. Install the Processors
To install the GeoMesa processors, copy the nar files into the lib directory of your NiFi installation. There are currently three nar files:
geomesa-nifi-controllers-api-nar-$VERSION.nar
geomesa-nifi-controllers-nar-$VERSION.nar
geomesa-nifi-processors-nar-$VERSION.nar
If you downloaded the nars from GitHub:
$ wget "https://github.com/geomesa/geomesa-nifi/releases/download/geomesa-nifi-$VERSION/geomesa-nifi-$VERSION-dist.tar.gz"
$ tar -xf geomesa-nifi-$VERSION-dist.tar.gz --directory $NIFI_HOME/lib/
Or, to install the nars after building from source:
$ tar -xf geomesa-nifi-dist/target/geomesa-nifi-$VERSION-dist.tar.gz --directory $NIFI_HOME/lib/
11.2. Processors
GeoMesa NiFi contains several processors:
Processor | Description
---|---
PutGeoMesaAccumulo | Ingest data into a GeoMesa Accumulo datastore with a GeoMesa converter or from GeoAvro
PutGeoMesaHBase | Ingest data into a GeoMesa HBase datastore with a GeoMesa converter or from GeoAvro
PutGeoMesaFileSystem | Ingest data into a GeoMesa File System datastore with a GeoMesa converter or from GeoAvro
PutGeoMesaKafka | Ingest data into a GeoMesa Kafka datastore with a GeoMesa converter or from GeoAvro
PutGeoMesaRedis | Ingest data into a GeoMesa Redis datastore with a GeoMesa converter or from GeoAvro
PutGeoTools | Ingest data into an arbitrary GeoTools datastore with a GeoMesa converter or from GeoAvro
ConvertToGeoAvro | Use a GeoMesa converter to create GeoAvro
11.2.1. Input Configuration
Most of the processors accept similar configuration parameters for specifying the input source. Each datastore-specific processor also has additional parameters for connecting to the datastore, detailed in the following sections.
Property | Description
---|---
Mode | Switch between converter-based ingest and Avro file ingest.
SftName | Name of the SFT on the classpath to use. This property overrides SftSpec.
ConverterName | Name of the converter on the classpath to use. This property overrides ConverterSpec.
FeatureNameOverride | Override the feature type name on ingest.
SftSpec | SFT specification string. Overridden by SftName if SftName is valid.
ConverterSpec | Converter specification string. Overridden by ConverterName if ConverterName is valid.
ConverterErrorMode | Override the converter error mode (skip-bad-records or raise-errors).
ConverterClasspath | Additional resources to add to the classpath.
BatchSize | The number of flow files to process in a single batch.
FeatureWriterCaching | Enable caching of feature writers between flow files; useful if flow files contain a small number of records (see below).
FeatureWriterCacheTimeout | How often cached feature writers are flushed to the data store, if caching is enabled.
11.2.1.1. Defining SimpleFeatureTypes and Converters
The GeoMesa NiFi processors package a set of predefined SimpleFeatureType schema definitions and GeoMesa converter definitions for popular data sources such as Twitter, GDELT, and OpenStreetMap.
The full list of provided sources can be found in Prepackaged Converter Definitions.
For custom data sources, there are two ways of providing custom SFTs and converters:
11.2.1.1.1. Providing SimpleFeatureTypes and Converters on the Classpath
To bundle configuration in a JAR file, simply place your config in a file named reference.conf at the root level of the JAR:
$ jar cvf data-formats.jar reference.conf
You can verify your JAR was built properly:
$ jar tvf data-formats.jar
0 Mon Mar 20 18:18:36 EDT 2017 META-INF/
69 Mon Mar 20 18:18:36 EDT 2017 META-INF/MANIFEST.MF
28473 Mon Mar 20 14:49:54 EDT 2017 reference.conf
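As a minimal sketch, a reference.conf defining one SimpleFeatureType and a matching CSV converter might look like the following; the type name, attribute names, and column layout are illustrative, not something shipped with the bundle:
# illustrative SimpleFeatureType: a name, a timestamp, and a point geometry
geomesa.sfts.example = {
  type-name = "example"
  attributes = [
    { name = "name", type = "String" }
    { name = "dtg",  type = "Date",  default = true }
    { name = "geom", type = "Point", srid = 4326, default = true }
  ]
}
# illustrative converter for CSV rows of the form: name,2017-03-20,-77.0,38.9
geomesa.converters.example = {
  type     = "delimited-text"
  format   = "CSV"
  id-field = "md5(string2bytes($0))"
  fields = [
    { name = "name", transform = "$1::string" }
    { name = "dtg",  transform = "date('yyyy-MM-dd', $2)" }
    { name = "geom", transform = "point($3::double, $4::double)" }
  ]
}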
Use the ConverterClasspath property to point your processor to the JAR file. The property takes a comma-delimited list of resources. Once set, the SftName and/or ConverterName properties will update with the names of your types and converters. You will need to close the configuration panel and re-open it in order for the properties to update.
11.2.1.1.2. Defining SimpleFeatureTypes and Converters via the UI
You may also provide SimpleFeatureTypes and converters directly in the processor configuration via the NiFi UI. Simply paste your TypeSafe configuration into the SftSpec and ConverterSpec property fields.
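The SftSpec field also accepts a plain SimpleFeatureType specification string. A minimal illustrative schema, where the attribute names are placeholders and the * marks the default geometry:
name:String,dtg:Date,*geom:Point:srid=4326
The ConverterSpec field takes a converter definition in the same TypeSafe format as the geomesa.converters entry sketched in the previous section.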
11.2.1.2. Feature Writer Caching
Feature writer caching can be used to improve the throughput of processing many small flow files. Instead of a new feature writer being created for each flow file, writers are cached and re-used between operations. If a writer is idle for the configured timeout, then it will be flushed to the data store and closed.
Note that if feature writer caching is enabled, features that are processed may not show up in the data store immediately. In addition, any features that have been processed but not flushed may be lost if NiFi shuts down unexpectedly. To ensure data is properly flushed, stop the processor before shutting down NiFi.
Alternatively, NiFi’s built-in MergeContent processor can be used to batch up small files.
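As an illustrative sketch, a MergeContent processor placed upstream of the GeoMesa processor might be configured as follows; the thresholds are placeholders to tune for your flow:
Merge Strategy            = Bin-Packing Algorithm
Minimum Number of Entries = 1000
Maximum Number of Entries = 10000
Max Bin Age               = 5 min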
11.2.2. PutGeoMesaAccumulo
The PutGeoMesaAccumulo processor is used for ingesting data into an Accumulo-backed GeoMesa datastore. To use this processor, first add it to the workspace and open the properties tab of its configuration. For a description of the connection properties, see Accumulo Data Store Parameters.
11.2.2.1. GeoMesa Configuration Service
The PutGeoMesaAccumulo plugin supports NiFi Controller Services to manage common configurations. This allows you to store the Accumulo connection parameters in a single location and to add new processors without entering duplicate data.
To add the AccumuloDataStoreConfigControllerService, access the Controller Settings from the NiFi global menu, navigate to the Controller Services tab, and click the + to add a new service. Search for AccumuloDataStoreConfigControllerService and click add. Edit the new service and enter the appropriate values for the properties listed.
After configuring the service, select the appropriate service in the GeoMesa Configuration Service property of your processor. When a controller service is selected, the accumulo.zookeepers, accumulo.instance.id, accumulo.user, accumulo.password, and accumulo.catalog parameters are not required or used.
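For illustration, the controller service properties might be filled in as follows; all host names, instance names, and credentials below are placeholders:
accumulo.zookeepers  = zoo1:2181,zoo2:2181,zoo3:2181
accumulo.instance.id = myInstance
accumulo.user        = geomesa
accumulo.password    = ********
accumulo.catalog     = myNamespace.myCatalog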
11.2.3. PutGeoMesaHBase
The PutGeoMesaHBase processor is used for ingesting data into an HBase-backed GeoMesa datastore. To use this processor, first add it to the workspace and open the properties tab of its configuration. For a description of the connection properties, see HBase Data Store Parameters.
11.2.4. PutGeoMesaFileSystem
The PutGeoMesaFileSystem processor is used for ingesting data into a file system-backed GeoMesa datastore. To use this processor, first add it to the workspace and open the properties tab of its configuration. For a description of the connection properties, see FileSystem Data Store Parameters.
11.2.5. PutGeoMesaKafka
The PutGeoMesaKafka processor is used for ingesting data into a Kafka-backed GeoMesa datastore. This processor supports Kafka 0.9 and newer. To use this processor, first add it to the workspace and open the properties tab of its configuration. For a description of the connection properties, see Kafka Data Store Parameters.
11.2.6. PutGeoMesaRedis
The PutGeoMesaRedis processor is used for ingesting data into a Redis-backed GeoMesa datastore. To use this processor, first add it to the workspace and open the properties tab of its configuration. For a description of the connection properties, see Redis Data Store Parameters.
11.2.7. PutGeoTools
The PutGeoTools processor is used for ingesting data into any GeoTools-compatible datastore. To use this processor, first add it to the workspace and open the properties tab of its configuration.
Property | Description
---|---
DataStoreName | Name of the datastore to ingest data into.
This processor also accepts dynamic parameters that may be needed for the specific datastore that you’re trying to access.
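For example, assuming the GeoTools PostGIS plugin is on the NiFi classpath, a PostGIS datastore might be configured with dynamic properties such as the following; all values are placeholders:
dbtype   = postgis
host     = db.example.com
port     = 5432
database = geomesa
user     = geomesa
passwd   = ********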
11.2.8. ConvertToGeoAvro
The ConvertToGeoAvro processor leverages GeoMesa’s internal converter framework to convert features into Avro and pass them along as a flow file to be used by other processors in NiFi. To use this processor, first add it to the workspace and open the properties tab of its configuration.
Property | Description
---|---
OutputFormat | Only Avro is supported at this time.
11.3. NiFi User Notes
NiFi allows you to ingest data into GeoMesa from every source GeoMesa supports, and more. Some of these sources can be tricky to set up and configure. Here we detail some of the problems we’ve encountered and how to resolve them.
11.3.1. GetHDFS Processor with Azure Integration
It is possible to use the Hadoop Azure Support to access Azure Blob Storage via HDFS. You can leverage this capability to have the GetHDFS processor pull data directly from the Azure Blob store. However, due to how the GetHDFS processor was written, the fs.defaultFS configuration property is always used when accessing wasb:// URIs. This means that the wasb:// container you want the GetHDFS processor to connect to must be hard-coded in the HDFS core-site.xml config. This causes two problems. First, it means you can only connect to one container in one account on Azure. Second, it causes problems when using NiFi on a server that is also running GeoMesa-Accumulo, as the fs.defaultFS property needs to be set to the proper HDFS master NameNode.
There are two ways to circumvent this problem. You can maintain a core-site.xml file for each container you want to access, but this is not easily scalable or maintainable in the long run. The better option is to leave the default fs.defaultFS value in the HDFS core-site.xml file and instead leverage the Hadoop Configuration Resources property in the GetHDFS processor.
Normally you would set the Hadoop Configuration Resources property to the location of the core-site.xml and hdfs-site.xml files. Instead, we are going to create an additional file and include it last in the path, so that it overwrites the value of fs.defaultFS when the configuration object is built. To do this, simply create a new XML file with an appropriate name (we suggest the name of the container). Insert the fs.defaultFS property into the file and set the value:
<configuration>
<property>
<name>fs.defaultFS</name>
<value>wasb://container@accountName.blob.core.windows.net/</value>
</property>
</configuration>
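Then set the Hadoop Configuration Resources property so that this new file comes last; the paths below are illustrative:
/etc/hadoop/conf/core-site.xml,/etc/hadoop/conf/hdfs-site.xml,/etc/hadoop/conf/mycontainer.xml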
11.4. Reference
For more information on setting up or using NiFi, see the Apache NiFi User Guide.