backtype.storm.task
Class ShellBolt

java.lang.Object
  extended by backtype.storm.task.ShellBolt
All Implemented Interfaces:
IBolt, java.io.Serializable
Direct Known Subclasses:
RichShellBolt

public class ShellBolt
extends java.lang.Object
implements IBolt

A bolt that shells out to another process to process tuples. ShellBolt communicates with that process over stdio using a special protocol. Implementing that protocol requires a library of roughly 100 lines, and adapter libraries currently exist for Ruby and Python.

To run a ShellBolt on a cluster, the scripts that are shelled out to must be in the resources directory within the jar submitted to the master. During development/testing on a local machine, that resources directory just needs to be on the classpath.

When creating topologies using the Java API, subclass this bolt and implement the IRichBolt interface to create components for the topology that use other languages. For example:

 public class MyBolt extends ShellBolt implements IRichBolt {
     public MyBolt() {
         super("python", "mybolt.py");
     }

     public void declareOutputFields(OutputFieldsDeclarer declarer) {
         declarer.declare(new Fields("field1", "field2"));
     }
 }
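
As a usage sketch (not part of the original example), MyBolt is wired into a topology like any other bolt. The spout, component ids, and parallelism below are illustrative assumptions; TestWordSpout is Storm's bundled testing spout, used here only as a placeholder upstream component.

 import backtype.storm.Config;
 import backtype.storm.LocalCluster;
 import backtype.storm.testing.TestWordSpout;
 import backtype.storm.topology.TopologyBuilder;

 public class MyBoltTopology {
     public static void main(String[] args) throws Exception {
         TopologyBuilder builder = new TopologyBuilder();
         // Illustrative spout; any component producing tuples for mybolt.py works.
         builder.setSpout("words", new TestWordSpout(), 1);
         // MyBolt is the ShellBolt subclass above; mybolt.py must be in the
         // jar's resources directory (or on the classpath in local mode).
         builder.setBolt("mybolt", new MyBolt(), 2).shuffleGrouping("words");

         LocalCluster cluster = new LocalCluster();
         cluster.submitTopology("shell-bolt-demo", new Config(), builder.createTopology());
     }
 }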
 

See Also:
Serialized Form

Field Summary
static Logger LOG
           
 
Constructor Summary
ShellBolt(ShellComponent component)
           
 
Method Summary
 void cleanup()
          Called when an IBolt is going to be shut down.
 void execute(Tuple input)
          Process a single tuple of input.
 void prepare(java.util.Map stormConf, TopologyContext context, OutputCollector collector)
          Called when a task for this component is initialized within a worker on the cluster.
 
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
 

Field Detail

LOG

public static Logger LOG
Constructor Detail

ShellBolt

public ShellBolt(ShellComponent component)
Method Detail

prepare

public void prepare(java.util.Map stormConf,
                    TopologyContext context,
                    OutputCollector collector)
Description copied from interface: IBolt
Called when a task for this component is initialized within a worker on the cluster. It provides the bolt with the environment in which the bolt executes.

This includes the Storm configuration for the bolt, the topology context, and the collector used to emit tuples, each described under Parameters below.

Specified by:
prepare in interface IBolt
Parameters:
stormConf - The Storm configuration for this bolt. This is the configuration provided to the topology merged in with cluster configuration on this machine.
context - This object can be used to get information about this task's place within the topology, including the task id and component id of this task, input and output information, etc.
collector - The collector is used to emit tuples from this bolt. Tuples can be emitted at any time, including the prepare and cleanup methods. The collector is thread-safe and should be saved as an instance variable of this bolt object.
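
For illustration only (ShellBolt already provides its own prepare), a direct IBolt implementation typically just stores the collector in an instance variable, as the parameter description above suggests. This is a minimal sketch; the class and field names are hypothetical.

 import java.util.Map;

 import backtype.storm.task.IBolt;
 import backtype.storm.task.OutputCollector;
 import backtype.storm.task.TopologyContext;
 import backtype.storm.tuple.Tuple;

 public class PassThroughBolt implements IBolt {
     // The collector is thread-safe, so keeping it in an instance variable is safe.
     private OutputCollector _collector;

     public void prepare(Map stormConf, TopologyContext context, OutputCollector collector) {
         _collector = collector;
     }

     public void execute(Tuple input) {
         _collector.ack(input);
     }

     public void cleanup() {
     }
 }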

execute

public void execute(Tuple input)
Description copied from interface: IBolt
Process a single tuple of input. The Tuple object contains metadata on it about which component/stream/task it came from. The values of the Tuple can be accessed using Tuple#getValue. The IBolt does not have to process the Tuple immediately. It is perfectly fine to hang onto a tuple and process it later (for instance, to do an aggregation or join).

Tuples should be emitted using the OutputCollector provided through the prepare method. It is required that all input tuples are acked or failed at some point using the OutputCollector. Otherwise, Storm will be unable to determine when tuples coming off the spouts have been completed.

For the common case of acking an input tuple at the end of the execute method, see IBasicBolt which automates this.

Specified by:
execute in interface IBolt
Parameters:
input - The input tuple to be processed.
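
A sketch of an execute body that follows this contract, assuming a bolt like the one above whose prepare stored the collector in _collector; the field position and output values are illustrative.

 // Also requires backtype.storm.tuple.Values in addition to the imports above.
 public void execute(Tuple input) {
     String word = input.getString(0);
     // Anchor the output to the input tuple so a downstream failure triggers a replay.
     _collector.emit(input, new Values(word, word.length()));
     // Every input tuple must eventually be acked or failed.
     _collector.ack(input);
 }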

cleanup

public void cleanup()
Description copied from interface: IBolt
Called when an IBolt is going to be shut down. There is no guarantee that cleanup will be called, because the supervisor kill -9's worker processes on the cluster.

The one context where cleanup is guaranteed to be called is when a topology is killed while running Storm in local mode.

Specified by:
cleanup in interface IBolt
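
Because cleanup is best-effort, it should only release local resources rather than protect critical state. A minimal, hypothetical sketch is below; the _resource field (a java.io.Closeable held by the bolt) is an assumption, not part of this class.

 public void cleanup() {
     // Best-effort: the supervisor may kill -9 this worker, so this may never run.
     if (_resource != null) {
         try {
             _resource.close();
         } catch (java.io.IOException e) {
             // Nothing useful to do during shutdown.
         }
     }
 }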