@SupportsBatching @SeeAlso(value=ConvertJSONToSQL.class) @InputRequirement(value=INPUT_REQUIRED) @Tags(value={"sql","put","rdbms","database","update","insert","relational"}) @CapabilityDescription(value="Executes a SQL UPDATE or INSERT command. The content of an incoming FlowFile is expected to be the SQL command to execute. The SQL command may use the ? to escape parameters. In this case, the parameters to use must exist as FlowFile attributes with the naming convention sql.args.N.type and sql.args.N.value, where N is a positive integer. The sql.args.N.type is expected to be a number indicating the JDBC Type. The content of the FlowFile is expected to be in UTF-8 format.") @ReadsAttribute(attribute="fragment.identifier",description="If the <Support Fragment Transactions> property is true, this attribute is used to determine whether or not two FlowFiles belong to the same transaction.") @ReadsAttribute(attribute="fragment.count",description="If the <Support Fragment Transactions> property is true, this attribute is used to determine how many FlowFiles are needed to complete the transaction.") @ReadsAttribute(attribute="fragment.index",description="If the <Support Fragment Transactions> property is true, this attribute is used to determine the order that the FlowFiles in a transaction should be evaluated.") @ReadsAttribute(attribute="sql.args.N.type",description="Incoming FlowFiles are expected to be parametrized SQL statements. The type of each Parameter is specified as an integer that represents the JDBC Type of the parameter.") @ReadsAttribute(attribute="sql.args.N.value",description="Incoming FlowFiles are expected to be parametrized SQL statements. The value of the Parameters are specified as sql.args.1.value, sql.args.2.value, sql.args.3.value, and so on. 
The type of the sql.args.1.value Parameter is specified by the sql.args.1.type attribute.") @ReadsAttribute(attribute="sql.args.N.format",description="This attribute is always optional, but the default options may not always work for your data. Incoming FlowFiles are expected to be parametrized SQL statements. In some cases a format option needs to be specified; currently this is only applicable for binary data types, dates, times and timestamps. Binary Data Types (defaults to 'ascii') - ascii: each string character in your attribute value represents a single byte. This is the format provided by Avro Processors. base64: the string is a Base64 encoded string that can be decoded to bytes. hex: the string is hex encoded with all letters in upper case and no '0x' at the beginning. Dates/Times/Timestamps - Date, Time and Timestamp formats all support either custom formats or named formats ('yyyy-MM-dd', 'ISO_OFFSET_DATE_TIME') as specified by java.time.format.DateTimeFormatter. If not specified, a long value is expected to be a unix epoch (milliseconds since 1970-01-01), while a string value is expected in 'yyyy-MM-dd' format for Date, 'HH:mm:ss.SSS' for Time (some database engines, e.g. Derby or MySQL, do not support milliseconds and will truncate them), and 'yyyy-MM-dd HH:mm:ss.SSS' for Timestamp.") @WritesAttributes(value=@WritesAttribute(attribute="sql.generated.key",description="If the database generated a key for an INSERT statement and the Obtain Generated Keys property is set to true, this attribute will be added to indicate the generated key, if possible. This feature is not supported by all database vendors.")) public class PutSQL extends AbstractSessionFactoryProcessor
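The sql.args.N.type / sql.args.N.value / sql.args.N.format convention described above can be sketched as follows. This is a minimal illustration, not NiFi API code: the class, table, and attribute values are hypothetical, but the attribute names follow the documented convention and the numeric type codes come from the standard java.sql.Types constants.

```java
import java.sql.Types;
import java.util.LinkedHashMap;
import java.util.Map;

public class PutSqlArgsExample {
    // Builds the FlowFile attribute map for the parameterized statement:
    //   INSERT INTO users (id, name, created) VALUES (?, ?, ?)
    // The table and column names are hypothetical; the sql.args.N.* attribute
    // names follow the convention described in the capability description.
    static Map<String, String> buildAttributes() {
        Map<String, String> attrs = new LinkedHashMap<>();
        attrs.put("sql.args.1.type", String.valueOf(Types.INTEGER));   // JDBC type 4
        attrs.put("sql.args.1.value", "42");
        attrs.put("sql.args.2.type", String.valueOf(Types.VARCHAR));   // JDBC type 12
        attrs.put("sql.args.2.value", "alice");
        attrs.put("sql.args.3.type", String.valueOf(Types.TIMESTAMP)); // JDBC type 93
        attrs.put("sql.args.3.value", "2023-05-01 12:30:00.000");
        // Optional: an explicit DateTimeFormatter pattern for the timestamp value.
        attrs.put("sql.args.3.format", "yyyy-MM-dd HH:mm:ss.SSS");
        return attrs;
    }

    public static void main(String[] args) {
        buildAttributes().forEach((k, v) -> System.out.println(k + "=" + v));
    }
}
```

The FlowFile content would then carry the SQL text itself (UTF-8), with one `?` placeholder per numbered argument.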
| Modifier and Type | Class and Description |
|---|---|
| `private static class` | `PutSQL.FlowFilePoll`: A simple, immutable data structure to hold a List of FlowFiles and an indicator as to whether or not those FlowFiles represent a "fragmented transaction", that is, a collection of FlowFiles that must all be executed as a single transaction (we refer to it as a fragmented transaction because the information for that transaction, including the SQL and the parameters, is fragmented across multiple FlowFiles). |
| `private static class` | `PutSQL.FragmentedEnclosure` |
| `private static class` | `PutSQL.FunctionContext` |
| `private static interface` | `PutSQL.GroupingFunction` |
| `private static class` | `PutSQL.StatementFlowFileEnclosure`: A simple, immutable data structure to hold a Prepared Statement and a List of FlowFiles for which that statement should be evaluated. |
| `(package private) static class` | `PutSQL.TransactionalFlowFileFilter`: A FlowFileFilter that is responsible for ensuring that the FlowFiles returned either all belong to the same "fragmented transaction" (i.e., one transaction whose information is fragmented across multiple FlowFiles) or that none of the FlowFiles belongs to a fragmented transaction. |
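The fragmented-transaction grouping described for `PutSQL.TransactionalFlowFileFilter` relies on the fragment.* attributes listed in the class annotations. A minimal sketch of what two FlowFiles in the same two-statement transaction would carry (the identifier value is hypothetical; the attribute names are the documented ones):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class FragmentedTransactionExample {
    // Attributes for one FlowFile of a hypothetical two-statement transaction.
    static Map<String, String> fragmentAttributes(int index) {
        Map<String, String> attrs = new LinkedHashMap<>();
        attrs.put("fragment.identifier", "txn-1234");       // same for every FlowFile in the transaction
        attrs.put("fragment.count", "2");                   // FlowFiles needed to complete the transaction
        attrs.put("fragment.index", String.valueOf(index)); // order of evaluation within the transaction
        return attrs;
    }

    public static void main(String[] args) {
        System.out.println(fragmentAttributes(0));
        System.out.println(fragmentAttributes(1));
    }
}
```

With Support Fragment Transactions enabled, the processor holds statements until all `fragment.count` FlowFiles sharing a `fragment.identifier` arrive, then executes them as one transaction in `fragment.index` order.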
| Constructor and Description |
|---|
| `PutSQL()` |
Methods inherited from class AbstractSessionFactoryProcessor:
getControllerServiceLookup, getIdentifier, getLogger, getNodeTypeProvider, init, initialize, isConfigurationRestored, isScheduled, toString, updateConfiguredRestoredTrue, updateScheduledFalse, updateScheduledTrue

Methods inherited from class AbstractConfigurableComponent:
equals, getPropertyDescriptor, getPropertyDescriptors, getSupportedDynamicPropertyDescriptor, hashCode, onPropertyModified, validate

Methods inherited from class java.lang.Object:
clone, finalize, getClass, notify, notifyAll, wait, wait, wait

Methods inherited from interface Processor:
isStateful

Methods inherited from interface ConfigurableComponent:
getPropertyDescriptor, getPropertyDescriptors, onPropertyModified, validate

static final PropertyDescriptor CONNECTION_POOL
static final PropertyDescriptor SQL_STATEMENT
static final PropertyDescriptor AUTO_COMMIT
static final PropertyDescriptor SUPPORT_TRANSACTIONS
static final PropertyDescriptor TRANSACTION_TIMEOUT
static final PropertyDescriptor BATCH_SIZE
static final PropertyDescriptor OBTAIN_GENERATED_KEYS
static final Relationship REL_SUCCESS
static final Relationship REL_RETRY
static final Relationship REL_FAILURE
private static final String FRAGMENT_ID_ATTR
private static final String FRAGMENT_INDEX_ATTR
private static final String FRAGMENT_COUNT_ATTR
private static final String ERROR_MESSAGE_ATTR
private static final String ERROR_CODE_ATTR
private static final String ERROR_SQL_STATE_ATTR
private PutGroup<PutSQL.FunctionContext,Connection,PutSQL.StatementFlowFileEnclosure> process
private BiFunction<PutSQL.FunctionContext,ErrorTypes,ErrorTypes.Result> adjustError
private ExceptionHandler<PutSQL.FunctionContext> exceptionHandler
private final PartialFunctions.FetchFlowFiles<PutSQL.FunctionContext> fetchFlowFiles
private final PartialFunctions.InitConnection<PutSQL.FunctionContext,Connection> initConnection
private final PutSQL.GroupingFunction groupFragmentedTransaction
private final PutSQL.GroupingFunction groupFlowFilesBySQLBatch
private final PutSQL.GroupingFunction groupFlowFilesBySQL
final PutGroup.GroupFlowFiles<PutSQL.FunctionContext,Connection,PutSQL.StatementFlowFileEnclosure> groupFlowFiles
final PutGroup.PutFlowFiles<PutSQL.FunctionContext,Connection,PutSQL.StatementFlowFileEnclosure> putFlowFiles
protected List<PropertyDescriptor> getSupportedPropertyDescriptors()
Overrides: getSupportedPropertyDescriptors in class AbstractConfigurableComponent

protected final Collection<ValidationResult> customValidate(ValidationContext context)
Overrides: customValidate in class AbstractConfigurableComponent

public Set<Relationship> getRelationships()
Specified by: getRelationships in interface Processor
Overrides: getRelationships in class AbstractSessionFactoryProcessor

private ExceptionHandler.OnError<PutSQL.FunctionContext,FlowFile> onFlowFileError(ProcessContext context, ProcessSession session, RoutingResult result)
private ExceptionHandler.OnError<RollbackOnFailure,PartialFunctions.FlowFileGroup> onGroupError(ProcessContext context, ProcessSession session, RoutingResult result)
private List<FlowFile> addErrorAttributesToFlowFilesInGroup(ProcessSession session, List<FlowFile> flowFilesOnRelationship, List<FlowFile> flowFilesInGroup, Exception exception)
private ExceptionHandler.OnError<PutSQL.FunctionContext,PutSQL.StatementFlowFileEnclosure> onBatchUpdateError(ProcessContext context, ProcessSession session, RoutingResult result)
@OnScheduled public void constructProcess()
public void onTrigger(ProcessContext context, ProcessSessionFactory sessionFactory) throws ProcessException
Throws: ProcessException

private PutSQL.FlowFilePoll pollFlowFiles(ProcessContext context, ProcessSession session, PutSQL.FunctionContext functionContext, RoutingResult result)
If there are no FlowFiles to process, returns null. Otherwise, a List of FlowFiles will be returned. If all FlowFiles pulled are not eligible to be processed, the FlowFiles will be penalized and transferred back to the input queue and an empty List will be returned. Otherwise, if the Support Fragmented Transactions property is true, all FlowFiles that belong to the same transaction will be sorted in the order that they should be evaluated.
Parameters:
context - the process context for determining properties
session - the process session for pulling FlowFiles
Returns: null if there are no FlowFiles to process

private String determineGeneratedKey(PreparedStatement stmt)
Returns null if no key was generated or it could not be determined.
Parameters:
stmt - the statement that generated a key
Returns: null if no key was generated, or it could not be determined

private String getSQL(ProcessSession session, FlowFile flowFile)
Parameters:
session - the session that can be used to access the given FlowFile
flowFile - the FlowFile whose SQL statement should be executed

boolean isFragmentedTransactionReady(List<FlowFile> flowFiles, Long transactionTimeoutMillis) throws IllegalArgumentException
Parameters:
flowFiles - the FlowFiles whose relationship is to be determined
transactionTimeoutMillis - the maximum amount of time (in milliseconds) that we should wait for all FlowFiles in a transaction to be present before routing to failure
Returns: null if the FlowFiles should instead be processed
Throws: IllegalArgumentException

private FlowFile addErrorAttributesToFlowFile(ProcessSession session, FlowFile flowFile, Exception exception)
Copyright © 2023 Apache NiFi Project. All rights reserved.