
case class MergedParameters(parameters: Map[String, String]) extends Product with Serializable

Adds validators and typed accessors to the underlying string parameter map.

Linear Supertypes
Serializable, Serializable, Product, Equals, AnyRef, Any

Instance Constructors

  1. new MergedParameters(parameters: Map[String, String])
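
The class is a thin wrapper over a plain `Map[String, String]`. A minimal stand-in sketching that pattern (this is an illustration, not the connector's implementation; the key names `"sfdatabase"`, `"sfrole"`, `"autopushdown"` are assumptions, so consult the connector's `Parameters` object for the canonical names):

```scala
// Simplified stand-in for MergedParameters: typed accessors over a string map.
// Key names below are assumed for illustration only.
case class MergedParametersSketch(parameters: Map[String, String]) {
  def sfDatabase: String = parameters("sfdatabase")
  def sfRole: Option[String] = parameters.get("sfrole")
  def autoPushdown: Boolean =
    parameters.getOrElse("autopushdown", "on").equalsIgnoreCase("on")
}

val p = MergedParametersSketch(Map(
  "sfurl" -> "account.snowflakecomputing.com:443",
  "sfdatabase" -> "ANALYTICS"
))
println(p.sfDatabase) // ANALYTICS
println(p.sfRole)     // None
```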

Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##(): Int
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  4. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  5. def autoPushdown: Boolean

Whether the connector automatically enables or disables the Snowflake pushdown function.

  6. def awsAccessKey: Option[String]
  7. def awsSecretKey: Option[String]
  8. def azureSAS: Option[String]
  9. def bindVariableEnabled: Boolean
  10. def checkBucketConfiguration: Boolean

    Returns true if bucket lifecycle configuration should be checked

  11. def checkTableExistenceInCurrentSchemaOnly: Boolean
  12. def clone(): AnyRef
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native()
  13. def columnMap: Option[Map[String, String]]

Retrieve column mapping data; None if empty.

  14. def columnMapping: String
  15. def columnMismatchBehavior: String
  16. def continueOnError: Boolean

Set the ON_ERROR parameter to CONTINUE in the COPY command. TODO: create a data-validation function on the Spark side instead of using the COPY command.

  17. def createPerQueryTempDir(): String

    Creates a per-query subdirectory in the rootTempDir, with a random UUID.
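
A sketch of that scheme under the stated behavior: a random UUID subdirectory appended to the root temp dir (the helper name and path handling here are hypothetical, not the connector's code):

```scala
import java.util.UUID

// Hypothetical helper: derive a per-query subdirectory from a root temp dir
// by appending a random UUID, as the accessor's doc describes.
def perQueryTempDir(rootTempDir: String): String =
  s"${rootTempDir.stripSuffix("/")}/${UUID.randomUUID()}"

val dir = perQueryTempDir("s3://my-bucket/tmp/")
// e.g. "s3://my-bucket/tmp/<random-uuid>"
```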

  18. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  19. def expectedPartitionCount: Int
  20. def expectedPartitionSize: Long
  21. def extraCopyOptions: String

Extra options to append to the Snowflake COPY command (e.g. "MAXERROR 100").

  22. def finalize(): Unit
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  23. final def getClass(): Class[_]
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  24. def getQueryIDUrl(queryID: String): String
  25. def getQueryResultFormat: Option[String]

    Snowflake query result format

  26. def getTimeOutputFormat: Option[String]

    Snowflake time output format

  27. def isExecuteQueryWithSyncMode: Boolean
  28. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  29. def isSslON: Boolean
  30. def isTimestampSnowflake(timestampFormat: String): Boolean
  31. def isTimezoneSnowflake: Boolean
  32. def isTimezoneSnowflakeDefault: Boolean
  33. def isTimezoneSpark: Boolean
  34. def keepOriginalColumnNameCase: Boolean
  35. def maxRetryCount: Int
  36. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  37. def nonProxyHosts: Option[String]
  38. final def notify(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  39. final def notifyAll(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  40. lazy val parallelism: Option[Int]

    Number of threads used for PUT/GET.

  41. val parameters: Map[String, String]
  42. def postActions: Array[String]

List of semi-colon separated SQL statements to run after successful write operations. This can be useful for running GRANT operations to make your new tables readable to other users and groups.

    If the action string contains %s, the table name will be substituted in, in case a staging table is being used.

    Defaults to empty.
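
A sketch of how such an actions string might be split and the %s placeholder resolved against the (possibly staging) table name; the helper is hypothetical and the connector's actual parsing may differ:

```scala
// Hypothetical helper: split a semi-colon separated actions string and
// substitute the table name for every %s placeholder.
def resolveActions(raw: String, tableName: String): Array[String] =
  raw.split(";")
     .map(_.trim)
     .filter(_.nonEmpty)
     .map(_.replace("%s", tableName))

val actions = resolveActions(
  "GRANT SELECT ON %s TO ROLE analysts; GRANT SELECT ON %s TO ROLE bi",
  "MY_TABLE"
)
// actions(0) == "GRANT SELECT ON MY_TABLE TO ROLE analysts"
```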

  43. def preActions: Array[String]

List of semi-colon separated SQL statements to run before write operations. This can be useful for running DELETE operations to clean up data.

    If the action string contains %s, the table name will be substituted in, in case a staging table is being used.

    Defaults to empty.

  44. def privateKey: Option[PrivateKey]

Generate a private key from a PEM key value.

    returns

    private key object
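
A sketch of turning a PEM-encoded PKCS#8 key value into a `java.security.PrivateKey`, which is what this accessor's doc describes; the connector's actual parsing and supported key formats may differ:

```scala
import java.security.{KeyFactory, PrivateKey}
import java.security.spec.PKCS8EncodedKeySpec
import java.util.Base64

// Hypothetical helper: strip the PEM armor, base64-decode the body, and
// build an RSA PrivateKey from the resulting PKCS#8 bytes.
def parsePrivateKey(pem: String): PrivateKey = {
  val body = pem
    .replace("-----BEGIN PRIVATE KEY-----", "")
    .replace("-----END PRIVATE KEY-----", "")
    .replaceAll("\\s", "")
  val keySpec = new PKCS8EncodedKeySpec(Base64.getDecoder.decode(body))
  KeyFactory.getInstance("RSA").generatePrivate(keySpec)
}
```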

  45. def proxyHost: Option[String]
  46. lazy val proxyInfo: Option[ProxyInfo]
  47. def proxyPassword: Option[String]
  48. def proxyPort: Option[String]
  49. def proxyUser: Option[String]
  50. def purge(): Boolean

    Whether or not to have PURGE in the COPY statement generated by the Spark connector

  51. def query: Option[String]

    The Snowflake query to be used as the target when loading data.

  52. def quoteJsonFieldName: Boolean
  53. lazy val rootTempDir: String

A root directory to be used for intermediate data exchange, expected to be on cloud storage (S3 or Azure storage), or somewhere that can be written to and read from by Snowflake. Make sure that credentials are available for this cloud provider.

  54. lazy val rootTempDirStorageType: FSType
  55. def s3maxfilesize: String

Maximum file size used when moving data out of Snowflake.

  56. def setColumnMap(fromSchema: Option[StructType], toSchema: Option[StructType]): Unit

Set the column map.

  57. def sfAccount: Option[String]

    Snowflake account - optional

  58. def sfAuthenticator: Option[String]

    Mapping OAuth and authenticator values

  59. def sfCompress: Boolean

    Snowflake use compression on/off - "on" by default

  60. def sfDatabase: String

    Snowflake database name

  61. def sfExtraOptions: Map[String, AnyRef]

Returns a map of options that are not known to the connector and are passed verbatim to the JDBC driver.

  62. def sfFullURL: String

URL pointing to the Snowflake database, including protocol; for example, https://host:port.

  63. def sfPassword: String

    Snowflake password

  64. def sfRole: Option[String]

    Snowflake role - optional

  65. def sfSSL: String

    Snowflake SSL on/off - "on" by default

  66. def sfSchema: String

    Snowflake schema

  67. def sfTimestampLTZOutputFormat: Option[String]
  68. def sfTimestampNTZOutputFormat: Option[String]
  69. def sfTimestampTZOutputFormat: Option[String]
  70. def sfTimezone: Option[String]

Snowflake timezone - optional

  71. def sfToken: Option[String]
  72. def sfURL: String

URL pointing to the Snowflake database, simply host:port.

  73. def sfUser: String

    Snowflake user

  74. def sfWarehouse: Option[String]

    Snowflake warehouse

  75. def skipWriteWhenWritingEmptyDataFrame: Boolean
  76. def stagingTableNameRemoveQuotesOnly: Boolean
  77. def storagePath: Option[String]
  78. def streamingStage: Option[String]
  79. final def synchronized[T0](arg0: ⇒ T0): T0
    Definition Classes
    AnyRef
  80. def table: Option[TableName]

    The Snowflake table to be used as the target when loading or writing data.

  81. def temporaryAWSCredentials: Option[AWSCredentials]

Temporary AWS credentials which are passed to Snowflake. These only need to be supplied by the user when Hadoop is configured to authenticate to S3 via IAM roles assigned to EC2 instances.

  82. def temporaryAzureStorageCredentials: Option[StorageCredentialsSharedAccessSignature]

SAS token to be passed to Snowflake to access data in Azure storage. We currently don't support a full storage account key, so this has to be provided if the customer would like to load data through their storage account directly.

  83. def toString(): String
    Definition Classes
    MergedParameters → AnyRef → Any
  84. def truncateColumns(): Boolean

    Whether or not to have TRUNCATE_COLUMNS in the COPY statement generated by the Spark connector.

  85. def truncateTable: Boolean

Truncate the table when overwriting, keeping the table schema.

  86. def uploadChunkSize: Int
  87. def useAWSRegionURL: Boolean
  88. def useAwsMultiplePartsUpload: Boolean
  89. def useCopyUnload: Boolean
  90. def useExponentialBackoff: Boolean
  91. def useProxy: Boolean

    Proxy related parameters.

  92. def useStagingTable: Boolean

When true, data is always loaded into a new temporary table when performing an overwrite. This is to ensure that the whole load process succeeds before dropping any data from Snowflake, which can be useful if, in the event of failures, stale data is better than no data for your systems.

    Defaults to true.
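
A sketch of the overwrite flow this flag enables: load into a staging table first and swap it in only after the load succeeds, so a failed load never drops existing data. The statement shapes and helper below are illustrative assumptions, not the connector's generated SQL:

```scala
// Hypothetical helper: outline the SQL an overwrite might issue with and
// without a staging table. The load step itself is elided with a comment.
def overwriteStatements(target: String, useStagingTable: Boolean): Seq[String] =
  if (useStagingTable) Seq(
    s"CREATE TABLE ${target}_staging LIKE $target",
    // ...load into the staging table here; on failure, $target is untouched...
    s"ALTER TABLE ${target}_staging SWAP WITH $target",
    s"DROP TABLE ${target}_staging"
  ) else Seq(
    s"TRUNCATE TABLE $target" // destructive before the load even starts
    // ...load directly into the truncated table...
  )
```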

  93. lazy val usingExternalStage: Boolean
  94. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  95. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  96. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native()

Inherited from Serializable

Inherited from Serializable

Inherited from Product

Inherited from Equals

Inherited from AnyRef

Inherited from Any
