
Typical Usage and Code Examples of the Java SqoopException Class


This article collects typical usage code examples of the Java class org.apache.sqoop.common.SqoopException. If you have been struggling with questions such as what the SqoopException class is for, how to use it, or what real-world usage looks like, the curated class code examples below may help.

The SqoopException class belongs to the org.apache.sqoop.common package. A total of 40 code examples of the class are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Java code examples.
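
Before diving into the examples, here is a minimal, self-contained sketch of the pattern nearly all of them share: a SqoopException carries a structured error code plus optional detail text and an optional cause. The `ErrorCode` interface and `DemoError` enum below are simplified stand-ins invented for illustration (the real types live in org.apache.sqoop.common and its subprojects), and the exact message format is an assumption.

```java
// Simplified stand-ins for org.apache.sqoop.common types, for illustration only.
interface ErrorCode {
    String getCode();
    String getMessage();
}

// Hypothetical error-code enum; real code uses enums such as ShellError or RepositoryError.
enum DemoError implements ErrorCode {
    DEMO_0001("Invalid identifier");

    private final String message;
    DemoError(String message) { this.message = message; }
    public String getCode() { return name(); }
    public String getMessage() { return message; }
}

// Sketch of the exception itself: an unchecked exception that keeps the
// machine-readable code (so callers can branch on getErrorCode()) and folds
// code, message, and detail into the human-readable message.
class SqoopExceptionSketch extends RuntimeException {
    private final ErrorCode code;

    SqoopExceptionSketch(ErrorCode code, String detail, Throwable cause) {
        super(code.getCode() + ": " + code.getMessage() + " - " + detail, cause);
        this.code = code;
    }

    ErrorCode getErrorCode() { return code; }
}

public class Main {
    public static void main(String[] args) {
        try {
            // Mirrors the throw sites below, which pick an error code plus a detail string
            throw new SqoopExceptionSketch(DemoError.DEMO_0001, "Invalid id: 42", null);
        } catch (SqoopExceptionSketch ex) {
            // Mirrors the catch sites below, which branch on the error code
            System.out.println(ex.getErrorCode().getCode()); // prints DEMO_0001
            System.out.println(ex.getMessage());
        }
    }
}
```

The Sqoop examples that follow use exactly this split: throw sites choose an error code and a short detail string (often wrapping an underlying SQLException or IOException as the cause), while catch sites such as errorHook in Example 1 branch on getErrorCode() rather than parsing message text.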

Example 1: errorHook

Votes: 3

import org.apache.sqoop.common.SqoopException; // import the required package/class
/**
 * Error hook installed to Groovy shell.
 *
 * Displays the exception that occurred while executing a command. In most
 * cases we simply delegate to the throwable-printing method; however, when
 * we receive ShellError.SHELL_0006 (a server-side exception), we unwrap the
 * server issue and display only that, as the local context should not make
 * any difference.
 *
 * @param t Throwable to be displayed
 */
public static void errorHook(Throwable t) {
  // Based on the kind of exception we are dealing with, let's provide a different user experience
  if(t instanceof SqoopException && ((SqoopException)t).getErrorCode() == ShellError.SHELL_0006) {
    println("@|red Server has returned exception: |@");
    printThrowable(t.getCause(), isVerbose());
  } else if(t instanceof SqoopException && ((SqoopException)t).getErrorCode() == ShellError.SHELL_0003) {
    print("@|red Invalid command invocation: |@");
    // In most cases the cause will be the actual parsing error, so let's print that alone
    if (t.getCause() != null) {
      println(t.getCause().getMessage());
    } else {
      println(t.getMessage());
    }
  } else if(t.getClass() == MissingPropertyException.class) {
    print("@|red Unknown command: |@");
    println(t.getMessage());
  } else {
    println("@|red Exception has occurred during processing command |@");
    printThrowable(t, isVerbose());
  }
}
 

Developer: vybs
Project: sqoop-on-spark
Lines of code: 33
Source file: ThrowableDisplayer.java

Example 2: assertJobSubmissionFailure

Votes: 3

import org.apache.sqoop.common.SqoopException; // import the required package/class
public void assertJobSubmissionFailure(MJob job, String ...fragments) throws Exception {
  // Try to execute the job and verify that it was not successful
  try {
    executeJob(job);
    fail("Expected failure in the job submission.");
  } catch (SqoopException ex) {
    // Top level exception should be CLIENT_0001
    assertEquals(ClientError.CLIENT_0001, ex.getErrorCode());

    // We can directly verify the ErrorCode from SqoopException as client side
    // is not rebuilding SqoopExceptions per missing ErrorCodes. E.g. the cause
    // will be generic Throwable and not SqoopException instance.
    Throwable cause = ex.getCause();
    assertNotNull(cause);

    for(String fragment : fragments) {
      assertTrue(cause.getMessage().contains(fragment), "Expected fragment " + fragment + " in error message " + cause.getMessage());
    }
  }
}
 

Developer: vybs
Project: sqoop-on-spark
Lines of code: 21
Source file: OutputDirectoryTest.java

Example 3: executeValidator

Votes: 3

import org.apache.sqoop.common.SqoopException; // import the required package/class
/**
 * Execute single validator.
 *
 * @param object Input, Config or Class instance
 * @param validator Validator annotation
 * @return Validator instance that was executed
 */
private AbstractValidator executeValidator(Object object, Validator validator) {
  // Try to get validator instance from the cache
  AbstractValidator instance = cache.get(validator.value());

  if(instance == null) {
    instance = (AbstractValidator) ClassUtils.instantiate(validator.value());

    // This can happen if some connector's jars are missing from our classpath
    if(instance == null) {
      throw new SqoopException(ConfigValidationError.VALIDATION_0004, validator.value().getName());
    }

    cache.put(validator.value(), instance);
  } else {
    instance.reset();
  }

  instance.setStringArgument(validator.strArg());
  instance.validate(object);
  return instance;
}
 

Developer: vybs
Project: sqoop-on-spark
Lines of code: 29
Source file: ConfigValidationRunner.java

Example 4: writeContent

Votes: 3

import org.apache.sqoop.common.SqoopException; // import the required package/class
private void writeContent() {
  try {
    if (LOG.isDebugEnabled()) {
      LOG.debug("Extracted data: " + fromIDF.getCSVTextData());
    }
    // NOTE: The fromIDF and the corresponding fromSchema are used only for
    // the matching process.
    // The output of the mappers is finally written to the toIDF object after
    // the matching process, since the writable encapsulates the toIDF
    // (i.e. new SqoopWritable(toIDF))
    toIDF.setObjectData(matcher.getMatchingData(fromIDF.getObjectData()));
    // NOTE: We do not use the reducer to do the writing (a.k.a LOAD in ETL).
    // Hence the mapper sets up the writable
    context.write(new Text(toIDF.getCSVTextData()), NullWritable.get());
  } catch (Exception e) {
    throw new SqoopException(MRExecutionError.MAPRED_EXEC_0013, e);
  }
}
 

Developer: vybs
Project: sqoop-on-spark
Lines of code: 19
Source file: SqoopDataWriter.java

Example 5: executeFunction

Votes: 3

import org.apache.sqoop.common.SqoopException; // import the required package/class
@Override
public Object executeFunction(CommandLine line, boolean isInteractive) {
  if (line.hasOption(Constants.OPT_PRINCIPAL) ^ line.hasOption(Constants.OPT_PRINCIPAL_TYPE)) {
    throw new SqoopException(ShellError.SHELL_0003,
        ShellEnvironment.getResourceBundle().getString(Constants.RES_SHOW_ROLE_BAD_ARGUMENTS_PRINCIPAL_TYPE));
  }

  MPrincipal principal = (line.hasOption(Constants.OPT_PRINCIPAL))
      ? new MPrincipal(
      line.getOptionValue(Constants.OPT_PRINCIPAL),
      line.getOptionValue(Constants.OPT_PRINCIPAL_TYPE))
      : null;

  showRoles(principal);

  return Status.OK;
}
 

Developer: vybs
Project: sqoop-on-spark
Lines of code: 18
Source file: ShowRoleFunction.java

Example 6: deleteLink

Votes: 3

import org.apache.sqoop.common.SqoopException; // import the required package/class
/**
 * {@inheritDoc}
 */
@Override
public void deleteLink(final long linkId) {
  doWithConnection(new DoWithConnection() {
    @Override
    public Object doIt(Connection conn) {
      if(!handler.existsLink(linkId, conn)) {
        throw new SqoopException(RepositoryError.JDBCREPO_0017,
          "Invalid id: " + linkId);
      }
      if(handler.inUseLink(linkId, conn)) {
        throw new SqoopException(RepositoryError.JDBCREPO_0021,
          "Id in use: " + linkId);
      }

      handler.deleteLink(linkId, conn);
      return null;
    }
  });
}
 

Developer: vybs
Project: sqoop-on-spark
Lines of code: 23
Source file: JdbcRepository.java

Example 7: filterSubmission

Votes: 3

import org.apache.sqoop.common.SqoopException; // import the required package/class
/**
 * Filter submissions, keeping only those for which the caller has the READ privilege
 */
public static List<MSubmission> filterSubmission(List<MSubmission> submissions) throws SqoopException {
  Collection<MSubmission> collection = Collections2.filter(submissions, new Predicate<MSubmission>() {
    @Override
    public boolean apply(MSubmission input) {
      try {
        String jobId = String.valueOf(input.getJobId());
        checkPrivilege(getPrivilege(MResource.TYPE.JOB, jobId, MPrivilege.ACTION.READ));
        // add valid submission
        return true;
      } catch (Exception e) {
        //do not add into result if invalid submission
        return false;
      }
    }
  });
  return Lists.newArrayList(collection);
}
 

Developer: vybs
Project: sqoop-on-spark
Lines of code: 21
Source file: AuthorizationEngine.java

Example 8: updateLink

Votes: 3

import org.apache.sqoop.common.SqoopException; // import the required package/class
/**
 * {@inheritDoc}
 */
@Override
public void updateLink(final MLink link, RepositoryTransaction tx) {
  doWithConnection(new DoWithConnection() {
    @Override
    public Object doIt(Connection conn) {
      if (!link.hasPersistenceId()) {
        throw new SqoopException(RepositoryError.JDBCREPO_0016);
      }
      if (!handler.existsLink(link.getPersistenceId(), conn)) {
        throw new SqoopException(RepositoryError.JDBCREPO_0017, "Invalid id: "
            + link.getPersistenceId());
      }

      handler.updateLink(link, conn);
      return null;
    }
  }, (JdbcRepositoryTransaction) tx);
}
 

Developer: vybs
Project: sqoop-on-spark
Lines of code: 22
Source file: JdbcRepository.java

Example 9: existsLink

Votes: 3

import org.apache.sqoop.common.SqoopException; // import the required package/class
/**
 * {@inheritDoc}
 */
@Override
public boolean existsLink(long linkId, Connection conn) {
  PreparedStatement stmt = null;
  ResultSet rs = null;
  try {
    stmt = conn.prepareStatement(crudQueries.getStmtSelectLinkCheckById());
    stmt.setLong(1, linkId);
    rs = stmt.executeQuery();

    // A query with COUNT always returns exactly one row
    rs.next();

    return rs.getLong(1) == 1;
  } catch (SQLException ex) {
    logException(ex, linkId);
    throw new SqoopException(CommonRepositoryError.COMMON_0022, ex);
  } finally {
    closeResultSets(rs);
    closeStatements(stmt);
  }
}
 

Developer: vybs
Project: sqoop-on-spark
Lines of code: 25
Source file: CommonRepositoryHandler.java

Example 10: enableJob

Votes: 3

import org.apache.sqoop.common.SqoopException; // import the required package/class
/**
 * {@inheritDoc}
 */
@Override
public void enableJob(final long id, final boolean enabled) {
  doWithConnection(new DoWithConnection() {
    @Override
    public Object doIt(Connection conn) {
      if(!handler.existsJob(id, conn)) {
        throw new SqoopException(RepositoryError.JDBCREPO_0020,
          "Invalid id: " + id);
      }

      handler.enableJob(id, enabled, conn);
      return null;
    }
  });
}
 

Developer: vybs
Project: sqoop-on-spark
Lines of code: 19
Source file: JdbcRepository.java

Example 11: deleteLink

Votes: 3

import org.apache.sqoop.common.SqoopException; // import the required package/class
/**
 * {@inheritDoc}
 */
@Override
public void deleteLink(long linkId, Connection conn) {
  PreparedStatement dltConn = null;

  try {
    deleteLinkInputs(linkId, conn);
    dltConn = conn.prepareStatement(crudQueries.getStmtDeleteLink());
    dltConn.setLong(1, linkId);
    dltConn.executeUpdate();
  } catch (SQLException ex) {
    logException(ex, linkId);
    throw new SqoopException(CommonRepositoryError.COMMON_0019, ex);
  } finally {
    closeStatements(dltConn);
  }
}
 

Developer: vybs
Project: sqoop-on-spark
Lines of code: 20
Source file: CommonRepositoryHandler.java

Example 12: findSubmissions

Votes: 3

import org.apache.sqoop.common.SqoopException; // import the required package/class
/**
 * {@inheritDoc}
 */
@Override
public List<MSubmission> findSubmissions(Connection conn) {
  List<MSubmission> submissions = new LinkedList<MSubmission>();
  PreparedStatement stmt = null;
  ResultSet rs = null;
  try {
    stmt = conn.prepareStatement(crudQueries.getStmtSelectSubmissions());
    rs = stmt.executeQuery();

    while(rs.next()) {
      submissions.add(loadSubmission(rs, conn));
    }

    rs.close();
    rs = null;
  } catch (SQLException ex) {
    logException(ex);
    throw new SqoopException(CommonRepositoryError.COMMON_0036, ex);
  } finally {
    closeResultSets(rs);
    closeStatements(stmt);
  }

  return submissions;
}
 

Developer: vybs
Project: sqoop-on-spark
Lines of code: 29
Source file: CommonRepositoryHandler.java

Example 13: testConnectorConfigUpgradeHandlerWithFindLinksForConnectorError

Votes: 3

import org.apache.sqoop.common.SqoopException; // import the required package/class
/**
 * Test the exception handling procedure when the database handler fails to
 * find links for a given connector
 */
@Test
public void testConnectorConfigUpgradeHandlerWithFindLinksForConnectorError() {
  MConnector newConnector = connector(1, "1.1");
  MConnector oldConnector = connector(1);

  SqoopConnector sqconnector = mock(SqoopConnector.class);
  when(sqconnector.getConfigurableUpgrader()).thenReturn(connectorUpgraderMock);
  when(connectorMgrMock.getSqoopConnector(anyString())).thenReturn(sqconnector);

  SqoopException exception = new SqoopException(RepositoryError.JDBCREPO_0000,
      "find links for connector error.");
  doThrow(exception).when(repoHandlerMock).findLinksForConnector(anyLong(), any(Connection.class));

  try {
    repoSpy.upgradeConnector(oldConnector, newConnector);
  } catch (SqoopException ex) {
    assertEquals(ex.getMessage(), exception.getMessage());
    verify(repoHandlerMock, times(1)).findLinksForConnector(anyLong(), any(Connection.class));
    verifyNoMoreInteractions(repoHandlerMock);
    return;
  }

  fail("Should throw out an exception with message: " + exception.getMessage());
}
 

Developer: vybs
Project: sqoop-on-spark
Lines of code: 29
Source file: TestJdbcRepository.java

Example 14: toAVRO

Votes: 3

import org.apache.sqoop.common.SqoopException; // import the required package/class
private GenericRecord toAVRO(String csv) {

    String[] csvStringArray = parseCSVString(csv);

    if (csvStringArray == null) {
      return null;
    }
    Column[] columns = schema.getColumnsArray();
    if (csvStringArray.length != columns.length) {
      throw new SqoopException(IntermediateDataFormatError.INTERMEDIATE_DATA_FORMAT_0001,
          "The data " + csv + " has the wrong number of fields.");
    }
    GenericRecord avroObject = new GenericData.Record(avroSchema);
    for (int i = 0; i < csvStringArray.length; i++) {
      if (csvStringArray[i].equals(NULL_VALUE) && !columns[i].isNullable()) {
        throw new SqoopException(IntermediateDataFormatError.INTERMEDIATE_DATA_FORMAT_0005,
            columns[i].getName() + " does not support null values");
      }
      if (csvStringArray[i].equals(NULL_VALUE)) {
        avroObject.put(columns[i].getName(), null);
        continue;
      }
      avroObject.put(columns[i].getName(), toAVRO(csvStringArray[i], columns[i]));
    }
    return avroObject;
  }
 

Developer: vybs
Project: sqoop-on-spark
Lines of code: 27
Source file: AVROIntermediateDataFormat.java

Example 15: testDriverConfigUpgradeHandlerWithFindJobsError

Votes: 3

import org.apache.sqoop.common.SqoopException; // import the required package/class
/**
 * Test the exception handling procedure when the database handler fails to
 * find jobs for driverConfig
 */
@Test
public void testDriverConfigUpgradeHandlerWithFindJobsError() {
  MDriver newDriverConfig = driver();

  when(driverMock.getConfigurableUpgrader()).thenReturn(driverUpgraderMock);

  SqoopException exception = new SqoopException(RepositoryError.JDBCREPO_0000,
      "find jobs error.");
  doThrow(exception).when(repoHandlerMock).findJobs(any(Connection.class));

  try {
    repoSpy.upgradeDriver(newDriverConfig);
  } catch (SqoopException ex) {
    assertEquals(ex.getMessage(), exception.getMessage());
    verify(repoHandlerMock, times(1)).findJobs(any(Connection.class));
    verifyNoMoreInteractions(repoHandlerMock);
    return;
  }

  fail("Should throw out an exception with message: " + exception.getMessage());
}
 

Developer: vybs
Project: sqoop-on-spark
Lines of code: 26
Source file: TestJdbcRepository.java

Example 16: existTable

Votes: 3

import org.apache.sqoop.common.SqoopException; // import the required package/class
public boolean existTable(String table) {
  try {
    String[] splitNames = dequalify(table);

    DatabaseMetaData dbmd = connection.getMetaData();
    ResultSet rs = dbmd.getTables(null, splitNames[0], splitNames[1], null);

    return rs.next();

  } catch (SQLException e) {
    logSQLException(e);
    throw new SqoopException(GenericJdbcConnectorError.GENERIC_JDBC_CONNECTOR_0003, e);
  }
}
 

Developer: vybs
Project: sqoop-on-spark
Lines of code: 19
Source file: GenericJdbcExecutor.java

Example 17: findJob

Votes: 3

import org.apache.sqoop.common.SqoopException; // import the required package/class
/**
 * {@inheritDoc}
 */
@Override
public MJob findJob(String name, Connection conn) {
  PreparedStatement stmt = null;
  try {
    stmt = conn.prepareStatement(crudQueries.getStmtSelectJobSingleByName());
    stmt.setString(1, name);

    List<MJob> jobs = loadJobs(stmt, conn);

    if (jobs.size() != 1) {
      return null;
    }

    // Return the first and only job object
    return jobs.get(0);

  } catch (SQLException ex) {
    logException(ex, name);
    throw new SqoopException(CommonRepositoryError.COMMON_0028, ex);
  } finally {
    closeStatements(stmt);
  }
}
 

Developer: vybs
Project: sqoop-on-spark
Lines of code: 27
Source file: CommonRepositoryHandler.java

Example 18: testOutputDirectoryIsAFile

Votes: 3

import org.apache.sqoop.common.SqoopException; // import the required package/class
@Test(expectedExceptions = SqoopException.class)
public void testOutputDirectoryIsAFile() throws Exception {
  File file = File.createTempFile("MastersOfOrion", ".txt");
  file.createNewFile();

  LinkConfiguration linkConfig = new LinkConfiguration();
  ToJobConfiguration jobConfig = new ToJobConfiguration();

  linkConfig.linkConfig.uri = "file:///";
  jobConfig.toJobConfig.outputDirectory = file.getAbsolutePath();

  InitializerContext initializerContext = new InitializerContext(new MutableMapContext());

  Initializer initializer = new HdfsToInitializer();
  initializer.initialize(initializerContext, linkConfig, jobConfig);
}
 

Developer: vybs
Project: sqoop-on-spark
Lines of code: 17
Source file: TestToInitializer.java

Example 19: testOutputDirectoryIsNotEmpty

Votes: 3

import org.apache.sqoop.common.SqoopException; // import the required package/class
@Test(expectedExceptions = SqoopException.class)
public void testOutputDirectoryIsNotEmpty() throws Exception {
  File dir = Files.createTempDir();
  File file = File.createTempFile("MastersOfOrion", ".txt", dir);

  LinkConfiguration linkConfig = new LinkConfiguration();
  ToJobConfiguration jobConfig = new ToJobConfiguration();

  linkConfig.linkConfig.uri = "file:///";
  jobConfig.toJobConfig.outputDirectory = dir.getAbsolutePath();

  InitializerContext initializerContext = new InitializerContext(new MutableMapContext());

  Initializer initializer = new HdfsToInitializer();
  initializer.initialize(initializerContext, linkConfig, jobConfig);
}
 

Developer: vybs
Project: sqoop-on-spark
Lines of code: 17
Source file: TestToInitializer.java

Example 20: insertConfigsForDriver

Votes: 3

import org.apache.sqoop.common.SqoopException; // import the required package/class
/**
 * Helper method to insert the configs from the driver into the repository.
 *
 * @param mDriver The driver instance to use to upgrade.
 * @param conn    JDBC link to use for updating the configs
 */
private void insertConfigsForDriver(MDriver mDriver, Connection conn) {
  long driverId = mDriver.getPersistenceId();
  PreparedStatement baseConfigStmt = null;
  PreparedStatement baseInputStmt = null;
  try {
    baseConfigStmt = conn.prepareStatement(crudQueries.getStmtInsertIntoConfig(),
        Statement.RETURN_GENERATED_KEYS);

    baseInputStmt = conn.prepareStatement(crudQueries.getStmtInsertIntoInput(),
        Statement.RETURN_GENERATED_KEYS);

    // Register a driver config as a job type with no direction
    registerConfigs(driverId, null, mDriver.getDriverConfig().getConfigs(),
        MConfigType.JOB.name(), baseConfigStmt, baseInputStmt, conn);

  } catch (SQLException ex) {
    throw new SqoopException(CommonRepositoryError.COMMON_0011, mDriver.toString(), ex);
  } finally {
    closeStatements(baseConfigStmt, baseInputStmt);
  }
}
 

Developer: vybs
Project: sqoop-on-spark
Lines of code: 29
Source file: CommonRepositoryHandler.java

Example 21: getFrameworkType

Votes: 3

import org.apache.sqoop.common.SqoopException; // import the required package/class
private OptionValue.FrameworkType getFrameworkType(CommandLine commandLine)
{
    if (!commandLine.hasOption(SubmitOptions.FRAMEWORK_TYPE.getValue()))
    {
        return null;
    }
    
    String type = commandLine.getOptionValue(SubmitOptions.FRAMEWORK_TYPE.getValue());
    if (OptionValue.FrameworkType.HDFS.getType().equals(type))
    {
        return OptionValue.FrameworkType.HDFS;
    }
    else if (OptionValue.FrameworkType.HBASE.getType().equals(type))
    {
        return OptionValue.FrameworkType.HBASE;
    }
    else
    {
        throw new SqoopException(ShellError.OPTIONS_INVALID, "Unsupported framework type: " + type);
    }
}
 

Developer: eSDK
Project: esdk_bigdata_loader_shell_client
Lines of code: 22
Source file: SubmitJob.java

Example 22: getAuthorizationAccessController

Votes: 3

import org.apache.sqoop.common.SqoopException; // import the required package/class
public static AuthorizationAccessController getAuthorizationAccessController(String accessController) throws ClassNotFoundException, IllegalAccessException, InstantiationException {

    Class<?> accessControllerClass = ClassUtils.loadClass(accessController);

    if (accessControllerClass == null) {
      throw new SqoopException(SecurityError.AUTH_0008,
              "Authorization Access Controller Class is null: " + accessController);
    }

    AuthorizationAccessController newAccessController;
    try {
      newAccessController = (AuthorizationAccessController) accessControllerClass.newInstance();
    } catch (Exception ex) {
      throw new SqoopException(SecurityError.AUTH_0008,
              "Authorization Access Controller Class Exception: " + accessController, ex);
    }
    return newAccessController;
  }
 

Developer: vybs
Project: sqoop-on-spark
Lines of code: 19
Source file: SecurityFactory.java

Example 23: enableLink

Votes: 3

import org.apache.sqoop.common.SqoopException; // import the required package/class
/**
 * {@inheritDoc}
 */
@Override
public void enableLink(long linkId, boolean enabled, Connection conn) {
  PreparedStatement enableConn = null;

  try {
    enableConn = conn.prepareStatement(crudQueries.getStmtEnableLink());
    enableConn.setBoolean(1, enabled);
    enableConn.setLong(2, linkId);
    enableConn.executeUpdate();
  } catch (SQLException ex) {
    logException(ex, linkId);
    throw new SqoopException(CommonRepositoryError.COMMON_0038, ex);
  } finally {
    closeStatements(enableConn);
  }
}
 

Developer: vybs
Project: sqoop-on-spark
Lines of code: 20
Source file: CommonRepositoryHandler.java

Example 24: call

Votes: 2

import org.apache.sqoop.common.SqoopException; // import the required package/class
@Override
public Iterable<Void> call(Iterator<List<IntermediateDataFormat<?>>> data) throws Exception {

  long reduceTime = System.currentTimeMillis();

  String loaderName = request.getDriverContext().getString(JobConstants.JOB_ETL_LOADER);
  Schema fromSchema = request.getJobSubmission().getFromSchema();
  Schema toSchema = request.getJobSubmission().getToSchema();
  Matcher matcher = MatcherFactory.getMatcher(fromSchema, toSchema);

  LOG.info("Sqoop Load Function is starting");
  try {
    while (data.hasNext()) {
      DataReader reader = new SparkDataReader(data.next());

      Loader loader = (Loader) ClassUtils.instantiate(loaderName);

      SparkPrefixContext subContext = new SparkPrefixContext(request.getConf(),
          JobConstants.PREFIX_CONNECTOR_TO_CONTEXT);

      Object toLinkConfig = request.getConnectorLinkConfig(Direction.TO);
      Object toJobConfig = request.getJobConfig(Direction.TO);

      // Create loader context
      LoaderContext loaderContext = new LoaderContext(subContext, reader, matcher.getToSchema());

      LOG.info("Running loader class " + loaderName);
      loader.load(loaderContext, toLinkConfig, toJobConfig);
      System.out.println("Loader has finished");
      System.out.println(">>> REDUCE time ms:" + (System.currentTimeMillis() - reduceTime));
    }
  } catch (Throwable t) {
    LOG.error("Error while loading data out of MR job.", t);
    throw new SqoopException(SparkExecutionError.SPARK_EXEC_0000, t);
  }

  return Collections.singletonList(null);

}
 

Developer: vybs
Project: sqoop-on-spark
Lines of code: 40
Source file: SqoopLoadFunction.java

Example 25: registerDriver

Votes: 2

import org.apache.sqoop.common.SqoopException; // import the required package/class
/**
 * Pre-register the Driver from the 1.99.3 release. NOTE: This should be used
 * only in the upgrade path.
 */
@Deprecated
protected long registerDriver(Connection conn) {
  if (LOG.isTraceEnabled()) {
    LOG.trace("Begin Driver loading.");
  }

  PreparedStatement baseDriverStmt = null;
  try {
    baseDriverStmt = conn.prepareStatement(STMT_INSERT_INTO_CONFIGURABLE,
        Statement.RETURN_GENERATED_KEYS);
    baseDriverStmt.setString(1, MDriver.DRIVER_NAME);
    baseDriverStmt.setString(2, Driver.getClassName());
    baseDriverStmt.setString(3, "1");
    baseDriverStmt.setString(4, MConfigurableType.DRIVER.name());

    int baseDriverCount = baseDriverStmt.executeUpdate();
    if (baseDriverCount != 1) {
      throw new SqoopException(DerbyRepoError.DERBYREPO_0003, Integer.toString(baseDriverCount));
    }

    ResultSet rsetDriverId = baseDriverStmt.getGeneratedKeys();

    if (!rsetDriverId.next()) {
      throw new SqoopException(DerbyRepoError.DERBYREPO_0004);
    }
    return rsetDriverId.getLong(1);
  } catch (SQLException ex) {
    throw new SqoopException(DerbyRepoError.DERBYREPO_0009, ex);
  } finally {
    closeStatements(baseDriverStmt);
  }
}
 

Developer: vybs
Project: sqoop-on-spark
Lines of code: 37
Source file: DerbyRepositoryHandler.java

Example 26: findSubmissionsForJob

Votes: 2

import org.apache.sqoop.common.SqoopException; // import the required package/class
/**
 * {@inheritDoc}
 */
@SuppressWarnings("unchecked")
@Override
public List<MSubmission> findSubmissionsForJob(final long jobId) {
  return (List<MSubmission>) doWithConnection(new DoWithConnection() {
    @Override
    public Object doIt(Connection conn) throws Exception {
      if(!handler.existsJob(jobId, conn)) {
        throw new SqoopException(RepositoryError.JDBCREPO_0020,
          "Invalid id: " + jobId);
      }
      return handler.findSubmissionsForJob(jobId, conn);
    }
  });
}
 

Developer: vybs
Project: sqoop-on-spark
Lines of code: 18
Source file: JdbcRepository.java

Example 27: progress

Votes: 2

import org.apache.sqoop.common.SqoopException; // import the required package/class
private double progress(RunningJob runningJob) {
  try {
    if(runningJob == null) {
      // Return default value
      return -1;
    }
    return (runningJob.mapProgress() + runningJob.reduceProgress()) / 2;
  } catch (IOException e) {
    throw new SqoopException(MapreduceSubmissionError.MAPREDUCE_0003, e);
  }
}
 

Developer: vybs
Project: sqoop-on-spark
Lines of code: 12
Source file: MapreduceSubmissionEngine.java

Example 28: getConnectorId

Votes: 2

import org.apache.sqoop.common.SqoopException; // import the required package/class
public long getConnectorId(Direction type) {
  switch(type) {
    case FROM:
      return fromConnectorId;

    case TO:
      return toConnectorId;

    default:
      throw new SqoopException(DirectionError.DIRECTION_0000, "Direction: " + type);
  }
}
 

Developer: vybs
Project: sqoop-on-spark
Lines of code: 13
Source file: MJob.java

Example 29: getSplits

Votes: 2

import org.apache.sqoop.common.SqoopException; // import the required package/class
@SuppressWarnings({ "rawtypes", "unchecked" })
@Override
public List<InputSplit> getSplits(JobContext context)
    throws IOException, InterruptedException {
  Configuration conf = context.getConfiguration();

  String partitionerName = conf.get(MRJobConstants.JOB_ETL_PARTITIONER);
  Partitioner partitioner = (Partitioner) ClassUtils.instantiate(partitionerName);

  PrefixContext connectorContext = new PrefixContext(conf, MRJobConstants.PREFIX_CONNECTOR_FROM_CONTEXT);
  Object connectorLinkConfig = MRConfigurationUtils.getConnectorLinkConfig(Direction.FROM, conf);
  Object connectorFromJobConfig = MRConfigurationUtils.getConnectorJobConfig(Direction.FROM, conf);
  Schema fromSchema = MRConfigurationUtils.getConnectorSchema(Direction.FROM, conf);

  long maxPartitions = conf.getLong(MRJobConstants.JOB_ETL_EXTRACTOR_NUM, 10);
  PartitionerContext partitionerContext = new PartitionerContext(connectorContext, maxPartitions, fromSchema);

  List<Partition> partitions = partitioner.getPartitions(partitionerContext, connectorLinkConfig, connectorFromJobConfig);
  List<InputSplit> splits = new LinkedList<InputSplit>();
  for (Partition partition : partitions) {
    LOG.debug("Partition: " + partition);
    SqoopSplit split = new SqoopSplit();
    split.setPartition(partition);
    splits.add(split);
  }

  if(splits.size() > maxPartitions) {
    throw new SqoopException(MRExecutionError.MAPRED_EXEC_0025,
      String.format("Got %d, max was %d", splits.size(), maxPartitions));
  }

  return splits;
}
 

Developer: vybs
Project: sqoop-on-spark
Lines of code: 34
Source file: SqoopInputFormat.java

Example 30: parseOptions

Votes: 2

import org.apache.sqoop.common.SqoopException; // import the required package/class
/**
 * Parses command line options.
 *
 * @param options parse arglist against these.
 * @param start beginning index in arglist.
 * @param arglist arguments to parse.
 * @param stopAtNonOption stop parsing when nonoption found in arglist.
 * @return CommandLine object
 */
public static CommandLine parseOptions(Options options, int start, List<String> arglist, boolean stopAtNonOption) {
  String[] args = arglist.subList(start, arglist.size()).toArray(new String[arglist.size() - start]);

  CommandLineParser parser = new GnuParser();
  CommandLine line;
  try {
    line = parser.parse(options, args, stopAtNonOption);
  } catch (ParseException e) {
    throw new SqoopException(ShellError.SHELL_0003, e.getMessage(), e);
  }
  return line;
}
 

Developer: vybs
Project: sqoop-on-spark
Lines of code: 22
Source file: ConfigOptions.java

Example 31: initialize

Votes: 2

import org.apache.sqoop.common.SqoopException; // import the required package/class
public java.util.Iterator<Object[]> getIterator(ExtractorContext context, LinkConfiguration linkConfiguration,
                             FromJobConfiguration jobConfiguration, MainframeDatasetPartition inputPartition) {
  try {
    initialize(context, linkConfiguration, jobConfiguration, inputPartition);
    partition = inputPartition;
    return this;
  }
  catch (IOException e) {
    throw new SqoopException(MainframeConnectorError.GENERIC_MAINFRAME_CONNECTOR_0001, e.toString());
  }
}
 

Developer: Syncsort
Project: spark-mainframe-connector
Lines of code: 12
Source file: MainframeDatasetExtractor.java

Example 32: getConnector

Votes: 2

import org.apache.sqoop.common.SqoopException; // import the required package/class
public SqoopConnector getConnector(Direction type) {
  switch (type) {
  case FROM:
    return fromConnector;

  case TO:
    return toConnector;

  default:
    throw new SqoopException(DirectionError.DIRECTION_0000, "Direction: " + type);
  }
}
 

Developer: vybs
Project: sqoop-on-spark
Lines of code: 13
Source file: JobRequest.java

Example 33: insertAndGetDriverId

Votes: 2

import org.apache.sqoop.common.SqoopException; // import the required package/class
private long insertAndGetDriverId(MDriver mDriver, Connection conn) {
  PreparedStatement baseDriverStmt = null;
  try {
    baseDriverStmt = conn.prepareStatement(crudQueries.getStmtInsertIntoConfigurable(),
        Statement.RETURN_GENERATED_KEYS);
    baseDriverStmt.setString(1, mDriver.getUniqueName());
    baseDriverStmt.setString(2, Driver.getClassName());
    baseDriverStmt.setString(3, mDriver.getVersion());
    baseDriverStmt.setString(4, mDriver.getType().name());

    int baseDriverCount = baseDriverStmt.executeUpdate();
    if (baseDriverCount != 1) {
      throw new SqoopException(CommonRepositoryError.COMMON_0009, Integer.toString(baseDriverCount));
    }

    ResultSet rsetDriverId = baseDriverStmt.getGeneratedKeys();

    if (!rsetDriverId.next()) {
      throw new SqoopException(CommonRepositoryError.COMMON_0010);
    }
    return rsetDriverId.getLong(1);
  } catch (SQLException ex) {
    throw new SqoopException(CommonRepositoryError.COMMON_0044, mDriver.toString(), ex);
  } finally {
    closeStatements(baseDriverStmt);
  }
}
 

Developer: vybs
Project: sqoop-on-spark
Lines of code: 28
Source file: CommonRepositoryHandler.java

Example 34: handleEvent

Votes: 2

import org.apache.sqoop.common.SqoopException; // import the required package/class
@Override
public JsonBean handleEvent(RequestContext ctx) {
  // driver only support GET requests
  if (ctx.getMethod() != Method.GET) {
    throw new SqoopException(ServerError.SERVER_0002, "Unsupported HTTP method for driver: "
        + ctx.getMethod());
  }
  AuditLoggerManager.getInstance().logAuditEvent(ctx.getUserName(),
      ctx.getRequest().getRemoteAddr(), "get", "driver", "");

  return new DriverBean(Driver.getInstance().getDriver(), Driver.getInstance().getBundle(
      ctx.getAcceptLanguageHeader()));
}
 

Developer: vybs
Project: sqoop-on-spark
Lines of code: 14
Source file: DriverRequestHandler.java

Example 35: findToJobConfig

Votes: 2

import org.apache.sqoop.common.SqoopException; // import the required package/class
/**
 * {@inheritDoc}
 */
@Override
public MConfig findToJobConfig(final long jobId, final String configName) {
  return (MConfig) doWithConnection(new DoWithConnection() {
    @Override
    public Object doIt(Connection conn) {
      if (!handler.existsJob(jobId, conn)) {
        throw new SqoopException(RepositoryError.JDBCREPO_0020, "Invalid id: " + jobId);
      }
      return handler.findToJobConfig(jobId, configName, conn);
    }
  });
}
 

Developer: vybs
Project: sqoop-on-spark
Lines of code: 16
Source file: JdbcRepository.java

Example 36: updateDriverAndDeleteConfigs

Votes: 2

import org.apache.sqoop.common.SqoopException; // import the required package/class
private void updateDriverAndDeleteConfigs(MDriver mDriver, Connection conn) {
  PreparedStatement updateDriverStatement = null;
  PreparedStatement deleteConfig = null;
  PreparedStatement deleteInput = null;
  PreparedStatement deleteInputRelation = null;
  try {
    updateDriverStatement = conn.prepareStatement(crudQueries.getStmtUpdateConfigurable());
    deleteInputRelation = conn.prepareStatement(CommonRepositoryInsertUpdateDeleteSelectQuery.STMT_DELETE_INPUT_RELATIONS_FOR_INPUT);
    deleteInput = conn.prepareStatement(crudQueries.getStmtDeleteInputsForConfigurable());
    deleteConfig = conn.prepareStatement(crudQueries.getStmtDeleteConfigsForConfigurable());
    updateDriverStatement.setString(1, mDriver.getUniqueName());
    updateDriverStatement.setString(2, Driver.getClassName());
    updateDriverStatement.setString(3, mDriver.getVersion());
    updateDriverStatement.setString(4, mDriver.getType().name());
    updateDriverStatement.setLong(5, mDriver.getPersistenceId());

    if (updateDriverStatement.executeUpdate() != 1) {
      throw new SqoopException(CommonRepositoryError.COMMON_0035);
    }
    deleteInputRelation.setLong(1, mDriver.getPersistenceId());
    deleteInput.setLong(1, mDriver.getPersistenceId());
    deleteConfig.setLong(1, mDriver.getPersistenceId());
    deleteInputRelation.executeUpdate();
    deleteInput.executeUpdate();
    deleteConfig.executeUpdate();

  } catch (SQLException e) {
    logException(e, mDriver);
    throw new SqoopException(CommonRepositoryError.COMMON_0040, e);
  } finally {
    closeStatements(updateDriverStatement, deleteConfig, deleteInput);
  }
}
 

Developer: vybs
Project: sqoop-on-spark
Lines of code: 34
Source file: CommonRepositoryHandler.java

Example 37: registerDriver

Votes: 2

import org.apache.sqoop.common.SqoopException; // import the required package/class
/**
 * {@inheritDoc}
 */
@Override
public void registerDriver(MDriver mDriver, Connection conn) {
  if (mDriver.hasPersistenceId()) {
    throw new SqoopException(CommonRepositoryError.COMMON_0008, mDriver.getUniqueName());
  }
  mDriver.setPersistenceId(insertAndGetDriverId(mDriver, conn));
  insertConfigsForDriver(mDriver, conn);
}
 

Developer: vybs
Project: sqoop-on-spark
Lines of code: 12
Source file: CommonRepositoryHandler.java

Example 38: testCreateDuplicateJob

Votes: 2

import org.apache.sqoop.common.SqoopException; // import the required package/class
@Test(expectedExceptions=SqoopException.class)
public void testCreateDuplicateJob() throws Exception {
  // Duplicate jobs
  MJob job = getJob();
  fillJob(job);
  job.setName("test");
  handler.createJob(job, getDerbyDatabaseConnection());
  assertEquals(1, job.getPersistenceId());

  job.setPersistenceId(MJob.PERSISTANCE_ID_DEFAULT);
  handler.createJob(job, getDerbyDatabaseConnection());
}
 

Developer: vybs
Project: sqoop-on-spark
Lines of code: 13
Source file: TestJobHandling.java

Example 39: getConfig

Votes: 2

import org.apache.sqoop.common.SqoopException; // import the required package/class
public MConfig getConfig(String configName) {
  for (MConfig config : configObjects) {
    if (configName.equals(config.getName())) {
      return config;
    }
  }
  throw new SqoopException(ModelError.MODEL_010, "config name: " + configName);
}
 

Developer: vybs
Project: sqoop-on-spark
Lines of code: 9
Source file: MConfigList.java

Example 40: getPrincipalsByRole

Votes: 2

import org.apache.sqoop.common.SqoopException; // import the required package/class
/**
 * Principal-related function.
 */
@Override
public List<MPrincipal> getPrincipalsByRole(MRole role) throws SqoopException {
  LOG.debug("Get principals by role in default authorization access controller: return null");
  LOG.debug("role: " + role.toString());
  return null;
}
 

Developer: vybs
Project: sqoop-on-spark
Lines of code: 10
Source file: DefaultAuthorizationAccessController.java

